Presentation: Abracadata!

I gave a microtalk at GDC 2018 as part of a session at the Artificial Intelligence Summit called 'Turing Tantrums: Devs Rant!'. I shared a thought experiment about exploring the possibility space of abstracted data relationships that cross disciplinary boundaries. Unlikely data marriages!

Transcript:

As a bit of an outsider, I thought that instead of ranting, it might be better to share a thought experiment around an area of interest for me lately ...

AbracaDATA!

Games are a treasure trove of data.

A lot of what happens in games has something to do with numbers and math, and this stuff is great for creating and reinforcing the internal relationships in a game.

The most common relationship is player input, and how it drives just about everything. But let’s focus elsewhere.

Maybe your game has an enemy. A blue rectangle OH NO! It’s being shuffled around the world with some movement instructions. And you spruce it up with an animation, and maybe it feels good with some tweaking, but if physics and animation share their data, maybe they make even better decisions. Also that bearded man is now a giant dog thing.

Of course, you may not want all your systems to share, and sometimes a hand-authored, isolated thing might be what you need. But tying camera movements to explosions or using text to drive a gibberish voiceover are examples of ways that tentacular relationships can improve the way a game feels.

Speaking of gibberish, I spend a lot of time thinking about sound, and lately I’ve been thinking about data sonification: using a data input to generate a sonic output. When a gameplay event occurs, like a footstep, we like to trigger a sound. This is a very useful and simple form of data sonification.

Another common practice is to map an object’s spatial position visually, to a relative spatial position, sonically. For instance, mapping the X position on screen to the stereo pan position of a sound, or mapping an object’s distance from the camera to a sound’s volume and brightness.
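
As a minimal sketch of those two mappings (in Python, with `screen_width`, `max_distance`, and the sound parameters as hypothetical stand-ins for whatever your engine provides):

```python
def pan_from_screen_x(x: float, screen_width: float) -> float:
    """Map a horizontal screen position to a stereo pan in [-1.0, 1.0]."""
    return (x / screen_width) * 2.0 - 1.0


def attenuation_from_distance(distance: float, max_distance: float) -> float:
    """Map distance from the camera to a 0..1 factor for volume and brightness."""
    return max(0.0, 1.0 - distance / max_distance)


# An object at x=960 on a 1920px-wide screen sits dead center (pan 0.0);
# an object 30 units away with max_distance=100 plays at 70% strength.
print(pan_from_screen_x(960, 1920))        # 0.0
print(attenuation_from_distance(30, 100))  # 0.7
```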

These mimic the way we hear things in the real world and are simple victories. But the examples I’ve given so far are well known and commonly employed. They’re perfect for clarifying, giving the player more coordinated feedback about what they’re interacting with.

But I want to talk about some of the less utilitarian places these relationships can go. Why not do more to re-contextualize this data instead? We could springboard ourselves into explorations of relationships that are weird, novel, counterintuitive and wonderfully asymmetric.

Here’s a silly one. There’s a game called ‘Sonic Dreams Collection’, where changing the size of the game window on the main menu changes the pitch and speed of the music. But what if it went beyond that? Suddenly you might care about window sizes in this strange new context, and it might elicit a reaction normally not reserved for the size of your window…

Or what if you tied the movement pattern of the ripples of a nearby river to the hair physics of your player, but only inject the data as you move away from it? What is this environment trying to evoke? Negative magnetism? (what does that even mean)

Finding meaning here can be a bit like trying to parse through a tarot card reading. You draw some random sources and try to map meaning onto their relationship.

Maybe the gag about window size didn’t inspire a deeper search for meaning, but what about a more opaque & esoteric data abstraction? You might experience it as a kind of intelligence. And we could employ these relationships in subtle but cumulative ways.

Bees, for instance, perform a figure-8 called the ‘waggle dance’ that relays important locational info to other bees. You could create a dumb version of this cooperative relationship using abstracted data, and employ it within a system of similar objects. Maybe the relationship relies on distance between the objects, so that when they get close, they appear to share information with each other through sound or movement.
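
Here's a toy sketch of that distance-based sharing, assuming hypothetical agents that each carry one abstract value (the print is a stand-in for a sound or movement cue):

```python
import math
import random

SHARE_RADIUS = 2.0  # how close two agents must be to "dance" at each other

class Agent:
    def __init__(self, x: float, y: float):
        self.x, self.y = x, y
        self.value = random.random()  # the abstracted "information" it carries

def step(agents):
    # When two agents come within SHARE_RADIUS, blend their values and emit
    # a cue; in a game this might drive a pitch shift or a gesture instead.
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            if math.hypot(a.x - b.x, a.y - b.y) < SHARE_RADIUS:
                shared = (a.value + b.value) / 2.0
                a.value = b.value = shared
                print(f"two agents exchange data -> {shared:.2f}")

agents = [Agent(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(6)]
step(agents)
```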

As worldbuilders, we could hint at a deeper ecology, through layers of data abstraction that might seem cooperative, adversarial, emergent, or mysterious and difficult to verbalize. We can suggest that with or without the player, the actors in this ecosystem are hopelessly entangled, and will carry on with their ebb and flow, just like we all do. Could be cool ...

The nice thing is that unlike the "waggle dance", you don’t need to prove it out with science. Maybe even the most arbitrary data relationship could feel like real intelligence if it’s been sufficiently abstracted. Players will conjure up their own interpretations, they like to do that. So you just need to convince them that they are experiencing something meaningful.

In other words, I think that by making different parts of a game communicate and share information in non-traditional ways, we can emulate the vitality we experience from real intelligence, and as a result it may be possible to manufacture a deeper sense of meaning and causality.

And the more liberal the different parts are in communicating with unlikely partners, the more things may start to get downright ecological.

And an interesting ecology of data relationships would probably have different kinds at play: opaque ones, transparent ones, those that seem arbitrary, those that are rational, the esoteric (opaque + arbitrary), the absurd (transparent + arbitrary), the accessible (transparent + rational), the intelligent? (opaque + rational). (I totally made this up.)

I think the abstraction and recontextualization of data can lead to all sorts of results. But if we sense there are meaningful relationships of cause and effect at play, that could lead us to suppose there is intelligence, and that could bring more depth to our experience.

So give it a shot! Things will definitely happen if you let your systems co-mingle.

You could let the volume level of a creature’s mating call drive the probability that other creatures respond in kind.

You could light a room using the average color of the last 60 frames.

You could take the wave propagation system used to drive visual wind FX and map it to the size of an NPC’s shoes.
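
As a hedged sketch, the second of those might look something like this, with `on_frame` and `set_room_light` as hypothetical stand-ins for whatever your engine actually exposes:

```python
from collections import deque

window = deque(maxlen=60)  # rolling buffer; the oldest frame falls off automatically

def set_room_light(rgb):
    # Stand-in for an engine call that tints the room's light source.
    print(f"room light -> ({rgb[0]:.0f}, {rgb[1]:.0f}, {rgb[2]:.0f})")

def on_frame(frame_color):
    # frame_color: the frame's average color as an (r, g, b) tuple, 0..255.
    window.append(frame_color)
    n = len(window)
    avg = tuple(sum(c[i] for c in window) / n for i in range(3))
    set_room_light(avg)

# Feed it a few frames:
for color in [(200, 40, 40), (180, 60, 50), (160, 80, 60)]:
    on_frame(color)
```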

But in any case,

AbracaDATA!

Or perhaps … Abraca...dada?(ism)

Link: GDC Vault: 'Turing Tantrums! AI Devs Rant'

Presentation: Serialism & Sonification in Mini Metro

I gave a talk at GDC 2018 as part of a session at the Artificial Intelligence Summit called 'Beyond Procedural Horizons'. I talked about how we combined data sonification and concepts taken from the musical approach known as Serialism to build a soundscape for Mini Metro.

Transcript:

Serialism & Sonification in Mini Metro

or, how we avoided using looping music tracks completely, by using sequential sets of data to generate music and sound.

Serialism

In music, there is a technique called Serialism that uses sequential sets of data (known as series), set about on different axes of sound (pitch, volume, rhythm, note duration, etc.), working together to create music.

In Mini Metro, we apply this concept by using internal data from the game and externally authored data in tandem to generate the music.

You might have noticed that the game has a clock: the game is broken up into time increments that are represented as hours, days, and weeks (though of course faster than real time). Before diving in, it’s important to know that we derive our music tempo from the duration of one in-game hour:

1 in-game hour = 0.8 seconds = 1 beat at 75 bpm. This is our master pulse.

We use this as our standard unit of measurement for when to trigger sounds. In other words, most of the sounds in the game are triggered periodically, using fractional durations of 0.8 seconds.
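
A minimal sketch of that scheduling idea (the function and its name are illustrative assumptions, not the shipped code):

```python
HOUR = 0.8  # one in-game hour in seconds = one beat at 75 bpm

def next_trigger_time(now: float, fraction: float) -> float:
    """Quantize `now` up to the next multiple of HOUR * fraction.

    fraction=1.0 fires once per in-game hour, 0.5 twice per hour, and so on.
    """
    period = HOUR * fraction
    return (int(now / period) + 1) * period

print(round(next_trigger_time(1.0, 1.0), 3))  # 1.6 -> the next whole hour
print(round(next_trigger_time(1.0, 0.5), 3))  # 1.2 -> the next half-hour pulse
```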

In Mini Metro, the primary mode of authorship lies in drawing and modifying metro lines. They’re also the means by which everything is connected, and they serve as the foundation upon which the soundscape of pitches and rhythms is designed. Each line is represented by a unique stream of music generated using data from different sources.

The simplest way to describe this system is that each metro line is represented by a musical sequence of pulses, triggered at a constant rate with a constant pitch. The rate and pitch stay constant until they are shifted by a change in gameplay. Each metro station represents one pulse in that sequence. Each pulse has some unique properties, such as volume, timbre, and panning, which are calculated using game data. Still other properties are inherited from lower levels of abstraction, namely unique loadouts for each game level: some levels tell the pulses to fade in gradually, others might tell them to trigger using a swing groove instead of a constant rate, and the levels are differentiated in other ways as well.

[1, 2, 3, 4, 6]

All of this musical generation is done using sets of data. And referring back to Serialism, the data is quite often sequential, in series. These numbers actually represent multiples of time fragments. Or, to put it more simply, rhythms. In the case of rhythms and pitches, the data is authored, so we have more control over what kind of mood we’d like to evoke. This data is cycled through in different ways during gameplay to generate music.
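
To make that concrete, here's a hedged sketch that cycles the rhythm series from the slide alongside an invented pitch series (the pitch values and the fragment size are illustrative assumptions, not the game's actual data):

```python
import itertools

HOUR = 0.8  # the master pulse, in seconds

# The rhythm series is the [1, 2, 3, 4, 6] from the slide; the pitch series
# (semitone offsets) is made up here for illustration.
rhythm_series = itertools.cycle([1, 2, 3, 4, 6])
pitch_series = itertools.cycle([0, 3, 7, 10])

def generate_pulses(count: int, fragment: float = HOUR / 4):
    """Yield (trigger_time, semitone) pairs by cycling both series."""
    t = 0.0
    for _ in range(count):
        t += next(rhythm_series) * fragment  # rhythm: multiples of a time fragment
        yield t, next(pitch_series)          # pitch: cycled independently

for time, pitch in generate_pulses(5):
    print(f"t={time:.2f}s  pitch=+{pitch} semitones")
```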

So we’ve got some authored data generating musical sequences, but what about using game data? Ideally we could sonify it to give the player some useful feedback about what is happening.

Combinations of game data and authored data are found throughout Mini Metro’s audio system, working in tandem. Authored data is often used to steer things in a musical direction, while game data is used to more closely marry things to gameplay. In some cases, authored data even made its way into other areas of the game: certain game behaviors, like passenger spawning, were retrofitted to fire using rhythm assignments specified by the sound system.

You might ask why we’d go through the trouble of doing things this way. Well, it is really fun. Beyond that, there are a variety of answers and I could go into a lot of depth, but I think the most important reasons are:

Immediacy & Embodiment

Immediate feedback is often reserved for sound effects and not music, and immediate feedback in music can feel forced if not handled correctly. This type of system allows us to bring this idea of immediacy into the music in a way that feels natural.

The granularity of the system allows the soundscape to respond to the game state instantaneously and evolve with the metro system as it grows, and a holistic system handles all of the gradation for you. When your metro is smaller and less busy, the sound is smaller and less busy. As your metro grows and gets more complex, so does the music and sound to reflect that. When you accelerate the simulation, the music and sound of your metro accelerates. When something new is introduced into your system, you’re not only notified up front, but also regularly over time as its sonic representation becomes a part of the ambient tapestry.

Embodiment

And this all (hopefully) ties into a sense of embodiment. Because all of these game objects have sounds that trigger in a musical way, and all use a shared rhythmic language that is cognizant of the game clock, and use game data to further tie them to what is actually happening in the game, things start to feel communal and unified.

It’s an ideal more than a guarantee, but if executed well, I think you can start to approach something akin to a holistic experience for the player.

Thanks!

Link: GDC Vault: 'Beyond Procedural Horizons'