Hey there folks.
I’ve decided it would probably be a great idea to start doing more hands-on stuff with both Wwise and Unity. So, I’m going to start by showing you how to integrate the incredibly powerful Wwise middleware engine into Unity. I first got my hands on Wwise three years ago, and boy, was that program a nightmare at first. I didn’t understand how you could use sliders and graphs made of arbitrary parameters to “code” sound and then make it somehow fit into a game. I mean, the concept was there, but trying to practice sound design in Wwise – before I even knew how to use Unity, let alone how to integrate the two tools together – was a huge challenge that took much longer than it rightfully should have. Keep in mind, back then, the Unity Integration Tool wasn’t immediately accessible the way it is today; you had to actually request it from Audiokinetic, and the documentation was not as good as it is now, either. As of the past few versions of Wwise, that particular barrier to entry has effectively been eliminated. It’s now easier than ever to implement high-quality, nuanced audio in your games using Wwise and only a handful of necessary commands.
So today, we’re going to set aside using both programs, and focus on making them talk to one another. We’ll come back to doing stuff with them another day.
Why do you want to integrate Wwise into your Unity project?
There’s a plethora of reasons, actually. First off, it’s cheap. A Level-A license is only $600 for the first platform you release your project on; Level-A covers projects with budgets up to $150,000. Level-B (up to $1,500,000) is still only $6,000, while Level-C (over $1,500,000) is $15,000 a license – which always works out to less than 1% of the project budget. And what do you get out of it? A program that lets you debug audio, lay it out in logical, event-based sequences, dynamically mix your game in real time as it runs in Unity (or whatever engine you’ve plugged it into), and create complex automation curves intuitively. All of this playback is handled by the Wwise audio engine, which frees up your programmers to build game features instead of spending unnecessary effort on the audio tricks the sound designer(s) want. Even a simple audio fade-in takes three or four lines of code that a sound designer might not know how to write – but they would know how to map it in from Wwise! In other words, the greatest feature Wwise has is that it removes dependencies. A sound designer who learns Wwise, and how to call its few functions and methods in code, will have very little reason to bother programmers, allowing both of them to work more efficiently.
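To make that dependency point concrete, here’s roughly what the game-code side looks like once Wwise owns playback. This is just a sketch: “Play_Footstep” is a hypothetical event name a sound designer would author in Wwise, and AkSoundEngine.PostEvent is the call exposed by the Wwise Unity integration for firing events.

```csharp
using UnityEngine;

// Sketch of how little code the game side needs once Wwise handles playback.
// "Play_Footstep" is a hypothetical event name authored by the sound designer in Wwise;
// any fade-in, randomization, or mixing lives inside the Wwise event, not in this script.
public class FootstepSound : MonoBehaviour
{
    void PlayFootstep()
    {
        // One call replaces hand-written fade/volume code.
        AkSoundEngine.PostEvent("Play_Footstep", gameObject);
    }
}
```

The fade-in example from above is exactly this: rather than a programmer lerping a volume value over several frames, the designer draws the fade curve inside the event in Wwise, and the code never changes.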
How do you do it?
Glad you asked! Like I said above, it’s easier than ever to get these two programs talking to one another – without having to write the code yourself, as you may have had to in the past. Just remember that plug-ins for Unity are a PRO feature, so if you’re on Unity Free, you’re out of luck. The methods described here also only work with Unity 3.4 and above. Luckily, 3.4 was released in 2011, so you should have no problem being up to date. The very first step is to make sure you’ve got Unity3D, Wwise, and the proper Unity Integration Package downloaded. You can find the latest version of Unity easily at http://www.unity3D.com, and the latest version of Wwise, with a selection of Unity Packages available for it, here: https://www.audiokinetic.com/downloads/ – just pick whichever platform you need. Windows should suffice for now.
Open a Unity Project.
If all you are interested in is getting the integration running, you can skip this step, but if you want to play around with it and follow the next set of tutorials, add at least the default Character Controller package and give yourself a cube to walk on (GameObject > Create Other > Cube). Add a few objects (I like translucent spheres!) and set them to triggers. If you’re feeling fancy, you can even toss in a directional light.
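Those trigger spheres will come in handy in the next tutorials, when they start making noise. As a preview, a trigger like that eventually gets a small script along these lines – a sketch, assuming a hypothetical Wwise event named “Play_Chime” that you’d author in your Wwise project later:

```csharp
using UnityEngine;

// Sketch for the translucent trigger spheres above: posts a Wwise event when
// something enters the trigger. "Play_Chime" is a hypothetical event name;
// the real one comes from your Wwise project.
public class TriggerChime : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        AkSoundEngine.PostEvent("Play_Chime", gameObject);
    }
}
```

Don’t worry about wiring this up yet – it won’t do anything until you’ve built SoundBanks, which is a topic for another day.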
Now, take that Integration package you downloaded from the Audiokinetic site (the Windows version of the Unity Integration) and extract it somewhere easy, like your desktop. In Unity, go to Assets > Import Package > Custom Package. Browse to your extracted files and import “WwiseUnityIntegrationDemo_Windows.unitypackage”. You should wind up with a new “AK” folder in your hierarchy (AK for Audiokinetic), which includes an API folder, some demo scenes and scripts (TAKE A LOOK AT THESE LATER!), and most importantly, an “Examples” folder.
Go to GameObject > Create Empty to insert an empty game object into your scene. Call it “SoundEngine” or something convenient, and add two scripts to it from that “AK > Examples” folder in your hierarchy: “AKGlobalSoundEngineInitializer.cs” and “AKGlobalSoundEngineTerminator.cs” – these are going to be the nuts and bolts that make your sound engine work. Double-check that the “Base Path” property of the Initializer is set to Generated Sound Banks – that will be extremely important later (it should be fine if everything imported correctly). Then go to Edit > Project Settings > Script Execution Order and set the following order (again, this should already be done for you):
AKGlobalSoundEngineInitializer: -100 (top of the list)
This is a custom execution order to ensure that your sound engine gets fired up before anything else. The included documentation offers a little more information about how and why you should modify this, but for now, just know that it makes everything work properly.
The very last thing you want to do is add a listener to whatever you want to “listen” to the game world. This sometimes takes more thought than you might expect. An FPS, for example, obviously should have a listener on the main camera/player. But what about a third-person game? It might depend on the pace of the game. A more relaxed and open adventure game would probably benefit from the consistency of a listener attached to the camera but positioned nearer to the player. Another option is to place it on the main camera itself, as I’ve done in my current project, Illuminate. The rationale was that the player character, Lampy, is always flipping orientation, and sounds constantly flipping between the left and right channels (in relation to Lampy) would disorient the player, even when nothing on screen had actually moved.
So, with that in mind, add your listener by adding the “AKListener.cs” to wherever you feel it makes the most sense.
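If your camera or character is spawned at runtime rather than living in the scene, you can also attach the listener from code. A hedged sketch – this assumes the component class inside “AKListener.cs” is named AkListener, as it was in integration packages of this era:

```csharp
using UnityEngine;

// Sketch: attach the Wwise listener at runtime instead of in the Inspector.
// Assumes the component class in "AKListener.cs" is named AkListener.
public class ListenerSetup : MonoBehaviour
{
    void Start()
    {
        // Avoid stacking duplicate listeners on the same object.
        if (GetComponent<AkListener>() == null)
        {
            gameObject.AddComponent<AkListener>();
        }
    }
}
```

Drop this on whatever object you decided should do the listening, and it behaves the same as adding the script by hand.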
One last note is that anything with positional sound that moves has to have “AKGameObjectTracker” added to it. We’ll worry about that some other time.
Anyway, if you’ve made it this far, then congratulations: You have successfully integrated Wwise and Unity! It really is that easy!
You should now play your scene to make sure you don’t have any errors. If you do, go back and recheck the steps above.