Jo Plaete

Visual Effects – 3D – Interactive Design


Behavioural Particles


Lately, in my (fairly sparse) spare time, I've had some fun with particles. It was mostly just playing around with different kinds in several 3D packages, but I thought I'd share the following combo with you.

The simulation basically consists of a pretty simple flocking setup where some wandering masters are chased by a bunch of predator boids, all of them spawning a good amount of trailing particles which then get affected by a bunch of other forces. I quite liked some of the patterns it formed; here are some stills:

It started out as a few tests and a bit of playing with a particle renderer I had heard good things about: Krakatoa, trying out its fast point-rendering ability. And when it says fast, it means fast: 7+ million particles with self-shadowing and 4-step motion blur at 720p resolution render in just under 1 minute per frame on my 2.93GHz dual-core laptop. It can also do voxel rendering, which is (obviously) a bit slower but for sure the better choice for more dense/smoky effects. Krakatoa is currently only available as a plugin for 3dsmax, but don't let that hold you back (unless you're a max user – win!) from giving it a go, as there are some kindly written exporters available for packages like Softimage (thanks Steven Caron) and Maya. This simulation was done using Softimage ICE.
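Out of curiosity, the chase-and-trail idea above can be sketched in a few lines of Python. This is a toy stand-in, not the actual ICE tree: all names, forces and numbers here are hypothetical.

```python
import random

class Boid:
    """Toy boid with a position and velocity (stand-in for an ICE particle)."""
    def __init__(self, x, y):
        self.pos = [x, y]
        self.vel = [0.0, 0.0]

    def steer_towards(self, target, strength):
        # simple chase force: accelerate towards a target point
        self.vel[0] += (target[0] - self.pos[0]) * strength
        self.vel[1] += (target[1] - self.pos[1]) * strength

    def step(self, dt=1.0):
        self.pos[0] += self.vel[0] * dt
        self.pos[1] += self.vel[1] * dt

def simulate(steps=50, num_predators=5):
    # one wandering "master" chased by a few predator boids,
    # each spawning a trailing particle every step
    master = Boid(0.0, 0.0)
    predators = [Boid(random.uniform(-10, 10), random.uniform(-10, 10))
                 for _ in range(num_predators)]
    trails = []
    for _ in range(steps):
        master.vel = [random.uniform(-1, 1), random.uniform(-1, 1)]  # wander
        master.step()
        for p in predators:
            p.steer_towards(master.pos, 0.05)  # chase the master
            p.step()
            trails.append(tuple(p.pos))        # spawn a trail particle
    return trails

print(len(simulate()))  # 50 steps * 5 predators = 250 trail particles
```

In the real setup the trail particles would then be fed through the extra forces and shipped off to the point renderer.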

I uploaded it to vimeo; it's best viewed in HD but still loses a good amount of detail, so if you want the real deal you can download it here (720p – 370MB).


Written by Jo Plaete

March 3, 2010 at 2:29 pm

Posted in 3D, motion, Rendering, simulation, XSI

Notes on a Massive Crowd Simulation Pipeline


As promised, in this post I will talk about how to use the Massive crowd simulation software in a project pipeline, based upon the experience I built up doing the Path To War group project, where I had to solve several crowd shots. I will not go in depth into how Massive works or agent brain building, or explain every step I took in detail. Rather, I will pick out some specific issues and techniques I came across while using the tool. This document assumes you know what Massive is, what it does and its internal workflow. Watch the Film.

Before I address some more specific issues, I'd like to sketch out the overall pipeline we set up to pull off the project, as after all one of the biggest challenges in using Massive was to get it integrated into our university pipeline and to set up a decent workflow with the other 3D tools. We had quite a lot of software involved on the project, with Softimage XSI at the core, Massive Prime for crowd simulation, E-on Vue for the natural environment creation/vegetation and Apple Shake as the 2D compositing tool. Next to those there were also Autodesk Maya and MotionBuilder for the motion capture pipeline and ZBrush as the character artists' sculpting/texturing tool. Because of the various software we were rendering out of three different renderers: Mental Ray for XSI shots, RenderMan for crowds and the Vue internal renderer for the environments. The overall pipeline is shown schematically in figure 1.
Figure 1. Path To War overall pipeline.

In general, the core of the pipeline was XSI: this is where the previs was done and where the cameras and terrains were placed before being transferred over to the other tools. All the modeling, texturing and animation of the close-up hero shots were also done in XSI. However, the initial idea of having straightforward communication between XSI and Massive did not work out as well as planned. Multiple proofs of concept for bringing over rig and animation data had various positive but also negative results, and proved in the end that unfortunately we could not rely on this. Also, as the project was on a very tight (8-week) deadline (including learning the tool, setting up the pipeline and finishing off), there was no time to develop custom tools to solve certain issues. Therefore we brought in Autodesk Maya, which Massive was initially written around (pipeline-wise), and used it as a platform in between Massive and the other software.

Models and Textures
A fairly straightforward part of the pipeline was the transfer of the character/prop models and textures, as this worked perfectly fine by getting OBJ files with UV information for the accompanying textures out of XSI straight into Massive. To prevent crashing the engine or overloading the renderer with a large number of agents, we had the models in 3 levels of detail. For long shots we had a low-polygon version of around 4000 polygons. More close-up/medium shots had a slightly more detailed model, and the hero shots (rendered out of XSI) had the full high-polygon model with ZBrush displacement maps, etc. It was important to keep the models fairly low-poly for Massive, but also to have the possibility to switch to a higher level of detail as agents came closer to the camera. This could be accomplished using the level-of-detail functionality inside Massive.
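The level-of-detail switch boils down to a distance test against the camera. A minimal sketch, assuming made-up distance cutoffs (the ~4000-poly figure for the low version is from the post; the thresholds are not):

```python
import math

# Hypothetical cutoffs: which model variant an agent gets, by camera distance.
LOD_LEVELS = [
    (20.0, "high"),           # close to camera: full detailed model
    (80.0, "medium"),         # mid-range shots: slightly more detailed model
    (float("inf"), "low"),    # long shots: the ~4000-poly proxy
]

def pick_lod(agent_pos, camera_pos):
    """Choose a model level of detail from the agent's distance to camera."""
    dist = math.dist(agent_pos, camera_pos)
    for max_dist, level in LOD_LEVELS:
        if dist < max_dist:
            return level

print(pick_lod((0.0, 0.0, 10.0), (0.0, 0.0, 0.0)))   # high
print(pick_lod((0.0, 0.0, 200.0), (0.0, 0.0, 0.0)))  # low
```

Inside Massive this selection is handled by its own LOD functionality rather than hand-rolled code like this.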

Motion Capture
As the project progressed we quickly found out we definitely needed to bring in motion capture to fill up the Massive motion trees with realistic animation and let our animators concentrate on the hero acting shots in XSI. This brings me to an interesting bit: how the actual motion capture pipeline worked. After a few tests with our XSI-based rigs, I decided to leave them for XSI purposes only and build a new rig that would fit better into the Maya pipeline. After various tests, and looking at the motion capture pipeline, it seemed most stable to reverse-engineer the process: build the actual crowd rig in Massive itself and export it back out to Maya (.ma), from which it could go back and forth to MotionBuilder to apply the motion capture animation. Bringing that animation back into Maya and exporting it as a .ma for Massive made it very convenient to import actions for the agents. Once imported into Massive, I used the action editor to prepare the actions for use in the brain. Something I'd like to point out, which kept me busy for a bit: remove the global translation and rotation curves from the action in the action editor, so that the brain is able to drive those based upon its own calculations.
Motion Tree

Further, to get some hand-animated actions into Massive, we built some FK shadow rigs in XSI which mimicked the Massive rigs we had in Maya. This way we were able to transfer XSI animation data, FK-wise, from the XSI rig through the shadow rigs into Maya and from there into Massive. In the end we didn't really use this, as we chose to render all the hero characters out of XSI and composite them in with the crowds.

Because the terrain got rendered out of Vue, the hero characters out of XSI and the crowd characters out of Massive, it was very important to get all geometry and cameras lined up. The best way to do this appeared to be to take the camera from XSI to Maya via the FBX format and then import it into Massive from a .ma file (figure 3). Using an XSI-Vue plug-in we were then able to take that same XSI camera into Vue and render out the background plates from the exact same perspective. The terrain geometry was imported into Massive using the OBJ format to have the agents interact with it, but only the agents were rendered out of Massive. To keep the simulation as light as possible, it was good to cut up the terrain and only import the parts needed for each shot.

Figure 3. Camera pipeline xsi->Maya->Massive.

For the rendering I took advantage of Massive's pass-based setup approach towards RenderMan-compliant renderers. We exported the RIB files out of Massive and rendered them with PRMan. We did have to do some post-processing on the RIB files, though, to get them to render as we wanted: adding attributes and adapting them to our Linux-based render farm. Several Python scripts were developed to remove Windows path names and change the massive.dll reference into a .so one.
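A minimal sketch of that kind of RIB clean-up script. The massive.dll-to-.so swap and Windows path removal are from the post; the exact path patterns and file handling here are assumptions:

```python
import re

def fix_rib_line(line):
    """Adapt one RIB line for a linux render farm (sketch)."""
    # point the Massive procedural at the linux shared object
    line = line.replace("massive.dll", "massive.so")
    # rewrite windows paths like C:\textures\skin.tex to unix style
    line = re.sub(r"[A-Za-z]:\\", "/", line).replace("\\", "/")
    return line

def fix_rib_file(in_path, out_path):
    """Stream a RIB file through the per-line fix-up."""
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            dst.write(fix_rib_line(line))

print(fix_rib_line(r'Texture "C:\textures\skin.tex"'))
# Texture "/textures/skin.tex"
```

A real version would also inject whatever extra attributes the renders needed, which depends entirely on the shots.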

The passes produced out of Massive/PRMan were a diffuse pass (figure 4), a specular pass, ambient occlusion and a variation pass. Originally we tried depth-map shadows, but it was hard to get a decent pass out, so an ambient occlusion pass was needed to stick the agents to the ground when composited onto the background plates. Since ray-traced ambient occlusion on a lot of agents (300+) crashed the renderer, a script had to be developed to split up the AO RIB files, which were then rendered out to multiple AO files per frame (figure 5) and combined back together in compositing.
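The splitting itself is just chunking the per-agent blocks of the RIB file. A sketch under the assumption that the agents' geometry can be isolated into one block per agent (the 300-agent ceiling is from the post):

```python
def split_agent_blocks(agent_blocks, max_agents=300):
    """Split per-agent RIB blocks into chunks the renderer can survive.

    Raytraced AO crashed the renderer beyond roughly 300 agents, so
    each chunk would be written to its own RIB file, rendered to a
    separate AO image, and the images recombined in compositing.
    """
    return [agent_blocks[i:i + max_agents]
            for i in range(0, len(agent_blocks), max_agents)]

blocks = [f"agent_{i}" for i in range(1000)]  # stand-in RIB snippets
chunks = split_agent_blocks(blocks)
print([len(c) for c in chunks])  # [300, 300, 300, 100]
```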

Render Passes

Figure 4. Passes.

Another pass, which we called the variation pass (figure 6), was pretty interesting and helped the final output a lot. Since we didn't have much time to create a large number of character variations, we used this pass to pull more variation into the crowd in compositing. Technically it's just a constant shader which gets passed a random argument value between 0.2 and 0.8, defined by a Massive agent variable. Make sure to pass the random value on from the agent, i.e. at RIB level, as randomizing inside the shader would give you 'noise' agents, which is not what you want. This way we had a pass in which every agent had a random grayscale value, which the compositors could use to pull more variation into the crowds. In the end a lot depended on the compositors to make it all work together, since the terrains were rendered with the Vue renderer, the hero characters with Mental Ray out of XSI and the crowds with PRMan. So out of Massive we had only the crowd agents, which were then composited into the terrain, itself rendered out in separate layers to achieve the best integration. Another thing to take into account when rendering out using this approach: do not render the terrain out of Massive, but do use it as a masking object to occlude agents that are behind the terrain.

Variation pass
Figure 6. Variation pass.
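The per-agent-versus-per-sample point is worth spelling out. A sketch (hypothetical helper, not Massive's agent-variable mechanism) of assigning the constant value once per agent:

```python
import random

def agent_variation_values(num_agents, lo=0.2, hi=0.8, seed=0):
    """One constant grayscale value per agent for the variation pass.

    The value is picked once per agent (here: seeded, so it is stable
    across frames) and handed to the constant shader at RIB level.
    Randomizing inside the shader instead would re-roll the value per
    shading sample and turn each agent into noise.
    """
    rng = random.Random(seed)
    return {agent_id: rng.uniform(lo, hi) for agent_id in range(num_agents)}

values = agent_variation_values(500)
print(min(values.values()) >= 0.2 and max(values.values()) <= 0.8)  # True
```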

Randomized Agent Setup
When doing a crowd simulation, one of the most important things is to take care not to give away similarity or patterns in your crowd. Next to the additional variation pass at the render/compositing stage, we tried to address this problem as much as possible by making variations of the character models and textures. We ended up having 2 different Viking base models, which both got 4 modeling sub-variations and 4 texture variations, plus 4 different weapon props. This was all combined into 1 agent which had some randomized agent variables tied into a Massive option node.
Looking at the movie, you might agree with me that the Viking crowd looks a bit more dynamic and randomized than the Egyptian one. This is solely due to the fact that we had 2 base models in the Viking crowd and only one in the Egyptian crowd. Though there were 2 model variations, 4 texture variations and different props assigned to those Egyptian warriors, this variation could not live up to having 2 different source models, each with its own sub-variations.
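For a sense of scale, the Viking variation space described above multiplies out nicely. A quick sketch (the counts are from the post; the option-node mechanics are Massive's, not this code):

```python
import itertools
import random

# Viking variation space: 2 base models, each with 4 modeling
# sub-variations, 4 texture variations and 4 weapon props.
COMBOS = list(itertools.product(range(2), range(4), range(4), range(4)))
print(len(COMBOS))  # 2 * 4 * 4 * 4 = 128 distinct looks

def random_agent_look(rng=random):
    """Pick one randomized look, as the option node's agent variables might."""
    base, model_var, tex_var, weapon = rng.choice(COMBOS)
    return {"base": base, "model": model_var,
            "texture": tex_var, "weapon": weapon}

print(random_agent_look())  # e.g. {'base': 1, 'model': 2, 'texture': 0, 'weapon': 3}
```

With only one base model, as in the Egyptian crowd, the space halves to 64 looks, which is why that crowd reads as more uniform.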

Of course, randomizing your animation is vital as well. Maybe even more than visual patterns, movement patterns are spotted very easily and give away your crowd quickly. So it's good to keep this in mind when preparing/directing your motion capture session: go for as much varied motion as possible. For example, capturing multiple different run cycles for the same run motion and randomizing those among your agents is a good way to go. Next to that, putting a slightly random variable onto your action playback speed gives really nice results too. As we hadn't much time to prepare our motion capture session, we came out of it with good but not quite enough data, so we went in, manually changed some of the basic run loops slightly, and randomized the playback rate in the agents' brains to get a more natural feel to the overall movement.

Directing the Agents' Movement and Flow
The most interesting interaction that had to be taken care of inside Massive was the terrain. We had trees on the terrain, rendered out of Vue, which had to be avoided by the agents but obviously were not present inside Massive. Something simple that helped a lot here was Massive's feature to put an image in your viewport's background. So I brought in the terrain and camera for the shot and put the Vue-rendered backplate image in as the viewport background, to start matching things up and to find out how to place/direct the agents. Next to that, some dummy geometry was placed on the terrain where the trees were supposed to be, to get them avoided nicely by the agents' vision.

In the end the brain made the agents do the following: follow the terrain topology, including banking and tilting them to the tangent; avoid each other based upon sound emission, and flock back together when wandering too far off; follow a flow field for the general running direction; get random actions applied and blended onto them; and randomize the playback rates of the actions for variety. Of course, lots of tweaking and testing was involved to get the crowd behaviour to work well, but I was very happy in the end to have used Massive for this: it provided some very good features to combine all these behaviours while keeping a lot of control over what you are doing, without the need for heavy programming. Figure 5 gives an overview of all the forces working on the agents, and as you can see it can get quite tricky, as you get opposing forces pushing them away from each other and pulling them back together, which makes it necessary to go in and tweak the behaviour on a per-shot basis.

Figure 5. Agent Forces.
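The push-and-pull between those forces can be illustrated with a toy weighted-steering sketch. This is not how Massive's brain nodes work internally; it just shows why opposing separation and cohesion terms need per-shot weight tweaking:

```python
def flow_field(pos):
    # hypothetical field: the general running direction is +x everywhere
    return (1.0, 0.0)

def steer(pos, vel, neighbours, w_flow=1.0, w_sep=0.5, w_coh=0.1):
    """Blend the competing behaviours as weighted steering forces."""
    fx, fy = flow_field(pos)
    sep_x = sep_y = 0.0
    cen_x = cen_y = 0.0
    for nx, ny in neighbours:
        sep_x += pos[0] - nx   # separation: push away from each neighbour
        sep_y += pos[1] - ny
        cen_x += nx            # cohesion: accumulate the group centre
        cen_y += ny
    n = max(len(neighbours), 1)
    coh_x = cen_x / n - pos[0]  # pull back towards the group centre
    coh_y = cen_y / n - pos[1]
    return (vel[0] + w_flow * fx + w_sep * sep_x + w_coh * coh_x,
            vel[1] + w_flow * fy + w_sep * sep_y + w_coh * coh_y)

# one neighbour to the left: flow and separation push right,
# cohesion pulls gently back left
print(steer((0.0, 0.0), (0.0, 0.0), [(-1.0, 0.0)]))
```

Nudge `w_sep` up and the crowd scatters; nudge `w_coh` up and it clumps, which is exactly the per-shot balancing act described above.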

One thing I did miss in Massive, though, is the ability to go in and direct a single agent, or a 'make agent independent and tweak brain' function. As all your agents, or at least groups of them, share the same brain, it is sometimes hard to deal with that annoying one that is not doing what it should. I ended up deleting those agents or changing their start positions to try to adjust their behaviour, which is a pretty tedious process; finally, some were even masked out in compositing.

As I mentioned before, integrating Massive into our project pipeline was one of the biggest challenges on the project. It is a great tool for what it does with artificial intelligence and crowd simulation, but it requires some investigation and testing to get it to work properly. I hope reading this might give you some directions on how to integrate it into your project. I do not claim this is the ultimate way, but it was surely close to the best possible with out-of-the-box solutions. Contacting some industry companies and speaking to other Massive users, it appeared that many of them develop in-house tools around it to fit it into their pipelines. Any comments or suggestions on this are very welcome.


Written by Jo Plaete

June 25, 2008 at 12:10 am

Path To War


After 8 weeks of hard work, this is the result:

hi res:

I was responsible for:
– All crowd simulation using Massive Prime
– The crowd rendering using Pixar RenderMan
– The creation, lighting and rendering of the natural environments
– The general pipeline setup for getting assets flowing between software
– The overall direction on the project

Software used:
Softimage XSI, Massive Prime, Autodesk Maya,
Pixar RenderMan, Pixologic ZBrush, Apple Shake,
Adobe After Effects, e-on Vue, Adobe Photoshop.

People on the project and their roles:
Jo Plaete
– general direction, pipeline, crowds, environments, prman rendering
Nic Groot Bluemink
– modeling, rigging, animation
Lee Baskerville
– modeling, texturing, concept design
Pedram Eatebarzadeh
– modeling, texturing, animation, concept design
Alkis Karaolis
– modeling, xsi rendering, cloth, animation
Inci Vatansever
– animation
Lars van der Bijl
– visual effects, compositing
Christopher Hoare

– visual effects, compositing

Brian Gair(music composer), Jon Mann(sound designer), Angel Perez Grandi(foley artist)

I’ll be writing a more in depth post on this later but I have some other projects to tackle first.


Written by Jo Plaete

March 18, 2008 at 9:56 am

Battle Field Crowd Project (in Progress)


Hello there,

Sorry for not posting for a while, but I'm extremely busy at the moment on my 3D animation course. Besides some essays and research projects, I'm working on our term 2 group project, in which we attempt to create a full CG battle scene (in 8 weeks). At the beginning of this term we all had to pitch some ideas to form groups around. Since I'm very interested in crowd simulations, I pitched the idea of a medieval epic battle, which would give me the opportunity to dig a little deeper into this subject. After my pitch some people seemed interested, and we formed an 8-person group around the project.

My role on the project is mainly technical direction; up until now I've been learning the Massive software to try to solve the crowd simulation at a high level. Massive was written at Weta Digital for generating the crowds of the Lord of the Rings feature films and has been used on a lot of films since. It is a great tool for what it does in terms of crowd simulation, but it's not the easiest tool to integrate into a pipeline (definitely not if you're new to it). Getting rig/animation data in there from XSI, for example, is not an easy task. On the other hand, the fuzzy logic it uses to form agents' individual (artificially intelligent) brains and let them make decisions based on various inputs is very clever. The crowd pass shading and rendering out of Massive will be done using Pixar's renderer PRMan on our render farm. Another thing to get familiar with, but very interesting of course.
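To give a flavour of that fuzzy-logic decision style: instead of hard if/else thresholds, inputs get graded memberships that blend into an output. A toy illustration (not Massive's actual brain nodes or API; rule names and numbers are made up):

```python
def ramp(x, lo, hi):
    """Fuzzy membership: the degree (0..1) to which x is 'high' on [lo, hi]."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def decide_speed(dist_to_threat):
    # two fuzzy rules: "threat is close -> run", "threat is far -> walk"
    far = ramp(dist_to_threat, 5.0, 20.0)
    close = 1.0 - far
    run_speed, walk_speed = 6.0, 1.5
    # defuzzify: weighted average of the two rule outputs
    return (close * run_speed + far * walk_speed) / (close + far)

print(decide_speed(2.0))   # 6.0: fully "close", the agent runs
print(decide_speed(30.0))  # 1.5: fully "far", the agent walks
print(decide_speed(12.5))  # 3.75: halfway, a blended jog
```

The appeal is that the transition between behaviours is smooth, so a whole crowd fed slightly different inputs reacts in slightly different degrees rather than flipping in unison.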

In addition to the crowd, I've been doing some pipeline setup for the project, getting geometry, cameras, rigs, animation data, etc. transferred between the various 3D packages; some cloth simulation using Maya nCloth for flags and clothes; terrain texturing; and a bit of general management of the project together with my colleagues.

Yesterday we also did a 5-hour motion capture session in the access mocap studio of our university, which I directed together with some other team members. Great to have those facilities at hand (Bournemouth NCCA!). This will provide me with animation data to fill up the Massive crowd motion tree and give our animators some good reference data/backup.

In total, we are 8 people doing this project: 3 character artists/animators, 1 full-time animator, 1 rigger, 2 compositors/effects artists and 1 technical director (me). It's superb working with such talented people on this kind of project. Even though it's a 'massive' project to pull off in 8 weeks (and we are learning a hell of a lot at the same time), it is a great exercise, and let's hope we can get a decent result out of it.

I can’t give more info about it at the moment but I can share a terrain test we did with. The terrain is modeled by Alkis Karaolis and I textured, vegetated, lit and rendered it myself. And a first very simple massive test with agents following a terrain and bump into each other.


So, I hope this gave you a little update about what I'm doing at the moment; be sure to check back for more in the future.
14th of March is the deadline. If we pull it off quite well ;) I will (of course) post the final version here and maybe try to write a little bit about how we did things.



Written by Jo Plaete

February 8, 2008 at 4:40 am

Crowd Simulation


A crowd simulation script for XSI. An export script made it easy for the animators to export their animations in the Point Oven format to a general database. The crowd simulation script then generates the crowd using the various animations. Some test renders:

Written by Jo Plaete

December 20, 2007 at 4:14 pm

Learning Houdini


I recently started learning SideFX Houdini, a very powerful 3D application for visual effects.

Some things I tried out using various learning resources:

Written by Jo Plaete

October 24, 2007 at 2:16 am

XSI script for placing geometry


During a scripting lesson I wrote a little XSI script for easily placing (lots of) geometry in a scene and putting it into a simulation.

A little rendered out test that is generated by the script is attached:

Written by Jo Plaete

October 24, 2007 at 1:36 am

Posted in scripting, simulation, XSI
