Jo Plaete

Visual Effects – 3D – Interactive Design

Archive for the ‘animation’ Category

ICE Kinematics Helper Operator

leave a comment »

Quick post to share something possibly useful when you’re working with Softimage and ICE.

As you may know, ICE is currently still limited to pointclouds/particles only and doesn’t officially support setting kinematic properties on objects in your scene yet (reading them works). This would come in handy when using ICE for things like rigging, etc. You can already play with this by enabling a variable in your xsi environment:
set XSI_UNSUPPORTED_ICE_KINEMATICS=1
More info about this:
For the official support we’ll have to await a future release, maybe Softimage 2011 (?)

In the meantime, if you do want to get some ICE data to drive object kinematics without enabling the unsupported feature, this operator might help you with that.

What it basically does is reverse the process: instead of writing position and rotation data from the ICE tree onto the object’s kinematics, the C++ operator sits on the object and queries those attributes from the pointcloud’s data array. This way the object constantly updates its position and rotation, mimicking the behaviour of a given point in your pointcloud. Back in ICE you can do whatever you want with the point/particle, including reading kinematics from other objects in your scene, etc.
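Conceptually, the pull-based evaluation described above can be sketched in a couple of lines; the names and data structures here are purely illustrative, not the actual C++ operator API:

```python
# Conceptual sketch of what the operator does each evaluation: instead of
# ICE pushing data onto the object, the operator (bound to the object's
# kinematics) pulls position and orientation for one point out of the
# cloud's data arrays.
def evaluate_icekine(cloud_positions, cloud_orientations, point_id):
    """Return the (position, orientation) the bound object should take."""
    return cloud_positions[point_id], cloud_orientations[point_id]
```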

I’ve compiled the operator for both Softimage 7.x and 2010, 64-bit (Windows):
link 7.x
link 2010

What do you need ?
1. a pointcloud with at least 1 point
2. an object
3. the ICEkine operator installed (just load it as a usual plugin/put it in your workgroup)

How does it work ?
To bind the operator to an object use the following bit of python code which:
1. finds and defines both the pointcloud and the object
2. applies the operator

# Add a new object (e.g. a null), or find an existing one
myObject = Application.ActiveSceneRoot.AddNull( "null" )
# myObject = Application.ActiveSceneRoot.FindChild( "myObject" )
# Provide the pointcloud primitive ( .ActivePrimitive ! ) you want to remap from
pointcloudPrim = Application.ActiveSceneRoot.FindChild("pointcloud").ActivePrimitive
# Bind the operator to the global or local kinematics
op = Application.AddCustomOp( "ICEkine", myObject.Kinematics.Global, [pointcloudPrim], "ICEkineOp" )

Once this has connected successfully you should find your object travelling along with the first point in your pointcloud. To connect the operator to another point of that cloud, open up the operator’s property page and specify your desired point ID. Don’t forget to initialize self.Orientation on your cloud, as the operator will look for it to drive the rotation.

As far as performance goes, I get 100 objects remapped while still running at 60fps (on my laptop), so for remapping ‘a few nulls’ it should be alright.

Again, this is just a temporary helper, but it might be useful until the ICE kine beast itself is unleashed. Use at your own risk.

Please let me know how you get on with it, especially in the (unlikely ;) case of buggy behaviour. I’ve used it for a few applications/tests and it has worked fine for me so far. I also have to thank my friend Nic Groot Bluemink for testing it!

have fun !



Written by Jo Plaete

December 12, 2009 at 4:06 pm

Posted in 3D, animation, scripting, TD, XSI

Speaking at Multi-Mania 2009

with 3 comments

Hi there!

It’s been a while since my last decent post, with the excuse of being crazy busy over the last few months (I know that’s not an excuse at all :) but anyway, news! I just arrived in lovely Flanders on my way from London to Sydney, where I will start working at Animal Logic in June.

So I (sadly) have to leave Framestore and London for a while.. It has been a great time working there on cool projects like “Where The Wild Things Are”.

Where The Wild Things Are


Coming to the point of this post: I will be speaking at Multi-Mania 2009, a (free) new media conference hosting all kinds of topics and speakers, from web, interactive design and flash/flex to 3D and game development. In my lecture I will talk about the film visual effects pipeline, covering some of the work I’ve been doing over the last few months, and I’ll also introduce the new Softimage ICE technology a bit to the 3D folks there. Last week I gave two days of lectures at Bournemouth University’s NCCA introducing ICE to the masters students in 3D computer animation, and (I think) they absolutely loved it. So I’m really looking forward to this one too!

It’s at Kortrijk EXPO, Belgium, on the 18th-19th of May; you can subscribe to the lectures via:

Well, if you’re around, hope to see you there !


Written by Jo Plaete

May 7, 2009 at 2:59 pm

Posted in 3D, animation, TD, vfx, visual fx, XSI

Wing Feather Tool Version II

with 33 comments

Working further on the wing feather tool for XSI, I decided to partly rewrite the tool and enhance its approach and usability. This next version introduces a guide feather approach to control the feather behaviour. It is still procedural in terms of generating your feathers, and now gives you easier access to switch the interface between the various feather layers.

Quick Tour:

In detail overview:

Everything is written in Python (XSI) and ActionScript 3.0 (GUI), with VBScript as the link between the two.
Any feedback would be very welcome 🙂 !


Written by Jo Plaete

August 8, 2008 at 12:00 am

Posted in 3D, animation, scripting, TD, XSI

Notes on a Massive Crowd Simulation Pipeline

with 41 comments

As promised, in this post I will talk about how to use the Massive crowd simulation software in a project pipeline, based upon the experience I built up on the Path To War group project, where I had to solve several crowd shots. I will not go in depth into how Massive works or agent brain building, nor explain every step I took in detail. Rather, I will pick out some specific issues and techniques I came across using the tool. This document assumes you know what Massive is, what it does, and its internal workflow. Watch the Film.

Before I address some more specific issues, I’d like to sketch out the overall pipeline we set up to pull off the project, as one of the biggest challenges in using Massive was getting it integrated into our university pipeline and setting up a decent workflow with the other 3D tools. We had quite a lot of software involved on the project, with Softimage XSI at the core, Massive Prime for crowd simulation, E-on Vue for the natural environment creation/vegetation and Apple Shake as the 2D compositing tool. Next to those there were also Autodesk Maya and MotionBuilder for the motion capture pipeline, and ZBrush as the character artists’ sculpting/texturing tool. Because of the various software we were rendering out of three different renderers: Mental Ray for the XSI shots, RenderMan for the crowds and the Vue internal renderer for the environments. The overall pipeline is shown schematically in figure 1.
Figure 1. Path To War overall pipeline.

In general, the core of the pipeline was XSI; this is where the previs was done and where the cameras and terrains got placed before being transferred over to the other tools. All the modeling, texturing and animation of the close-up hero shots was also done in XSI. However, the initial idea of having straightforward communication between XSI and Massive did not work out as well as planned. Multiple proofs of concept for bringing over rig and animation data had various positive but also negative results, and proved in the end that unfortunately we could not rely on this. Also, as the project was on a very tight (8-week) deadline (including learning the tool, setting up the pipeline and finishing off), there was no time to develop custom tools to solve certain issues. Therefore we brought in Autodesk Maya, which Massive was initially written around (pipeline-wise), and used it as a platform in between Massive and the other software.

Models and Textures
A fairly straightforward part of the pipeline was the transfer of the character/prop models and textures, as this worked perfectly fine by getting OBJ files, with UV information for the accompanying textures, out of XSI straight into Massive. To prevent crashing the engine or overloading the renderer with a large number of agents, we had the models in three levels of detail. For long shots we had a low-polygon version of around 4000 polygons. More close-up/medium shots had a slightly more detailed model, and the hero shots (rendered out of XSI) had the full high-polygon model with ZBrush displacement maps, etc. It was important to keep the models fairly low-poly for Massive, but also to have the possibility of switching to a higher level of detail as agents came closer to the camera. This could be accomplished using the level-of-detail functionality inside Massive.
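The level-of-detail switch described above can be sketched in a few lines. The distance thresholds below are made-up illustration values; inside Massive this is handled by its built-in level-of-detail functionality rather than custom code:

```python
import math

def pick_lod(agent_pos, camera_pos, near=20.0, far=100.0):
    """Pick a model resolution from the agent's distance to the camera.
    The near/far thresholds (scene units) are hypothetical; tune per shot."""
    dist = math.dist(agent_pos, camera_pos)
    if dist < near:
        return "high"    # full high-polygon hero model
    if dist < far:
        return "medium"  # slightly more detailed medium-shot model
    return "low"         # ~4000-polygon long-shot model
```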

Motion Capture
As the project progressed we quickly found out we definitely needed to bring in motion capture, to fill up the Massive motion trees with realistic animation and let our animators concentrate on the hero acting shots in XSI. This brings me to an interesting bit: how the actual motion capture pipeline worked. After a few tests with our XSI-based rigs, I decided to leave them solely for XSI purposes and build a new rig that would fit better into the Maya pipeline. After various tests it seemed most stable to reverse engineer the process: build the actual crowd rig in Massive itself and export it back out to Maya (.ma), from which it could then go back and forth to MotionBuilder to apply the motion capture animation. Bringing that animation back into Maya and exporting it as a .ma for Massive made it very convenient to import actions for the agents. Once imported into Massive, I used the action editor to prepare the actions for use in the brain. Something that kept me busy for a bit, and which I’d like to point out, is removing the global translation and rotation curves from the action in the action editor, so that the brain can drive those based upon its own calculations.
Motion Tree

Further, to get some hand-animated actions into Massive, we built some FK shadow rigs in XSI which mimicked the Massive rigs we had in Maya. This way we were able to transfer XSI animation data, FK-wise, from the XSI rig through the shadow rigs into Maya and from there into Massive. In the end we didn’t really use this, as we chose to render all the hero characters out of XSI and composite them in with the crowds.

Because the terrain got rendered out of Vue, the hero characters out of XSI and the crowd characters out of Massive, it was very important to get all geometry and cameras lined up. The best way to do this appeared to be to take the camera from XSI to Maya via the FBX format and then import it into Massive from a .ma file (figure 3). Using an XSI-Vue plug-in we were then able to take that same XSI camera into Vue and render out the background plates from the exact same perspective. The terrain geometry was imported into Massive using the OBJ format so the agents could interact with it, but only the agents were rendered out of Massive. To keep the simulation as light as possible, it was good to cut up the terrain and only import the parts needed for a given shot.

Figure 3. Camera pipeline xsi->Maya->Massive.

For the rendering I took advantage of Massive’s passes-based setup for RenderMan-compliant renderers. We exported the RIB files out of Massive and rendered them with PRMan. We did have to do some post-processing on the RIB files, though, to get them to render the way we wanted: adding attributes and adapting the files for our Linux-based renderfarm. Several Python scripts were developed to remove Windows path names and change the massive.dll reference into a .so one.
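The actual scripts were production-specific, but the gist of that RIB clean-up can be sketched like this; the /renderfarm mount point and the exact substitution patterns are hypothetical:

```python
import re

def fix_rib_line(line):
    """Adapt one line of a Windows-generated RIB file for a Linux farm."""
    # Replace Windows drive prefixes with a (hypothetical) farm mount point
    line = re.sub(r'[A-Za-z]:[\\/]', '/renderfarm/', line)
    # Backslash path separators become forward slashes
    line = line.replace('\\', '/')
    # Point the procedural at the shared-object build of the Massive plug-in
    line = line.replace('massive.dll', 'massive.so')
    return line

def fix_rib_file(src, dst):
    with open(src) as fin, open(dst, 'w') as fout:
        for line in fin:
            fout.write(fix_rib_line(line))
```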

The passes produced out of Massive/PRMan were a diffuse pass (figure 4), a specular pass, ambient occlusion and a variation pass. Originally we tried depth map shadows, but it was hard to get a decent pass out, so an ambient occlusion pass was needed to stick the agents to the ground when composited onto the background plates. Since ray-traced ambient occlusion on a lot of agents (300+) crashed the renderer, a script had to be developed to split up the AO RIB files, which were then rendered out to multiple AO files per frame (figure 5) and combined back together in compositing.
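The splitting boils down to chunking the agent list so each AO RIB ray-traces a manageable number of agents; the chunk size of 100 below is illustrative, and the real script operated on the RIB text itself rather than id lists:

```python
def split_agents(agent_ids, max_per_rib=100):
    """Split a list of agent ids into chunks small enough to raytrace;
    each chunk is rendered to its own AO image and recombined in comp."""
    return [agent_ids[i:i + max_per_rib]
            for i in range(0, len(agent_ids), max_per_rib)]
```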

Render Passes

Figure 4. Passes.

Another pass, which we called the variation pass (figure 6), was pretty interesting and helped the final output a lot. Since we didn’t have much time to create a large number of character variations, we used this pass to pull more variation into the crowd in compositing. Technically it’s just a constant shader which gets passed a random value between 0.2 and 0.8, defined by a Massive agent variable. Make sure to pass the random value on from the agent at the RIB level, as randomizing inside the shader would give you ‘noise’ agents, which is not what you want. This way we had a pass in which every agent had a random grayscale value which the compositors could use to pull more variation into the crowds. In the end a lot depended on the compositors to make it all work together, since the terrains were rendered with the Vue renderer, the hero characters with Mental Ray out of XSI and the crowds with PRMan. So out of Massive we had only the crowd agents, which were then composited onto the terrain, itself rendered out in separate layers to achieve the best integration. Another thing to take into account when rendering this way is not to render the terrain out of Massive, but to use it as a masking object to occlude agents that are behind the terrain.
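In miniature, the variation pass logic looks like this. The 0.2-0.8 range is the one described above; the per-agent seeding scheme is just my way of showing why the value must be constant per agent (randomizing per shading sample is what produces the ‘noise’ agents mentioned):

```python
import random

def agent_variation(agent_id, lo=0.2, hi=0.8):
    """One stable grayscale value per agent, constant over the whole shot."""
    rng = random.Random(agent_id)  # seed by agent id: deterministic per agent
    return lo + rng.random() * (hi - lo)
```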

Variation pass
Figure 6. Variation pass.

Randomized Agent Setup
When doing a crowd simulation, one of the most important things is to take care not to give away similarity or patterns in your crowd. Next to the additional variation pass at the render/compositing stage, we tried to address this problem as much as possible by making variations of the character models and textures. We ended up having 2 different Viking base models, which each got 4 modeling sub-variations and 4 texture variations, plus 4 different weapon props. This was all combined into 1 agent which had some randomized agent variables tied into a Massive option node.
Looking at the movie, you might share my opinion that the Viking crowd looks a bit more dynamic and randomized than the Egyptian one. This is solely due to the fact that we had 2 base models in the Viking crowd and only 1 in the Egyptian crowd. Though there were 2 model variations, 4 texture variations and different props assigned to those Egyptian warriors, this variation could not live up to having 2 different source models each carrying that same sub-variation.
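A sketch of that randomized setup, using the 2/4/4/4 counts from the text; in Massive this was driven by randomized agent variables tied into an option node rather than Python:

```python
import random

def build_viking(agent_id):
    """Pick a stable asset combination for one agent."""
    rng = random.Random(agent_id)  # seeded so the choice is stable per agent
    return {
        "base_model": rng.randrange(2),  # 2 Viking base models
        "model_var":  rng.randrange(4),  # 4 modeling sub-variations
        "texture":    rng.randrange(4),  # 4 texture variations
        "weapon":     rng.randrange(4),  # 4 weapon props
    }
```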

Of course, randomizing your animation is vital as well. Maybe even more than visual patterns, movement patterns are spotted very easily and give away your crowd quickly. So it’s good to keep this in mind when preparing/directing your motion capture session, and go for as much varied motion as possible. For example, capturing multiple different run cycles for the same run motion and randomizing those between your agents is a good way to go. Next to that, putting a slightly random variable onto your action playback speed gives really nice results too. As we didn’t have much time to prepare our motion capture session, we came out of it with good but slightly insufficient data, so we went in, manually altered some of the basic run loops slightly and randomized the playback rate in the agents’ brains to get a more natural feel to the overall movement.
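The playback-rate randomization can be sketched as a stable per-agent multiplier on the shared cycle; the +/-10% spread is an illustrative figure, not the value used in the actual brains:

```python
import random

def playback_rate(agent_id, base_rate=1.0, spread=0.1):
    """A stable, slightly randomized playback rate for one agent's actions."""
    rng = random.Random(agent_id)  # deterministic per agent
    return base_rate * (1.0 + rng.uniform(-spread, spread))
```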

Directing the Agents Movement and Flow
The most interesting interaction that had to be taken care of inside Massive was with the terrain. We had trees on the terrain, rendered out of Vue, which had to be avoided by the agents but obviously were not present inside Massive. Something simple that helped a lot here was Massive’s feature to put an image in your viewport background. So I brought in the terrain and camera for the shot and put the Vue-rendered backplate image in the viewport background, to start matching things up and to figure out how to place/direct the agents. Next to that, some dummy geometry was placed on the terrain where the trees were supposed to be, so the agents’ vision would avoid them nicely.

In the end the brain made the agents do the following: follow the terrain topology, including banking and tilting them to the tangent; avoid each other based upon sound emission, and flock back together when wandering too far off; follow a flow field for the general running direction; get random actions applied and blended onto them; and randomize the playback rates of the actions for variety. Of course, lots of tweaking and testing was involved to get the crowd behaviour to work well, but I was very happy in the end to have used Massive for this; it provides some very good features to combine all these behaviours while keeping a lot of control over what you are doing, without the need for heavy programming. Figure 5 gives an overview of all the forces working on the agents, and as you can see it can get quite tricky, as you get opposite forces pushing them away from each other and pulling them back together, which makes it necessary to go in and tweak the behaviour on a per-shot basis.

Figure 5. Agent forces.
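Those competing forces amount to a weighted sum of steering vectors. This 2D sketch (flow field, separation, cohesion, with hypothetical weights) shows why opposing terms need per-shot tuning; it is a toy model, not Massive’s actual brain logic:

```python
def steer(pos, neighbours, flow_dir, w_flow=1.0, w_sep=1.5, w_coh=0.5):
    """Blend a flow-field direction with separation (push away from close
    neighbours) and cohesion (pull back towards the flock centre)."""
    sep = [0.0, 0.0]
    coh = [0.0, 0.0]
    for n in neighbours:
        sep[0] += pos[0] - n[0]  # push away from each neighbour
        sep[1] += pos[1] - n[1]
        coh[0] += n[0]
        coh[1] += n[1]
    if neighbours:
        # vector from the agent towards the flock centre
        coh = [coh[0] / len(neighbours) - pos[0],
               coh[1] / len(neighbours) - pos[1]]
    return [w_flow * flow_dir[0] + w_sep * sep[0] + w_coh * coh[0],
            w_flow * flow_dir[1] + w_sep * sep[1] + w_coh * coh[1]]
```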

One thing I did miss in Massive, though, is the ability to go in and direct a single agent, or a ‘make agent independent and tweak brain’ function. As all your agents, or at least groups of them, share the same brain, it is sometimes hard to deal with that annoying one that is not doing what it should. I ended up deleting those agents or changing their start positions to try to adjust their behaviour, which is a pretty tedious process; in the end some were even masked out in compositing.

As I mentioned before, integrating Massive into our project pipeline was one of the biggest challenges on the project. It is a great tool for what it does with artificial intelligence and crowd simulation, but it requires some investigation and testing to get it to work properly. I hope reading this gives you some direction on how to integrate it into your project. I do not claim this is the ultimate way, but it was surely close to the best way with out-of-the-box solutions. Contacting some industry companies and speaking to other Massive users, it appeared that many of them develop in-house tools around it to fit it into their pipelines. Any comments or suggestions on this are very welcome.


Written by Jo Plaete

June 25, 2008 at 12:10 am

FMX 2008

leave a comment »

Just got back from a week in sunny Stuttgart (aka the city where every car is a Porsche, Mercedes or BMW), where I attended the FMX 2008 conference on animation, effects and games together with some of my coursemates.

left to right: Pedram, Me, Dapeng

We had a great time over there, with lots of interesting lectures and speakers and the opportunity to network and get to know people, companies and their visions. Definitely worth the trip! But now we’re back in Bournemouth, back for more!


Written by Jo Plaete

May 14, 2008 at 2:38 am

Posted in 3D, animation, vfx

Path To War

with one comment

After 8 weeks of hard work, this is the result:

hi res:

I was responsible for:
– All crowd simulation using Massive Prime
– The crowd rendering using pixar Renderman
– The creation, lighting and rendering of the natural environments
– The general pipeline setup for getting assets flowing inbetween software
– The overall direction on the project

Software used:
Softimage XSI, Massive Prime, Autodesk Maya,
Pixar Renderman, Pixologic ZBrush, Apple Shake,
Adobe After Effects, e-on Vue, Adobe Photoshop.

People on the project and their roles:
Jo Plaete
– general direction, pipeline, crowds, environments, prman rendering
Nic Groot Bluemink
– modeling, rigging, animation
Lee Baskerville
– modeling, texturing, concept design
Pedram Eatebarzadeh
– modeling, texturing, animation, concept design
Alkis Karaolis
– modeling, xsi rendering, cloth, animation
Inci Vatansever
– animation
Lars van der Bijl
– visual effects, compositing
Christopher Hoare

– visual effects, compositing

Brian Gair(music composer), Jon Mann(sound designer), Angel Perez Grandi(foley artist)

I’ll be writing a more in depth post on this later but I have some other projects to tackle first.


Written by Jo Plaete

March 18, 2008 at 9:56 am

Battle Field Crowd Project (in Progress)

leave a comment »

Hello there,

Sorry for not posting for a while, but I’m extremely busy at the moment on my 3D animation course. Besides some essays and research projects, I’m working on our term 2 group project, in which we attempt to create a full CG battle scene (in 8 weeks). At the beginning of this term we all had to pitch ideas to form groups around. Since I’m very interested in crowd simulation, I pitched the idea of a medieval epic battle, which would give me the opportunity to dig a little deeper into the subject. After my pitch some people seemed interested, and we formed an 8-person group around the project.

My role on the project is mainly technical direction; up until now I’ve been learning the Massive software to try to solve the crowd simulation at a high level. Massive was written at Weta Digital for generating the crowds of the Lord of the Rings films and has been used on a lot of films since. It is a great tool for what it does in terms of crowd simulation, but it’s not the easiest tool to integrate into a pipeline (definitely not if you’re new to it). Getting rig/animation data in there from XSI, for example, is not an easy task. On the other hand, the fuzzy logic it uses to form the agents’ individual (artificially intelligent) brains and let them make decisions from various inputs is very clever. The crowd pass shading and rendering out of Massive will be done using Pixar’s renderer PRMan on our renderfarm. Another thing to get familiar with, but very interesting of course..

In addition to the crowds, I’ve been doing some pipeline setup for the project: getting geometry, cameras, rigs, animation data, etc. transferred between the various 3D packages; some cloth simulation using Maya nCloth for flags and clothes; terrain texturing; and a bit of general management of the project together with my colleagues.

Yesterday we also did a 5 hours motion capture session in the access mocap studio of our university which I directed together with some other team members. Great to have those facilities at hand (Bournemouth NCCA!). This will provide me with animation data to fill up the massive crowd motion tree and give our animators some good reference data/backup.

In total, there are 8 of us on this project: 3 character artists/animators, 1 full-time animator, 1 rigger, 2 compositors/effects artists and 1 technical director (me). It’s superb working with such talented people on this kind of project. Even though it’s a ‘massive’ project to pull off in 8 weeks (and we are learning a hell of a lot at the same time), it is a great exercise, and let’s hope we can get a decent result out of it.

I can’t give more info about it at the moment, but I can share a terrain test we did. The terrain was modeled by Alkis Karaolis, and I textured, vegetated, lit and rendered it myself. There’s also a first, very simple Massive test, with agents following a terrain and bumping into each other.


So, I hope this gave you a little update about what I’m doing at the moment; be sure to check back for more in the future.
The 14th of March is the deadline; if we pull it off quite well ;) I will (of course) post the final version here and maybe try to write a little about how we did things.



Written by Jo Plaete

February 8, 2008 at 4:40 am