Jo Plaete

Visual Effects – 3D – Interactive Design

Archive for the ‘3D’ Category

Behavioural Particles

Lately, in my (fairly sparse) spare time, I've been having some fun with particles. It was mostly just playing around with different kinds in several 3D packages, but I thought I'd share the following combo with you.

The simulation basically consists of a pretty simple flocking setup: some wandering masters are chased by a bunch of predator boids, and all of them spawn a good amount of trailing particles which then get affected by several other forces. I kinda liked some of the patterns it formed; here are some stills:

It started out as a few tests with a particle renderer I had heard good things about, Krakatoa, trying out its fast point rendering ability. And if it says fast, it is fast (!): 7+ million particles with self-shadowing and 4-step motion blur at 720p render in just under 1 minute per frame on my 2.93 GHz dual-core laptop. It can also do voxel rendering, which is (obviously) a bit slower but for sure the better choice for more dense/smoky effects. Krakatoa is currently only available as a plugin for 3dsmax, but don't let that hold you back (unless you're a max user – win!) from giving it a go, as there are some kindly written exporters available for packages like Softimage (thanks Steven Caron) and Maya. This simulation was done using Softimage ICE.
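
If you fancy playing with the idea outside ICE, here is a very rough sketch of the chase-and-trail logic in plain Python. This is a toy of my own making, not the actual setup (which was built as an ICE tree); the numbers and forces are made up:

# Toy version: wandering masters, predator boids chasing them,
# and trailing particles pushed around by a crude vortex force.
import random, math

class Agent(object):
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx, self.vy = 0.0, 0.0

def step(masters, predators, trails, dt=0.04):
    # masters wander: small random steering impulses
    for m in masters:
        m.vx += random.uniform(-1, 1) * dt
        m.vy += random.uniform(-1, 1) * dt
        m.x += m.vx; m.y += m.vy
    # predators steer towards the nearest master
    for p in predators:
        target = min(masters, key=lambda m: (m.x - p.x) ** 2 + (m.y - p.y) ** 2)
        dx, dy = target.x - p.x, target.y - p.y
        d = math.hypot(dx, dy) or 1.0
        p.vx += dx / d * 2.0 * dt
        p.vy += dy / d * 2.0 * dt
        p.x += p.vx; p.y += p.vy
    # everyone spawns a trail particle; trails drift on a vortex force
    for a in masters + predators:
        trails.append(Agent(a.x, a.y))
    for t in trails:
        t.vx += -t.y * 0.1 * dt
        t.vy += t.x * 0.1 * dt
        t.x += t.vx; t.y += t.vy

masters = [Agent(random.uniform(-5, 5), random.uniform(-5, 5)) for i in range(3)]
predators = [Agent(random.uniform(-20, 20), random.uniform(-20, 20)) for i in range(12)]
trails = []
for frame in range(100):
    step(masters, predators, trails)
print("%d trail particles after 100 frames" % len(trails))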

I uploaded it to Vimeo; it's best viewed in HD but still loses a good amount of detail, so if you want the real deal you can download it here (720p – 370MB).

Jo

Written by Jo Plaete

March 3, 2010 at 2:29 pm

Posted in 3D, motion, Rendering, simulation, XSI

ICE Kinematics Helper Operator

Quick post to share something possibly useful when you’re working with Softimage and ICE.

As you may know, ICE is currently still limited to pointclouds/particles and doesn't officially support setting kinematic properties on objects in your scene yet (reading them works). That support would come in handy when using ICE for things like rigging, etc. You can already play with this by enabling a variable in your XSI environment:
set XSI_UNSUPPORTED_ICE_KINEMATICS=1
More info about this: http://www.xsi-blog.com/archives/280
For official support we'll have to await a future release, maybe Softimage 2011 (?)

In the meantime, if you do want to get some ICE data to drive object kinematics without enabling the unsupported feature, this operator might help you with that.

What it basically does is reverse the process: instead of writing position and rotation data from the ICE tree onto the object's kinematics, the C++ operator sits on the object and queries those attributes from the pointcloud's data array. This way the object constantly updates its position and rotation, mimicking the behaviour of a certain point in your pointcloud. Back in ICE you can do whatever you want with that point/particle, including reading kinematics from other objects in your scene.

I've compiled the operator for both Softimage 7.x and 2010, 64-bit (Windows):
link 7.x
link 2010

What do you need?
1. a pointcloud with at least 1 point
2. an object
3. the ICEkine operator installed (just load it as a usual plugin / put it in your workgroup)

How does it work?
To bind the operator to an object, use the following bit of Python code, which:
1. finds and defines both the pointcloud and the object
2. applies the operator

# Add a new object (e.g. a null), or find an existing one (see commented line)
myObject = Application.ActiveSceneRoot.AddNull( "null" )
# myObject = Application.ActiveSceneRoot.FindChild( "myObject" )
# Provide the pointcloud primitive ( .ActivePrimitive ! ) you want to remap from
pointcloudPrim = Application.ActiveSceneRoot.FindChild("pointcloud").ActivePrimitive
# Bind the operator to the global (or local) kinematics
op = Application.AddCustomOp( "ICEkine", myObject.Kinematics.Global, [pointcloudPrim], "ICEkineOp" )

Once this has successfully connected, you should find your object travelling along with the first point in your pointcloud. To connect the operator to another point of that cloud, you can open up the operator's property page and specify your desired point ID. Don't forget to initialize self.Orientation on your cloud, as the operator will look for it to drive the rotation.
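
If you'd rather set that point ID from script than through the PPG, something along these lines should work. Note that "PointID" is my assumed parameter name here; check the operator's property page for the actual one:

# retarget the operator to another particle from script
# ("PointID" is an assumed parameter name, verify it on the PPG)
op.Parameters("PointID").Value = 42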

As far as performance goes, I get 100 objects remapped while still running at 60fps (on my laptop), so for remapping 'a few nulls' it should be alright.

Again, this is just a temporary helper, but it might be useful until the ICE kine beast itself is unleashed. Use at your own risk.

Please let me know how you get on with it, especially in the (unlikely ;) case of buggy behaviour. I used it for a few applications/tests and it has worked fine for me so far. Also gotta thank my friend Nic Groot Bluemink for testing it!

have fun !

cheers

Jo

Written by Jo Plaete

December 12, 2009 at 4:06 pm

Posted in 3D, animation, scripting, TD, XSI

Speaking at Multi-Mania 2009

Hi there!

It's been a while since my last decent post, with the excuse of being crazy busy over the last few months (I know, that's not an excuse at all :) but anyway: news! I just arrived in lovely Flanders, on my way from London to Sydney where in June I will start working at Animal Logic.

So I (sadly) have to leave Framestore and London for a while... It has been a great time working there on cool projects like “Where The Wild Things Are”.

Where The Wild Things Are

Coming to the point of this post: I will be speaking at Multi-Mania 2009, a (free) new media conference hosting all kinds of topics and speakers, going from web, interactive design and flash/flex to 3D and game development. In my lecture I will talk about the film visual effects pipeline, covering some of the work I've been doing over the last few months, and further on I'll introduce the new Softimage ICE technology a bit to the 3D folks in there. Last week I gave two days of lectures at Bournemouth University's NCCA, introducing ICE to the masters in 3D computer animation, and (I think) they absolutely loved it, so I'm really looking forward to this one too!

It's at Kortrijk EXPO, Belgium, on the 18th and 19th of May; you can subscribe to the lectures via: http://www.multi-mania.be/2009/#/schedule/lecture/40

Well, if you’re around, hope to see you there !

Jo

Written by Jo Plaete

May 7, 2009 at 2:59 pm

Posted in 3D, animation, TD, vfx, visual fx, XSI

Using Adobe Flash inside Softimage XSI

In this post I will try to explain how to set up a connection between Softimage XSI and Adobe Flash. I came upon this idea when I was writing a procedural feather tool for Softimage XSI and wanted a more dynamic interface to control the feather system. To follow this post I assume you already have a good understanding of basic scripting in Softimage XSI using Python or VBScript, know what embedded JavaScript/VBScript in an HTML page is, and have a notion of how Adobe Flash and its ActionScript scripting language work.

Now, before we start, what can we use this for? In its simplest form, synoptic views. Using Flash, we can make them more interactive, dynamic and probably also more user friendly. Further on, we can build interactive user interfaces to control custom tools. I do believe these interfaces can be a step up from the regular synoptic, as they give much more freedom at the design stage and can interchange data with the XSI scene. Figure 1 shows a test of an embedded Flash synoptic to control a creature rig. A nice extra is that, since these interfaces are built up from vector images, the synoptic views are (re)sizeable.

See this in action in my xsi wing feather tool:
https://joplaete.wordpress.com/2008/08/08/wing-feather-tool-version-ii/

Figure 1. Flash synoptic for a griffin rig.

The basic prerequisite for setting up this connection is obviously getting communication between the Flash movie (living in the XSI netview web browser) and the XSI scene itself (through a scripting language). Ultimately you want to set up a sort of remoting between Flash ActionScript code and your XSI scripting language, which in my case is preferably Python. After researching this, I tried a few different approaches to get it to work, such as HTML JavaScript links, socket communication (Flash XMLSocket to Python), etc.

After some research, the best option to me was the ActionScript 3.0 ExternalInterface class, which enables you to call interfaces outside the Flash movie but embedded inside the HTML page that contains it. The nice thing is that next to JavaScript it can also call VBScript functions inside the HTML page, which solved the problem: this way I could have XSI-targeted VBScript code embedded straight inside the HTML page and called up directly through the external interface. It gets even better: the interface works both ways, so functions called from Flash inside XSI can return values, and you can even register a function callback inside VBScript, which makes event-based callbacks possible.

Practically, to make this work, all you have to do from your AS3.0 code is call the external interface using:

ExternalInterface.call('functionName', arguments)

Inside the HTML page you can then catch this call using a regular VBScript function inside a VBScript block of code:

<script language="VBScript">

Function functionName (argument)
    ' your xsi code
End Function

</script>

Now, to make this work we cannot just insert XSI commands as we would in the regular script editor. Instead we need to create an object of the XSI application to execute commands on:

Set XSI = CreateObject("XSI.Application")
Set app = XSI.Application

You can then execute your regular XSI functions like this:

app.LogMessage("--print out of flash: " & argument)

All right, so far so good, this prints data from flash into xsi. Cool.

Further on, as I like the Python scripting language, what I actually ended up doing is using the embedded VBScript bit to link through to a Python script living somewhere inside the project. This can be achieved using the ExecuteScript method. Interestingly, from the external interface call through the in-between VBScript we can still deliver all basic data types (strings, ints, ...) as well as arrays into Python. In the following example I pass an array filled with object names from Flash to Python through VBScript. See the inline code comments for a detailed explanation of each line:

Flash:

// Make your interface and add code for its interaction
import flash.external.ExternalInterface;

// make an array of object names
var selectionArray:Array = ['sphere1', 'sphere2', 'sphere3'];
// call the external interface
ExternalInterface.call('toPython', selectionArray);

VBScript embedded in the HTML page:

<script language="VBScript">

Set XSI = CreateObject("XSI.Application")
Set app = XSI.Application

' vbscript function which takes in the selection array
Function toPython(elementArray)
    ' wrap the incoming array as the parameter list for ExecuteScript
    Dim arrParams
    arrParams = Array(elementArray)
    ' call the external python function
    app.ExecuteScript "pythonScript.py", "Python", "selectElements", arrParams
End Function

</script>

Python:

app = Application

def selectElements(elementsList):
    # join the array into a ','-separated string for the selection command
    selectString = ','.join(elementsList)
    # select the objects by name
    app.SelectObj(selectString, '', 1)

If you add some save-key functionality built upon the same principles, this already gives you almost everything you need to build a synoptic view with all the features an 'old-style' synoptic would give you. Of course, the possibilities for interface building in Flash are endless, and once we add functionality that retrieves values from within the XSI scene into Flash, and adjusts values inside XSI from sliders built into our Flash interface, we take a step up from the normal synoptic. Requesting a value out of XSI is possible this way:

Flash:

function getValue(parameter:String):* {
    return ExternalInterface.call('getValue', parameter);
}

VBScript:

Function getValue(parameter)
    ' return the value collected by GetValue
    getValue = app.GetValue(parameter)
End Function
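
The save-key functionality mentioned above can route through to Python in exactly the same way. A minimal sketch, assuming you wire a matching VBScript function to a Python handler like this one (the handler name is just an example; SaveKey itself is the standard XSI command):

Python:

app = Application

def saveKey(paramName, frame):
    # set a key on the given parameter at the given frame
    app.SaveKey(paramName, frame)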

One thing that does not work is embedding this functionality in a normal XSI synoptic window, as that is apparently a stripped-down browser without Flash player support. Luckily the netview browser, which seems to be an embedded Internet Explorer, handles it without any problem, so it's as simple as dragging up one of those and pointing it to where your HTML page lives, which can be done as follows inside a scripted operator or just an XSI button:

Python:

import os

app = Application
projectPath = app.ActiveProject.Path
# build the path portably instead of relying on literal backslashes
synopticPath = os.path.join(projectPath, 'Synoptic', 'index.html')
app.LogMessage('OPEN NETVIEW SYNOPTIC: ' + synopticPath)
app.OpenNetView(synopticPath, True, True)

As you probably noticed, this will use the active project path to find the synoptic inside of it.

Figure 2. A more advanced Flash user interface for the XSI feather tool. A good example of how powerful this can get, as this one dynamically builds the interface around the feather system you generated / are working on.

We can conclude that there are definitely possibilities in using this technology to build richer user interfaces. Also, in my examples I've been running this locally, but of course nothing stops you from serving it from an online web source and that way making your tool/script easily available to your users. The downside of the story might be that you do need to know Flash pretty well to create nice, highly interactive interfaces, and the development time of these interfaces might go up; but if this results in more interactive and user-friendly interfaces, some people might definitely consider it.

Jo

Written by Jo Plaete

September 27, 2008 at 11:08 pm

Posted in 3D, scripting, TD, XSI

Wing Feather Tool Version II

Working further on the wing feather tool for XSI, I decided to partly rewrite the tool and enhance its approach and usability. This version introduces a guide feather approach to control the feather behaviour. It is still procedural in terms of generating your feathers and now gives you easier access for switching the interface between various feather layers.

Quick Tour:

In detail overview:
http://www.edjstudios.be/rigging/j_feather_tool_V2_inDetail/j_feather_tool_V2_inDetail.html

All is written in Python (XSI), ActionScript 3.0 (GUI) and VBScript as the link between those two.
Any feedback would be very welcome 🙂 !

Jo

Written by Jo Plaete

August 8, 2008 at 12:00 am

Posted in 3D, animation, scripting, TD, XSI

Notes on a Massive Crowd Simulation Pipeline

As promised, in this post I will talk about how to use Massive crowd simulation software in a project pipeline, based upon the experience I built up doing the Path To War group project, where I had to solve several crowd shots. I will not go in depth into how Massive works or agent brain building, nor explain every step I took in detail. Rather, I will pick out some specific issues and techniques I came across using the tool. This document assumes you know what Massive is, what it does and its internal workflow. Watch the Film.

Pipeline
Before I address some more specific issues, I'd like to sketch out the overall pipeline we set up to pull off the project, as after all one of the biggest challenges in using Massive was getting it integrated into our university pipeline and setting up a decent workflow with the other 3D tools. We had quite a lot of software involved on the project, with Softimage XSI at the core, Massive Prime for crowd simulation, E-On Vue for the natural environment creation/vegetation and Apple Shake as the 2D compositing tool. Next to those there were also Autodesk Maya and MotionBuilder for the motion capture pipeline and ZBrush as the character artists' sculpting/texturing tool. Because of the various software, we were rendering out of three different renderers: Mental Ray for XSI shots, RenderMan for crowds and the Vue internal renderer for the environments. The overall pipeline is shown schematically in figure 1.
Figure 1. Path To War overall pipeline.

In general the core of the pipeline was XSI: this is where the previs was done and the cameras and terrains got placed, to then be transferred over to the other tools. All the modeling, texturing and animation for the close-up hero shots was also done in XSI. However, the initial idea of having straightaway communication between XSI and Massive did not work out as well as planned. Multiple proofs of concept for bringing over rig and animation data had positive but also negative results, and proved in the end that unfortunately we could not rely on this. Also, as the project was on a very tight (8 week) deadline (including learning the tool, setting up the pipeline and finishing off), there was no time to develop custom tools to solve certain issues. Therefore we brought in Autodesk Maya, which Massive was initially written around (pipeline-wise), and used it as a platform between Massive and the other software.

Models and Textures
A fairly straightforward part of the pipeline was the transfer of the character/prop models and textures, as this worked perfectly fine by getting OBJ files with UV information for the accompanying textures out of XSI straight into Massive. To prevent crashing the engine or overloading the renderer with a big number of agents, we had the models in 3 levels of detail. Long shots used a low-polygon version of around 4000 polygons; more close-up/medium shots had a slightly more detailed model; and the hero shots (rendered out of XSI) had the full high-polygon model with ZBrush displacement maps, etc. It was important to keep the models fairly low-poly for Massive, but also to have the possibility of switching to a higher level of detail as agents came closer to the camera. This could be accomplished using the level-of-detail functionality inside Massive.

Motion Capture
As the project progressed we quickly found out we definitely needed to bring in motion capture, to fill up the Massive motion trees with realistic animation and let our animators concentrate on the hero acting shots in XSI. This brings me to an interesting bit: how the actual motion capture pipeline worked. After a few tests with our XSI-based rigs I decided to leave them for solely XSI purposes and build a new rig that would fit better into the Maya pipeline. After various tests and a look at the motion capture pipeline, it seemed most stable to reverse engineer the process: build the actual crowd rig in Massive itself and export it back out to Maya (.ma), from where it could go back and forth to MotionBuilder to apply the motion capture animation. Bringing that animation back into Maya and exporting it as a .ma for Massive made it very convenient to import actions for the agents. Once imported into Massive, I used the action editor to prepare the actions for use in the brain. Something that kept me busy for a bit and that I'd like to point out: remove the global translation and rotation curves from the action in the action editor, so that the brain can drive those based upon its calculations.
Motion Tree

Further, to get some hand-animated actions into Massive, we built FK shadow rigs in XSI which mimicked the Massive rigs we had in Maya. This way we were able to transfer XSI animation data FK-wise from the XSI rig through the shadow rigs into Maya, and from there into Massive. In the end we didn't really use this, as we chose to render all the hero characters out of XSI and composite them in with the crowds.

Camera
Because the terrain got rendered out of Vue, hero characters out of XSI and the crowd characters out of Massive, it was very important to get all geometry and cameras lined up. The best way to do this appeared to be taking the camera from XSI to Maya via the FBX format and then importing it into Massive from a .ma file (figure 3). Using an XSI-Vue plug-in, we were then able to take that same XSI camera into Vue and render out the background plates from the exact same perspective. The terrain geometry was imported into Massive using the OBJ format so the agents could interact with it, but only the agents were rendered out of Massive. To keep the simulation as light as possible, it was good to cut up the terrain and only import the parts needed for a given shot.

Figure 3. Camera pipeline XSI -> Maya -> Massive.

Rendering
For the rendering I took advantage of Massive's pass-based setup for RenderMan-compliant renderers. We exported the RIB files out of Massive and rendered them with PRMan. We did have to do some post-processing on the RIB files, though, to get them to render as we wanted: adding attributes and adapting them for our Linux-based renderfarm. Several Python scripts were developed to remove Windows path names and change the massive.dll reference into a .so one.
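
To give an idea, the path-fixing part of such a script can be as simple as the sketch below. The substitutions shown are examples of the kind of rules involved, not our exact farm setup:

# rough sketch of the RIB post-process: swap the Windows procedural
# reference and texture paths for their Linux equivalents
import glob

def fix_rib(path):
    rib = open(path).read()
    rib = rib.replace('\\', '/')                       # flatten Windows separators
    rib = rib.replace('massive.dll', 'libmassive.so')  # Linux procedural
    rib = rib.replace('C:/textures/', '/renderfarm/textures/')  # example path rule
    open(path, 'w').write(rib)

for ribfile in glob.glob('/renderfarm/ribs/*.rib'):
    fix_rib(ribfile)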

The passes produced out of Massive and PRMan were a diffuse pass (figure 4), a specular pass, ambient occlusion and a variation pass. Originally we tried depth-map shadows, but it was hard to get a decent pass out, so an ambient occlusion pass was needed to stick the agents to the ground when composited onto the background plates. Since ray-traced ambient occlusion on a lot of agents (300+) crashed the renderer, a script had to be developed to split up the AO RIB files; these were rendered out to multiple AO files per frame (figure 5) and combined back together in compositing.
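
The splitting itself boiled down to chopping the agent blocks into chunks and emitting one RIB per chunk. Very roughly, and assuming each agent sits in its own top-level AttributeBegin/AttributeEnd block (real RIBs need more careful parsing than this, and the chunk size is illustrative):

# toy version of the AO RIB splitter
CHUNK = 100  # agents per output RIB

def split_rib(lines):
    header, agents, footer = [], [], []
    current = None
    for line in lines:
        if line.startswith('AttributeBegin'):
            current = [line]
        elif current is not None:
            current.append(line)
            if line.startswith('AttributeEnd'):
                agents.append(current)
                current = None
        elif agents:
            footer.append(line)   # everything after the agents (WorldEnd etc.)
        else:
            header.append(line)   # everything before the first agent
    return header, agents, footer

header, agents, footer = split_rib(open('ao_frame.rib').readlines())
for i in range(0, len(agents), CHUNK):
    out = open('ao_frame_part%03d.rib' % (i // CHUNK), 'w')
    out.writelines(header)
    for agent in agents[i:i + CHUNK]:
        out.writelines(agent)
    out.writelines(footer)
    out.close()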

Figure 4. Render passes.

Another pass, which we called the variation pass (figure 6), was pretty interesting and helped the final output a lot. Since we didn't have much time to create a big number of character variations, we used this pass to pull more variation into the crowd in compositing. Technically it's just a constant shader which gets passed a random value between 0.2 and 0.8, defined by a Massive agent variable. Make sure to pass that random value on from the agent, and so at RIB level, as randomizing inside the shader would give you 'noisy' agents, which is not what you want. This way we had a pass in which every agent carries a random grayscale value, which the compositors could use to pull more variation into the crowds. In the end a lot depended on the compositors to make it all work together, since the terrains were rendered with the Vue renderer, the hero characters with Mental Ray out of XSI and the crowds with PRMan. So out of Massive we had only the crowd agents, which were then composited onto the terrain, itself rendered in separate layers to achieve the best integration. Another thing to take into account when rendering this way is not to render the terrain out of Massive, but to use it as a masking object to occlude agents that are behind the terrain.

Figure 6. Variation pass.
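
In comp terms the trick boils down to using that grayscale value as a per-agent gain. An illustrative per-pixel version in Python (the real work was of course done in Shake, and the remap amount is made up):

def apply_variation(crowd_rgb, variation, amount=0.4):
    # variation is the agent's random gray value in [0.2, 0.8];
    # remap it to a multiplier around 1.0 and scale the crowd colour
    gain = 1.0 + (variation - 0.5) * amount
    return [channel * gain for channel in crowd_rgb]

print(apply_variation([0.6, 0.5, 0.4], 0.8))  # a slightly brightened agent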

Randomized Agent Setup
When doing a crowd simulation, one of the most important things is to take care not to give away similarity or patterns in your crowd. Next to the additional variation pass at the render/compositing stage, we addressed this as much as possible by making variations of the character models and textures. We ended up with 2 different Viking base models, which both got 4 modeling sub-variations and 4 texture variations, plus 4 different weapon props. This was all combined into 1 agent with some randomized agent variables tied into a Massive option node.
Looking at the movie, you might share my opinion that the Viking crowd looks a bit more dynamic and randomized than the Egyptian one. This is solely due to the fact that we had 2 base models in the Viking crowd and only 1 in the Egyptian crowd. Though there were 2 model variations, 4 texture variations and different props assigned to those Egyptian warriors, that variation could not live up to having 2 different source models, each with this sub-variation.

Of course, randomizing your animation is vital as well. Maybe even more than visual patterns, movement patterns are spotted very easily and give away your crowd quickly. So it's good to keep this in mind when preparing/directing your motion capture session: go for as much varied motion as possible. For example, capturing multiple different run cycles for the same run motion and randomizing those between your agents is a good way to go. Next to that, putting a slightly random variable onto your action playback speed gives really nice results too. As we hadn't much time to prepare our motion capture session, we came out of it with good but slightly insufficient data, so we went in, manually changed some of the basic run loops slightly, and randomized the playback rate in the agents' brains to get a more natural feel to the overall movement.

Directing the Agents' Movement and Flow
The most interesting interaction that had to be taken care of inside Massive was with the terrain. We had trees on the terrain, rendered out of Vue, which had to be avoided by the agents but obviously were not present inside Massive. Something simple which helped a lot here was Massive's feature to put an image in your viewport background. So I brought in the terrain and camera for the shot and put the Vue-rendered backplate image in the viewport background, to start matching things up and find out how to place/direct the agents. Next to that, some dummy geometry was placed on the terrain where the trees were supposed to be, to get them avoided nicely by the agents' vision.

In the end the brain made the agents do the following: follow the terrain topology, including banking and tilting them to the tangent; avoid each other based upon sound emission, and flock back together when wandering too far off; follow a flow field for the general running direction; get random actions applied and blended onto them; and randomize the playback rates of the actions for variety. Of course, lots of tweaking and testing was involved to get the crowd behaviour to work well, but I was very happy in the end to have used Massive for this; it definitely provided some very good features for combining all these behaviours while keeping a lot of control over what you are doing, without the need for heavy programming. Figure 5 gives an overview of all forces working on the agents, and as you see it can get quite tricky: you get opposite forces pushing them away from each other and pulling them back together, which makes it necessary to go in and tweak the behaviour on a per-shot basis.

Figure 5. Agent forces.

One thing I did miss in Massive, though, is the ability to go in and direct a single agent, or a 'make agent independent and tweak brain' function. As all your agents, or at least groups of them, share the same brain, it is sometimes hard to deal with that annoying one that is not doing what it should. I ended up deleting those agents or changing their start position to try to adjust their behaviour, which is a pretty tedious process; some were finally even masked out in compositing.

As I mentioned before, integrating Massive into our project pipeline was one of the biggest challenges on the project. It is a great tool for what it does with artificial intelligence and crowd simulation, but it requires some investigation and testing to get it to work properly. I hope reading this gives you some directions on how to integrate it in your own project. I do not claim this is the ultimate way, but it was surely close to the best way with the out-of-the-box solutions. Contacting some industry companies and speaking to other Massive users, it appeared that many of them develop in-house tools around it to fit it into their pipelines. Any comments or suggestions on this are very welcome.

Jo

Written by Jo Plaete

June 25, 2008 at 12:10 am

NCCA Symposium – Renderman Presentation

For the people who attended my talk at the NCCA symposium, and anyone else who is interested, here is my presentation together with some scenes, shaders and scripts I talked about. It concerns a global overview of RenderMan rendering.

Presentation pdf
Additional material

Jo

Written by Jo Plaete

May 20, 2008 at 12:28 pm