Jo Plaete

Visual Effects – 3D – Interactive Design

Using Adobe Flash inside Softimage XSI

with 13 comments

In this post I will explain how to set up a connection between Softimage XSI and Adobe Flash. I came to this idea when I was writing a procedural feather tool for Softimage XSI and wanted a more dynamic interface to control the feather system. To follow this post I assume you already have a good understanding of basic scripting in Softimage XSI using Python or VBScript, know what embedded JavaScript/VBScript in an HTML page is, and have a notion of how Adobe Flash and its ActionScript scripting language work.

Now, before we start, what can we use this for? In its simplest form, we can use it for synoptic views. Using Flash, we can make them more interactive, dynamic and probably also more user friendly. Beyond that, we can build interactive user interfaces to control custom tools. I do believe these interfaces can be a step up from the regular synoptic, as they give much more freedom at the design stage and can exchange data with the XSI scene in both directions. Figure 1 shows a test of an embedded Flash synoptic controlling a creature rig. A nice extra is that, because these interfaces are built from vector graphics, the synoptic views are resizable.

See this in action in my xsi wing feather tool:
https://joplaete.wordpress.com/2008/08/08/wing-feather-tool-version-ii/

Figure 1. Flash synoptic for the griffin rig.

For setting up this connection, the basic prerequisite obviously is getting communication between the Flash movie (living in the XSI Netview web browser) and the XSI scene itself (via a scripting language). Ultimately you want to set up a sort of remoting between Flash ActionScript code and your XSI scripting language, which in my case is preferably Python. Researching this, I tried a few different approaches to get it to work: HTML JavaScript links, socket communication (Flash XMLSocket to Python), etc.

After some research, the best option to me is the ActionScript 3.0 ExternalInterface class, which enables you to call interfaces outside the Flash movie but embedded inside the HTML page that contains it. The nice thing is that, next to JavaScript, it can also call VBScript functions inside the HTML page. That solved the problem: this way I could have XSI-targeted VBScript code embedded straight inside the HTML page and called up directly through the external interface. It gets even better: the interface works both ways, so functions called from Flash can return values, and you can even register a function callback on the Flash side so the HTML page can call back into the movie, which makes event-based callbacks possible (a sketch of this follows near the end of the post).

Practically, all you have to do from your AS3.0 code is call the external interface:

ExternalInterface.call('functionName', argument1, argument2);

Inside the HTML page, you can then catch this call with a regular VBScript function inside a VBScript script block:

<script language="VBScript">

Function functionName(argument)
    'your xsi code here
End Function

</script>

Now, to make this work we cannot just insert XSI commands as we would inside the regular script editor. Instead we need to create an XSI Application COM object to execute commands on:

Set XSI = CreateObject("XSI.Application")
Set app = XSI.Application

You can then execute your regular xsi functions like this:

app.LogMessage("--print out of flash: " & argument)

All right, so far so good: this prints data from Flash into XSI. Cool.

Further on, as I like the Python scripting language, what I actually ended up doing is using the embedded VBScript bit to link through to a Python script living somewhere inside the project. This can be achieved with the ExecuteScript method. Interestingly, through the external interface call and the in-between VBScript we can still deliver all basic data types (strings, ints, ...) as well as arrays into Python. In the following example I pass an array of object names from Flash to Python through VBScript. See the inline code comments for a detailed explanation of each line:

Flash:

//build your interface and add code for its interaction
//make an array of object names
var selectionArray:Array = ['sphere1', 'sphere2', 'sphere3'];
//call the vbscript function defined in the html page
ExternalInterface.call('toPython', selectionArray);

Vbscript embedded in html page:

<script language="VBScript">

Set XSI = CreateObject("XSI.Application")
Set app = XSI.Application

'vbscript function which takes in the selection array
Function toPython(elementArray)
    'wrap the incoming array in a vbs array: ExecuteScript expects
    'an array holding the arguments for the target function
    dim arrParams
    arrParams = Array(elementArray)
    'call the external python function
    app.ExecuteScript "pythonScript.py", "Python", "selectElements", arrParams
End Function

</script>

Python:

app = Application
oRoot = Application.ActiveProject.ActiveScene.Root

def selectElements(elementsList):
    #join the array into a ','-separated string for the selection command
    selectString = ','.join(elementsList)
    #select the objects using that string
    app.SelectObj(selectString, '', 1)

If you add some save-key functionality built on the same principles, this already gives you almost everything you need to build a synoptic view with all the features an ‘old-style’ synoptic would give you. Of course, the possibilities for interface building in Flash are endless, and once we add functionality that retrieves values from the XSI scene into Flash and adjusts values inside XSI from sliders built into our Flash interface, we take a real step up from the normal synoptic (a sketch of both follows right after the getValue example below). Requesting a value out of XSI is possible this way:

Flash:

function getValue(parameter) {
    return ExternalInterface.call('getValue', parameter);
}

Vbscript:

Function getValue(parameter)
    'return the value collected by GetValue
    getValue = app.GetValue(parameter)
End Function
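
To complete the slider and save-key ideas mentioned above, here is a minimal sketch of what the Python side could look like. The function names setValue and saveKey are made up and would be wired through the same VBScript/ExecuteScript bridge as selectElements; I am assuming the standard XSI SetValue, GetValue and SaveKey commands here:

app = Application

def setValue(parameter, value):
    #drive an xsi parameter from a flash slider,
    #e.g. parameter = 'sphere.kine.local.posy'
    app.SetValue(parameter, value)

def saveKey(parameter):
    #key the parameter at the current frame
    #(PlayControl.Current holds the timeline's current frame)
    frame = app.GetValue('PlayControl.Current')
    app.SaveKey(parameter, frame)

Hooked up to a slider's change event, a call like ExternalInterface.call('setValue', 'sphere.kine.local.posy', slider.value) then drives the scene interactively.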

One thing that does not work is embedding this functionality into a normal XSI synoptic window, as that is apparently a stripped-down browser without Flash player support. Luckily the Netview browser, which seems to be an embedded Internet Explorer, handles it without any problem, so it’s as simple as dragging one up and pointing it to where your HTML page lives, which can be done as follows from a scripted operator or just an XSI button:

Python:

import os

app = Application
projectPath = app.ActiveProject.Path
#build the path to the synoptic html inside the active project
synopticPath = os.path.join(projectPath, 'Synoptic', 'index.html')
app.LogMessage('OPEN NETVIEW SYNOPTIC: ' + synopticPath)
app.OpenNetView(synopticPath, True, True)

As you probably noticed, this will use the active project path to find the synoptic inside of it.

Figure 2. A more advanced Flash user interface for the XSI feather tool, and a good example of how powerful this can get: it dynamically builds the interface around the feather system you generated or are working on.
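
One more bit I mentioned at the start but haven’t shown: the interface also works event-based in the other direction, from the HTML page (and thus XSI) back into Flash, through ExternalInterface.addCallback. Below is a minimal sketch assuming a Flash slider control named mySlider and the movie embedded with id 'synoptic'; both names are made up for the example.

Flash:

//register an AS3 function so the html container can call it
function updateSlider(value) {
    //push the new value coming from xsi into the interface control
    mySlider.value = value;
}
ExternalInterface.addCallback('updateSlider', updateSlider);

Vbscript:

<script language="VBScript">
'push a value from xsi into the flash interface
Sub pushToFlash(value)
    Dim flashMovie
    Set flashMovie = document.getElementById("synoptic")
    flashMovie.updateSlider value
End Sub
</script>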

We can conclude that there are definitely possibilities in using this technology to build richer user interfaces. Also, in my examples I’ve been running this locally, but of course nothing stops you from running it from an online web source, making your tool/script easily available to your users. The downside might be that you need to know Flash pretty well to create nice, highly interactive interfaces, and the development time of these interfaces might go up; but if the result is a more interactive and user-friendly interface, some people might definitely consider it.

Jo


Written by Jo Plaete

September 27, 2008 at 11:08 pm

Posted in 3D, scripting, TD, XSI

Finished Master Course Bournemouth

with 2 comments

Just finished my Master’s course in Bournemouth, which wrapped up at the end of August. It was a highly intense but super interesting year where I got to know some talented CG folks whom I won’t forget quickly. Time for a quick trip home and a short break, after which a new episode starts: working at Framestore London.

A reel summarizing some of my work from the year:

I’ve also been slightly involved in some other projects, for example Kevin Meade’s amazing ‘A Little Wicked’ animation short, for which I did some minor smoke effects:

And Lars Van Der Bijl’s demolition of The Tower Of Pisa project, on which I helped out with the crowd sim and assisted on set.

Some other remarkable projects from my classmates:
Dapeng Liu – Modeling Reel:
http://uk.youtube.com/watch?v=Si0gF8K17LE
Lee Baskerville – Modeling Reel:
http://uk.youtube.com/watch?v=fiVlwaGO5aA&feature=related
Alkis Karaolis – Modeling Reel:
http://uk.youtube.com/watch?v=-K7ax6fEXn4
Nic Groot Bluemink – Environmental Modeling:
http://uk.youtube.com/watch?v=BeURMI6Gmlc&NR=1
Pedram Christopher – Animation:
http://uk.youtube.com/watch?v=nyYaKpjGyt4
And many more:
http://uk.youtube.com/view_play_list?p=291138D85F1C166C

Jo

Written by Jo Plaete

September 5, 2008 at 10:28 pm

Posted in Varia

Wing Feather Tool Version II

with 33 comments

Working further on the wing feather tool for XSI, I decided to partly rewrite the tool and enhance it in terms of approach and usability. This version introduces a guide feather approach to control the feather behaviour. It is still procedural in generating the feathers and now gives you easier access to switch the interface between the various feather layers.

Quick Tour:

In-detail overview:
http://www.edjstudios.be/rigging/j_feather_tool_V2_inDetail/j_feather_tool_V2_inDetail.html

Everything is written in Python (XSI) and ActionScript 3.0 (GUI), with VBScript as the link between the two.
Any feedback would be very welcome 🙂 !

Jo

Written by Jo Plaete

August 8, 2008 at 12:00 am

Posted in 3D, animation, scripting, TD, XSI

XSI wing feather tool

with 13 comments

New version:
https://joplaete.wordpress.com/2008/08/08/wing-feather-tool-version-ii/

—–

Lately I was looking at rigging feathers on a wing in XSI and I decided to develop a small prototype tool for dealing with this.

Basically it started out as a procedural feather system which lets you draw a curve, attach it to your bone system (or extract it from a mesh) and generate a desired amount of feathers onto that curve. For the generation you can provide your own modelled feather. On generation you can adjust a set of parameters the script takes into account, for example adding some randomness in scale or some offsets. After generation you have a set of custom controls to adjust your feathers, as well as a Flash interface that pops up to manage your feather distribution and individual placement. The idea is that you animate your bone chain and afterwards go in, adjust the exact feather placement and bake that in as animation too. This gives you a lot of control in tweaking your feathers.

The system is still a prototype, with some pros and cons and probably still some bugs in there, but it was definitely a very good exercise in scripting for rigging and in putting the Flash–XSI bridge I developed (more info soon) to more advanced use. I will keep developing it, as I think it might be useful for people who quickly want a wing feather setup for their character.

To see it in action you can take a look at this screencap where I briefly go over it:
http://www.edjstudios.be/rigging/j_feather_tool/j_feather_tool_v1.html

Feel free to leave any comments or ideas, or drop me a line if you fancy giving it a try. I will upload the tool at a later stage.

Jo

Written by Jo Plaete

July 16, 2008 at 5:07 pm

Posted in Varia

Notes on a Massive Crowd Simulation Pipeline

with 41 comments

As promised, in this post I will talk about how to use the Massive crowd simulation software in a project pipeline, based on the experience I built up on the Path To War group project, where I had to solve several crowd shots. I will not go in depth into how Massive works or agent brain building, nor explain every step I took in detail. Rather I will pick out some specific issues and techniques I came across while using the tool. This document assumes you know what Massive is, what it does and its internal workflow. Watch the Film.

Pipeline
Before I address some more specific issues I’d like to sketch out the overall pipeline we set up to pull off the project, as after all one of the biggest challenges in using Massive was getting it integrated into our university pipeline and setting up a decent workflow with the other 3d tools. We had quite a lot of software involved on the project: Softimage XSI at the core, Massive Prime for crowd simulation, E-On Vue for the natural environment creation/vegetation and Apple Shake as the 2d compositing tool. Next to those there were also Autodesk Maya and MotionBuilder for the motion capture pipeline and ZBrush as the character artists’ sculpting/texturing tool. Because of the various software packages we were rendering out of three different renderers: Mental Ray for the XSI shots, RenderMan for the crowds and the Vue internal renderer for the environments. The overall pipeline is shown schematically in figure 1.
Figure 1. Path To War overall pipeline.

In general, the core of the pipeline was XSI: this is where the previs was done and the cameras and terrains were placed before being transferred to the other tools. All the modeling, texturing and animation of the close-up hero shots was also done in XSI. However, the initial idea of having straightaway communication between XSI and Massive did not work out as well as planned. Multiple proofs of concept for bringing over rig and animation data had various positive but also negative results, and proved in the end that we unfortunately could not rely on this. Also, as the project was on a very tight (8 week) deadline (including learning the tool, setting up the pipeline and finishing off), there was no time to develop custom tools to solve certain issues. Therefore we brought in Autodesk Maya, which Massive was initially written around (pipeline wise), and used it as a platform in between Massive and the other software.

Models and Textures
A fairly straightforward part of the pipeline was the transfer of the character/prop models and textures, as this worked perfectly fine by getting OBJ files, with UV information for the accompanying textures, out of XSI straight into Massive. To prevent crashing the engine or overloading the renderer with a big number of agents we had the models in 3 levels of detail. Long shots used a low polygon version of around 4000 polygons, medium and closer shots a slightly more detailed model, and the hero shots (rendered out of XSI) the full high polygon model with ZBrush displacement maps, etc. It was important to keep the models fairly low poly for Massive, but also to be able to switch to a higher level of detail as agents came closer to the camera; this could be accomplished using the level-of-detail functionality inside Massive.

Motion Capture
As the project progressed we quickly found out we definitely needed to bring in motion capture to fill the Massive motion trees with realistic animation and let our animators concentrate on the hero acting shots in XSI. This brings me to an interesting bit: how the actual motion capture pipeline worked. After a few tests with our XSI-based rigs I decided to leave them for XSI purposes only and build a new rig that would fit better into the Maya pipeline. After various tests of the motion capture pipeline, it seemed most stable to reverse engineer the process: build the actual crowd rig in Massive itself and export it back out to Maya (.ma), from which it could go back and forth to MotionBuilder to apply the motion capture animation. Bringing that animation back into Maya and exporting it as a .ma for Massive made it very convenient to import actions for the agents. Once imported into Massive, I used the action editor to prepare the actions for use in the brain. Something that kept me busy for a bit and is worth pointing out: remove the global translation and rotation curves from the action in the action editor, so that the brain can drive those based on its own calculations.
Motion Tree

Further, to get some hand-animated actions into Massive, we built FK shadow rigs in XSI which mimicked the Massive rigs we had in Maya. This way we were able to transfer XSI animation data FK-wise from the XSI rig through the shadow rigs into Maya, and from there into Massive. In the end we didn’t really use this, as we chose to render all the hero characters out of XSI and composite them in with the crowds.

Camera
Because the terrain was rendered out of Vue, the hero characters out of XSI and the crowd characters out of Massive, it was very important to get all geometry and cameras lined up. The best way to do this appeared to be to take the camera from XSI to Maya via the FBX format and then import it into Massive from a .ma file (figure 3). Using an XSI–Vue plug-in we were then able to take that same XSI camera into Vue and render out the background plates from the exact same perspective. The terrain geometry was imported into Massive in OBJ format so the agents could interact with it, but only the agents were rendered out of Massive. To keep the simulation as light as possible it paid off to cut up the terrain and only import the parts needed for a given shot.

Figure 3. Camera pipeline XSI -> Maya -> Massive.

Rendering
For the rendering I took advantage of Massive’s passes-based setup for RenderMan-compliant renderers. We exported the RIB files out of Massive and rendered them with PRMan. We did have to do some post processing on the RIB files though to get them to render the way we wanted, adding attributes and adapting them to our Linux-based renderfarm. Several Python scripts were developed to remove Windows path names and change the massive.dll procedural reference into a .so one.
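
As an illustration, here is a minimal sketch of such a clean-up script; the exact attribute edits depended on our setup, and the library name and path mappings below are made up:

import glob

def cleanRib(ribPath):
    #read the rib file exported from massive
    f = open(ribPath)
    rib = f.read()
    f.close()
    #point the procedural reference at the linux library
    rib = rib.replace('massive.dll', 'libmassive.so')
    #remap windows paths onto the farm mount and fix separators
    rib = rib.replace('C:\\massive\\', '/jobs/pathtowar/massive/')
    rib = rib.replace('\\', '/')
    #write the cleaned rib back out
    f = open(ribPath, 'w')
    f.write(rib)
    f.close()

#run it over all rib files of a shot
for ribPath in glob.glob('/jobs/pathtowar/rib/*.rib'):
    cleanRib(ribPath)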

The passes produced out of Massive/PRMan were a diffuse pass (figure 4), a specular pass, an ambient occlusion pass and a variation pass. Originally we tried depth map shadows, but it was hard to get a decent pass out, so an ambient occlusion pass was needed to stick the agents to the ground when composited onto the background plates. Since ray traced ambient occlusion on a lot of agents (300+) crashed the renderer, a script had to be developed to split up the AO RIB files, which were then rendered out as multiple AO images per frame (figure 5) and combined back together in compositing.
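
A sketch of the splitting idea follows, under the assumption that each agent sits in its own top-level AttributeBegin/AttributeEnd block and that these blocks are not nested (the real script had to follow Massive’s actual RIB layout):

def splitAoRib(ribPath, chunkSize=100):
    #separate the shared header/footer from the per-agent blocks
    header, footer, agents, current = [], [], [], None
    for line in open(ribPath):
        if line.startswith('AttributeBegin'):
            current = [line]
        elif line.startswith('AttributeEnd') and current is not None:
            current.append(line)
            agents.append(current)
            current = None
        elif current is not None:
            current.append(line)
        elif agents:
            footer.append(line)  #lines after the agents (WorldEnd etc.)
        else:
            header.append(line)  #lines before the first agent
    #write one rib per chunk of agents, repeating header and footer
    for i in range(0, len(agents), chunkSize):
        out = open('%s.part%03d.rib' % (ribPath, i // chunkSize), 'w')
        out.writelines(header)
        for block in agents[i:i + chunkSize]:
            out.writelines(block)
        out.writelines(footer)
        out.close()

Each partial RIB then renders only its chunk of agents, and the resulting AO images are added back together in compositing.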

Figure 4. Render passes.

Another pass, which we called the variation pass (figure 6), was pretty interesting and helped the final output a lot. Since we didn’t have much time to create a large number of character variations, we used this pass to pull more variation into the crowd in compositing. Technically it’s just a constant shader which gets passed a random value between 0.2 and 0.8, defined by a Massive agent variable. Make sure to pass the random value on from the agent, and thus at RIB level: randomizing inside the shader would give you noise across each agent rather than one flat value per agent, which is not what you want. This way we had a pass in which every agent carries a random grayscale value, which the compositors could use to pull more variation into the crowds. In the end a lot depended on the compositors to make it all work together, since the terrains were rendered with the Vue renderer, the hero characters with Mental Ray out of XSI and the crowds with PRMan. So out of Massive we had only the crowd agents, which were then composited into the terrain, itself rendered out in separate layers to achieve the best integration. Another thing to take into account with this approach is not to render the terrain out of Massive, but to use it as a masking object to occlude agents that are behind the terrain.

Figure 6. Variation pass.

Randomized Agent Setup
When doing a crowd simulation, one of the most important things is to take care not to give away similarity or patterns in your crowd. Next to the additional variation pass at the render/compositing stage, we tried to address this problem as much as possible by making variations of the character models and textures. We ended up having 2 different Viking base models, which each got 4 modeling sub-variations and 4 texture variations, plus 4 different weapon props. This was all combined into 1 agent with some randomized agent variables tied into a Massive option node.
Looking at the movie you might share my opinion that the Viking crowd looks a bit more dynamic and randomized than the Egyptian one. This is solely due to the fact that we had 2 base models in the Viking crowd and only 1 in the Egyptian crowd. Though there were 2 model variations, 4 texture variations and different props assigned to those Egyptian warriors, this could not live up to having 2 different source models, each with their own sub-variations.

Of course, randomizing your animation is vital as well. Maybe even more than visual patterns, movement patterns are spotted very easily and give away your crowd quickly. So it’s good to keep this in mind when preparing/directing your motion capture session and go for as much varied motion as possible. For example, capturing multiple different run cycles for the same run motion and randomizing those between your agents is a good way to go. Next to that, putting a slightly random variable on your action playback speed gives really nice results too. As we hadn’t much time to prepare our motion capture session we came out of it with good but slightly insufficient data, so we went in, manually altered some of the basic run loops, and randomized the playback rate in the agents’ brains to get a more natural feel to the overall movement.

Directing the Agents’ Movement and Flow
The most interesting interaction that had to be taken care of inside Massive was with the terrain. We had trees on the terrain, rendered out of Vue, which had to be avoided by the agents but obviously were not present inside Massive. Something simple that helped a lot here was Massive’s feature to put an image in your viewport background. So I brought in the terrain and camera for the shot and put the Vue-rendered backplate image in the viewport background to start matching things up and work out how to place and direct the agents. Next to that, some dummy geometry was placed on the terrain where the trees were supposed to be, so the agents’ vision would make them avoid the trees nicely.

In the end the brain made the agents do the following: follow the terrain topology, including banking and tilting to the tangent; avoid each other based on sound emission, and flock back together when wandering too far off; follow a flow field for the general running direction; get random actions applied and blended on; and randomize the playback rates of the actions for variety. Of course, lots of tweaking and testing was involved to get the crowd behaviour to work well, but I was very happy to use Massive for this: it provides some very good features to combine all these behaviours while keeping a lot of control over what you are doing, without the need for heavy programming. Figure 5 gives an overview of all forces working on the agents, and as you can see it can get quite tricky: you get opposing forces pushing agents away from each other and pulling them back together, which makes it necessary to go in and tweak the behaviour on a per-shot basis.

Figure 5. Agent forces.

One thing I did miss in Massive though is the ability to go in and direct a single agent, or a ‘make agent independent and tweak brain’ function. As all your agents, or at least groups of them, share the same brain, it is sometimes hard to deal with that annoying one that is not doing what it should. I ended up deleting those agents or changing their start positions to try to adjust their behaviour, which is a pretty tedious process; some were finally even masked out in compositing.

As I mentioned before, integrating Massive into our project pipeline was one of the biggest challenges on the project. It is a great tool for what it does with artificial intelligence and crowd simulation, but it requires some investigation and testing to get it to work properly. I hope reading this gives you some directions on how to integrate it on your own project. I do not claim this is the ultimate way, but it was surely close to the best way with the out-of-the-box solutions. Contacting some industry companies and speaking to other Massive users, it appeared that many of them develop in-house tools around it to fit it into their pipelines. Any comments or suggestions on this are very welcome.

Jo

Written by Jo Plaete

June 25, 2008 at 12:10 am

NCCA Symposium – Renderman Presentation

with one comment

For the people who attended my talk at the NCCA symposium, and anyone else who is interested, here is my presentation together with some of the scenes, shaders and scripts I talked about. It gives a global overview of RenderMan rendering.

Presentation pdf
Additional material

Jo

Written by Jo Plaete

May 20, 2008 at 12:28 pm

Won NCCA Rob Edwards Memorial Cup 2008

leave a comment »

Worth mentioning: last Saturday we won the annual NCCA soccer tournament with our MA 3D Computer Animation team! The tournament featured 12 teams representing industry companies and CG courses. It was a very nice day with some more than heroic games and a beautiful outcome for us in the end. The winning team:

The desired goblet will shine for at least one year in the MA3D lab 😉

Jo

Written by Jo Plaete

May 19, 2008 at 1:39 am

Posted in soccer, Uncategorized