Jo Plaete

Visual Effects – 3D – Interactive Design

Threading with PyQt4

A small post showing some simple examples of how to deal with threading in PyQt4; they would have saved me a bit of time when I first looked into it.

As you start developing UIs within this cool framework, you'll quickly notice how valuable it is to run processes in separate threads and so keep your UI responsive while things happen in the background. Tasks like data retrieval, which may take some time, are better done in a sort of worker thread which updates your UI on completion. You can achieve this using standard Python threads, but if you happen to be working with PyQt4 I'd suggest using its threading classes, as they are nicely integrated and ensure that signal/slot communication is thread safe. Both are cross-platform and I have found them very useful so far.

So here's an example of how you can make that happen. To start we'll set up a very simple UI containing a list widget, to which we will add some items by clicking a button. Fancy!

import sys, time
from PyQt4 import QtCore, QtGui

class MyApp(QtGui.QWidget):
 def __init__(self, parent=None):
  QtGui.QWidget.__init__(self, parent)

  self.setGeometry(300, 300, 280, 600)
  self.setWindowTitle('threads')

  self.layout = QtGui.QVBoxLayout(self)

  self.testButton = QtGui.QPushButton("test")
  self.connect(self.testButton, QtCore.SIGNAL("released()"), self.test)
  self.listwidget = QtGui.QListWidget(self)

  self.layout.addWidget(self.testButton)
  self.layout.addWidget(self.listwidget)

 def add(self, text):
  """ Add item to list widget """
  print "Add: " + text
  self.listwidget.addItem(text)
  self.listwidget.sortItems()

 def addBatch(self,text="test",iters=6,delay=0.3):
  """ Add several items to list widget """
  for i in range(iters):
   time.sleep(delay) # artificial time delay
   self.add(text+" "+str(i))

 def test(self):
  self.listwidget.clear()
  # adding entries just from main application: locks ui
  self.addBatch("_non_thread",iters=6,delay=0.3)

If we were to run this code (for which you'll need to add the following)

# run
app = QtGui.QApplication(sys.argv)
test = MyApp()
test.show()
app.exec_()

we'd notice that after the UI is displayed and the test button clicked, the UI hangs for a moment while our addBatch method adds items to the list widget. To make this apparent, a slight artificial delay is introduced by time.sleep() before each element is added. This is exactly the problem we want to address here: as your UI grows bigger and there are waiting times while data is looked up, you really don't want to hang your UI every time and frustrate your user.

Let's imagine time.sleep() represents the time it takes to retrieve a certain piece of data from a database, resulting in an item being added to our list. Here's how we could deal with that in the background. We will make use of Qt's signal/slot communication mechanism, as that is a thread-safe way to communicate from our worker thread back to the main application. First we need to create another class which will represent our new thread.

class WorkThread(QtCore.QThread):
 def __init__(self):
  QtCore.QThread.__init__(self)

 def run(self):
  for i in range(6):
   time.sleep(0.3) # artificial time delay
   self.emit( QtCore.SIGNAL('update(QString)'), "from work thread " + str(i) )
  return

This is pretty much as easy as it gets (beware: you may run into some trouble with this bare version, as discussed below). As you can see we inherit from QtCore.QThread; that's where all the Qt threading magic comes from, but we don't have to worry too much about it as long as we call its __init__() method to set things up and implement the right methods. Further on you find the run method, which is what gets called when we start the thread. Just remember: the method to implement is run(), but starting the thread itself is done using start()! What we currently have in there is similar to our addBatch method, only instead of calling the add method we emit a signal to the main application, passing on some data as an argument.

Now the only thing we have to do in our main application is make an instance of this thread and connect to the signal it emits, adding this to our test method

  def test(self):
   self.listwidget.clear()
   # adding in main application: locks ui
   self.addBatch("_non_thread",iters=6,delay=0.3)

   # adding by emitting signal in different thread
   self.workThread = WorkThread()
   self.connect( self.workThread, QtCore.SIGNAL("update(QString)"), self.add )
   self.workThread.start()

If we run this, we should find that after clicking our button the UI still freezes for about a second whilst running our original addBatch method, but afterwards it unlocks, and as the workThread gets started we can see items being added one by one without the UI being stuck. This is thanks to the worker thread signalling back to the main app, which then gets updated accordingly; all the rest happens inside the thread, away from the main app. Because we have matched the emitted signal's signature to our add method, we can connect the signal straight to that method.

An important thing to be aware of is that if the object which holds the thread gets cleaned up, your thread will die with it and most likely give you some kind of segmentation fault. As we have stored it in an instance variable this won't happen here, although it is recommended to override the destructor as follows

class WorkThread(QtCore.QThread):
 def __init__(self):
  QtCore.QThread.__init__(self)

 def __del__(self):
  self.wait()

 def run(self):
  for i in range(6):
   time.sleep(0.3) # artificial time delay
   self.emit( QtCore.SIGNAL('update(QString)'), "from work thread " + str(i) )
  return

This will (should) ensure that the thread stops processing before it gets destroyed. That will do the job in some cases, but (at least for me) it may still go wrong. If you hammer the test button a few times (taking out the first addBatch call for that), you will notice a warning that the thread is waiting on itself, after which it gets destroyed and the app resets or crashes. This is where it gets a bit tricky. For me, and I am very open to suggestions/explanations on this one, the best cure is to terminate the (waiting) thread after your run code has been executed. This makes it (in this scenario at least) more stable.

class WorkThread(QtCore.QThread):
 def __init__(self):
  QtCore.QThread.__init__(self)

 def __del__(self):
  self.wait()

 def run(self):
  for i in range(6):
   time.sleep(0.3) # artificial time delay
   self.emit( QtCore.SIGNAL('update(QString)'), "from work thread " + str(i) )

  self.terminate()

However, terminate() is discouraged by the docs, and overwriting the same thread variable over and over again is not the best thing to do either. It is better to design your code so that it avoids this situation altogether. If you happen to be spawning lots of threads, a more stable way around the problem is to use, for example, a thread pool. This can just be a simple list that stores all your threads

# add to __init__()
self.threadPool = []

# replace in test()
self.threadPool.append( WorkThread() )
self.connect( self.threadPool[-1], QtCore.SIGNAL("update(QString)"), self.add )
self.threadPool[-1].start()

This makes it behave stably without the need to call terminate().
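
One refinement you could add (my suggestion, not part of the original approach) is to prune finished threads from the pool now and then, so the list doesn't grow forever; QThread.isRunning() is standard Qt API.

# optional cleanup, e.g. at the top of test(): drop threads that have finished
self.threadPool = [t for t in self.threadPool if t.isRunning()]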

Furthermore, something I found convenient is to have a sort of generic thread to which you can dispatch an arbitrary method. That way you can keep your app-specific code inside your main class and just send a certain function off to the thread. For that we can create a thread class as follows

class GenericThread(QtCore.QThread):
 def __init__(self, function, *args, **kwargs):
  QtCore.QThread.__init__(self)
  self.function = function
  self.args = args
  self.kwargs = kwargs

 def __del__(self):
  self.wait()

 def run(self):
  self.function(*self.args,**self.kwargs)
  return

As you can see, this thread takes a function plus its args and kwargs, and in its run() method it simply calls that function. In our test() method we can add

  # generic thread
  self.genericThread = GenericThread(self.addBatch,"from generic thread ",delay=0.3)
  self.genericThread.start()

Though it is better/safer to communicate through signals, so we could change the addBatch method to emit a signal itself

def addBatch2(self,text="test",iters=6,delay=0.3):
 for i in range(iters):
  time.sleep(delay) # artificial time delay
  self.emit( QtCore.SIGNAL('add(QString)'), text+" "+str(i) )

And then connect to it as follows

 # generic thread using signal
 self.genericThread2 = GenericThread(self.addBatch2,"from generic thread using signal ",delay=0.3)
 self.disconnect( self, QtCore.SIGNAL("add(QString)"), self.add )
 self.connect( self, QtCore.SIGNAL("add(QString)"), self.add )
 self.genericThread2.start()

We disconnect the signal first in this example to avoid connecting to it multiple times.

Be careful when you start doing more complicated things with this, involving access to shared data structures and such. If you really need to lock an object while you're working on it, it is worth looking into the QMutex functionality to enforce access serialization between threads. Something else that ties in very well is the QEventLoop, but I'll leave those up to you to have a play with!
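
As a minimal sketch of QMutex (my illustration, not code from the rest of this post), here is how you could guard a list that several worker threads append to; lock() and unlock() bracket the critical section:

mutex = QtCore.QMutex()
results = []

def storeResult(value):
 # only one thread at a time may append to the shared list
 mutex.lock()
 try:
  results.append(value)
 finally:
  mutex.unlock()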

That's about it. Please let me know if you have any remarks or issues. Here's the whole thing again in one piece.

import sys, time
from PyQt4 import QtCore, QtGui

class MyApp(QtGui.QWidget):
 def __init__(self, parent=None):
  QtGui.QWidget.__init__(self, parent)

  self.setGeometry(300, 300, 280, 600)
  self.setWindowTitle('threads')

  self.layout = QtGui.QVBoxLayout(self)

  self.testButton = QtGui.QPushButton("test")
  self.connect(self.testButton, QtCore.SIGNAL("released()"), self.test)
  self.listwidget = QtGui.QListWidget(self)

  self.layout.addWidget(self.testButton)
  self.layout.addWidget(self.listwidget)

  self.threadPool = []

 def add(self, text):
  """ Add item to list widget """
  print "Add: " + text
  self.listwidget.addItem(text)
  self.listwidget.sortItems()

 def addBatch(self,text="test",iters=6,delay=0.3):
  """ Add several items to list widget """
  for i in range(iters):
   time.sleep(delay) # artificial time delay
   self.add(text+" "+str(i))

 def addBatch2(self,text="test",iters=6,delay=0.3):
  for i in range(iters):
   time.sleep(delay) # artificial time delay
   self.emit( QtCore.SIGNAL('add(QString)'), text+" "+str(i) )

 def test(self):
  self.listwidget.clear()
  # adding in main application: locks ui
  #self.addBatch("_non_thread",iters=6,delay=0.3)

  # adding by emitting signal in different thread
  self.threadPool.append( WorkThread() )
  self.connect( self.threadPool[-1], QtCore.SIGNAL("update(QString)"), self.add )
  self.threadPool[-1].start()

  # generic thread using signal
  self.threadPool.append( GenericThread(self.addBatch2,"from generic thread using signal ",delay=0.3) )
  self.disconnect( self, QtCore.SIGNAL("add(QString)"), self.add )
  self.connect( self, QtCore.SIGNAL("add(QString)"), self.add )
  self.threadPool[-1].start()

class WorkThread(QtCore.QThread):
 def __init__(self):
  QtCore.QThread.__init__(self)

 def __del__(self):
  self.wait()

 def run(self):
  for i in range(6):
   time.sleep(0.3) # artificial time delay
   self.emit( QtCore.SIGNAL('update(QString)'), "from work thread " + str(i) )
  return

class GenericThread(QtCore.QThread):
 def __init__(self, function, *args, **kwargs):
  QtCore.QThread.__init__(self)
  self.function = function
  self.args = args
  self.kwargs = kwargs

 def __del__(self):
  self.wait()

 def run(self):
  self.function(*self.args,**self.kwargs)
  return

# run
app = QtGui.QApplication(sys.argv)
test = MyApp()
test.show()
app.exec_()

And some more docs and links on the topic:
http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/html/qthread.html
http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/html/qeventloop.html
http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/html/qmutex.html
http://diotavelli.net/PyQtWiki/Threading,_Signals_and_Slots
http://stackoverflow.com/questions/1595649/threading-in-a-pyqt-application-use-qt-threads-or-python-threads

Jo

Written by Jo Plaete

July 21, 2010 at 9:30 am

Behavioural Particles

Lately, in my (fairly sparse) spare time, I've had some fun with particles: merely playing around a bit with different kinds in several 3D packages, but I thought I'd share the following combo with you.

The simulation basically consists of a pretty simple flocking setup where some wandering masters are chased by a bunch of predator boids, all of them spawning a good amount of trailing particles which then get affected by a bunch of other forces. I kinda liked some of the patterns it formed, here are some stills:

It started out as a few tests and a bit of playing with a particle renderer I had heard good things about: Krakatoa, trying out its fast point-rendering ability. And if it says fast, it is fast (!), rendering 7+ million particles with self-shadowing and 4-step motion blur at 720p resolution in just under 1 minute per frame on my 2.93 dual-core laptop. It can also do voxel rendering, which is (obviously) a bit slower but for sure the better choice for denser/smokier effects. Krakatoa is currently only available as a plugin for 3dsmax, but let that not hold you back (unless you're a max user – win!) from giving it a go, as there are some kindly written exporters available for packages like Softimage (thanks Steven Caron) and Maya. This simulation was done using Softimage ICE.

I uploaded it to Vimeo; it's best viewed in HD but still loses a good amount of detail, so if you want the real deal you can download it here (720p – 370MB).

Jo

Written by Jo Plaete

March 3, 2010 at 2:29 pm

Posted in 3D, motion, Rendering, simulation, XSI

ICE Kinematics Helper Operator

A quick post to share something possibly useful when you're working with Softimage and ICE.

As you may know, ICE is currently still limited to pointclouds/particles only and doesn't officially support setting kinematic properties on objects in your scene yet (reading them works). Such support would come in handy when using ICE for things like rigging. You can already play with this by enabling a variable in your XSI environment:
set XSI_UNSUPPORTED_ICE_KINEMATICS=1 (more info about this: http://www.xsi-blog.com/archives/280)
For official support we'll have to await a future release, maybe Softimage 2011 (?).

In the meantime, if you do want to get some ICE data to drive object kinematics without enabling the unsupported feature, this operator might help you with that.

What it basically does is reverse the process: instead of writing position and rotation data from the ICE tree onto the object's kinematics, the C++ operator sits on the object and queries those attributes from the pointcloud's data array. This way the object constantly updates its position and rotation, mimicking the behaviour of a certain point in your pointcloud. Back in ICE you can do whatever you want with the point/particle, including reading kinematics from other objects in your scene, etc.

I've compiled the operator for both Softimage 7.x and 2010, 64-bit (Windows):
link 7.x
link 2010

What do you need?
1. a pointcloud with at least 1 point
2. an object
3. the ICEkine operator installed (just load it as a usual plugin or put it in your workgroup)

How does it work?
To bind the operator to an object, use the following bit of Python code, which:
1. finds and defines both the pointcloud and the object
2. applies the operator

# Add a new object (e.g. a null), or find an existing one instead
myObject = Application.ActiveSceneRoot.AddNull( "null" )
# myObject = Application.ActiveSceneRoot.FindChild( "myObject" )
# Provide the pointcloud primitive ( .ActivePrimitive ! ) you want to remap from
pointcloudPrim = Application.ActiveSceneRoot.FindChild("pointcloud").ActivePrimitive
# Bind the operator to the global (or local) kinematics
op = Application.AddCustomOp( "ICEkine", myObject.Kinematics.Global, [pointcloudPrim], "ICEkineOp" )

Once this has successfully connected, you should find your object travelling along with the first point in your pointcloud. To connect the operator to another point of the cloud, open up the operator's property page and specify your desired point ID. Don't forget to initialize self.Orientation on your cloud, as the operator will look for it to drive the rotation.
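
If you would rather set that point ID from script than through the property page, something along these lines should do it; note that "ID" is my guess at the parameter's scripting name, so check the actual name on the operator's property page:

# hypothetical parameter name "ID" - verify it on the ICEkine property page
op.Parameters("ID").Value = 5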

As far as performance goes, I get 100 objects remapped while still running at 60fps (on my laptop), so for remapping 'a few nulls' it should be alright.

Again, this is just a temporary helper, but it might be useful until the ICE kine beast itself is unleashed. Use at your own risk.

Please let me know how you get on with it, especially in the (unlikely ;) case of buggy behaviour. I used it for a few applications/tests and it worked fine for me so far. I also have to thank my friend Nic Groot Bluemink for testing it!

have fun !

cheers

Jo

Written by Jo Plaete

December 12, 2009 at 4:06 pm

Posted in 3D, animation, scripting, TD, XSI

Speaking at Multi-Mania 2009

Hi there!

It's been a while since my last decent post, with the excuse of having been crazy busy over the last few months (I know that's not an excuse at all:) but anyway, news! I just arrived in lovely Flanders, on my way from London to Sydney, where I will start working at Animal Logic in June.

So I (sadly) have to leave Framestore and London for a while. It has been a great time working there on cool projects like "Where The Wild Things Are".

Where The Wild Things Are

Coming to the point of this post: I will be speaking at Multi-Mania 2009, a (free) new media conference hosting all kinds of topics and speakers, going from web, interactive design and Flash/Flex to 3D and game development. In my lecture I will talk about the film visual effects pipeline, covering some of the work I've been doing over the last few months, and further on I'll introduce the new Softimage ICE technology a bit to the 3D folks in there. Last week I gave two days of lectures at Bournemouth University's NCCA introducing ICE to the masters in 3D computer animation, and (I think) they absolutely loved it. As such, I'm really looking forward to this one too!

It's at Kortrijk EXPO, Belgium, on the 18th and 19th of May; you can subscribe to the lectures via: http://www.multi-mania.be/2009/#/schedule/lecture/40

Well, if you're around, hope to see you there!

Jo

Written by Jo Plaete

May 7, 2009 at 2:59 pm

Posted in 3D, animation, TD, vfx, visual fx, XSI

Using Adobe Flash inside Softimage XSI

In this post I will try to explain how to set up a connection between Softimage XSI and Adobe Flash. I came to this idea when writing a procedural feather tool for Softimage XSI and wanting a more dynamic interface to control the feather system. To follow this post I assume you already have a good understanding of basic scripting in Softimage XSI using Python or VBScript, know what embedded JavaScript/VBScript in an HTML page is, and have a notion of how Adobe Flash and its ActionScript scripting language work.

Now, before we start, what can we use this for? In its simplest form, for synoptic views. Using Flash, we can make them more interactive, dynamic and probably also more user friendly. Further on, we can build interactive user interfaces to control custom tools. I do believe these interfaces can be a step up from the regular synoptic, as they give much more freedom at the design stage and can interchange data with the XSI scene. Figure 1 shows a test of an embedded Flash synoptic to control a creature rig. A nice extra about this type of synoptic is that, since the interfaces are built up from vector images, the views are (re)sizeable.

See this in action in my xsi wing feather tool:
https://joplaete.wordpress.com/2008/08/08/wing-feather-tool-version-ii/

Figure 1. Flash synoptic for the griffin rig.

For setting up this connection, the basic prerequisite obviously is communication between the Flash movie (living in the XSI netview web browser) and the XSI scene itself (via a scripting language). Ultimately you want a sort of remoting between Flash ActionScript code and your XSI scripting language, which in my case is preferably Python. Researching this further, I tried various approaches to get it to work: HTML javascript links, socket communication between a Flash XMLSocket and Python, etc.

After some research, the best option to me is the ActionScript 3.0 ExternalInterface class, which enables you to call interfaces outside the Flash movie but embedded inside the HTML page that contains it. The nice thing is that next to JavaScript it can also call VBScript functions inside the HTML page, which solved the problem: this way I could have XSI-based VBScript code embedded straight inside the HTML page and called up directly through the external interface. It gets even better: the interface works both ways, so you can call functions inside XSI from Flash, those functions can return values, and you can even register a function callback inside the VBScript, which makes event-based callbacks possible.

Practically, to make this work, all you have to do from your AS3.0 code is call the external interface using:

ExternalInterface.call('functionName', arguments)

Inside the html page, you can then catch this call using a regular vbscript function included inside of a vbscript block of code:

<script language="VBScript">

Function functionName(argument)
 'your xsi code goes here
End Function

</script>

Now, to make this work we cannot just insert XSI commands as we would in the regular script editor. Instead we need to create an object of the XSI application to execute commands on (note that VBScript strings need double quotes, as a single quote starts a comment):

Set XSI = CreateObject("XSI.Application")
Set app = XSI.Application

You can then execute your regular xsi functions like this:

app.logmessage("--print out of flash: " & argument)

All right, so far so good, this prints data from flash into xsi. Cool.

Further on, as I like the Python scripting language, what I actually ended up doing is using the embedded VBScript bit to link through to a Python script living somewhere inside the project. This can be achieved using the ExecuteScript method. Interestingly, from the external interface call, through the VBScript in-between bit, we can still deliver all basic data types (strings, ints, ...) as well as arrays into Python. In the following example I pass an array filled with object names from Flash to Python through VBScript. See the inline code comments for details:

Flash:

//Make your interface and add code for its interaction
//make an array of object names
var selectionArray = ['sphere1', 'sphere2', 'sphere3'];
//call the external interface; the name must match the vbscript function exactly
ExternalInterface.call('toPython', selectionArray);

Vbscript embedded in html page:

<script language="VBScript">

Set XSI = CreateObject("XSI.Application")
Set app = XSI.Application

'vbscript function which takes in the selection array
Function toPython(elementArray)
 ' wrap the incoming array as the argument list for ExecuteScript
 dim arrParams
 arrParams = Array(elementArray)
 ' call the external python function
 app.ExecuteScript "pythonScript.py", "Python", "selectElements", arrParams
End Function

</script>

Python:

import string
app = Application
oRoot = Application.ActiveProject.ActiveScene.Root
def selectElements(elementsList):
    #join the array into a ',' separated string to pass on to the selection function
    selectString = string.join( elementsList, ',' )
    #call the selection function using that string
    app.SelectObj(selectString, '', 1)

If you add some save-key functionality built upon the same principles, this already gives you almost everything you need to build a synoptic view with all the features an 'old-style' synoptic would give you. Of course, the possibilities for interface building in Flash are endless, and once we add functionality that retrieves values from the XSI scene into Flash and adjusts values inside XSI from sliders built into our Flash interface, we take a step up from the normal synoptic. Requesting a value out of XSI is possible this way:

Flash:

function getValue(parameter){
 return ExternalInterface.call('getValue', parameter);
}

Vbscript:

Function getValue(parameter)
'return the value collected by GetValue
getValue = app.GetValue(parameter)
End Function

One thing that does not work is embedding this functionality in a normal XSI synoptic window, as that is apparently a stripped-down browser without Flash player support. Luckily the netview browser, which seems to be an embedded Internet Explorer, handles it without any problem, so it's as simple as dragging up one of those and pointing it at where your HTML page lives, which can be done as follows inside a scripted operator or just an XSI button:

Python:

app = Application
projectPath = app.ActiveProject.Path
# raw string so the backslashes cannot be read as escape sequences
synopticPath = projectPath + r'\Synoptic\index.html'
app.LogMessage('OPEN NETVIEW SYNOPTIC ' + synopticPath)
app.OpenNetView(synopticPath, True, True)

As you probably noticed, this uses the active project path to find the synoptic inside it.

Figure 2. A more advanced Flash user interface for the XSI feather tool, and a good example of how powerful this can get, as it dynamically builds up the interface around the feather system you generated/are working on.

We can conclude that there are definite possibilities in using this technology to build richer user interfaces. Also, in my examples I've been running everything locally, but of course nothing stops you from serving it from an online web source and that way making your tool/script easily available to your users. The downside might be that you need to know Flash pretty well to create nice, highly interactive interfaces, and the development time of those interfaces may go up; but if the result is a more interactive and user-friendly interface, some people might definitely consider it.

Jo

Written by Jo Plaete

September 27, 2008 at 11:08 pm

Posted in 3D, scripting, TD, XSI

Finished Master Course Bournemouth

I just finished my master course in Bournemouth, which wrapped up at the end of August. It was a highly intense but super interesting year where I got to know some talented CG folks whom I won't forget quickly. Time for a quick trip home and a short break, after which a new episode starts: working at Framestore London.

A reel summarizing some of the work I did during the year:

I've also been slightly involved in some other projects, for example Kevin Meade's amazing 'A Little Wicked' animated short, for which I did some minor smoke effects:

And Lars Van Der Bijl's demolition of the Tower Of Pisa project, on which I helped out with the crowd sim and assisted on set.

Some other remarkable projects from some of my classmates:
Dapeng Liu – Modeling Reel:
http://uk.youtube.com/watch?v=Si0gF8K17LE
Lee Baskerville – Modeling Reel:
http://uk.youtube.com/watch?v=fiVlwaGO5aA&feature=related
Alkis Karaolis – Modeling Reel:
http://uk.youtube.com/watch?v=-K7ax6fEXn4
Nic Groot Bluemink – Environmental Modeling:
http://uk.youtube.com/watch?v=BeURMI6Gmlc&NR=1
Pedram Christopher – Animation:

http://uk.youtube.com/watch?v=nyYaKpjGyt4
And many more:
http://uk.youtube.com/view_play_list?p=291138D85F1C166C

Jo

Written by Jo Plaete

September 5, 2008 at 10:28 pm

Posted in Varia

Wing Feather Tool Version II

Working further on the wing feather tool for XSI, I decided to partly rewrite the tool and enhance its approach and usability. This version introduces a guide-feather approach to control the feather behaviour. It is still procedural in terms of generating your feathers, and now gives you easier access to switch the interface between the various feather layers.

Quick Tour:

In detail overview:
http://www.edjstudios.be/rigging/j_feather_tool_V2_inDetail/j_feather_tool_V2_inDetail.html

All is written in Python (XSI), ActionScript 3.0 (GUI) and VBScript as the link between those two.
Any feedback would be very welcome! 🙂

Jo

Written by Jo Plaete

August 8, 2008 at 12:00 am

Posted in 3D, animation, scripting, TD, XSI

XSI wing feather tool

New version:
https://joplaete.wordpress.com/2008/08/08/wing-feather-tool-version-ii/

—–

Lately I was looking at rigging feathers on a wing in XSI and I decided to develop a small prototype tool for dealing with this.

Basically it started out as a procedural feather system which enables you to draw a curve, attach it to your bone system (or extract it from a mesh) and generate the desired number of feathers onto that curve. For the generation you can provide your own modelled feather. On generation you can adjust a set of parameters the script takes into account, for example adding some randomness in scale variation or some offsets. After generation you have a set of custom controls to adjust your feathers, as well as a Flash interface that pops up to manage your feather distribution and individual placement. The idea is that you animate your bone chain and afterwards go in, adjust the exact feather placement and bake that in as animation too. This gives you a lot of control in tweaking your feathers.

The system is still a prototype, with some pros and cons and probably still some bugs in there, but it definitely was a very good exercise in scripting for rigging and in putting the Flash-XSI bridge I developed (more info soon) to more advanced use. I will keep developing it, as I think it might be useful for people who quickly want a wing feather setup for their character.

To see it in action you can take a look at this screencap where I briefly go over it:
http://www.edjstudios.be/rigging/j_feather_tool/j_feather_tool_v1.html

Feel free to leave any comments or ideas, or drop me a line if you fancy giving it a try. I will upload the tool at a later stage.

Jo

Written by Jo Plaete

July 16, 2008 at 5:07 pm

Posted in Varia

Notes on a Massive Crowd Simulation Pipeline

As promised, in this post I will talk about how to use the Massive crowd simulation software in a project pipeline, based on the experience I built up on the Path To War group project, where I had to solve several crowd shots. I will not go in depth into how Massive works or agent brain building, nor explain every step I took in detail. Rather, I will pick out some specific issues and techniques I came across while using the tool. This document assumes you know what Massive is, what it does and its internal workflow. Watch the Film.

Pipeline
Before I address more specific issues, I'd like to sketch out the overall pipeline we set up to pull off the project; after all, one of the biggest challenges in using Massive was getting it integrated into our university pipeline and setting up a decent workflow with the other 3D tools. We had quite a lot of software involved, with Softimage XSI at the core, Massive Prime for crowd simulation, E-On Vue for the natural environment creation/vegetation and Apple Shake as the 2D compositing tool. Next to those there were also Autodesk Maya and MotionBuilder for the motion capture pipeline, and Z-Brush as the character artists' sculpting/texturing tool. Because of the various software packages we rendered out of three different renderers: Mental Ray for XSI shots, Renderman for crowds and the Vue internal renderer for the environments. The overall pipeline is shown schematically in figure 1.
Figure 1. Path To War overall pipeline.

In general, the core of the pipeline was XSI; this is where the previs was done and where the cameras and terrains were placed before being transferred to the other tools. All the modelling, texturing and animation for the close-up hero shots was also done in XSI. However, the initial idea of straightaway communication between XSI and Massive did not work out as well as planned. Multiple proofs of concept for bringing over rig and animation data had positive but also negative results, and proved in the end that we unfortunately could not rely on it. Also, as the project was on a very tight (8 week) deadline (including learning the tool, setting up the pipeline and finishing off), there was no time to develop custom tools to solve certain issues. Therefore we brought in Autodesk Maya, for which Massive was initially written (pipeline-wise), and used it as a platform in between Massive and the other software.

Models and Textures
A fairly straightforward part of the pipeline was the transfer of the character/prop models and textures, which worked perfectly fine by taking OBJ files, with UV information for the accompanying textures, out of XSI straight into Massive. To prevent crashing the engine or overloading the renderer with a large number of agents, we had the models in three levels of detail. For long shots we had a low-polygon version of around 4000 polygons. Closer/medium shots had a slightly more detailed model, and the hero shots (rendered out of XSI) had the full high-polygon model with Z-Brush displacement maps, etc. It was important to keep the models fairly low-poly for Massive, but also to be able to switch to a higher level of detail as agents came closer to the camera; this could be accomplished using the level-of-detail functionality inside Massive.

Motion Capture
As the project progressed we quickly found out we definitely needed to bring in motion capture to fill the Massive motion trees with realistic animation and let our animators concentrate on the hero acting shots in XSI. This brings me to an interesting bit: how the actual motion capture pipeline worked. After a few tests with our XSI-based rigs I decided to leave them for XSI purposes only and build a new rig that would fit better into the Maya pipeline. After various tests it seemed most stable to reverse-engineer the process: build the actual crowd rig in Massive itself and export it out to Maya (.ma), from which it could then go back and forth to MotionBuilder to apply the motion capture animation. Bringing that animation back into Maya and exporting it as a .ma for Massive made it very convenient to import actions for the agents. Once imported into Massive, I used the action editor to prepare the actions for use in the brain. Something that kept me busy for a bit: remember to remove the global translation and rotation curves from the action in the action editor, so that the brain can drive those from its own calculations.
Figure 2. Motion tree.

Further, to get some hand-animated actions into Massive, we built FK shadow rigs in XSI which mimicked the Massive rigs we had in Maya. This way we were able to transfer XSI animation data FK-wise, from the XSI rig through the shadow rigs into Maya and from there into Massive. In the end we didn't really use this, as we chose to render all the hero characters out of XSI and composite them in with the crowds.

Camera
Because the terrain was rendered out of Vue, the hero characters out of XSI and the crowd characters out of Massive, it was very important to get all geometry and cameras lined up. The best way to do this appeared to be taking the camera from XSI to Maya via the FBX format and then importing it into Massive from a .ma file (figure 3). Using an XSI-Vue plug-in we were then able to take that same XSI camera into Vue and render the background plates from the exact same perspective. The terrain geometry was imported into Massive in OBJ format so the agents could interact with it, but only the agents were rendered out of Massive. To keep the simulation as light as possible it was good to cut up the terrain and only import the parts needed for a given shot.

Figure 3. Camera pipeline: XSI -> Maya -> Massive.

Rendering
For the rendering I took advantage of Massive's passes-based approach towards Renderman-compliant renderers. We exported the RIB files out of Massive and rendered them with PRMan. We did have to do some post-processing on the RIB files, though, to get them to render as we wanted: adding attributes and adapting them to our Linux-based render farm. Several Python scripts were developed to remove Windows path names and change the massive.dll reference into a .so one.
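
As a rough sketch of what such a post-process could look like (the patterns here are assumptions, not the actual production scripts):

import re

def fixRib(path):
 """ Adapt a Massive RIB file for a linux farm (sketch, assumed patterns) """
 f = open(path)
 rib = f.read()
 f.close()
 rib = rib.replace('massive.dll', 'massive.so') # point at the linux plugin
 rib = rib.replace('\\', '/') # forward slashes for linux
 rib = re.sub('[A-Za-z]:/', '/', rib) # strip windows drive letters
 f = open(path, 'w')
 f.write(rib)
 f.close()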

The passes produced out of Massive/PRMan were a diffuse pass (figure 4), a specular pass, ambient occlusion and a variation pass. Originally we tried depth-map shadows, but it was hard to get a decent pass out, so an ambient occlusion pass was needed to stick the agents to the ground when composited onto the background plates. Since ray-traced ambient occlusion on a lot of agents (300+) crashed the renderer, a script had to be developed to split up the AO RIB files, which were then rendered to multiple AO files per frame (figure 5) and combined back together in compositing.

Figure 4. Render passes.

Another pass, which we called the variation pass (figure 6), was pretty interesting and helped the final output a lot. Since we didn't have much time to create a large number of character variations, we used this pass to pull more variation into the crowd in compositing. Technically it's just a constant shader which gets passed a random value between 0.2 and 0.8, defined by a Massive agent variable. Make sure to pass the random value on from the agent, and thus at RIB level, as randomizing inside the shader would give you noisy agents, which is not what you want. This way we had a pass in which every agent carried a random greyscale value, which the compositors could use to pull more variation into the crowds. In the end a lot depended on the compositors to make it all work together, since the terrains were rendered with the Vue renderer, the hero characters with Mental Ray out of XSI and the crowds with PRMan. So out of Massive we had only the crowd agents, which were then composited into the terrain, itself rendered in separate layers to achieve the best integration. Another thing to take into account with this approach is not to render the terrain out of Massive, but to use it as a masking object to occlude agents that are behind the terrain.

Figure 6. Variation pass.

Randomized Agent Setup
When doing a crowd simulation, one of the most important things is to take care not to give away similarity or patterns in your crowd. Next to the additional variation pass at the render/compositing stage, we addressed this problem as much as possible by making variations of the character models and textures. We ended up with 2 different Viking base models, which each got 4 modelling sub-variations and 4 texture variations, plus 4 different weapon props. This was all combined into 1 agent with some randomized agent variables tied into a Massive option node.
Looking at the movie you might share my opinion that the Viking crowd looks a bit more dynamic and randomized than the Egyptian one. This is solely because we had 2 base models in the Viking crowd and only 1 in the Egyptian crowd. Though the Egyptian warriors also had 2 model variations, 4 texture variations and different props assigned, that variation could not live up to having 2 different source models, each with its own sub-variations.

Of course, randomizing your animation is vital as well. Maybe even more than visual patterns, movement patterns are spotted very easily and give away your crowd quickly. So it's good to keep this in mind when preparing/directing your motion capture session and go for as much varied motion as possible. For example, capturing multiple different run cycles for the same run motion and randomizing those between your agents is a good way to go. Next to that, putting a slightly random variable onto your action playback speed gives really nice results too. As we didn't have much time to prepare our motion capture session, we came out of it with good but not quite enough data, so we went in, manually varied some of the basic run loops slightly and randomized the playback rate in the agents' brains to get a more natural feel to the overall movement.

Directing the Agents Movement and Flow
The most interesting interaction that had to be taken care of inside Massive was with the terrain. We had trees on the terrain, rendered out of Vue, which had to be avoided by the agents but obviously were not apparent inside Massive. Something simple that helped a lot here was Massive's ability to put an image in the viewport background. So I brought in the terrain and camera for the shot, put the Vue-rendered backplate image in the viewport background, and started matching things up and working out how to place/direct the agents. Next to that, some dummy geometry was placed on the terrain where the trees were supposed to be, so the agents' vision would avoid them nicely.

In the end the brain made the agents do the following: follow the terrain topology, including banking and tilting to the tangent; avoid each other based on sound emission, and flock back together when wandering too far off; follow a flow field for the general running direction; get random actions applied and blended on; and randomize the playback rates of the actions for variety. Of course, lots of tweaking and testing was involved to get the crowd behaviour to work well, but I was very happy in the end to have used Massive for this, as it provides some very good features to combine all these behaviours and keep a lot of control over what you are doing without the need for heavy programming. Figure 5 gives an overview of all the forces working on the agents, and as you can see it can get quite tricky: you get opposing forces pushing agents away from each other and pulling them back together, which makes it necessary to go in and tweak the behaviour on a per-shot basis.

Figure 5. Agent forces.

One thing I did miss in Massive, though, is the ability to go in and direct a single agent, or a 'make agent independent and tweak brain' function. As all your agents, or at least groups of them, share the same brain, it is sometimes hard to deal with that one annoying agent that is not doing what it should. I ended up deleting those agents or changing their start positions to try to adjust their behaviour, which is a pretty tedious process; some were finally even masked out in compositing.

As I mentioned before, integrating Massive into our project pipeline was one of the biggest challenges on the project. It is a great tool for what it does with artificial intelligence and crowd simulation, but it requires some investigation and testing to get it working properly. I hope reading this gives you some directions on how to integrate it on your own project. I do not claim this is the ultimate way, but it was surely close to the best that could be done with out-of-the-box solutions. Contacting some industry companies and speaking to other Massive users, it appeared that many of them develop in-house tools around it to fit it into their pipelines. Any comments or suggestions on this are very welcome.

Jo

Written by Jo Plaete

June 25, 2008 at 12:10 am

NCCA Symposium – Renderman Presentation

For the people who attended my talk at the NCCA symposium, and anyone else who is interested, here is my presentation together with some of the scenes, shaders and scripts I talked about. It gives a global overview of Renderman rendering.

Presentation pdf
Additional material

Jo

Written by Jo Plaete

May 20, 2008 at 12:28 pm