Week 2
January 16, 2017
In week 2, I took a closer look at “MB Tools” and considered whether to (1) update the code such that it would work with the changes and additions we made during the previous week, or (2) rewrite it from scratch.
MB Tools
mbtools covers several critical aspects of production at Mindbender, 4 of which I have been evaluating for the updated pipeline.
- Animation Import
- Animation Export
- Shader Import
- Shader Export
At first glance, the source code came up short in documentation, consistency and ability to extend.
Documentation was lacking both internally, as comments within the code, and externally as a purpose-written document for those looking to understand and/or extend.
Inconsistencies in style and lack of any apparent visual standard made it difficult to understand and make changes to the code.
So before committing to a path, I set out to change these factors, expecting that once the code was readable and consistent I would be in a better position both to decide whether or not to keep it and, if so, to make the necessary updates.
Having updated alembicImporter.py and just started work on alembicExporter.py, it became clear that this was an uphill battle. The code was not in any position to be extended, even after bringing it up to PEP 8 standards and adding the appropriate internal documentation. The modules differed too greatly, and made too many assumptions about paths, to salvage any of it.
For completeness, the difficulty in extending the code manifested itself in both the logical and graphical implementations. That is, the graphical user interfaces were created in “Qt Designer”, a graphical interface for designing graphical interfaces, but later converted to source code and modified by hand. Embedded within the visuals were hard-coded assumptions about how artists performed their work.
With this information at hand, I set out to understand and replicate the functionality.
I wanted the assumptions out in the open, as validations that Arvid could understand, extend and maintain, along with a unified experience for loading animation, shaders and assets of any kind - namely models and rigs.
Exporting was also unified and automated as a net effect of passing validation, and is managed in the same manner as any other exported asset: through the Pyblish GUI.
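To give a feel for what such a validation looks like, here is a minimal sketch of a Pyblish validator; the specific rule and the family name (“mindbender.model”) are illustrative examples, not the actual checks in the pipeline.

import pyblish.api

class ValidateUniqueNames(pyblish.api.InstancePlugin):
    """Ensure every node in the instance has a unique name."""
    order = pyblish.api.ValidatorOrder
    families = ["mindbender.model"]  # illustrative family name
    label = "Unique Names"

    def process(self, instance):
        names = [str(node) for node in instance]
        duplicates = set(name for name in names if names.count(name) > 1)
        assert not duplicates, "Non-unique names: %s" % ", ".join(duplicates)

An export only goes through once every such validator passes, which is what keeps the assumptions visible, extendable and maintainable.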
Loader API
For the importing of assets, the concept of loaders was introduced.
A loader represents a Python function, called whenever an asset of a given family is loaded via the Loader graphical user interface. Each loader is implemented in a single file, registered at start-up and dynamically associated to families, similar to Pyblish plug-ins.
In practice, this means that whenever an asset is loaded, what happens is defined in its own encapsulated Python module.
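As a rough illustration of what one of those single-file loaders might look like - the attribute and function names (families, load) and the dictionary keys accessed are assumptions for the sake of example, not the actual API:

# load_animation.py - one loader per file, discovered and registered at start-up
families = ["mindbender.animation"]  # which family this loader handles (illustrative)

def load(asset, version):
    """Called by the Loader GUI when an animation asset is chosen."""
    from maya import cmds
    path = version["path"]  # illustrative key; the real schema defines this
    cmds.file(path, reference=True, namespace=asset["name"])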
Kinds of assets
At the moment, there are 4 types - or “families” - of assets defined, all of which are created and used in Maya, with Nuke and Houdini still to go.
- Model - a collection of meshes
- Rig - a rigged Model
- Look - a shaded Model
- Animation - an animated Rig
In time, I expect a few additional families to appear.
- Layout - a blocked-out camera and which assets are part of a shot
- Set - an environment, with assets imported for each prop; also known as “set dressing”
- Render - a rendered image sequence, which could also be split up into..
- RenderLayer -
- RenderPass -
- …
- Documentation
Remote Collaboration
The entire pipeline has successfully been encapsulated into a single folder, which means any freelancer or studio will be able to install it with ease. Something like this:
$ git clone https://github.com/mindbender-studio/core.git
Because the source code is open source, we can leverage free services such as GitHub and cut away a few days of building our own distribution and update system.
The exact installation procedure is to be improved in week 3, along with proper documentation on how to go about it. It should take no more than 30 minutes to install and will only ever need to happen once per studio or freelancer. Updates to the pipeline, at any location, are made by typing this.
$ git pull
ls() performance considerations
WARNING: This section is very technical
from mindbender import api
for asset in api.ls():
    print(asset["name"])
ls() is the central mechanism for asset management at Mindbender. It works by (1) scanning the filesystem for assets and (2) delivering them as dictionaries of data, as defined by their respective “schema”.
Each dictionary encapsulates all information about a given asset, including (1) versions, (2) subsets, (3) representations and ultimately (4) files. This is the primary mechanism for storing the content produced at Mindbender.
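For illustration only, a dictionary yielded by ls() has roughly this nested shape; the exact keys, nesting and values are governed by the schema and will differ in detail.

{
    "name": "hero",                      # illustrative asset name
    "subsets": [{
        "name": "modelDefault",          # illustrative subset name
        "versions": [{
            "version": 1,
            "representations": [{
                "format": ".ma",
                "files": ["hero_modelDefault_v001.ma"]
            }]
        }]
    }]
}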
The API is straightforward.
- Register a path.
- Call ls() to read its contents (a minimal sketch of both steps follows below).
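A minimal sketch of those two steps; the name of the registration call (api.register_root) is an assumption for the sake of illustration and may not match the actual function.

from mindbender import api

# 1. Register a path (function name assumed for illustration)
api.register_root("/projects/myproject")

# 2. Read its contents
for asset in api.ls():
    print(asset["name"])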
At the moment, this reading of content occurs individually on each client - e.g. an artist’s computer. This means that (1) an artist requests a listing, (2) ls() is invoked and requests a listing via the OS, (3) the OS makes several (four) requests for a directory listing over the network per asset and (4) the resulting data is formatted as per its schema.
With 10 artists working simultaneously on small projects, even with the subpar network performance already present at Mindbender, the number of listings could peak at ~2000/second (50 assets × 10 artists × 4 levels), but the bottleneck is likely still not disk performance and likely need not be of concern.
However, as crew and projects get larger, and as additional strains are added to the filesystem (e.g. sync), the impact on performance may start to become noticeable.
In that case, I have a suggestion for one achievable method that may alleviate this burden, scaling to an estimated 100+ artists and feature-film-sized projects.
It consists of (1) a client and (2) a server.
- A dedicated computer hosts an “ls service”. This service takes requests for asset listings and reports back.
- ls() is retargeted to make requests to this service, as opposed to querying the networked filesystem directly.
- The service may then (1) reside directly on the computer closest to the filesystem (in this case the NAS) and (2), most crucially, cache prior requests for listings.
The caching mechanism can in the simplest case hand pre-made results back without a roundtrip to the filesystem and update the cache only at a given interval - such as every 10-30 seconds.
This means the filesystem, no matter how many requests are made, is never bothered more than once every 10-30 seconds.
The immediate side-effect of this caching, as with caching of any kind, is data going stale. That is, the artist not receiving the latest information when it was expected. As a solution to this, the caching mechanism could be integrated with publishing, or with the native journalling mechanism of the operating system, such that it rarely ever has to perform a “full scan” but instead mostly updates the parts that change.
For example, given a cache made 30 seconds ago, a newly published asset makes a request to the ls service, saying “Hey, I just added ‘AssetA’ to this location”. The caching mechanism then updates its internal cache, without having to roundtrip to the filesystem.
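A minimal sketch of that caching idea, combining a 30-second TTL with the push-style update described above; the class and method names (ListingCache, notify_added) are illustrative, not the actual service.

import os
import time

class ListingCache:
    """Cache directory listings, refreshing at most once per TTL."""

    def __init__(self, ttl=30):
        self.ttl = ttl
        self._cache = {}  # path -> (timestamp, entries)

    def ls(self, path):
        now = time.time()
        timestamp, entries = self._cache.get(path, (0, None))
        if entries is None or now - timestamp > self.ttl:
            entries = os.listdir(path)  # the only filesystem roundtrip
            self._cache[path] = (now, entries)
        return entries

    def notify_added(self, path, name):
        """Called on publish: update the cache without touching the filesystem."""
        timestamp, entries = self._cache.get(path, (time.time(), []))
        if name not in entries:
            entries.append(name)
        self._cache[path] = (timestamp, entries)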
ls() remote
WARNING: This section is very technical
Once the above service is implemented, an additional benefit arises. Namely that if local artists can query a single source of information for assets, so can remote artists.
Practically, anyone working remotely could send a request to the same server for a listing of assets. Once an asset is chosen, his computer could then evaluate whether (1) it is already available locally or (2) it should be “made available offline”.
Due to caching, requests made to the service would be virtually free, enabling both local and remote computers to poll it for updates. That is, whenever a new asset is made available, the artist could receive a notification - such as a balloon popup in the task bar.
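To illustrate the polling idea - the endpoint URL and response format below are assumptions for the sake of example, not part of any implemented service:

import json
import time
import urllib.request

SERVICE = "http://ls-service.local:8080/assets"  # hypothetical endpoint

seen = set()
while True:
    with urllib.request.urlopen(SERVICE) as response:
        assets = json.load(response)
    for asset in assets:
        if asset["name"] not in seen:
            seen.add(asset["name"])
            print("New asset available: %s" % asset["name"])
    time.sleep(30)  # cheap, thanks to the cache on the server side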
To cover with Arvid
The pipeline is written like a character rig. It does a lot of things, some of which are complex and difficult to modify. Other things are exposed as “controllers”, known as “APIs”.
It is these APIs that enable Arvid to extend and adapt the pipeline to production needs.
- Where to find information about the pipeline, documentation
- Creating a new project, including defining its directory structure
- Creating new assets within a project
- Exposing a new application to artists
- Defining a new type of asset, a “family”
- Loading a new type of asset in for example Maya or Houdini, a “loader”
Each of these topics will be thoroughly documented as well.
Next steps
The 3 weeks of pipeline development are soon over; here’s what I have in mind for the future.
- I expect Arvid to continue developing this pipeline; there are several APIs with which to interact (plug-ins, loaders), and the source code itself is well documented and clean.
- I’d be happy to provide support for the coming months to help Arvid get along and guide him through making important changes in the code base, and to offer advice on difficult choices.
- In 6 months’ time, once the workflow has settled and you’ve discovered new areas of improvement, I would suggest you hire a pipeline TD once more for maintenance and upgrades.
The primary user interfaces introduced to you, and that I expect to be around for a long time, are Creator and Loader. Every studio has these in one form or another, either directly or indirectly, and they are critical to asset management at any scale.
The Creator is the entry point for the creation of any new asset. It is where an artist specifies what kind of asset he is working on, along with a name. I expect further options to be added to this box as time goes by, such as a Category (Character, Prop) and a thumbnail (see inspiration below).
For inspiration, here is what Colorbleed has done since I implemented these user interfaces for them a little over a year ago.
Best,
Marcus