VFX facility Animal Logic has released USD ALab, a free USD asset that the company describes as “the first real-world implementation of a complete USD production scene”. See the promo video below.
I gave it a try in the latest Houdini 19.5 and Solaris. The scene crashed randomly or was extremely laggy. I gave it a shot with both the Karma and Arnold renderers. The experience in LOPs is still awful and lots of features are still missing; just frustrating.
I’ve also opened the scene in GafferHQ. The experience is quite good and feels super snappy. The downside: USD is still not fully supported. I decided to write some scripts to convert USD shaders into native Gaffer shaders inside of Gaffer. I am planning to create a modular conversion tool to transfer shaders between Arnold, Cycles and the upcoming open-source release of MoonRay from DreamWorks Animation.
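The core of such a modular converter is a per-renderer parameter table plus a translation step through a neutral naming layer. A minimal sketch of that idea, where the parameter names and the tiny tables are illustrative assumptions, not a complete or authoritative mapping between renderers:

```python
# Minimal sketch of a modular shader-parameter converter.
# The tables below are illustrative assumptions, not a complete
# mapping between Arnold, Cycles or MoonRay shaders.

# Per-renderer tables: neutral parameter name -> renderer-specific name.
PARAM_TABLES = {
    "arnold": {"base_color": "base_color", "roughness": "specular_roughness"},
    "cycles": {"base_color": "base_color", "roughness": "roughness"},
}

def convert_params(params, source, target):
    """Translate a dict of shader parameters from one renderer's
    naming convention to another's, dropping parameters that have
    no known equivalent."""
    src, dst = PARAM_TABLES[source], PARAM_TABLES[target]
    reverse = {v: k for k, v in src.items()}  # renderer name -> neutral name
    out = {}
    for name, value in params.items():
        neutral = reverse.get(name)
        if neutral is not None and neutral in dst:
            out[dst[neutral]] = value
    return out
```

Adding a new renderer then only means adding one table, which is what makes the approach modular.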
It seems the scene has a lot of Animal Logic-specific settings, like camera projections, UDIM textures etc., issues which I have to solve. I’ve converted the EXR textures into tx mip-map files, which adds another 70 GB of data, but this should help with more efficient rendering in Arnold.
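Batch-converting a texture folder like this is easy to script around maketx, the conversion tool that ships with the Arnold SDK. A minimal sketch, keeping the maketx flags deliberately minimal (check the maketx help for tiling and colorspace options):

```python
import subprocess
from pathlib import Path

def tx_path(exr):
    """Derive the .tx filename next to the source .exr."""
    return Path(exr).with_suffix(".tx")

def convert_folder(folder, dry_run=True):
    """Run maketx on every EXR found under a folder.
    With dry_run=True the commands are only collected, not executed,
    so you can inspect them first."""
    cmds = []
    for exr in sorted(Path(folder).glob("**/*.exr")):
        cmd = ["maketx", str(exr), "-o", str(tx_path(exr))]
        cmds.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=True)
    return cmds
```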
Over the last years, I have seen more artists having trouble with larger projects. Here are the problems as I see them.
The heat problem. Many artists ignore the heat output of their hardware. They build a custom PC without spending any thought on cooling. GPU rendering runs at high frequencies, which means it needs a lot of power and produces a lot of heat. The placement of the graphics card is essential so it does not sit in the heat stream from the CPU or power supply. Your CPU and GPU should run at around 72 degrees Celsius; higher temperatures can damage your card and cause more crashes, or the system starts to throttle.
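It is worth keeping an eye on temperatures during long renders. A small sketch that polls GPU temperatures via nvidia-smi (NVIDIA cards only; the 72-degree limit is just the rule of thumb from above):

```python
import shutil
import subprocess

def gpu_temps():
    """Query GPU temperatures via nvidia-smi.
    Returns a list of temperatures in degrees Celsius, or an empty
    list if nvidia-smi is not available on this machine."""
    if shutil.which("nvidia-smi") is None:
        return []
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True).stdout
    return [int(line) for line in out.splitlines() if line.strip()]

def too_hot(temps, limit=72):
    """Return the temperatures above the suggested limit."""
    return [t for t in temps if t > limit]
```

Run it in a loop during a render and you will see throttling coming before the crashes start.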
Using CPU and GPU for rendering. Using CPU and GPU together is critical and can trigger lots of problems. When the CPU is too busy with rendering, it may be impossible for it to feed the GPU fast enough, and you can get an overall render slowdown. Also, when all units are running at full throttle, you run into heat issues more easily.
tx files. Using mip-mapped tx files speeds up your rendering and saves memory at render time. Use them!
Unoptimized geometry. Spending an extra hour optimizing your geometry can save you hours of render time, crashes, disk space etc. Use instances and render procedurals as much as possible.
Render procedurals. Use render procedurals, or a file format your renderer understands natively, without the need to translate the geometry in a pre-render process. For Houdini users: the moment you reach for a SOP Edit node in Solaris, you have left the path of wisdom. 95% of all Houdini TDs do not understand the proceduralism of Solaris.
Separate display card. Use a weaker graphics card for the display and keep your beefy GPU free for rendering; it saves tons of extra memory on your render GPU.
Too much data in cache files. Use as few attributes as possible in your cache files. Create AOVs on the fly in the shader; it saves a ton of memory.
Wrong hardware. Get proper hardware that works together. If you plan a multi-GPU system, get enough PCIe lanes and a decent CPU that is able to handle the data transfer for the GPUs. Consider buying proper workstations. Sure, you can get faster machines if you build your own, but what good is a machine that is 30% faster yet crashes all the time or battles heat issues? Anyway, the rule of thumb for the speed benefit of a multi-GPU system:
Card 1 100%
Card 2 100%
Card 3 80%
Card 4 60%
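Put as a quick calculation, the scaling factors above translate into an effective render-power estimate like this (a sketch of the rule of thumb, not a benchmark):

```python
# Rough effective-speedup estimate for a multi-GPU setup, using the
# per-card scaling factors from the rule of thumb above.
SCALING = [1.0, 1.0, 0.8, 0.6]

def effective_gpus(n):
    """Approximate number of 'full' GPUs worth of render power
    you get from n physical cards (n between 1 and 4 here)."""
    return sum(SCALING[:n])
```

So a fourth card buys you roughly 0.6 of a GPU, which is worth knowing before you pay full price for it.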
Always using the latest driver and software. Motion-graphics artists like to live on the edge and ride shitstorms on social media when something fails, instead of working smarter. The biggest mistake: installing a daily build and working with it. There is a reason why Blender and Houdini have production builds. Work with the same software version during a project and only update if you really must. Keep multiple versions of Houdini or your renderer installed if you want to try new features. Don’t update your graphics drivers, kernels, motherboard firmware or Windows version during a project; only update if the new version gives a real benefit.
Faster renders do not get your job done faster. To get quick render times, your scene needs to be optimized and your workflow or personal pipeline must work smoothly. Plus, your hardware needs to be set up professionally to minimize technical issues. But that’s where most freelancers fail: as soon as things go wrong, like hardware issues or a dirty render scene during quick iterations, the stress kicks in. If you plan your project with longer render times from the beginning, you increase your error tolerance. Also, it’s a relaxed way of working to set up a render scene and send it off for rendering over the weekend. You can be sure to get the expected result on Monday, because CPU rendering can handle bigger, unoptimized scenes and doesn’t run into heat issues etc. In the end, when you count the extra hours spent fixing hardware and software issues, you could have rendered on a slower CPU as well, without the stress. Sure, it’s fun to use the latest beta and hardware, but then also expect bugs and issues, and it’s fun to solve those problems. If you are in production, you should fall back to a safe and solid system; this way you minimize technical issues as much as possible.
On Windows, unlike Linux, programs do not show the shell they are running in. In some cases you get a pop-up window from Houdini telling you what’s wrong, but most of the time you don’t get anything. You don’t know what’s going on, especially when you are dealing with GPU rendering.
For example, with the Arnold renderer you get warnings, errors, hints and tips directly in the shell, like here:
You can make Houdini start with a shell, like on Linux, by adding this line to your houdini.env file:
If you want to use Arnold Core from the command line or a third-party DCC like GafferHQ, you need to install the Arnold SDK. It’s a zip archive; you just unpack it and you are done. But to be able to use it in Gaffer or on the command line, you need to add the unzipped folder to the Windows environment.
Look for the environment options with Windows search, like here.
Open the options window for the system environment:
Click on the Environment Variables button:
Click on New under User variables:
Add the variable name: “ARNOLD_PATH”
Add the variable value: the path to your unzipped folder, like “C:\users\xxx\arnold”
Also add a second variable, “ARNOLD_ROOT”, with the value pointing to the same location. This way any application will find Arnold Core and the other tools like maketx etc.
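A quick way to verify that the variables were picked up after reopening your shell is a small Python check. The expectation that a `bin` subfolder exists inside the SDK folder is an assumption based on the usual Arnold SDK archive layout:

```python
import os
from pathlib import Path

def check_arnold_env():
    """Report whether ARNOLD_PATH and ARNOLD_ROOT are set and point
    at a folder containing a 'bin' subdirectory (an assumption based
    on the standard Arnold SDK zip layout)."""
    report = {}
    for var in ("ARNOLD_PATH", "ARNOLD_ROOT"):
        root = os.environ.get(var)
        report[var] = bool(root and Path(root, "bin").is_dir())
    return report
```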
First a disclaimer: this is my own opinion and only mine. Many people ask me if the Apple silicon CPU/GPU is good for CGI or VFX production, and whether it is the future. First, let’s have a look at the concept of the silicon chip from Apple.
MacBooks and MacBook Pros with the M1 chip have 20+ hours of battery life, which is 3-4 times more than Intel laptops or previous MacBooks. Is the M1 so much better than the Intel or AMD chips? Hmm, maybe. Or is the battery so much better? I don’t think so; it’s still the same type of battery. What really makes it stand out is the design of the CPU and GPU. The M1 has its memory on the chip package, without the need to move data back and forth to memory on the mainboard, which already saves power. Next, the GPU cores, which also handle the graphics display, sit on the same chip. This means data doesn’t need to be transferred across the mainboard to a graphics card, which requires high frequencies to be fast; higher frequencies also need more power and cooling, so that’s the next power saving. The M1 design also has shared memory, meaning CPU and GPU use the same memory. Unlike on a regular computer, this saves the extra work of shifting data from CPU memory to GPU memory; again a power saving. Probably the biggest energy saving comes from the different CPU cores. M1 chips have two kinds: regular CPU cores like in Intel or AMD chips, and low-power CPU cores. These simpler and slower cores only need a fraction of the power of a full-featured CPU core. And that’s the key: 95% of laptop use is writing, office work like Excel sheets, watching movies or surfing the internet, tasks which the low-power cores can easily handle. That’s the main reason why the battery lasts so long. If you do raytracing all the time, your battery life will be in a similar range to an Intel laptop.
So, the concept of the M1 chips is having multiple processing units for different purposes on one chip: CPU cores, low-power CPU cores, GPU cores and AI cores (Nvidia calls its AI units Tensor cores), all sharing a unified memory. This means the 128 GB of RAM, like on the M1 Ultra chip, can be fully used by the GPU cores for graphics or heavy jobs like raytracing. That’s huge! If you want 128 GB of graphics memory with Nvidia or AMD cards, you need to invest $14k+ just for the graphics cards, plus the cost of a PC with a proper mainboard and cooling system. The downside of the concept is that it’s not flexible: you can’t upgrade the memory or the graphics card. But to be honest, how many times have you been in the store to buy more memory these days?
Overall, the concept sounds promising and I think it’s the future for now. The concept is not new: the O2 workstation from Silicon Graphics had the same design, and so did the Nintendo 64 video game console. It didn’t turn out to be successful; at the time, the speed improvements of the Intel chips were so huge that they outran any benefits of the O2 design.
In their current state, the M1 chips hold up pretty well against Intel and AMD CPUs and beat anything in the same price class. Of course, you can outrun the M1 chips with a high-end desktop and water cooling if you want to do rendering, but at what cost? Also, the M1 chip still can’t compete with a fully beefed-up Nvidia 3090 GPU in terms of rendering speed. The M1 is designed as an all-purpose workhorse, and an extremely fast one, and it has a big memory advantage. Video processing, for example, is unmatched; it profits from the shared memory, with no need to shuffle the insane amounts of data of a 4K or 8K video between memory types. At the moment, the M1 chip is a powerful alternative to Intel’s CPU dominance, and competition is good for the consumer. A workstation with the Apple silicon chips is a worthy alternative, even more so once software gets converted to native M1 applications and takes full advantage of the system. Rumour has it that Intel and AMD are working on similar architectures. The future looks bright!
The M1, M1 Pro and M1 Max share the same architecture, just with different numbers of cores and amounts of memory. The Ultra chip is basically two M1 Max chips glued together, and the data exchange between them is so fast that it works as one chip.
Apple silicon is also attractive as a server or render-farm system. The low power usage and small form factor are efficient; you don’t need big motherboards with fast bus lanes, which bump up the cost of cooling and power, a huge factor in this area.
The question remains: what is the future? Will the default PC chips stay dominant, or will ARM-architecture chip designs, like the Apple M1, take over? From a technical standpoint, the ARM approach is the most efficient way to run a computer system. Having the right CPU/GPU unit for the right job is power-efficient and cheaper. The downside: it’s very complicated to write complex software for this kind of system. If AMD comes out with a 512-core Threadripper, it would not be very power-efficient, but extremely convenient for software development, and that might triumph over power efficiency. Nevertheless, for mobile and small devices the ARM architecture will be the standard, just because the power saving is so impactful.
It will probably rule desktop machines too, because packing in a large number of full CPU cores has physical limits: extreme core counts need a large mainboard, a lot of electrical power and a lot of cooling. We have also nearly reached the physical limit of how far we can shrink a CPU, around 5 nm (nanometers). Much smaller is impossible, because the electrons start to interfere with each other and you no longer get correct information out. On the other side, new software languages and AI are improving constantly, which helps with software development for this kind of heterogeneous parallel processing. I think ARM is the future, and it will be interesting to see what kind of ARM systems the competition (Intel, Nvidia, AMD) brings us.
Trying something new: pixel art with Aseprite! A scene from Monty Python and the Holy Grail. I’ve discovered the pixel painter Aseprite. The interface looks kinda retro, but it’s a surprisingly intuitive tool and really fun to work with.
I was a little bored and started to play around with 3D fractal systems in Houdini. I used points and instancing with the Arnold GPU renderer. I’ve tried to use Solaris, but it’s very unstable with heavy geometry instancing.
I’ve reopened my old fractal landscape Houdini scene and run different fractals on it, changed the lighting, etc., which is now much more fun with the current Arnold GPU. I am really surprised by what you can squeeze out of a scene if you spend more time lighting it.
I’ve picked up an idea from my social media stream: rendering soap bubbles. For this example, I used the idea from Entagma’s tutorial, using a FLIP simulation to drive the bubble surface movement.
The FLIP simulation is quite simple and still needs fixes to get the UV distortion fully correct, but I focused on the rendering part. The idea is simple: use fluid dynamics to distort the UVs on a sphere, and a texture mapped through those UVs drives the thickness of the thin-film shader. I used the default thin-film feature of the regular Arnold Standard Surface shader. This makes the setup super simple: all you need are spheres with distorted UVs.
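The core trick, advecting UVs through a velocity field so the distorted coordinates drive the thin-film thickness, can be sketched outside Houdini. This is a toy illustration with a made-up noise-like velocity field; in the real setup the velocities come from the FLIP simulation:

```python
import math

def velocity(u, v, t):
    """Toy velocity field standing in for the FLIP simulation."""
    return (math.sin(2 * math.pi * v + t), math.cos(2 * math.pi * u + t))

def advect_uvs(uvs, t, dt=0.05):
    """Push each UV coordinate one step along the velocity field,
    wrapping into [0, 1). The distorted UVs are then used to look up
    the texture that drives the thin-film thickness."""
    out = []
    for u, v in uvs:
        du, dv = velocity(u, v, t)
        out.append(((u + du * dt) % 1.0, (v + dv * dt) % 1.0))
    return out
```

Stepping this every frame accumulates the swirling distortion you see in the bubble surface.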
The rendering from Arnold GPU is a little slow for GPU rendering, but Arnold CPU is extremely fast: the render time for HD was 12 seconds on a super slow Xeon CPU. It’s by far the fastest thin-film rendering on the CPU I’ve seen so far.
Raw rendering; I only needed an AA sample count of 1. Below is the transmission albedo, which has quite a graphic look on its own.
It almost looks like NASA’s infrared images of Jupiter.
For the next iteration I will create this kind of animated noise structure within the shader only. It should be easy to re-create the dynamics with noise fields.
The following Standard Surface shader settings were used to create a soap bubble.