I currently have to work with Softimage and Arnold, and while learning I accidentally ran into a container format with the funny .ass name. Surprisingly, the mandatory Mandelbulb renders got exciting again.

Here are some quick test renders, straight out of the renderer without any post. I obviously encourage giving the hi-res versions a go to see the actual spheres.

HI-RES

HI-RES


Before I got down to rendering the images above, I did one simple Mandelbulb image sequence, animating the depth offset to see whether it would render at all. There were about a billion spheres per frame (a rough sketch of the general idea follows the frames below):

FRAME 1

FRAME 2

FRAME 3

FRAME 4


Same setup, but with the sphere count doubled.

FRAME 5
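In case you're wondering how such a sphere cloud comes about, here's a minimal Python sketch of the standard power-8 Mandelbulb escape-time test (the White/Nylander formula): every sampled grid point that stays bounded becomes a sphere centre. All the parameters here are made up for illustration; this is just the general idea, not my actual Softimage/Arnold setup, and the grid is far coarser than the billion-sphere frames above.

```python
import numpy as np

def in_mandelbulb(c, power=8, max_iter=12, bailout=2.0):
    """Standard power-8 Mandelbulb escape-time test for a point c."""
    z = np.zeros(3)
    for _ in range(max_iter):
        x, y, w = z
        r = np.sqrt(x * x + y * y + w * w)
        if r > bailout:
            return False          # escaped: c is outside the set
        # z -> z^power + c via spherical coordinates (White/Nylander formula)
        theta = np.arccos(w / r) if r > 0 else 0.0
        phi = np.arctan2(y, x)
        rp = r ** power
        z = rp * np.array([np.sin(power * theta) * np.cos(power * phi),
                           np.sin(power * theta) * np.sin(power * phi),
                           np.cos(power * theta)]) + c
    return True                   # still bounded: place a sphere here

# Sample a coarse grid and keep the inside points as sphere centres.
n = 40                            # the frames above used a far denser grid
axis = np.linspace(-1.2, 1.2, n)
centres = [(x, y, w)
           for x in axis for y in axis for w in axis
           if in_mandelbulb(np.array([x, y, w]))]
print(len(centres), "sphere centres")
```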

The art group Cosmosys released its eleventh exhibition, titled ‘Self Portrait’, so its members had to portray themselves.

You can check it out HERE


I contributed an abstract render of my funny-looking head:

FULL VIEW 


Finally, here's a funky-looking high-poly wireframe. I recommend checking out the full-res version.

FULL VIEW 

The geometry was created with Proc3Durale, an awesome Python module by César Vonc. If you scroll down to the examples on his page, you will find some older works by me.

Some quick renderings as a reminder that I need to use this for something in the future.

This is an animation of a scene I originally did as a still for my weekly render project last week. The original design the render is based on is by Kilian Eng, and you should probably check out his awesome Tumblr page anyway. I'm writing this post to show the animation, though, and to share some thoughts on unbiased animation rendering, so if you're interested in the initial 3D recreation, visit my post here.

Using and testing unbiased rendering over the past year, I fell more and more in love with MLT (Metropolis Light Transport) for both stills and animation. In fact, it's the only mode that made the above animation possible within 11 hours of rendering.

It's hard to talk about MLT without first mentioning the more common modes, PT (Path Tracing) and BidirPT (Bidirectional Path Tracing). While PT is fast at computing samples, especially when GPU accelerated, it tends to converge more slowly than BidirPT even though it produces more samples. Personally, while I'm convinced that GPU-accelerated PT is amazing for product rendering in a realistic studio environment, it breaks down on heavy, unusual scenes more often than not, either because of the mode itself or because of the VRAM bottleneck.

Both PT modes share one downside, though: static noise. Denoising problem spots like shadows, transparency or DOF areas on a still image is not something one likes to do, but at least it's easily possible. In animation, things get very nasty: because the per-pixel sample pattern is typically deterministic, the grain stays glued in place from frame to frame instead of fluctuating like film grain.
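Here's a tiny, purely illustrative NumPy sketch of the effect (fake "renders" of a constant image, nothing engine-specific, all numbers made up): with a fixed per-pixel RNG seed, every frame carries the identical grain pattern, while varying the seed per frame at least decorrelates it over time.

```python
import numpy as np

W, H, FRAMES, SPP = 64, 64, 3, 8

def render_frame(seed):
    """Fake one PT frame: estimate a constant-radiance image with
    SPP noisy samples per pixel, driven by a per-pixel RNG."""
    img = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            # Per-pixel RNG: the stream depends only on (seed, pixel index).
            rng = np.random.default_rng((seed, y * W + x))
            img[y, x] = rng.exponential(1.0, SPP).mean()  # noisy estimate of 1.0
    return img

# Fixed seed: the grain pattern is identical in every frame ("static noise").
static = [render_frame(seed=0) for _ in range(FRAMES)]
print(np.allclose(static[0], static[1]))      # True: grain frozen in place

# Per-frame seed: grain decorrelates over time and reads like film grain.
animated = [render_frame(seed=f) for f in range(FRAMES)]
print(np.allclose(animated[0], animated[1]))  # False
```

If your renderer exposes an animated noise seed, that turns static grain into film-grain-like flicker, though it doesn't reduce the noise itself.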

Here's an example video rendered with Octane. This is by no means meant to criticize the author of that awesome video; it serves purely as a demonstration of a technical limitation that shows up in pretty much any video rendered with PT.

Look at the background at 00:19 - 00:21, the glass in the centre at 01:08 - 01:14, or the cubes at 01:25 - 01:32. As you can see, static grain sticking to certain objects is not an effect you'd like to have, especially because these modes tend to clean up light sources right away, while shadow areas take many extra samples to clean up.

These are the reasons I got excited about MLT.

To be fair, MLT can be slow in the long run: while it cleans up rather fast at first, it tends to stay slightly grainier than its counterparts when rendering at extremely high sampling rates for hours or days. On the other hand, MLT has its strengths when rendering volumes and transparencies. People often add grain in post to dirty up their clean work anyway, so why not intentionally undersample slightly with MLT? It looks very natural.

But when it comes to animation, things get a lot more interesting, because a) MLT has no static noise, and b) it distributes noise more evenly, leaving a bright HDRI sky or light sources almost as grainy as shadow or glossy areas.
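For the curious, here's a minimal Python sketch of the Metropolis idea on a toy 1D "image" (no real renderer involved; the function and parameters are invented for illustration): samples random-walk through the domain, and each mutation is accepted in proportion to relative brightness, so bright and dim regions end up sampled in proportion to their contribution rather than uniformly.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy 1D "image": one very bright spot next to a large dim region.
def f(x):
    return 10.0 * np.exp(-((x - 0.3) / 0.02) ** 2) + 0.1

N, BINS = 200_000, 50
x = rng.random()                 # current sample position in [0, 1)
hist = np.zeros(BINS)

for _ in range(N):
    # Mutate the current sample (small symmetric perturbation, wrapped).
    y = (x + rng.normal(0.0, 0.05)) % 1.0
    # Metropolis acceptance: move with probability min(1, f(y)/f(x)).
    if rng.random() < min(1.0, f(y) / f(x)):
        x = y
    hist[int(x * BINS)] += 1.0

# Samples land in proportion to brightness, so the relative error is
# spread evenly: the hot spot gets many samples, the dim region few,
# each roughly matching its share of the total image brightness.
print("sample share near the hot spot:", hist[15] / N)
print("sample share of a dim bin:     ", hist[40] / N)
```

That proportional distribution is exactly why the sky and the shadows in the animation above carry a similar relative grain level, instead of the shadows lagging far behind.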

This makes animation a lot more accessible for scenes any more complex than five objects with a glossy material on a gradient background, especially because the unbiased workflow is so different when it comes to problems like these. In a biased renderer you would put more samples into one particular material and render zillions of passes to adjust in post afterwards. In an unbiased renderer you get the whole image: what do you do if 90% of the scene cleans up easily, but the image needs 10x the samples just to clean up that stupid window or the shadow area under the table, leaving static noise in your animation? Yes, you can render material passes and denoise that particular material, but apparently hardly anyone does this, and apart from that it's about the only option you've got.
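If you do go the material-pass route, the comp step can be as simple as this hypothetical OpenCV sketch (file names and denoiser settings are placeholders, not from any real project): denoise the beauty pass once, then blend the result back in only where the problem material's mask is set.

```python
import cv2
import numpy as np

# Hypothetical inputs: the beauty pass and a material mask pass
# exported from the renderer (file names are just examples).
beauty = cv2.imread("frame_0001_beauty.png")           # 8-bit BGR
mask = cv2.imread("frame_0001_glass_mask.png", cv2.IMREAD_GRAYSCALE)

# Denoise the whole frame (non-local means), then keep the result
# only where the mask says the problem material is visible.
denoised = cv2.fastNlMeansDenoisingColored(beauty, None, 10, 10, 7, 21)

alpha = (mask.astype(np.float32) / 255.0)[..., None]   # 0..1, broadcast to BGR
out = (alpha * denoised + (1.0 - alpha) * beauty).astype(np.uint8)

cv2.imwrite("frame_0001_fixed.png", out)
```

The masking idea works with any denoiser; non-local means is just what ships with OpenCV.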

I guess I've talked enough about how important an even, non-static sample distribution is. Besides, it's really exciting to have virtual light travelling to a virtual sensor, creating virtual (natural) grain.


So let the render stuff begin! My very first Vray scene.

Enable the area light for the blue look. The specular surfaces have a high glossy subdiv value; lower it if you're on a slower machine.

Download the scene file HERE. Released under Creative Commons for educational purposes.
