Houdini 21: Like good wine (Part 1, VFX & Geo)
By Manuel Kotulla, DIGITAL PRODUCTION, 17 Nov 2025
https://digitalproduction.com/2025/11/17/houdini-21-like-good-wine-part1-vfx-geo/

Houdini 21 polishes the chaos: production-ready MPM, neural surfacing, smarter Pyro, and Vulkan viewport upgrades, all taste-tested for real-world use. (Part 1 of ???)

A clear glass cup with whipped coffee being poured into it, showcasing layers of creamy foam and brown coffee. The cup is surrounded by scattered coffee beans and a silver frother on a dark surface.

A tasting of the dynamics and geometry side (of) effects

This release is not about the number of features, but about finishing what was started: true production readiness, robustness, performance, and ease of use. It’s a version focused on quality of life. Feature sets like MPM simulations and Karma have matured like a good wine. The machine learning tools respect the artist’s skill set and have outgrown their playful experimental stage; they are actually useful in production.

A graphic titled 'R&D Priorities' featuring the logo of Houdini at the center. Below the logo are two gray boxes labeled 'Strengthen Core Technologies' and 'Enhance User Experience', with an orange button at the bottom stating 'KEEP IT ALL PROCEDURAL'. The background is black.
SideFX’s R&D priorities presented at Equinox Hive Keynote

Even though SideFX remains modest in its official statements, Houdini 21 is a massive release. For the feature-hungry among us, the highlights include a fully matured APEX, the refined and clever animation and rigging framework; a simulating Copernicus (think Substance on Dope); and, in the VFX realm, the production-ready MPM Solver. On top of that, we’re seeing machine learning tools popping up in all the right places (AI for people who don’t expect a Pixar film from a single click), faster and expanded rendering with Karma, and a healthy dose of quality-of-life improvements.

Since our editorial cat can only count to 300, we can’t tell you the exact number of new or improved features, but we’re impressed nonetheless. And soon, you will be too. To avoid overwhelming both writer and reader, we’re breaking this article into a mini-series. We’ll start with VFX & Geometry, followed by Solaris & Karma, Copernicus & Terrain, and the massive Animation & Rigging tools.

A clear glass cup filled with layers of frothy coffee and cream on a dark background. Coffee beans and sugar crystals are scattered on the surface around the cup, enhancing the rich café ambiance.
The beauty of MPM. Art by Peter Sanitra

MPM

One of the most exciting additions in H20.5 was without question the MPM Solver. The Material Point Method truly allows you to simulate a wide range of materials, from water, snow, and sand to honey, metal, and concrete, all within a single solver setup. The geometry is simply substituted by points or particles, which are then simulated. The different materials interact in a physically accurate, constraint-free way, purely based on their assigned parameters.

The initial release was already impressive, but it left users standing somewhat in the simulated rain when asking: “How do I get a proper mesh with UVs?” That exact issue has now been beautifully solved in H21. Any MPM simulation, whether rigid body or fluid, can now be meshed (as polygons or SDF volumes), including UVs, color, and other attributes, using no more than two nodes.

A nice and useful addition to this workflow is the MPM Debris node, which generates new points along fracture lines as sources for smoke, dirt, or secondary debris effects. So let’s take a look at meshing hard, fluid, and granular surfaces across a few setups and scenarios, and wrap things up with a creamy cookie drink while watching the official and very excellent MPM demo.

Surfacing MPM Simulations

Testing time! For this test setup, we’ll have a clay ball smash into a vase model, paying close attention to UV transfer and the generation of smaller fragments through the Debris Source. The easiest way to start an MPM simulation is by typing “MPM Configure” into the node search. This gives you a complete set of starter nodes right away. (Under MPM Configure, you’ll also find plenty of additional example setups for study or creative repurposing.) By the way, the MPM container on the far right controls the overall resolution of the entire simulation.

We replace the default sphere with our own model and can now assign materials directly inside the mpmSource node and tweak them to our liking. It’s genuinely fun, almost like a mini-game. Since a concrete material would be more realistic but also quite boring for a vase, we went with Chunky Snow instead. The environment comes in via an Object Merge directly into the mpmCollider SOP, ready to go. Our clay ball, the antagonist of this little simulation, isn’t a collider by definition but another mpmSource ready to be smashed, merged with the vase and given its own material behavior, Chunk Soil.

To make sure the particles can actually see and “love” each other, we need to enable Particle-Level Collision in the solver. The new Auto Sleep feature helps keep the vase passive until the collision happens, preventing it from collapsing at frame one and saving quite a bit of compute time.

Since the clay ball will be meshed as a fluid and granular surface and the vase as a rigid object, we first separate the MPM particles using a Blast node, filtering them by their respective source names.
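If you prefer to wire that split up in code, a minimal Python sketch might look like the following. It assumes the particles carry a string point attribute with the source name (called name here, which is our guess; check the geometry spreadsheet for the real attribute) and the node paths are placeholders for your own scene.

```python
import hou

# Hypothetical path: point this at your own solved MPM particles.
particles = hou.node("/obj/mpm_sim/OUT_particles")
parent = particles.parent()

# Keep only the vase particles (Blast deletes the group, so we negate).
blast_vase = parent.createNode("blast", "isolate_vase")
blast_vase.setFirstInput(particles)
blast_vase.parm("group").set("@name=vase")      # attribute-based group syntax
blast_vase.parm("negate").set(True)

# Same idea for the clay ball, headed for fluid/granular surfacing.
blast_ball = parent.createNode("blast", "isolate_clay_ball")
blast_ball.setFirstInput(particles)
blast_ball.parm("group").set("@name=clay_ball")
blast_ball.parm("negate").set(True)
```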

Hero and antagonist as differently colored mpmSources in one simulation.

For surfacing the vase, we use the mpmPostFracture node, which takes the rest-geometry and the MPM particles as input. This node essentially breaks the geometry apart “end to start,” so it needs to be fed the final frame of the simulation. After that, we choose either Voronoi or Boolean Cut as the fracture method. The latter can generate interior details, subtle irregularities on the inner faces of the fracture, that weren’t visible before, and it’s also faster to compute. We can further control the number of pieces as well as define the minimum fragment size at which new pieces should be generated.

A detailed vase with intricate patterns displayed among several gray rocks on a flat surface. To the right, a user interface is visible, showing a node-based layout for editing parameters, with sliders and options for adjustments.
UV-ready for destruction with ray-traced glory in Vulkan. Check out those beautiful UVs.

The final node in the chain is mpmDeformPieces, which transfers the newly generated fracture geometry onto the MPM particles, and just like that, the vase shards, physically convinced they’re made of snow, go flying through the scene, complete with perfectly intact UVs. For more of a muddy mess, we could have generated a liquid or granular surface instead, but we’ll save that for the clay ball. The result from the Debris Source, which lets you precisely define when and where particles are emitted along the fractures, is then passed into a POP network (including collisions from the vase and background).

Nice working UVs meet Debris Source particles and a snowy-chunky vase.

Time to get serious:

Smashing time!

Continuous Emission & Surface Tension

Layered continuous emission powered by a POP turbulence force

With this new option, you can quickly fill containers, simulate expanding materials, or layer different materials on top of each other. The option is located in the mpmSource node and spreads new particles apart using positive pressure.

A digital 3D model workspace displaying two objects: a large purple, speckled vase-like shape and a smaller spherical shape to its left. On the right, a settings panel is open, showcasing options for geometry adjustments with sliders and parameters.
The higher the expansion value, the faster the MPM source grows.

Let’s take our vase and let a thick, viscous something ooze out of it. A good chance to show how simple MPM can be: if we want things to float inside the fluid, they just need a lower density. A few cubes are generated and assigned a jelly material. Two geometries, two MPM Sources, merged and fed into the solver. That’s all it takes. Just as easily, you can mix different fluids within the same setup. And for a bit more drama, we can dive into the solver and add a POP Wind node with some turbulence.

Surface tension allows for realistic effects such as droplets, tendrils, and flowing water. H21 introduces two new ways to control surface tension for both liquid and viscous materials inside the MPM Solver. The Point Based method offers higher accuracy and stability, making it ideal for small and detailed simulations. The Grid Based method, on the other hand, is optimized for performance and handles millions of particles more efficiently, which makes it better suited for large-scale scenes. External forces and friction can be increased if objects keep moving when they shouldn’t. Otherwise, you might end up with a scene straight out of Terminator 2.

A flowchart illustrating preferences for surface tension methods. At the top, it states "I want Surface Tension." It branches into options for accuracy and speed, leading to Point Based Surface Tension and Grid Based Surface Tension.
Choose your destiny. / SideFX
Liquid MPM sim with and without surface tension
Meshing of MPM particles as a (neural) fluid surface …
… or as particles and particle-driven instances.
Breaking Geo > Sim with Jp Attribute > Separated Breakpoints > 2nd Sim Based on Breakpoints

To achieve high levels of detail without an unnecessarily large number of simulation points, the new, more precise collision detection allows you to use the fracture edges of a simulation as the source for a targeted secondary simulation. The attribute Jp (Plastic Compression/Stretching) is key here. It can be used to isolate the fractured areas and feed them into a Surfacing node set to VDB mode. This resulting volume can then serve as the source for the second simulation. And don’t forget to use the main simulation result as a collider.
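As a rough illustration, here is a small Python SOP sketch of that Jp-based isolation step. It assumes Jp sits at roughly 1.0 for undeformed material and drifts away from 1.0 where plastic compression or stretching occurs; the threshold is a made-up value, so inspect your own sim before trusting it.

```python
# Python SOP: keep only the plastically deformed (fractured) points,
# which then feed the Surfacing node in VDB mode as a source volume.
node = hou.pwd()
geo = node.geometry()

threshold = 0.02   # hypothetical: how far Jp may deviate from 1.0 and still count as "intact"

intact = []
for pt in geo.points():
    jp = pt.attribValue("Jp")          # plastic compression/stretching, as described above
    if abs(1.0 - jp) < threshold:
        intact.append(pt)

geo.deletePoints(intact)               # what survives marks the fracture regions
```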

Adding details through 2nd simulation. / SideFX

And finally to top it off, the official demo. A detailed breakdown of that beautiful cookie shot.

SideFX

Machine Learning in Dynamics

You won’t find any generative AI built natively into Houdini, but rather a growing collection of smart, locally running models, often trained by yourself, designed to simplify or speed up time-consuming tasks.

Surfacing FLIP, Vellum & Particle Simulations

Alongside Neural Point Surfacing for MPM, the new Neural Point Surface and the proven Particle Fluid Surface nodes now bring neural meshing to FLIP, Vellum, and POP simulations as well. Until now, a bunch of averaged points tried to reconstruct the surface of a material. With neural meshing, you can achieve much sharper, more detailed surfaces across both high and low frequencies. The result: surfaces that are no longer uniformly fuzzy, but crisp, structured, and temporally stable. As before, you can train your own models, but even the included presets already produce finer details. And thankfully, the whole thing is GPU-accelerated.

A computer screen displaying a 3D modeling interface. On the left side, two textured models labeled 'Average Position' and 'Neural Surface' are shown. The right side features a node graph and adjustable parameters for editing geometry.
Machine-learning neural-AI particle meshing wonder, now with UV and attribute transfer magic
A side-by-side comparison of two 3D rendering techniques. On the left, a grey geometric shape with particle fluid simulation showing volumetric effects. On the right, a similar shape rendered using neural point surface technology, with a stylized appearance.
More detail at low and high frequencies thanks to Neural Surface / SideFX

Volume Upres

The core problem behind volume up-resing: for efficiency, artists often create and approve low-resolution simulations. But once the voxel density is increased, the overall shape of the sim tends to change. With the new tools, a low-res simulation can now be upscaled while preserving its exact shape. This not only keeps previously approved versions intact, but also allows for far more iterations. A model that’s been trained on a specific motion or behavior can now upscale multiple variations of it.
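To see why brute-force re-simulation is so costly, a quick back-of-the-envelope calculation helps; the 100³ base grid below is an arbitrary example, not taken from the article’s scenes.

```python
# Voxel count grows cubically with the up-res factor, which is the cost
# a shape-preserving ML up-res avoids paying at simulation time.
base_voxels = 100 ** 3   # hypothetical low-res grid of 100 x 100 x 100
for factor in (2, 3):
    upres = base_voxels * factor ** 3
    print(f"{factor}x up-res: {upres:,} voxels ({factor ** 3}x the memory and work per step)")
```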

The Billowy Smoke recipe (or shelf tool) comes with a pretrained up-res model already integrated. Let’s start by looking at a comparison between the low-res input simulation and the 3× up-res result.

Promising results — nice details, but some artifacts remain.

Details are nicely added, while the overall shape is preserved impressively well. Caching took a bit of time, but it’s still far faster than running a full high-res simulation. The real idea, however, is that we can now reuse this up-res model for all our future Billowy Smoke setups and honestly, we probably should. So, let’s quickly modify the setup and see if we can break the upscaler.

This time, the solver runs at a voxel size of 0.05, using the 2× upscaler. The 3× model didn’t really add more detail, just extra waiting time. For a bit more fun, a Gas Wind with random direction and a collision shape were added. That collision shape, as it turns out, gives the upscaler a bit of trouble as seen below.

But when comparing the up-res result to a true high-res sim, it’s clear that the system is really good at preserving the base form. Doubling the resolution for a true high-res sim, on the other hand, changes the overall shape and eats up a ton of time, but stays artifact-free. Or, if you really want perfection, you could just train a model specifically for this type of collision.

Naturally, the simulation changes with voxel density. Higher resolution, different behavior.
With stronger motion, the up-res process tends to produce more artifacts. To be fair, though, the model wasn’t really trained for that much wind.

If you want to train your own model, this Equinox Hive talk walks you through every detail:

Zibra AI VDB compression

With this plugin you can save up to 99 percent of storage space when caching VDB simulations, which makes it perfectly suited for use in real-time engines such as Unreal Engine 5. The Zibra toolset, distributed via SideFX Labs, provides three dedicated nodes.

The first, zibravdb_compress, writes and exports .zibravdb files for use in Unreal and similar environments. The second, zibravdb_decompress, brings those files back into Houdini. And finally, zibravdb_filecache acts as a modified File Cache node that automatically handles compression, loading, and decompression for further use inside Houdini. Before diving in, you’ll need to download the model and obtain a license, potentially a free personal one if your revenue is below 100 K USD. The license management can be accessed directly from any of the three nodes.

For a quick benchmark, I used the Fireball Recipe and cached one regular VDB sequence alongside two Zibra versions with different quality settings. The original VDB sequence weighed 294 MB. The Zibra compression at a quality setting of 0.2 came in at only 5 MB, roughly 98 percent smaller. At a quality setting of 0.9, the result visually matched the original VDB almost perfectly while staying at just 36 MB, around 88 percent less.
That insanely low file size at 0.2 naturally comes at the cost of lost detail, as visible in the comparison below. Still, the results are impressive, and they open up the possibility of bringing volumetric simulations into real-time pipelines far more efficiently than before.
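The percentages quoted above follow directly from the file sizes; a quick check (values from our benchmark) looks like this:

```python
original_mb = 294.0
for label, size_mb in [("Zibra 0.2", 5.0), ("Zibra 0.9", 36.0)]:
    saving = 1.0 - size_mb / original_mb
    print(f"{label}: {size_mb:.0f} MB, roughly {saving:.0%} smaller than the VDB")
# Zibra 0.2: 5 MB, roughly 98% smaller than the VDB
# Zibra 0.9: 36 MB, roughly 88% smaller than the VDB
```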

Three distinct explosion graphics labeled 'Zibra 0.9', 'VDB', and 'Zibra 0.2' displayed on a turquoise background. Each explosion has varying intensity and smoke detail, showcasing the differences in simulation quality.
Different levels of Zibra compression vs reference VDB.

Pyro Shelf Tool Presets

Another way to get artists up to speed faster is production-ready presets: not just educational examples, but tools meant to be customized for your very real projects. Each of them comes with a ready-to-use Solaris network, fully set up for rendering straight out of the box. Even better, the Help section now includes a short guide and an explanation of the important nodes for every preset.

SideFX strikes a noticeably new tone here, aiming to flatten the learning curve rather than overwhelming newcomers with endless options (which, to be fair, they still do from time to time). These guides can be found under Documentation → What’s New → Pyro. In this section, we’ll take a closer look at three fundamentally different presets, each showcasing its own approach and creative use case.

Stylized Flame

An animated orange explosion with splashes of liquid against a black background, creating a dramatic contrast and emphasizing the vibrant colors.
SideFX

With the refined Copernicus toolset, entirely new worlds open up: stylized fire based on a classic Pyro simulation, right inside Houdini. And if needed, even live-rendered in Solaris.

To put the claim of “easy adaptability” to the test, we took the Pyro Fireball preset and gave it a toon-style makeover. Adding the VDB field “Flame” inside the solver’s output was all it took to make it work. The output from the Cop node, by the way, can be merged directly back into the scene for further style editing inside Houdini.

Ground Explosion B

An explosion with a large fireball and billowing smoke rising against a black background, surrounded by fiery debris and orange flames extending outward.
SideFX
A realistic 3D rendering of a soft cloud formation on the left, with a dark background featuring a grid. On the right, a network diagram displays interconnected nodes and lines, showcasing a procedural geometry editor interface.
Layered Pyro Sim with Render-Ready Solaris network.

This shelf tool sets up a sparse pyro simulation featuring a large-scale explosion, smoke trails, and a shockwave. For more control and efficiency, it’s actually made up of two separate simulations, layered on top of each other and interacting through their velocity fields.

Candle Flame

A red candle with a flickering flame, featuring wax dripping down its sides, against a dark background. The bright flame contrasts with the smooth red surface of the candle.
SideFX

A candle flame might not be the most exciting thing visually, but it’s one of those Pyro results you end up needing again and again. What makes this preset interesting, though, is the procedurally modeled candle that comes with it. Exploring that setup is almost more fun than the Pyro sim itself.

Thruster FX

In true H21 fashion and in the spirit of overall efficiency boosts, the new Thruster FX tool makes its debut: a setup designed to create engine and propulsion emissions with ease. It’s not just a new node, but rather a complete Recipe, a preconfigured network of nodes that some might, in hushed tones, simply call “presets.”

With a cheerful click on Thruster Exhaust in the Pyro Shelf Tools (or via Configure Thruster in the Tab menu), you’ll get a fully adaptable node tree, including a ready-to-render Solaris network. The effect itself isn’t a simulation but a cleverly layered, art-directable procedural system built around VOP nodes. Multiple pyrothrusterexhaust nodes are stacked in layers, each responsible for different components like sparks, fire, and plasma, all working together to form a surprisingly easy-to-use thruster system.

A layered procedural effect without the need to use simulations

So, what does it actually look like? And can it really be used straight out of the box, as promised? The short answer: pretty much, yes. The rendering comes surprisingly close to the viewport preview. To get it running, only a few connections inside the included Solaris network needed to be adjusted. For our small test scene, we did a bit of kitbashing inside Solaris, then added some glow and polish in Fusion.

The finished thruster. See how easy this is?

Let’s take a closer look at the node, both from the outside and under the hood. The node expects a primitive as input, and a simple circle usually does the trick. It outputs both particles and a volume containing density and temperature fields. In the General tab, you can control speed, length, and the overall shape via a spline ramp. The Exhaust section handles the color ramp and lets you tweak the underlying noise pattern, which has a strong impact on the overall form. Under the hood, the node generates a VDB from Polygon, then modifies the result with a Volume VOP and a Volume Adjust Fog node.
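If you want to poke at the same idea outside the Recipe, a stripped-down Python sketch of that chain could look like this. The node type names for Circle, VDB from Polygons, and Volume VOP are standard; everything else (paths, the actual noise shaping, which would live inside the Volume VOP) is left out, so treat this as scaffolding rather than a recreation of the tool.

```python
import hou

# Hypothetical scratch setup mirroring the described chain:
# circle -> VDB from Polygons (fog) -> Volume VOP (noise/ramp shaping).
container = hou.node("/obj").createNode("geo", "thruster_sketch")

circle = container.createNode("circle")        # the nozzle primitive the node expects

vdb = container.createNode("vdbfrompolygons")  # turns the input geometry into a VDB
vdb.setFirstInput(circle)

shape = container.createNode("volumevop")      # where the noise pattern would be built
shape.setFirstInput(vdb)
shape.setDisplayFlag(True)
shape.setRenderFlag(True)

container.layoutChildren()
```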

A software interface displaying a 3D simulation scene on the left, featuring glowing blue particles and scattered rocks, with a node graph on the right showcasing geometry settings and animation parameters.
Custom thruster via Ramp and Copy to Points

As part of the ongoing effort to simplify things and lower the learning curve, SideFX has also released a good and detailed tutorial mini series: sidefx.com/tutorials/how-to-create-thruster-fx

Car Destruction FX

SideFX

So that we can also crash the rigs built with the Car Rig SOP introduced in H20.5, Houdini 21 brings us the new Car Destruction Tools, led by the mighty RBD Car Fracture SOP, supported by the RBD Car Transform SOP. The first one takes care of fracturing and constraint creation, automatically handling the typical materials you’d expect in a vehicle: glass, metal, wood, and rubber. The RBD Car Transform SOP, similar to Transform Pieces, ensures that all pre-fractured parts are efficiently transformed based on the simulation points. You’re not limited to cars, by the way. Anything that follows the same basic logic can be blown apart. From motorcycles to helicopters, it all breaks just fine.

A 3D modeling software interface displays a wireframe model of a car, with color-coded geometry manipulation tools and nodes shown for texture adjustments, alongside various settings and parameters on the right. The background is a light blue grid.
The RBD Car Fracture SOP handles the dirty work — assigning materials, fracturing them, and wiring up the constraints.

Destruction-hungry artists will find a detailed yet easy-to-follow example scene in the SideFX Content Library, with the same visuals you might recognize from the keynote: sidefx.com/carbd-dual-car-collision/. A fitting walkthrough video can be found here:

Geometry, Viewport and other tasty QoL

Coming from its deep VFX roots, Houdini has taken quite a journey to establish its own distinct style of procedural modeling. With H21, that journey continues, extending existing nodes and adding a few genuinely useful new ones along the way. This time, even the viewport got some well-deserved love, now powered by Vulkan and capable of loading Gaussian Splats directly.

Sculpting in Time

The Sculpt SOP, introduced in H20.5 and (surprisingly) quite useful, now gets a genuinely groundbreaking new feature called Shot Sculpting, which allows time-based, keyframe-free sculpting. Originally intended as a correction tool for character animation, the node turns out to be just as handy for VFX and motion design work.

Temporal control is handled through the Shot Sculpt panel, which at first glance looks a lot like an NLE timeline and, in principle, works much the same way. Sculpting can be organized into layers that can be offset in time, faded in and out (complete with easing), muted, or adjusted in opacity. Alternatively, you can use mask_track to paint time-based attributes, which can then be passed downstream and used in other nodes, for an obvious example, as masks.
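Because mask_track ends up as a regular point attribute, anything downstream can read it. A tiny Python SOP sketch, purely illustrative (the 0.05 push distance and the fallback normal are our own choices), could use it to drive a displacement:

```python
# Python SOP: push points along their normal, weighted by the painted mask_track value.
node = hou.pwd()
geo = node.geometry()

has_normals = geo.findPointAttrib("N") is not None

for pt in geo.points():
    weight = pt.attribValue("mask_track")                 # painted in the Shot Sculpt panel
    n = pt.attribValue("N") if has_normals else (0.0, 1.0, 0.0)
    pt.setPosition(pt.position() + hou.Vector3(n) * weight * 0.05)
```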

Otherwise, the same rules apply as for the regular Sculpt SOP, whose updates we’ll take a look at next. In line with the new Shot Sculpting feature, the mask system has been reworked. Masks can still be painted manually, but can now also be loaded from an upstream float attribute, saved permanently, and blurred or sharpened as needed.

A 3D modeling software interface displaying a stylized green and gray face sculpture on the left, with various modeling tools and parameters visible. The right panel shows geometry parameters with adjustable settings.
Two Adjust Float nodes generate low- and high-frequency noise attributes — both can be loaded directly as mask inputs (shown in green) inside the Sculpt SOP.

There are also new brushes. My personal highlight, the Elastic Grab brush:

Elastic Grab / SideFX

Of course, the complexity and depth of ZBrush remain unmatched, but for many tasks, artists can now comfortably stay right inside Houdini.

Geometry Masks

There are also updates when it comes to masking. Several well-known nodes now include a Mask parameter, allowing the effect to be restricted to a painted or procedurally generated mask. Among them: Peak SOP, Soft Peak SOP, Inflate SOP, Flatten SOP, and Point Jitter SOP.

A 3D modeling interface displaying two bear figures. The left bear is rendered with a colorful texture overlay, while the right bear is shown in a wireframe format. The interface on the right includes node-based geometry options for adjustments.
Thick leg thanks to a painted mask affecting a Peak Node.
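The mask fed into those nodes is just a float point attribute, so it does not have to be painted at all. A minimal Python SOP sketch generating one procedurally (the attribute name and the height-based falloff are our own choices) could look like this:

```python
# Python SOP: write a 0..1 "mask" attribute based on point height,
# ready to be picked up by the Mask parameter of e.g. a Peak SOP.
node = hou.pwd()
geo = node.geometry()

if geo.findPointAttrib("mask") is None:
    geo.addAttrib(hou.attribType.Point, "mask", 0.0)

for pt in geo.points():
    height = pt.position()[1]
    pt.setAttribValue("mask", max(0.0, min(1.0, height)))
```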

UV Flatten from Points

The latest addition to Houdini’s already powerful UV toolset could just as well be called “UV from Voronoi” since that’s exactly what it’s based on. The node distributes random or precisely placed points across the surface and uses them to calculate clean, non-overlapping UVs. It’s primarily designed for complex, high-resolution meshes, where traditional unwrapping tends to get messy fast.

Vulkan Viewport

Now enabled by default, the new Vulkan 3D viewport offers noticeably improved lighting, ambient occlusion, shading and ray tracing with built-in denoising, and a more accurate texture display, though performance can take a hit if you push it too far. New work lights, including a fully adjustable Dome Light, Physical Sky, and three-point light setup, now serve as the default viewport lighting.

Looking toward the future, the viewport can now display Gaussian Splats directly. Since splats are essentially just point clouds, and Houdini is fundamentally point-based, this opens up a rather promising combination. The .ply file can simply be loaded via a File SOP and passed into a Bake Splat SOP for further processing. From there, you can treat and manipulate the splats just like any other geometry using the usual SOP tools. More on rendering those Splats in the upcoming section on Solaris & Karma.
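For anyone who wants to script the import, a short hedged sketch follows. The File SOP part is standard; the internal type name of the Bake Splat SOP is an assumption on our part, so check the Tab menu (or the node info) for the real name before copying this.

```python
import hou

# Hypothetical splat import: .ply in, Bake Splat for further SOP-level processing.
container = hou.node("/obj").createNode("geo", "splat_import")

ply = container.createNode("file")
ply.parm("file").set("$HIP/scans/my_scene.ply")   # placeholder path

bake = container.createNode("bakesplat")          # ASSUMED type name for the Bake Splat SOP
bake.setFirstInput(ply)
bake.setDisplayFlag(True)
bake.setRenderFlag(True)
```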

Curve Tools

The new Extract Contours SOP can generate object outlines from a camera’s perspective either directly as edges or as an edge group. Quite handy for toon-style effects.

The well-known Curve SOP now allows you to interactively split points into branches (with unique vertex numbers) or fuse them back together.

Unsubdivide

If things ever get a bit too much, this node reconstructs a lower-resolution version of the input geometry by reversing Catmull-Clark subdivision.

Unsubdivide … unsubdivides / SideFX

Conclusion after some bottles of VFX

Even though the stated (and achieved) goal of H21 was mainly polishing existing systems and adding plenty of quality-of-life improvements, it still manages to sneak in a massive load of new features along the way. And we’ve only scratched the surface here. Deep dives on Copernicus, Solaris, Karma, Rigging, and Animation are already in the works.

What’s also refreshing is the ongoing effort to flatten the learning curve through better documentation, tons of in-house tutorials, and solid example files in the Content Library. Many things have become easier, or let’s say more accessible, without losing depth, at least for those who want to go there. As always, most nodes can still be cracked open and modified at their core. Nice one, SideFX.

The (further) development of Fusion
By Manuel Kotulla, DIGITAL PRODUCTION, 2 Jun 2025
https://digitalproduction.com/2025/06/02/fusion-20_ai_workflows_node-based_vfx_insights_from_simon_hall/

Fusion, the compositing tool in and out of Resolve, has received a big update, and we asked BMD’s Simon Hall what it all means and what the future holds.

A sheep wearing a harness in video editing software.

We had the pleasure of talking to Simon Hall from Blackmagic Design about Fusion, AI and the community. Simon Hall knows post-production from both sides of the suite. Before stepping into his current role in business development at Blackmagic Design—where he focuses on post-production and supports the Resolve ecosystem across EMEA—he spent over a decade getting his hands dirty in the edit bay.

Starting as an offline editor and later working as a Smoke operator in a northern UK studio, Simon eventually moved to London to take up a training role—merging his technical know-how with a knack for sharing knowledge. His past as a freelance editor kept him close to the tools, even as his focus shifted to product strategy. Over the past ten years, he’s been a key figure in shaping DaVinci Resolve’s growth from a specialist grading tool to a full-fledged post-production platform with a massive user base.

DP: Fusion 20 marks a huge step forward, and the community has been very enthusiastic. What was the motivation behind this release, and what can users look forward to in future updates?

Simon Hall: We are a company that really does value user feedback. What the devs always want to do is take user ideas and integrate them into the software. As you can imagine, those lists tend to be very long, so we have to pick and choose what we do. Our motivation as a company is to give people the features and workflows they want. We will continue to try to get the best out of our software, because we like making people happy at the end of the day.

DP: The Integration in Resolve was a great idea – as it was to keep the Standalone Edition alive which is favoured by many professionals for performance reasons. Can we expect full feature parity (i.e. all Resolve OFX plugins available in the standalone Fusion)?

Simon Hall: Some of the open effects are obviously designed for colour, so that’s why you may not find them in Fusion Studio. But the devs try and get parity as much as they can. Over time, hopefully we’ll start to get parity between the two applications, so that the open effects available in Resolve are then also available in Fusion.


DP: When you’re developing both editions, is there an ideal use case in your mind for when users should use each version?

Simon Hall: If you’re doing things like shot clean-ups, some matte painting, some tracking, my recommended workflow is to use Fusion in Resolve.
I always tend to steer people towards Fusion Studio when you’re doing heavy VFX work: you’ve got a couple of hundred nodes, 3D models, particle generators. As soon as you start to do something quite complex, you’re going to get performance enhancements in Fusion Studio. The one thing that Fusion Studio can do that Resolve can’t is use network rendering.

DP: Both Resolve and Fusion include very useful ML tools like the Magic Mask. What’s next on your roadmap for neural-network features? Any chance we’ll see an integration with tools like ComfyUI? What is Blackmagic Design’s long-term vision for applying machine learning across the industry?

Simon Hall: I could talk about what we’re thinking, but the truth is, we don’t really know. There’s no set roadmap. That’s just kind of how things work at Blackmagic. We don’t always know what’s coming. Our approach to AI is focused on speeding up workflows, not replacing the person in the chair. That’s the goal behind all the AI we’ve built so far. Also, at the moment all of our AI processing happens locally, on your system.

We don’t send your files anywhere to train the AI.


There is one exception, and that’s the scene extension feature. It’s not in the public beta yet, partly because it uses Blackmagic Cloud. Since the system needs to understand the image, it does have to send it up to the cloud for processing. We’re still figuring out how that will work, especially in terms of using it with a cloud account, which may involve some cost. Our AI is meant to help, not take over. So when it comes to the kind of AI that generates full scenes or content, like the ComfyUI stuff, I could be wrong, but I don’t see us going in that direction.

DP: The community loves that Blackmagic hasn’t adopted a subscription model—but new features still cost money to develop. Are there any plans to introduce an upgrade-pricing scheme in future releases?

Simon Hall: Potentially in the future, we might charge for upgrades. A lot of this software development takes time, money, and people; we’ve got a growing team of developers. But I don’t think this is going to turn into a subscription. Grant himself really doesn’t like the idea of locking people out of their own work; he thinks it’s unfair. Long-term, yeah, we want to make things sustainable, but nothing’s been decided yet.

It kind of became a hot topic at NAB because Grant mentioned it in his video. What surprised us was how many people actually supported it, people who really want Resolve and Fusion to keep developing and improving. And if that means maybe paying, say, a $50 update once a year, or per version, a lot of folks were like, “Yeah, you know what? I’d be fine with that.” Especially considering we’ve never really charged for updates before.

DP: You surely know the Fusion forum WSL / Steak Underwater. Do you incorporate feedback from the user community into your ongoing development?
Simon Hall: Yes, absolutely. And yeah, great name, “We Suck Less.” The Fusion community there is full of really smart people writing macros, scripts, all sorts of things. A lot of our product specialists are on that forum. I don’t know if you know Steve Roberts; he was one of the original guys behind Eyeon. When we acquired Fusion from Eyeon, Steve came over with it, since it was basically his baby. He’s still a member of the forum and still checks in. We all do, really. I’ve even picked up scripts from there.


There’s one I use all the time; it animates to the beat of music by analysing the audio. I use it constantly, though I’ve forgotten what it’s called. So yeah, we’re definitely across that forum. And when people post feedback or feature requests there, the developers do take a look, just like we do across other forums, to see what the community is talking about and what they’re asking for.

DP: Is there one feature you’d most like to add or change for yourself?
Simon Hall: Yeah, one thing I’d really like to see in Fusion 20 is a bit more “smart assistance” when building node trees, especially in 3D scenes. Sometimes I’ll go to connect something, like a camera, and it just doesn’t work. Then I realize I’ve missed a node; maybe I forgot to add a 3D plane or something in between.

What I’d love is if Fusion could recognize that kind of situation. Like, if I try to connect two nodes and it doesn’t work, the system could say, “You’re probably trying to do this,” and automatically drop in the missing node, like a 3D merge or a plane, so the connection makes sense.

Basically, a bit of smart logic that fills in the gaps when I miss something. That would really help, especially when working quickly. So yeah, that would be my feature request: something like a smart assistant that helps you build the flow correctly when it sees what you’re trying to do.

DP: Do you have a favourite movie (or specific shot) that was created using Fusion?

Simon Hall: Oh, there are quite a few, especially if we’re talking old-school Fusion. One of my favourite shots was in the film Swordfish. It’s about hacking, with a young Hugh Jackman and John Travolta. The opening shot, where someone’s left a claymore mine and the camera moves through the explosion in what looks like a frozen moment: that whole sequence was done in Fusion. That was back in the Eyeon Fusion days, and I always thought it was a fascinating use of the tool.

The boom is at 3:45. Hold on to your hats.

In more recent films, there are a few standouts. The Martian used Fusion heavily, and Top Gun: Maverick, which has been one of my favourite films in the last few years, used it for a lot of shot cleanups. The grading on that was done in Resolve. There’s also a TV show called Bosch on Amazon Prime. It’s been one of my favourite series for a while. Fusion was used extensively on that as well, for scene extensions, keying, and shot clean-up. So yeah, those are some of the highlights for me.

DP: Thanks for your time, Simon 🙂

Fusion 20: a deep look at the core
By Manuel Kotulla, DIGITAL PRODUCTION, 19 May 2025
https://digitalproduction.com/2025/05/19/fusion-20-a-deep-look-at-the-core/

With its latest release, Blackmagic Fusion is positioning itself more aggressively than ever as a powerful and cost-effective complete package for node-based compositing and motion graphics. We check whether Fusion 20 lives up to the high expectations.

Two lion head sculptures surrounded by flames in a digital editing interface.

We first take a close look at the new functions and then test various workflows in general. As a special treat we have an interview with Simon Hall from Blackmagic Design about the development of Fusion – which you can read very soon. Stay tuned! But now, let’s start up Fusion and see what’s what.

Native support for Cryptomatte

Or the “compositor’s lifesaver” – with a cryptomatte, complex masks can be created quickly and easily from all objects or materials present in a 3D scene. The only prerequisite for this is that Cryptomattes are included when the 3D scene is rendered. Manual object selection or ID assignment is not necessary.

The Cryptomatte node itself presents these masks as extremely colourful surfaces that can be selected with a simple click and converted into an alpha channel. The mattes can be combined as required and removed again from a simple list view. Complex adjustments, such as colour grading objects that sit in the blur, can be implemented quickly with cryptomattes, although they can also reach their limits here (we will look at a workaround for this special case later in the chapter on multilayers).


USD & 3D compositing

Let’s stick with this example. Suppose we want to make the light a little more dramatic, or add particle FX or text to the room, and so on. Here, in addition to the possibility of masking things to infinity, the fantastic (real) 3D space of Fusion comes into play, which can be fed with all kinds of data from FBX to USD. Let’s take a look at the updated USD workflow.

The USD file generated by Houdini Solaris is simply dragged and dropped into Fusion and is then loaded and ready for use. Incidentally, the 3D space opens automatically as soon as a 3D object is to be displayed on one of the viewers (shortcut “1” or “2”).

To make the rear lion’s head a little brighter, we can now add a USD light and illuminate the head as desired. As we are working with real 3D data, the light is modulated correctly, which is a huge advantage over working with conventional masks. To avoid potentially disturbing influences on other objects, these are faded out using uVisibility. A newly added 3D layer acts as a light blocker to allow the light to flow gently downwards. Finally, this black and white image generated by a uRender Node is added directly over our original rendering – or better, used as a mask for a colour corrector.

3D text can be added in a similarly simple way, and parts of Fusion’s proprietary 3D system can be integrated into the USD Scene – as is the case with a Text3D Node. Particularly nice: If a Z-channel is also output via uRender, Fusion can perform a so-called depth merge, in which the merge node automatically “masks” the objects in the correct order using the Z-channel.

To round off the trip, we export the whole fun as USD, load the file back into Houdini as a sublayer, and can access the new light and the light blocker there.

What we would like to see in a future update: a USD stage manager and more Hydra delegates (aka render engines) with Fusion support.

An alternative to relighting via 3D is the relatively new relight node, which uses a rendered or calculated normal map. This is practical if you don’t have a 3D scene to hand and can do without precision.


(Real)Deep Compositing

Deep compositing is a brand-new feature in Fusion for intervening deeply in the image – and completely without 3D. Here, the pixels are also given depth values during rendering so that Fusion now knows where these pixels are located in 3D space.
The decisive advantages are mask- and Z-channel-free merging of objects and the creation of true depth-position-based masks (the above-mentioned depth merge accesses a Z-channel, a 2D representation of the depth in space), which, in contrast to the volume mask (which accesses the world position), are very precise and virtually free of edge problems. Volumes such as smoke and explosions can also be handled wonderfully, as objects can interact correctly with volumes. The new freedom comes at the price of larger data volumes and requires a render engine that supports deep data (e.g. Karma, V-Ray, Arnold, Octane, RenderMan and Redshift).

But right from the start – how do we even know that we are dealing with deep rendering? Fusion writes “deep” + colour depth after the resolution in the viewer and shows a cheerful “Z” in the channel selection (not multilayer selection).

The new node “DeeptoPoints” then shows the whole truth – as a point cloud. We do not see “haptic” 3D objects, but the distribution of individual image pixels in space.

Let’s assume we now want to mount a fireball behind the lion heads. In “normal” compositing, volumes (explosions, smoke …) are a guarantee for alpha channel problems – not so in deep compositing.

Above: The delivered explosive DeepEXR rendering (via Karma) and the Deep2Points view – the explosion has an actual volume of points, which will be very useful to us in a moment. The two deep renderings are merged using dMerge Node – and automatically know which elements should be in the foreground thanks to the 3D depth information they contain.

The fireball can even be moved in depth using dTransform and is automatically covered by the corresponding objects – or even partially covers other elements.

To bring new 2D elements into the deep space, there is the Image2Deep node, which is attached behind all possible 2D nodes such as graphics, footage and renderings. The result is then available in the deepcomp, but of course remains flat itself.
For example, texts can be perfectly integrated into the fireball. Partially concealed, without a mask and the resulting alpha problems.

The final DeepNode is called Deep2Image and brings the deepcomp into the classic 2D space for further processing or output. We have the fireball, but the right reflections are still missing. Let’s take a look at the brand new EXR multilayer workflow.

EXR Multilayer Workflow

For the greatest possible flexibility in compositing CGI renderings, other helpful render layers such as masks, reflections, depth, separate lights and world position are added alongside the actual beauty pass (“the image”), which then allow a number of far-reaching adjustments to be made in compositing.
For reasons of clarity and working speed, these passes, which can be imagined as Photoshop layers, are not rendered separately as individual file sequences, but in a multilayer EXR sequence. This is then split up in compositing, which was also possible in principle in the past, but was very cumbersome.

For Fusion 20, Blackmagic has revised the entire AOV / multilayer system and created a very fast workflow. The first major innovation is the layer dropdown in the viewer, which can be used to view all the layers contained in the EXR or PSD (shortcut: pageUp/Down) – to simplify things, we are using a rendering here that already has the fireball integrated.

At first glance, there does not appear to be an equivalent to Nuke’s shuffle node for extracting the AOVs. However, to get to the individual layers, we can simply use … any node. For the sake of clarity, we use the ChannelBool node, which can generally be used to swap image channels.
(Update with Beta 4: Blackmagic added a Node called Swizzler which acts more like the shuffle node.)

Below on the left viewer we have only extracted the effects of the AOV rimlight on the reflections (LPE rendered from Karma) using Setting > Backgroundlayer and then made them brighter using a ColourCorrector.

In order to return these to the actual image, we must first remove the rimlight present in the beauty pass, otherwise it will be calculated twice. To do this, we can use the newer Multimerge Node, which allows us to process different layers with different mathematical operations. The layer called “Difference” removes the original rimlight, while the “Add Rim” Layer adds the new processed light using Alphagain = 0 (which is the same as the Nuke plus operation).

Caution: The title “Layerlist” only refers to the sequence in the merge node, not to the EXR layers/AOVs themselves.

As the nodes can access the AOVs directly, nothing needs to be extracted for simpler operations such as a Hueshift. The colour corrector can access the layer directly, in this case the emission pass. The rest happens as usual, a merge removes the previous emission pass, a second merge adds the new, blue-coloured pass.

Further application examples: The books on the left are too shiny? Simply subtract the reflection layer (limited to the books by crypto)!

To give only the explosion a glow, the “DirectEmission” pass can be selected directly in the X-Glow node as an area mask. Last but not least, we add a camera blur using Depth of Field. Conveniently, the depth map can also be read directly from the stream here.

By the way, if you want to add new layers to the flow, you can do this with the LayerMux node. LayerRegex allows layers to be removed or renamed. The new workflow not only supports EXR, but also Photoshop layers.

What about ML?

Blackmagic has always been well positioned when it comes to machine learning tools. First and foremost the extremely practical Magic Mask, which can significantly speed up rotoscoping tasks. Version 2.0 is already integrated in Resolve 20, while Fusion still has to make do with version 1.0 in the current beta.

In the Magic Mask Node, lines are simply drawn on a reference frame over the objects to be masked and then tracked. The mask itself can be refined or blurred using the “Matte” tab. As the whole operation is quite computationally intensive, the result is best cached (by right-clicking on the node).

To whet your appetite, let’s take a look at the new version, which has been simplified and improved in terms of precision. With our woolly friend, 2 clicks instead of strokes (the sun collector is precisely recognised as a separate object) are enough to create the mask. After tracking, a temporally stable mask is ready for use.


Intellitracker

Another well-functioning machine-learning tool is the Intellitracker, which copes well with even the most jagged movements such as that of this flower in the wind – as can be seen from the movement path aka the green wild line. The Intellitracker is the new default tracker and is automatically selected as soon as the tracking node is called up.

It is usually not necessary to do more than move the tracker to the desired area and press “Track Forward then Reverse”. The tracker automatically selects the most appropriate colour channel, in this case the red channel (see bar chart below the tracker list). In this case, the text was not attached directly to the tracker. A downstream transform node, whose coordinates are linked to the tracker position, offers the flexibility to position the text anywhere in the image. The tracking path can be adjusted directly in the viewport frame by frame like a normal spline or in the Spline(Curve) Editor.


Let’s continue tracking, even without ML tools – with the Surface Tracker. This practical node is able to track intrinsically deforming objects such as clothing, newspapers or flower petals in the wind with a kind of fine grid and transfer the movement of the individual points to any graphics or text so that they follow the complex deformation. In this example, the text was only positioned on a reference frame using classic gridwarp; all other deformations are handled by the surface tracker.

For the sake of completeness, the planar tracker should also be mentioned – neither new nor ML, but tried and tested and easy to use: draw a spline around the area to be tracked, set the operation to e.g. Cornerpin, adjust to the area and define the new insert as the tracker foreground.


Multitext

One node to … surprise them all with beautiful typography. You can think of this innovation a bit like any number of classic text layers, with all the trimmings: transparencies, transitions, extensive typographical settings such as (manual) kerning, and modifiers (more on these in a few staggered lines). This was also possible in the past, but a separate node was required for each text element. The dynamic frame and circle text familiar from DTP, however, was never available before.

To quickly create procedural animations, you can of course write expressions (more on this later) and link parameters, but you can also add a modifier to the respective parameters by right-clicking and thus quickly create procedural effects such as random movement (wiggle sends its regards).

The modifiers are not just limited to the text node, but can be applied to almost all parameters in almost all nodes (and also linked to each other). For example, a master random modifier can control the opacity of different merge nodes at the same time.

The modifier “Follower”, responsible for “character-by-character” animation, is exciting for moving typographers. This allows you to animate letter by letter in opacity, colour, size, etc. – not only in 2D, but also in 3D, as Fusion has an extensive 3D text node, including bevel and extrude.


Shape Tools

Primarily intended for motion graphics, the Shape Tools are characterised by a vector-like workflow. In contrast to the other 2D systems from Fusion, they are basically resolution-independent and are only cast in pixels using a shape render node.

With the shape tools, various basic shapes and paths can be drawn, combined, duplicated, created as a grid and, of course, animated. Linking individual parameters with the above-mentioned modifiers offers great possibilities. The jitter node ensures random movements. For even more control (or chaos), the shape can be instantiated directly on a particle system.


Expressions

Many motion (and VFX) tasks can be significantly accelerated or automated with simple expressions. To access the expression editor, simply right-click on the relevant parameter field > Expression. For example, as the simplest of all possibilities, values can be changed by the pure passage of time (value “Time”) or linked and nested in complex ways. Parameters can be linked interactively using the small “+” icon.
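A few examples of what such simple expressions can look like; the syntax is written from memory and the annotations after the double dashes are not part of the expression itself, so double-check against the reference linked below:

```
sin(time / 10) * 0.5 + 0.5              -- oscillate a 0..1 slider over time
Transform1.Size * 2                     -- follow another node's parameter, doubled
Point(0.5 + sin(time / 20) * 0.1, 0.5)  -- wobble a center control horizontally
```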

You can find more examples here:
https://www.steakunderwater.com/VFXPedia/96.0.243.189/index4aa9.html?title=Simple_Expressions


Of masks and multipoly

The aim of the various ways of creating a mask is always the same – to create an alpha channel. Fusion is quite flexible in its use of masks, which are created using rectangles, ellipses, polygon splines or paint nodes. The individual masks can be easily combined, animated and attached to trackers. Individual mask points (if created by path) can be “published” and the parameters of other nodes can be linked to them. The bitmap node converts individual colour channels or the luminance of footage or graphics into masks (more precisely: into an alpha channel). Thanks to the new layer system, AOVs can now also be selected directly as masks here. Thanks to the node system, a mask can be reused or instantiated (copy>shift-v) as often as required.

A few examples:

Above: A simple vignette with ellipse mask node and strongly blurred mask edge aka softedge. In the right viewer the alpha channel of the mask.

Complex shapes are possible by linking individual mask nodes. Alternatively, the newer Multipoly tool can also be used for this, although this is limited to polygon and BSpline masks.

Quick and easy thanks to the new layer system: the blue channel of the AOV glossy transmission as a mask for a colour corrector. Connect the footage loader to the ColorCorrect node as an input AND as a mask (blue input), set the effect mask layer to the desired AOV under Settings, and select a channel under Channel if necessary.

A simple polygon mask in “DoublePoly” mode. The outer outline defines a soft edge gradient, which can be set in addition to the global “soft edge”.

Timesaver: The Multiframe option allows you to change mask points for all keyframes simultaneously – similar to Mocha’s Überkey.

Masks can of course also be drawn beautifully and even support graphic tablet pen pressure …

… and brushes such as this useful fish (available as a preset).

To track a mask, the mask centre can be linked to a tracker modifier. The actual mask path can then still be adjusted or even animated, as only the centre position of the node is actually attached to the tracker.


Particles

Fusion comes with a very powerful and intuitive native particle system. The setup is very simple: the basis for every particle FX is the pEmitter node, which as a starting point can assume various basic shapes, use 3D shapes (see the text example above) or an image input (see the crypto example above), plus the pRender node, which displays our particle system either in 3D space or as 2D pixels.

The particles created in this way can now be subjected to various forces, e.g. the extremely popular turbulence. The effect strength can be set using a 3D mask (region), probability, particle groups or particle age.

Although the particles are not truly simulated compared to Houdini, they can mimic many effects such as gravity and objects bouncing.

One of the most powerful nodes is the “replicate3D”, which can instantiate any 3D objects on the points and vary their size, rotation and position randomly.


ACES 2.0

Fusion 20 now supports ACES 2.0 & OCIO 2.4.2. Let’s take a look at what this looks like in practice.

The loaded rendering was created in ACEScg, which can be seen in the metadata (hotkey “V”, right-click and display metadata). The display is too dark without display transform.

The small raster icon (1) leads to the Display LUT/Transform menu. Here we select ACES Transform (2) and edit (3) the input & output transform (4) depending on the pipeline, here ACEScg in, sRGB Gamma 2.2 out.

Now we see the image correctly displayed in the viewers (but only displayed, the image itself is not sRGB!) and can continue to perform all operations in ACEScg.
For the final output as e.g. ProRes 4444, however, we have to apply these values via ACES Transform Node.

If you would like to work with AGX / Filmic and co, proceed as follows: Again via the ViewLut menu (1), this time select OCIO Display (2) and edit with the following settings (4). The conversion for the final output is carried out this time via the OCIO Colourspace or Ocio Display (!) node, with the same values as in the ViewLut. The necessary config comes this time from: https://github.com/Joegenco/PixelManager

So much for the application in Fusion. If you want to read more about ACES – or argue about it – you can do the former here: https://chrisbrejon.com/cg-cinematography/chapter-1-5-academy-color-encoding-system-aces


Vector Warp

The new vector tools are based on analysing the movement of the image pixels. Fusion knows what happens to which pixel and can therefore apply complex deformations or retouching quickly and usefully. Areas of application include digital make-up or the insertion of new objects.

The basic prerequisite is motion vectors, which can either be supplied externally or generated via an Optical Flow node (no extra caching needed!).

To simply place new objects on the background, the new VectorWarp node in “Generate Warp Map” mode is sufficient. The deformed result is then placed back over the background using a Merge node.

For more complex retouching, the VectorWarp node can be set to “Unwarp”, which “freezes” the object in time. Objects can then be retouched, or new ones added, using the Paint node. The result then flows into a second VectorWarp node, which brings the image back into motion using “Generate Warp Map”.
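
Purely as an illustration of the node order, not as copy-paste code: the following Console sketch mirrors the unwarp/paint/re-warp chain. Since the VectorWarp node is new in Fusion 20, its scripting ID and the "Mode" input and values used here are hypothetical; only OpticalFlow and Paint are long-established tool IDs (the real identifiers can be checked via comp.GetToolList() and the tools' input lists).

```python
# Sketch of the retouch chain: OpticalFlow -> VectorWarp (Unwarp) -> Paint -> VectorWarp.
flow = comp.AddTool("OpticalFlow")             # generates the motion vectors
unwarp = comp.AddTool("VectorWarp")            # hypothetical FuID for the new node
unwarp.SetInput("Mode", "Unwarp")              # hypothetical input/value: freeze the object in time
paint = comp.AddTool("Paint")                  # retouch on the frozen frame
rewarp = comp.AddTool("VectorWarp")            # hypothetical FuID
rewarp.SetInput("Mode", "Generate Warp Map")   # hypothetical: bring the fix back into motion
unwarp.Input = flow.Output
paint.Input = unwarp.Output
rewarp.Input = paint.Output
```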

Curve Warp

The warper, previously only available in Fusion inside Resolve, can now also be controlled using curves. Simply draw a line or an outline, set limits if necessary and bring the object into the desired shape.

Node versioning

A very small but extremely practical function is the somewhat hidden versioning of a node. Up to six different settings can be saved per node to quickly try out different looks. These are not presets, but they can be turned into presets by right-clicking on the node name and choosing Save Settings.
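
The preset part of this workflow can also be scripted: Tool.SaveSettings() and Tool.LoadSettings() write and read a .setting file, the scripted counterpart to the right-click menu. A minimal Console sketch follows; the file path is just an example.

```python
# Sketch: save the active node's current state as a reusable .setting preset.
tool = comp.ActiveTool                                  # select the node first
tool.SaveSettings("C:/FusionPresets/MyLook.setting")    # writes the .setting file
# ... later, or on another node of the same type:
tool.LoadSettings("C:/FusionPresets/MyLook.setting")
```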

Performance optimisation

Fusion offers various optimisation options to keep performance high despite complex effects and comps:

1. The render area selection (region of interest) limits the calculation to a selectable part of the image
2. Proxy mode reduces the preview resolution (click on the PRX button on the right for further options)
3. High-quality preview and motion blur can be switched off for faster previews
4. Framestep skips frames during playback (only every nth frame is shown), leading to faster previews (right-click on the play icon for more options)
5. Cache to disc renders and saves the flow up to the selected node (right-click on the desired node)


Reactor

Reactor, which can be downloaded free of charge from the unofficial-official Fusion forum, is the counterpart to Nukepedia and lets you install all kinds of macros, scripts and fuses – commonly known as user-created plug-ins – directly from within Fusion. From exponential glows and edge blurs to complete mograph solutions (Krokodove!), everything is included – among them the Nuke2Fusion project, which adapts shortcuts and settings to Nuke as far as possible.

Installation and instructions: https://www.steakunderwater.com/wesuckless/viewtopic.php?t=2159

Top nodes to try:
X-Glow for wonderful exponential glow
FTL tools for modular lens flares
Krokodove, which adds many motion graphics nodes to Fusion
OIDN denoiser, which brings Intel's Open Image Denoise directly into Fusion

Last but not least, and for Fusion inside Resolve only: the option you have been waiting for since the integration into Resolve. You can now display the grading applied on the colour page directly in Fusion and set the start frame count independently of the footage.


Fusion integrated in Resolve (Studio) vs. Fusion Studio standalone

A quick look at the available variants of Fusion – the standalone is (still) only available as Studio and costs just €355. However, for that money you not only get Fusion, but also Resolve Studio. Or vice versa.
The free Resolve version has Fusion integrated, but has to do without a few really practical Studio OpenFX such as Lens Blur and Temporal Denoise, and the new Neural Engine FX such as Magic Mask II. For this reason, the Studio version is highly recommended, especially because it is a perpetual licence. No subscription. For Resolve and Fusion. As you can see, it's worth it.

If you don’t need the other Resolve tools for your current task, it’s better to use the standalone version for performance reasons – it’s simply faster and more flexible, as it doesn’t carry the Resolve overhead, and it also offers network rendering (with unlimited render clients). By the way: in pre-Blackmagic times, Fusion alone cost around $2,500 …

Conclusion

Fusion offers a powerful complete package for compositing and visual effects, plus a lot of core power for motion graphics, if you can get to grips with the node system and do without direct integration of Adobe Illustrator files. Working with the programme is fun and fast. The price is unbeatable – and Resolve Free is, well, free. Just give it a try.

The beta is available immediately and can be downloaded from the Blackmagic website. A Resolve or Fusion dongle or licence key is required for operation; this is available as a one-off purchase for €355 – as a perpetual licence.

The post Fusion 20: a deep look at the core first appeared on DIGITAL PRODUCTION and was written by Manuel Kotulla.

Blackmagic Fusion 20 https://digitalproduction.com/2025/04/10/blackmagic-fusion-20/ Thu, 10 Apr 2025 09:10:51 +0000 https://digitalproduction.com/?p=164856 Screenshot of a 3D modeling software interface displaying a dark scene with illuminated structures, navigation controls at the bottom, and light settings on the right panel.

At NAB, Blackmagic presented what is probably the most relevant update for Fusion since the acquisition of Eyeon. But what exactly is in it?

The post Blackmagic Fusion 20 first appeared on DIGITAL PRODUCTION and was written by Manuel Kotulla.

Screenshot of a 3D modeling software interface displaying a dark scene with illuminated structures, navigation controls at the bottom, and light settings on the right panel.

The compositing tool, which is integrated into DaVinci Resolve or available as a standalone, is positioning itself more aggressively than ever as a cost-effective alternative between Nuke and After Effects – a one-stop shop for VFX and motion graphics. Whether integrated or standalone, the feature set is basically the same. Both versions bring exciting features that have been eagerly awaited for years.

Real deep image compositing

Blackmagic has been advertising on its website for years that deep pixel compositing is possible with Fusion, but what it really meant were the (admittedly extensive) possibilities of reading a world position pass and using it for all kinds of effects. Version 20 finally introduces real deep compositing, very similar to Nuke's well-known toolset. This allows the depth data stored in the pixels to be read out and, for example, overlapping compositing or depth-based holdouts to be created without masks. Deep to Points in Fusion's 3D space. Shuffle something - th...




The post Blackmagic Fusion 20 first appeared on DIGITAL PRODUCTION and was written by Manuel Kotulla.

A beastly good choice https://digitalproduction.com/2024/08/02/a-beastly-good-choice/ Fri, 02 Aug 2024 06:00:00 +0000 https://digitalproduction.com/?p=144524 A cheerful animated orange cat with large green eyes and a wide, friendly smile. The background is a soft, gradient purple, enhancing the cat's vibrant colors and playful expression.

The in-depth investigation of a FullCGI production with Houdini KineFX, Grooming and V-ray in Solaris.

The post A beastly good choice first appeared on DIGITAL PRODUCTION and was written by Manuel Kotulla.

]]>
A cheerful animated orange cat with large green eyes and a wide, friendly smile. The background is a soft, gradient purple, enhancing the cat's vibrant colors and playful expression.

Who has no hands and cares about our health? Our pets, of course. Yes, really. Our intestinal health is the biggest concern of our loved ones (furry flatmates). Following this idea from Serviceplan Health and Life, the TVC was created for the Burda Foundation's current bowel cancer prevention campaign. The task "in a nutshell": How can a dog and a cat engage in a loving dialogue, form words without the appropriate biological prerequisites, express human emotions and still remain animals?

After initially examining possible implementation routes and styles and collecting references (from strange things from the depths of the internet to Disney remakes), the path of real-looking animals, made likeable by their character-driven acting, crystallised: a somewhat grumpy older gentleman of a dog lives together with a charming cat, masterfully voiced by Jürgen Prochnow and Katja Burkard, rounded off by Sky Du Mont as the voice-over a...




The post A beastly good choice first appeared on DIGITAL PRODUCTION and was written by Manuel Kotulla.
