Many three.js applications render their 3D objects directly to the screen. Sometimes, however, you want to apply one or more graphical
effects like Depth-Of-Field, Bloom, Film Grain or various types of Anti-aliasing. Post-processing is a widely used approach
to implement such effects. First, the scene is rendered to a render target which represents a buffer in the video card's memory.
In the next step, one or more post-processing passes apply filters and effects to the image buffer before it is eventually rendered to
the screen.
</p>
<p>
three.js provides a complete post-processing solution via [page:EffectComposer] to implement such a workflow.
</p>
<h2>Workflow</h2>
<p>
The first step in the process is to import all necessary files from the examples directory. This guide assumes you are using the official
[link:https://www.npmjs.com/package/three npm package] of three.js. For the basic demo in this guide, we need the following files.
</p>
<code>
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { GlitchPass } from 'three/examples/jsm/postprocessing/GlitchPass.js';
</code>
<p>
After all files are successfully imported, we can create our composer by passing in an instance of [page:WebGLRenderer].
</p>
<code>
const composer = new EffectComposer( renderer );
</code>
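<p>
The composer manages its own internal render targets, so it has to be told about size changes alongside the renderer. The sketch below illustrates this with a hypothetical `resizeTo` helper and stub objects so it can run standalone; in a real application you would pass the actual [page:WebGLRenderer] and [page:EffectComposer] instances from a 'resize' event listener.
</p>

```javascript
// Minimal sketch: keep the renderer and the composer sized to the
// drawing surface. `resizeTo` is a hypothetical helper, not a three.js API.
function resizeTo( width, height, renderer, composer ) {

	renderer.setSize( width, height );
	composer.setSize( width, height );

}

// Stub object standing in for both renderer and composer, so the
// sketch is self-contained. A real app would instead register:
// window.addEventListener( 'resize', () =>
//   resizeTo( window.innerWidth, window.innerHeight, renderer, composer ) );
const calls = [];
const stub = { setSize: ( w, h ) => calls.push( [ w, h ] ) };

resizeTo( 800, 600, stub, stub );
```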
<p>
When using a composer, it's necessary to change the application's animation loop. Instead of calling the render method of
[page:WebGLRenderer], we now use the respective counterpart of [page:EffectComposer].
</p>
<code>
function animate() {

	requestAnimationFrame( animate );
	composer.render();

}
</code>
<p>
Our composer is now ready, so we can configure the chain of post-processing passes. These passes are responsible for creating
the final visual output of the application. They are processed in order of their addition/insertion. In our example, the instance of *RenderPass*
is executed first, then the instance of *GlitchPass*. The last enabled pass in the chain is automatically rendered to the screen. The setup
of the passes looks like this:
</p>
<code>
const renderPass = new RenderPass( scene, camera );
composer.addPass( renderPass );

const glitchPass = new GlitchPass();
composer.addPass( glitchPass );
</code>
<p>
*RenderPass* is normally placed at the beginning of the chain so it can provide the rendered scene as input for the next post-processing step. In our case,
*GlitchPass* uses this image data to apply a wild glitch effect. Check out this [link:https://threejs.org/examples/webgl_postprocessing_glitch live example]
to see it in action.
</p>
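<p>
The ordering rules above can be illustrated with a toy model of the pass chain: passes run in insertion order, each one reads the previous pass's output, and only the last enabled pass writes to the screen. This is a deliberately simplified sketch for illustration, not the real [page:EffectComposer] implementation.
</p>

```javascript
// Toy stand-in for EffectComposer's pass chain. Real passes render into
// GPU buffers; here each "pass" just transforms a string so the data
// flow is easy to follow.
class ToyComposer {

	constructor() {

		this.passes = [];

	}

	addPass( pass ) {

		this.passes.push( pass );

	}

	render( input ) {

		// Disabled passes are skipped entirely.
		const enabled = this.passes.filter( ( p ) => p.enabled !== false );
		let buffer = input;

		enabled.forEach( ( pass, i ) => {

			// Only the last enabled pass targets the screen.
			const toScreen = i === enabled.length - 1;
			buffer = pass.render( buffer, toScreen );

		} );

		return buffer;

	}

}

const toy = new ToyComposer();
toy.addPass( { render: ( img ) => img + ' > rendered' } );
toy.addPass( { enabled: false, render: ( img ) => img + ' > skipped' } );
toy.addPass( { render: ( img, toScreen ) => img + ( toScreen ? ' > glitched to screen' : '' ) } );

console.log( toy.render( 'scene' ) ); // → 'scene > rendered > glitched to screen'
```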
<h2>Built-in Passes</h2>
<p>
You can use a wide range of pre-defined post-processing passes provided by the engine. They are located in the