stage
Links the view to a following view stage that postprocesses the rendered view to add further effects. This way a view rendering chain can be defined for complex effects such as shadow mapping.
Type:
VIEW*
Remarks:
- If the following stage has no PROCESS_SCREEN flag set, the view renders into a target bitmap rather than on the screen. If the view has no render target (bmap) of its own, a standard render target bitmap is used. A8.23 The standard render target is automatically generated in the size of the engine window, and adapts to this size when the resolution is changed. After rendering, the global render_target pointer is set to the target bitmap, making it available to the following stage through the TargetMap texture parameter (see the first sketch after the remarks).
- If a view renders into a render target, the content from the screen or from a previous stage is not automatically copied into the target bitmap before rendering. This must be kept in mind when the view postprocesses an existing scene, for instance by blurring the stencil buffer content on the screen. In such a case the resulting image must be composed from the content of the previous stage (TargetMap) blended with the new content (e.g. StencilMap).
- The following stage does not need the SHOW flag for rendering. To disable a postprocessing stage, it is sufficient to set stage to NULL.
- The following stage either renders the whole scene again (PROCESS_TARGET not
set), or only the TargetMap (PROCESS_TARGET set) for
2D postprocessing effects. You can use pixel shaders for 2D postprocessing,
but not vertex shaders.
- An arbitrary number of stages can be linked together for a view rendering chain (see the second sketch after the remarks). For instance, shadow mapping requires a view rendering from a light source into a depth map, followed by a view that renders the shadow map using the depth map as a source map, followed by one or two postprocessing stages for blurring, and finally by a view that renders the scene combined with the blurred shadow map.
- Depending on the complexity of the stage shader and the size of the view, postprocessing can noticeably reduce the frame rate. However, in some cases postprocessing can even be used to increase the frame rate, especially when complex effects and shaders are used. For this, the first view renders only into the z buffer, and the stage view then renders the full scene with shaders, thus reducing the number of shaded pixels through early z-buffer clipping.
- View entities are rendered after all views, and thus are not affected by postprocessing stages.
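A minimal sketch of assigning an own render target bitmap to a view instead of relying on the automatically generated one; it reuses the viewEmboss postprocessing view from the example below, and the bitmap size is an arbitrary choice:

function main()
{
   // ... level loading etc. ...
   // give the camera view a render target of its own
   camera.bmap = bmap_createblack(screen_size.x, screen_size.y, 32);
   camera.stage = viewEmboss; // after rendering, render_target points to camera.bmap,
                              // and the stage reads it through the TargetMap parameter
}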
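An arbitrary chain is built by simply linking the stage pointers of consecutive views, as in this sketch with two assumed postprocessing passes; mtlBlurH and mtlBlurV are placeholder materials that would be defined like mtlEmboss in the example below:

VIEW* viewBlurH = { material = mtlBlurH; flags = CHILD | PROCESS_TARGET; } // 1st postprocessing pass
VIEW* viewBlurV = { material = mtlBlurV; flags = CHILD | PROCESS_TARGET; } // 2nd postprocessing pass
...
camera.stage = viewBlurH;    // the camera renders into a target bitmap
viewBlurH.stage = viewBlurV; // viewBlurH postprocesses it and hands its result on
...
camera.stage = NULL;         // setting stage to NULL switches the whole chain off again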
Edition:
C
LC
Example (lite-C):
MATERIAL* mtlEmboss = // a postprocessing material
{
effect = "
Texture TargetMap;
sampler2D smpSource = sampler_state { texture = <TargetMap>; };
float4 vecViewPort; // contains viewport pixel size in zw components
float4 embossPS( float2 Tex : TEXCOORD0 ) : COLOR0
{
float4 Color = float4( 1.0, 0.5, 0.5, 0.5 );
Color -= tex2D( smpSource, Tex.xy-vecViewPort.zw)*2.0;
Color += tex2D( smpSource, Tex.xy+vecViewPort.zw)*2.0;
Color.rgb = (Color.r+Color.g+Color.b)*0.333;
return Color;
}
technique emboss { pass one { PixelShader = compile ps_2_0 embossPS(); } }
technique fallback { pass one { } }
";
}
VIEW* viewEmboss = { material = mtlEmboss; flags = CHILD | PROCESS_TARGET; } // a postprocessing view
...
camera.stage = viewEmboss; // enable postprocessing for the camera view
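The shader above works purely in 2D: it samples the previous stage (TargetMap) at two positions shifted by one pixel in opposite diagonal directions, adds their scaled difference to a gray base color, and converts the result to gray, which produces the typical emboss relief. The second, empty technique is intended as a fallback for hardware without ps_2_0 support.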
See also:
VIEW, view.material, view.bmap, PROCESS_SCREEN / PROCESS_TARGET, render_target, render_stencil