This tutorial describes the implementation of a “halftone” filter as a Lua script for OBS. Such a filter appears in the list of filters and can be added as part of a filter chain to any video source. As an effect, the filter mimics the halftone technique used in printing, where a reduced set of ink colors is arranged in patterns. It is based on dithering with a carefully designed texture.

First part - 4 Shades of Grey

The tutorial is divided into two parts. In this first part, we create a minimal script that implements a simple rendering effect. Such a script with its effect file can be easily re-used as a starting point for new projects.

Script creation

Similarly to the Source Shake tutorial, create a script file named filter-halftone.lua with this content:

    obs = obslua

    -- Returns the description displayed in the Scripts window
    function script_description()
      return [[Halftone Filter
    This Lua script adds a video filter named Halftone. The filter can be added
    to a video source to reduce the number of colors of the input picture. It reproduces
    the style of a magnified printed picture.]]
    end

Add the new script in the Scripts window; the description should appear (with no properties).

Registering a first source info structure

Sources are supported in Lua through the source_info structure. It is defined as a regular Lua table containing a set of mandatory keys referencing values and functions that mimic a subset of the C obs_source_info structure.

The source_info structure is registered using obs_register_source. OBS will use the structure to create the internal data and settings when the filter is added to an existing video source. A filter has its own independent set of properties and data settings, i.e. an instance of such settings exists for each filter instance. OBS manages the filter creation, destruction and the persistent storage of its settings.

Let’s create the source info structure for a video filter with a minimal set of values and functions. It is typically registered in script_load:

    -- Called on script startup
    function script_load(settings)
      obs.obs_register_source(source_info)
    end

    -- Definition of the global variable containing the source_info structure
    source_info = {}
    source_info.id = 'filter-halftone'              -- Unique string identifier of the source type
    source_info.type = obs.OBS_SOURCE_TYPE_FILTER   -- INPUT or FILTER or TRANSITION
    source_info.output_flags = obs.OBS_SOURCE_VIDEO -- Combination of VIDEO/AUDIO/ASYNC/etc

    -- Returns the name displayed in the list of filters
    source_info.get_name = function()
      return "Halftone"
    end

Add the code, refresh the script, then choose a source (here a colorful picture of the VR game Anceder) and display the Filter dialog window for the source (e.g. Filters in the context menu of the source). Click on + and the name Halftone should appear in the list of filters:

filter halftone list

Add the new Halftone filter. For now the filter has no effect because the source_info.create function is not defined.

Effect compilation and rendering

Lua would not be fast enough for video processing, so the filter is based on GPU computation. A GPU can be programmed through “shaders” compiled by the graphics device driver, written in GLSL for OpenGL or in HLSL in the MS Windows world. OBS relies on HLSL and expects shader code in so-called effect files.

An effect can be compiled from a file with gs_effect_create_from_file or from a string with gs_effect_create, and is destroyed with gs_effect_destroy. Like other graphics functions in OBS, manipulating effects needs to occur in the graphics context. This is ensured by calling obs_enter_graphics and then obs_leave_graphics when everything is done.

The compilation is typically implemented in the source_info.create function and resources are released in the source_info.destroy function. source_info.create returns a Lua table containing custom data used for the filter instance. Most functions of the source_info structure will receive this Lua table as an argument, so this is where you want to store any custom data needed by the filter. Please note that if source_info.create returns nil, then the filter initialization is considered failed (and logged accordingly).

Add this code to the Lua script:

    -- Creates the implementation data for the source
    source_info.create = function(settings, source)
      -- Initializes the custom data table
      local data = {}
      data.source = source -- Keeps a reference to this filter as a source object
      data.width = 1       -- Dummy value during initialization phase
      data.height = 1      -- Dummy value during initialization phase

      -- Compiles the effect
      obs.obs_enter_graphics()
      local effect_file_path = script_path() .. 'filter-halftone.effect.hlsl'
      data.effect = obs.gs_effect_create_from_file(effect_file_path, nil)
      obs.obs_leave_graphics()

      -- Calls the destroy function if the effect was not compiled properly
      if data.effect == nil then
        obs.blog(obs.LOG_ERROR, "Effect compilation failed for " .. effect_file_path)
        source_info.destroy(data)
        return nil
      end

      return data
    end

    -- Destroys and releases resources linked to the custom data
    source_info.destroy = function(data)
      if data.effect ~= nil then
        obs.obs_enter_graphics()
        obs.gs_effect_destroy(data.effect)
        data.effect = nil
        obs.obs_leave_graphics()
      end
    end

The effect file filter-halftone.effect.hlsl is mentioned in the code; it will be defined in the next section. Please note that the source argument of the function source_info.create is a reference to the current instance of the filter as a source object (almost everything is a source in OBS). This reference, as well as the variables width and height, initialized with dummy values, are used in the rendering function.

Namely, the function source_info.video_render is called each frame to render the output of the filter in the graphics context (no need to call obs_enter_graphics). To render an effect, first obs_source_process_filter_begin is called, then the effect parameters can be set, and then obs_source_process_filter_end is called to draw.

Determining the width and height to pass to obs_source_process_filter_end is somewhat tricky in OBS, because the filter is itself in a chain of filters, where the resolution could theoretically change at any stage. The usual method is to retrieve the “parent source” of the filter with obs_filter_get_parent, i.e. “the source being filtered”, and then to use the so far undocumented functions obs_source_get_base_width and obs_source_get_base_height. Note that some filters reference the next source in the chain using obs_filter_get_target, not the source being filtered (it may make a difference depending on the use case).

Two additional functions, source_info.get_width and source_info.get_height, must also be defined to provide the values to OBS whenever necessary. These functions re-use the values determined in source_info.video_render.

The effect rendering code looks like:

    -- Returns the width of the source
    source_info.get_width = function(data)
      return data.width
    end

    -- Returns the height of the source
    source_info.get_height = function(data)
      return data.height
    end

    -- Called when rendering the source with the graphics subsystem
    source_info.video_render = function(data)
      local parent = obs.obs_filter_get_parent(data.source)
      data.width = obs.obs_source_get_base_width(parent)
      data.height = obs.obs_source_get_base_height(parent)

      obs.obs_source_process_filter_begin(data.source, obs.GS_RGBA, obs.OBS_NO_DIRECT_RENDERING)

      -- Effect parameters initialization goes here

      obs.obs_source_process_filter_end(data.source, data.effect, data.width, data.height)
    end

Add the code blocks to the Lua script, but there is no need to reload for now; the effect file is still missing.

Simple HLSL luminance effect file

The Lua code is in place, now create a new file called filter-halftone.effect.hlsl in the same directory as the Lua script.

Warning: the extension .hlsl is chosen such that a code editor or an IDE recognizes directly that it is HLSL code. As the parser of OBS may miss unbalanced brackets or parentheses (and say nothing about it in the logs), it is very important to have syntax checking in the editor.

An effect file follows a strict syntax and structure. It is a mix of static data definitions and code. These constraints are necessary to allow the compilation for the GPU and the massively parallel execution. The various parts are described below.

Our effect file starts with macro definitions that will allow writing HLSL-compliant code instead of the effect dialect understood by OBS, in order to support a full syntax check in the IDE. Indeed, the keywords sampler_state and texture2d are specific to effects in OBS. Using such macros is of course not mandatory (and not so common looking at other OBS effect files):

    // OBS-specific syntax adaptation to HLSL standard to avoid errors reported by the code editor
    #define SamplerState sampler_state
    #define Texture2D texture2d

Then the two mandatory uniform parameters required by OBS are defined. The values of these parameters are set transparently by OBS: image receives the input picture of the source being filtered, and ViewProj the “View-Projection Matrix” used to compute the screen coordinates where the filtered picture will be drawn:

    // Uniform variables set by OBS (required)
    uniform float4x4 ViewProj; // View-projection matrix used in the vertex shader
    uniform Texture2D image;   // Texture containing the source picture

The next definition is a “sampler state”. It defines how to sample colors from pixels in a texture, i.e. how to interpolate colors between two pixels (Linear) or just take the nearest neighbor (Point), and how to behave when the pixel coordinates are outside of the texture, i.e. take the pixels from the nearest edge (Clamp), wrap around (Wrap) or mirror the texture over the edges (Mirror). The supported values for sampler states are documented in the OBS API reference.

We define a simple linear clamp sampler state that will be used in the pixel shader:

    // Interpolation method and wrap mode for sampling a texture
    SamplerState linear_clamp
    {
        Filter    = Linear;     // Anisotropy / Point / Linear
        AddressU  = Clamp;      // Wrap / Clamp / Mirror / Border / MirrorOnce
        AddressV  = Clamp;      // Wrap / Clamp / Mirror / Border / MirrorOnce
        BorderColor = 00000000; // Used only with Border edges (optional)
    };

After that, two very specific data structures are defined to specify which parameters are given to the vertex shader, and which ones are passed from the vertex shader to the pixel shader. In this tutorial these structures are artificially separated for better understanding and more generality. In most effects a single structure is used for both.

The structures require a semantics identifier for each parameter, giving the intended use of the parameter. Supported semantics are documented in the OBS API reference. Semantics are necessary for the graphics pipeline of the GPU:

    // Data type of the input of the vertex shader
    struct vertex_data
    {
        float4 pos : POSITION;  // Homogeneous space coordinates XYZW
        float2 uv  : TEXCOORD0; // UV coordinates in the source picture
    };

    // Data type of the output returned by the vertex shader, and used as input
    // for the pixel shader after interpolation for each pixel
    struct pixel_data
    {
        float4 pos : POSITION;  // Homogeneous screen coordinates XYZW
        float2 uv  : TEXCOORD0; // UV coordinates in the source picture
    };
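For reference, a sketch of the merged form commonly seen in other effect files (the name vert_inout is arbitrary), used both as vertex shader output and pixel shader input:

    struct vert_inout
    {
        float4 pos : POSITION;  // Vertex coordinates, then screen coordinates
        float2 uv  : TEXCOORD0; // UV coordinates in the source picture
    };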

Before going further, some words about the classical 3D rendering pipeline. Roughly speaking, it comprises these main steps:

  • The application, here OBS, sets up all data structures (textures, data parameters, arrays, etc) necessary for the computation, and then feeds the 3D coordinates of triangles into the vertex shader together with various vertex attributes, such as the UV coordinates of the vertex mapped to a texture or a color assigned to this vertex, all part of the vertex_data structure.
  • Each time a vertex_data structure is available in the GPU, the vertex shader is called and returns a pixel_data structure corresponding to the pixel under the vertex as seen on the screen. During this call, the vertex shader transforms the 3D world coordinates of the vertex into screen coordinates, and transforms the other vertex attributes if necessary.
  • Then many complex steps are performed by the GPU to determine which parts of the surface of each triangle are visible on the screen, down to each visible pixel.
  • Each time the GPU determines that a pixel of a triangle is visible, it calls the pixel shader with a pixel_data structure corresponding to the position of the pixel as argument. In order to provide position-specific values for each pixel of the triangle, the GPU interpolates between the values of the 3 structures returned by the vertex shader for the 3 vertices of the triangle.

To render sources, OBS follows the same model and simply renders quads (2 triangles) on the screen, where each vertex has the UV mapping of the source picture as attribute. The vertex shader is only used to compute the screen coordinates of the 4 vertices of the quad, according to the transform of the source, and to pass the UV coordinates to the pixel shader. The classical method for the transformation of world coordinates into screen coordinates is a multiplication by the “View-Projection” 4x4 matrix in homogeneous coordinates (don’t be afraid, it is not necessary to understand homogeneous coordinates for the tutorial):

    // Vertex shader used to compute position of rendered pixels and pass UV
    pixel_data vertex_shader_halftone(vertex_data vertex)
    {
        pixel_data pixel;
        pixel.pos = mul(float4(vertex.pos.xyz, 1.0), ViewProj);
        pixel.uv = vertex.uv;
        return pixel;
    }

The vertex shader will remain like this in most cases with OBS, as altering the 3D transformation would change the proper location of displayed sources on the screen. Nevertheless, interesting results could be achieved here, e.g. the Source Shake animation could certainly be implemented as a smart modification of the View-Projection matrix.

Please note the well-adapted HLSL syntax vertex.pos.xyz here. pos is a vertex member with vector type float4. Adding the suffix .xyz is sufficient to convert it to a temporary float3 vector with the values of the components x, y and z of pos. Then float4(vertex.pos.xyz, 1.0) is again a float4 vector with the 3 first components of vertex.pos.xyz and 1.0 as fourth component. This feature of the HLSL syntax, called “swizzling” (and by the way very similar in GLSL used in OpenGL), makes it very compact for matrix and vector operations.
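A few illustrative lines, not part of the filter, show what swizzling allows:

    float4 v = float4(1.0, 2.0, 3.0, 4.0);
    float3 a = v.xyz;  // (1.0, 2.0, 3.0)
    float3 b = v.zyx;  // (3.0, 2.0, 1.0) - components can be re-ordered
    float4 c = v.xxww; // (1.0, 1.0, 4.0, 4.0) - or repeated
    float2 d = v.rg;   // (1.0, 2.0) - r/g/b/a are synonyms of x/y/z/w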

Now the real heart of the video filter effect is in the Pixel Shader. The main principle of a pixel shader is very simple: this is a function that computes a color at a given pixel position. The pixel shader function is called for every single pixel within the draw area.

As a simple gray shading effect, we want to compute the “luminance” of the source pixel (i.e. its brightness or luminous intensity), and use it for each RGB component of the output color:

    // Pixel shader used to compute an RGBA color at a given pixel position
    float4 pixel_shader_halftone(pixel_data pixel) : TARGET
    {
        float4 source_sample = image.Sample(linear_clamp, pixel.uv);
        float luminance = dot(source_sample.rgb, float3(0.299, 0.587, 0.114));
        return float4(luminance.xxx, source_sample.a);
    }

Please again admire the incredibly compact syntax:

  • On the first line, the source_sample variable receives the RGBA color of the pixel at pixel.uv in the source picture in UV coordinates, following the sampling method given by linear_clamp. Attention: here .uv is just a float2 member of the structure pixel, not a suffix to access particular vector components.
  • On the second line, the relative luminance is computed as a dot product of the float3 vector of the RGB components of source_sample and the constant float3 vector (0.299, 0.587, 0.114). The expression is equivalent to source_sample.r*0.299 + source_sample.g*0.587 + source_sample.b*0.114. The suffixes r, g, b, a can be used like x, y, z, w to access particular vector components. It is necessary to write source_sample.rgb here because it is a float4 vector and we want to exclude the a component from the dot product.
  • On the third line, a float4 vector is built with three times the luminance value, and then the original alpha value of the source color as fourth component. The color is returned as a float4 containing the values of the RGBA components between 0.0 and 1.0.

The position of the pixel is provided through the pixel_data structure:

  • pixel.pos is a float4 vector resulting from the computation in the vertex shader and further interpolation. In the pixel shader, pixel.pos.xy may contain the absolute position on screen of the pixel being rendered (i.e. the values change if the user moves the source with the mouse). Now if the source is itself scaled with a Scale filtering setting different from Disable, or if some other filter needs to render into a texture as an intermediate step, then pixel.pos.xy may contain other pixel coordinates corresponding to the internal texture used to render the output of our filter before further processing. In addition, as pixel.pos is in homogeneous coordinates, it is normally necessary to divide x and y by the w component (which should always be equal to 1 here as there is no 3D). Long story short, it is not recommended to use pixel.pos directly.
  • pixel.uv gives the interpolated UV coordinates of the pixel in the source picture as a float2 vector, i.e. it does not depend on the position or scaling of the source. As UV coordinates, pixel.uv.x and pixel.uv.y have values in the range 0.0 to 1.0, with the upper left corner of the source picture at position (0.0,0.0) and the bottom right corner at (1.0,1.0). This is what we want to use to reference the source pixels. In addition, the UV coordinates can be used directly to retrieve the pixel color from the image texture.

The final part of the effect file is the definition of the “techniques”. Here again the structure will be similar in most effects, typically with one pass and one technique:

    technique Draw
    {
        pass
        {
            vertex_shader = vertex_shader_halftone(vertex);
            pixel_shader  = pixel_shader_halftone(pixel);
        }
    }

Add all the code blocks to the HLSL file, then restart OBS; normally the source should be displayed in black and white:

filter halftone luminance

Well, this is not really a halftone picture here, but it clearly shows the expected shades of gray according to the luminance. The example picture is very dark and needs some luminosity correction.

Gamma correction

Colors are typically encoded with 8 bits for each RGB component, i.e. in the range 0 to 255, corresponding to the range 0.0 to 1.0 in HLSL. But because the human eye has a non-linear perception of the light intensity emitted by the monitor, usually the values encoded in RGB components do not grow linearly with the intensity; they follow a power law with an exponent called “gamma”. This non-linearity of the RGB components can be a problem in a video filter if some computation is performed assuming components are encoded in a linear way.

Actually, in the context of an OBS filter, this is not always an issue. Some sources will provide linearized pixel data to the shaders, others will provide gamma-encoded data. As of version 26.1, several attempts were made to have consistently linearized colors available in shaders. As long as this is not fully implemented, to give the maximum flexibility, we will let the user select whether the source is gamma-encoded or not. As the filter of this tutorial is more artistic than exact, a user would just try different settings to get the best result.

The power law for encoding and decoding typically uses an exponent of 2.2 or 1/2.2. The two operations of gamma correction are called gamma compression, i.e. the encoding of an RGB component proportional to the light intensity into a value suited for display to a human eye, with an exponent lower than 1, and gamma expansion, the reverse operation with an exponent greater than 1.

Let’s call gamma the exponent used for color decoding (greater than 1) with a value of 2.2. We can use simplified formulas for gamma encoding:

encoded_value = linear_value ^ (1/gamma)

And for gamma decoding:

linear_value = encoded_value ^ gamma

Please note that such formulas keep values nicely between 0.0 and 1.0.
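As a quick numerical example with gamma equal to 2.2 (chosen here for illustration): an encoded component of 0.5 decodes to a linear intensity of 0.5 ^ 2.2 ≈ 0.22, and encoding it back gives 0.22 ^ (1/2.2) ≈ 0.5, as expected.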

Now to come back to the halftone filter, we noted that the example picture is a bit too dark, and we know that changing the “gamma” of a picture changes its overall luminosity. So we can introduce a gamma computation for two purposes: the use of linear values for more exact calculations, and a “gamma shift” of the source picture (not called “gamma correction” to avoid misunderstandings).

Namely, a gamma decoding including a subtractive shift would look like this, performed before any computation on the RGB color components (with a minus sign such that positive shift values increase the luminosity):

linear_value = encoded_value ^ (gamma - shift)

Let’s do it in the code, starting with the definition of the gamma and gamma_shift uniform variables and default values with other uniform variables at the top of the HLSL effect file:

    // General properties
    uniform float gamma = 1.0;
    uniform float gamma_shift = 0.6;

Experiment and adapt the default values of gamma and gamma_shift to your source. They will be defined through OBS properties later on.

We introduce the gamma encoding and decoding functions, and re-write the pixel shader to use them. We use an additional clamp function to be sure to keep values between 0.0 and 1.0 before the exponentiation. In fact this may not be necessary, but it avoids a warning that may be produced by the compiler. Add the code below the vertex shader:

    float3 decode_gamma(float3 color, float exponent, float shift)
    {
        return pow(clamp(color, 0.0, 1.0), exponent - shift);
    }

    float3 encode_gamma(float3 color, float exponent)
    {
        return pow(clamp(color, 0.0, 1.0), 1.0/exponent);
    }

    // Pixel shader used to compute an RGBA color at a given pixel position
    float4 pixel_shader_halftone(pixel_data pixel) : TARGET
    {
        float4 source_sample = image.Sample(linear_clamp, pixel.uv);
        float3 linear_color = decode_gamma(source_sample.rgb, gamma, gamma_shift);

        float luminance = dot(linear_color, float3(0.299, 0.587, 0.114));
        float3 result = luminance.xxx;

        return float4(encode_gamma(result, gamma), source_sample.a);
    }

Add all the code blocks in the HLSL file, then restart OBS (only reloading the script would not help, because the effect remains cached and would not be re-compiled). Depending on the values you select for gamma and gamma_shift, the filtered picture should have a different overall luminosity:

filter halftone gamma corrected luminance

The picture is a bit better now.

Width and height from Lua to the shader

Whatever the final form, the halftone effect is based on a pattern applied pixel-by-pixel to the source picture. While it is easy to transform the color of a single pixel, the UV coordinates alone are not sufficient in the general case to recognize in which part of a pattern the pixel is located: it is necessary to know the size of the source picture. More about the calculations in the next section.

In this section we will first see how to make the values of width and height available in the shader from the Lua code. Strangely, OBS does not provide these parameters to effect files by default. We define the uniform variables with the other uniform variables at the top of the HLSL effect file:

    // Size of the source picture
    uniform int width;
    uniform int height;

Only uniform variables can be changed from the Lua code. Once an effect is compiled, the function gs_effect_get_param_by_name provides the necessary gs_eparam structure where the value can be set.

The effect parameters can be retrieved at the end of the source_info.create in the Lua file (above return data):

    -- Retrieves the shader uniform variables
    data.params = {}
    data.params.width = obs.gs_effect_get_param_by_name(data.effect, "width")
    data.params.height = obs.gs_effect_get_param_by_name(data.effect, "height")

Finally the values are set with gs_effect_set_int between obs_source_process_filter_begin and obs_source_process_filter_end in source_info.video_render:

    -- Effect parameters initialization goes here
    obs.gs_effect_set_int(data.params.width, data.width)
    obs.gs_effect_set_int(data.params.height, data.height)

Add the code and restart OBS. For now no difference is expected; the interesting things start in the next section.

Luminance perturbation

Now that the size of the picture is available, we can start to use pixel coordinates in formulas. Namely, we want to add a little perturbation to the computed luminance according to the formula cos(x)*cos(y). The shape of this classic formula looks like:

filter halftone

The formula will be such that:

  • It adds to the luminance a small negative or positive value with a given amplitude
  • The scale of the form can be changed, with a scale of 1.0 corresponding to an 8-pixel-long oscillation

If we name the parameters of the formula simply scale and amplitude, assuming x and y are in pixels, the angle for the oscillations on x is given by 2*π*x/scale/8 (the bigger the scale, the longer the oscillations). The cosine function returns values between -1.0 and 1.0, so the product of cosines can simply be multiplied by the amplitude.

The final parameterized formula will be (with simplification):

perturbation = amplitude * cos(π*x/scale/4) * cos(π*y/scale/4)

For the coordinates x and y in pixels, as we have the width and height plus the UV coordinates, the formulas are simply x = U * width and y = V * height, if we call the UV coordinates U and V here. In the code, we will use a more compact form with a component-by-component multiplication of pixel.uv with float2(width,height).

Re-write the center part of the pixel shader (between the decoding and encoding lines) in the HLSL effect file:

    float luminance = dot(linear_color, float3(0.299, 0.587, 0.114));

    float2 position = pixel.uv * float2(width, height);
    float perturbation = amplitude * cos(PI*position.x/scale/4.0) * cos(PI*position.y/scale/4.0);

    float3 result = (luminance + perturbation).xxx;

At the top of the file, do not forget to define a new constant for PI and the new uniform variables scale and amplitude with default values:

    // Constants
    #define PI 3.141592653589793238

    // General properties
    uniform float amplitude = 0.2;
    uniform float scale = 1.0;

Add the code and restart OBS; now the effect should look like this:

filter halftone perturbation

It slowly starts to look like a halftone.

Reducing the number of colors

Until now we used a continuous luminance. The next step is to mimic a reduced number of inks on printed paper.

For a given value of luminance from 0.0 to 1.0, we can multiply the value by a constant factor and then round the product to obtain a set of integer values. For example, with a factor of 3, we obtain the values 0, 1, 2 and 3. Then we divide again by 3 to obtain the 4 luminance values 0.0, 0.33, 0.66 and 1.0. This can be generalized to n colors by multiplying and dividing by n-1.
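As a worked example with 4 levels, a luminance of 0.4 becomes round(3*0.4)/3 = round(1.2)/3 = 1/3 ≈ 0.33, i.e. the second of the four possible values.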

The computation is really simple to implement. We start by adding a uniform variable for the number of color levels, together with the other uniform variables at the top of the HLSL effect file:

    uniform int number_of_color_levels = 4;

And we add a single line in the pixel shader, just before the gamma encoding:

    result = round((number_of_color_levels-1)*result)/(number_of_color_levels-1);

Add the code and restart OBS; now the effect should look like this:

filter halftone 4 colors

Here we are! It is nice to see what a simple cosine-based perturbation plus rounding can do. The active part of the effect is just a couple of lines long.

And now it is also very interesting to check how the effect behaves with different kinds of scale filtering (context menu of the source in OBS, then Scale Filtering).

First example: the scale filtering Point does not perform any interpolation after zoom and shows square pixels:

filter halftone zoom scale filtering point

Typically, as we use a periodic pattern in the effect, “aliasing” artifacts may appear if we reduce the size of the picture:

filter halftone unzoom scale filtering point

With a scale filtering set to Bicubic, the interpolation after scaling shows anti-aliasing:

filter halftone zoom scale filtering bicubic

With reduced size, the aliasing is not completely gone but at least reduced:

filter halftone unzoom scale filtering bicubic

Now a very interesting effect appears when the scale filtering is set to Disable on a strongly zoomed picture (note that it may not work if other filters are in the chain of the source). The pixel shader renders directly to the screen (the output is not rendered into an intermediate texture for later scaling), so it is called for every single pixel in the screen space, at a sub-pixel level for the source picture. As we use continuous mathematical functions, and sample the source picture using a Linear interpolation with linear_clamp, the curves drawn by the pixel shader completely hide the pixel grid of the source picture. It looks like a vector drawing:

filter halftone zoom scale filtering disable

With reduced size it still behaves well:

filter halftone unzoom scale filtering disable

The cosine-based halftone effect is completely implemented now. It has many parameters set with default values but the user cannot set these parameters so far.

Adding properties

The effect is already satisfactory; we now want to improve the user experience by creating properties for the uniform variables that we already have in the effect file: gamma, gamma_shift, amplitude, scale and number_of_color_levels.

By convention, we will name all instances of the variables or properties with the same names as in the effect file, i.e. for the effect parameters, the data settings and the variables of the data structure.

Similarly to what is described in the Source Shake tutorial, we first have to define default values in source_info.get_defaults. Default values are chosen such that applying them to a new source would give a reasonable result:

    -- Sets the default settings for this source
    source_info.get_defaults = function(settings)
      obs.obs_data_set_default_double(settings, "gamma", 1.0)
      obs.obs_data_set_default_double(settings, "gamma_shift", 0.0)
      obs.obs_data_set_default_double(settings, "scale", 1.0)
      obs.obs_data_set_default_double(settings, "amplitude", 0.2)
      obs.obs_data_set_default_int(settings, "number_of_color_levels", 4)
    end

Then we define the properties as sliders in source_info.get_properties, which builds and returns a properties structure:

    -- Gets the property information of this source
    source_info.get_properties = function(data)
      local props = obs.obs_properties_create()

      obs.obs_properties_add_float_slider(props, "gamma", "Gamma encoding exponent", 1.0, 2.2, 0.2)
      obs.obs_properties_add_float_slider(props, "gamma_shift", "Gamma shift", -2.0, 2.0, 0.01)
      obs.obs_properties_add_float_slider(props, "scale", "Pattern scale", 0.01, 10.0, 0.01)
      obs.obs_properties_add_float_slider(props, "amplitude", "Perturbation amplitude", 0.0, 2.0, 0.01)
      obs.obs_properties_add_int_slider(props, "number_of_color_levels", "Number of color levels", 2, 10, 1)

      return props
    end

Once a property is changed, the source_info.update function is called. This is where we will transfer the values from data settings to the data structure used to hold values until they are set in the shader:

    -- Updates the internal data for this source upon settings change
    source_info.update = function(data, settings)
      data.gamma = obs.obs_data_get_double(settings, "gamma")
      data.gamma_shift = obs.obs_data_get_double(settings, "gamma_shift")
      data.scale = obs.obs_data_get_double(settings, "scale")
      data.amplitude = obs.obs_data_get_double(settings, "amplitude")
      data.number_of_color_levels = obs.obs_data_get_int(settings, "number_of_color_levels")
    end

Next, like for width and height, we need to store the effect parameters, and we call source_info.update once to initialize the members of the data structure (do not forget it, otherwise OBS will try to use non-initialized data in source_info.video_render and log errors every frame). This block comes at the end of the source_info.create function, just above return data:

    data.params.gamma = obs.gs_effect_get_param_by_name(data.effect, "gamma")
    data.params.gamma_shift = obs.gs_effect_get_param_by_name(data.effect, "gamma_shift")
    data.params.amplitude = obs.gs_effect_get_param_by_name(data.effect, "amplitude")
    data.params.scale = obs.gs_effect_get_param_by_name(data.effect, "scale")
    data.params.number_of_color_levels = obs.gs_effect_get_param_by_name(data.effect, "number_of_color_levels")

    -- Calls update to initialize the rest of the properties-managed settings
    source_info.update(data, settings)

And finally we transfer the values from the data structure into the effect parameters in source_info.video_render:

    obs.gs_effect_set_float(data.params.gamma, data.gamma)
    obs.gs_effect_set_float(data.params.gamma_shift, data.gamma_shift)
    obs.gs_effect_set_float(data.params.amplitude, data.amplitude)
    obs.gs_effect_set_float(data.params.scale, data.scale)
    obs.gs_effect_set_int(data.params.number_of_color_levels, data.number_of_color_levels)

Well, that’s a lot. Each variable appears 7 times in different forms! Every line above is somehow needed so that OBS manages the persistence of settings, default values, display of properties, etc. Let’s say it is worth the pain, but it calls for a proper object-oriented design to avoid writing the same lines of code so many times.
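As an illustration of what such a design could look like (a sketch with hypothetical names, not used in the rest of this tutorial), the float parameters could be described once in a table and the boilerplate generated in loops:

    -- Hypothetical descriptor table: each float parameter is declared a single time
    local float_parameters = {
      {name="gamma",       default=1.0, min=1.0,  max=2.2, step=0.2,  label="Gamma encoding exponent"},
      {name="gamma_shift", default=0.0, min=-2.0, max=2.0, step=0.01, label="Gamma shift"},
      -- etc.
    }

    -- Defaults and properties derived from the descriptor table
    function set_float_defaults(settings)
      for _, p in ipairs(float_parameters) do
        obs.obs_data_set_default_double(settings, p.name, p.default)
      end
    end

    function add_float_properties(props)
      for _, p in ipairs(float_parameters) do
        obs.obs_properties_add_float_slider(props, p.name, p.label, p.min, p.max, p.step)
      end
    end

The update, the retrieval of effect parameters and the rendering code could be generated from the same table.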

Add the pieces of code, restart OBS and open the Filters of the source; it should look like this (here in 2 colors):

filter halftone properties

Playing with the parameters of a video filter and seeing the result immediately is quite satisfying. Note the aliasing effects on the preview in the Filters window (it is unclear whether the filtering can be changed there).

The old-fashioned newspaper effect is very convincing with 2 colors:

filter halftone sparrow

Note that some edges are preserved on the picture so it is probably not the same result as an optical process would produce.

Pictures in just 3 colors can be fascinating:

filter halftone lena

The first part of the tutorial is completed and the script plus its effect file are definitely a good starting point for further development. The complete source code of this first part is available.

Second part - Dithering with texture

In this second part we will see how to use additional textures for the pattern and the color palette.

Some colors please

If you followed the tutorial up to this point, then you are certainly fed up with the black and white pictures!

We change only one line in the pixel shader:

    float3 result = linear_color + perturbation;

And this is the result after re-starting OBS:

filter halftone colors

It works! And it is another example of the great versatility of the HLSL language. Here we add the float perturbation scalar to the float3 linear_color (instead of the float luminance). HLSL makes the necessary casting transparently (i.e. converts perturbation to a float3).

So we add a small perturbation to the Red, Green and Blue channels. Then the color quantization is done on each channel by the round operation, which results in a palette of 64 colors (4 levels on each of the 3 channels, i.e. 4^3 combinations).
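As a small illustration of this implicit promotion (a sketch, not part of the filter code), the following lines compute the same vector:

    float3 a = linear_color + perturbation;     // scalar implicitly promoted to float3
    float3 b = linear_color + perturbation.xxx; // equivalent explicit swizzle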

The scalar cosine formula gives great results and has an undeniable style, but it also shows the limits of the method. Rather than a mix of dots with different colors, as in a magnified printed photo, we observe a grid of uniform color tones:

filter halftone lena colors

When the number of color levels is reduced to 2, i.e. with 8 colors (black, white, red, green, blue, cyan, magenta, yellow), the scale factor of the perturbation can be reduced as well to obtain a picture reproducing the original colors (here with a scale filtering set to bicubic to show the single pixels):

filter halftone lena reduced colors

Not bad with 8 colors and the primitive approach! But the typical “printed paper effect” cannot be reached with the current cosine-based scalar perturbation: it needs to have different values on each RGB channel. To make the perturbation fully flexible we will replace the computed formula with a pre-computed bitmap texture.

Re-factoring the pixel shader

The general method to obtain our effect can be divided into simple steps:

  1. The color of the source pixel is retrieved from the image texture and gamma-decoded
  2. A variable perturbation is determined according to the position of the source pixel
  3. The perturbation is added to the RGB channels of the decoded source color
  4. A color close to the perturbed color is selected among a limited set through the rounding operation
  5. The close color is gamma-encoded and returned as output

First, to be very general, we will add a variable offset set to 0 by default at step 3:

perturbed_color = linear_color + offset + amplitude * perturbation

In addition, we allow the amplitude to be negative. So first, in the Lua file, we change the definition of the amplitude property (-2.0 as lower bound):

  1. obs.obs_properties_add_float_slider(props, "amplitude", "Perturbation amplitude", -2.0, 2.0, 0.01)

And in the effect file we need an additional uniform value at the top of file:

    uniform float offset = 0.0;

Second, and this is the main change, steps 2 and 4 are put into separate functions for better modularity. We will even define the “perturbation color” (determined at step 2) and the closest color (determined at step 4) with an additional alpha, i.e. both are defined as float4. The variable alpha channel will be treated later on; for now the exact same functionality will be implemented with alpha set to 1.0.

Replace the pixel shader in the effect file with the two new functions and the new pixel shader code:

    float4 get_perturbation(float2 position)
    {
        float4 result;
        result = float4((cos(PI*position.x/scale/4.0) * cos(PI*position.y/scale/4.0)).xxx, 1.0);
        return result;
    }

    float4 get_closest_color(float3 input_color)
    {
        float4 result;
        result = float4(round((number_of_color_levels-1)*input_color)/(number_of_color_levels-1), 1.0);
        return result;
    }

    float4 pixel_shader_halftone(pixel_data pixel) : TARGET
    {
        float4 source_sample = image.Sample(linear_clamp, pixel.uv);
        float3 linear_color = decode_gamma(source_sample.rgb, gamma, gamma_shift);

        float2 position = pixel.uv * float2(width, height);
        float4 perturbation = get_perturbation(position);

        float3 perturbed_color = linear_color + offset + amplitude*perturbation.rgb;
        float4 closest_color = get_closest_color(clamp(perturbed_color, 0.0, 1.0));

        return float4(encode_gamma(closest_color.rgb, gamma), source_sample.a);
    }

Add the code and restart OBS; no change is expected. Even if the amplitude is set to a negative value, the output is similar due to the symmetric cosine formula.

Adding the texture properties

Two textures will be added:

  • A seamless pattern texture to replace the cosine formula by an arbitrary bitmap-based perturbation
  • A palette texture to retrieve colors from a limited set and hence replace the rounding operation

The basis for managing a texture is a gs_image_file object. The texture file itself is selected by the user.

We need some additional variables to manage the texture:

  • Obviously we need the texture2d objects (re-defined as Texture2D type by a macro to be more HLSL compliant)
  • We define the sizes of the textures (not available through the Texture2D definition)
  • And we define an alpha encoding/decoding exponent just in case

To start the definition, add the following uniform variables at the top of the effect file:

    // Pattern texture
    uniform Texture2D pattern_texture;
    uniform float2 pattern_size = {-1.0, -1.0};
    uniform float pattern_gamma = 1.0;

    // Palette texture
    uniform Texture2D palette_texture;
    uniform float2 palette_size = {-1.0, -1.0};
    uniform float palette_gamma = 1.0;

Coming to the Lua file, we will take the opportunity to add the new offset used in the perturbation calculation (see previous section). Add the retrieval of the effect parameters to the source_info.create function:

    data.params.offset = obs.gs_effect_get_param_by_name(data.effect, "offset")
    data.params.pattern_texture = obs.gs_effect_get_param_by_name(data.effect, "pattern_texture")
    data.params.pattern_size = obs.gs_effect_get_param_by_name(data.effect, "pattern_size")
    data.params.pattern_gamma = obs.gs_effect_get_param_by_name(data.effect, "pattern_gamma")
    data.params.palette_texture = obs.gs_effect_get_param_by_name(data.effect, "palette_texture")
    data.params.palette_size = obs.gs_effect_get_param_by_name(data.effect, "palette_size")
    data.params.palette_gamma = obs.gs_effect_get_param_by_name(data.effect, "palette_gamma")

The next addition is a helper function to set the size and texture effect parameters according to a simple logic, i.e. if the texture object is nil, then its size is set to (-1,-1) such that the shader is able to recognize it (and note the use of the vec2 OBS object):

    -- Sets the texture and size effect parameters from a gs_image_file object
    function set_texture_effect_parameters(image, param_texture, param_size)
      local size = obs.vec2()
      if image then
        obs.gs_effect_set_texture(param_texture, image.texture)
        obs.vec2_set(size, image.cx, image.cy)
      else
        obs.vec2_set(size, -1, -1)
      end
      obs.gs_effect_set_vec2(param_size, size)
    end

The helper function is used in the source_info.video_render function, assuming the related image file objects representing the pattern and palette textures are stored in the data variable passed by OBS:

    obs.gs_effect_set_float(data.params.offset, data.offset)

    -- Pattern texture
    set_texture_effect_parameters(data.pattern, data.params.pattern_texture, data.params.pattern_size)
    obs.gs_effect_set_float(data.params.pattern_gamma, data.pattern_gamma)

    -- Palette texture
    set_texture_effect_parameters(data.palette, data.params.palette_texture, data.params.palette_size)
    obs.gs_effect_set_float(data.params.palette_gamma, data.palette_gamma)

We continue to implement the textures with default values. Note that the properties set by the user, and kept in the user settings, are paths to the texture files (empty if no texture is selected). Add the following lines in the source_info.get_defaults function:

  1. obs.obs_data_set_default_double(settings, "offset", 0.0)
  2. obs.obs_data_set_default_string(settings, "pattern_path", "")
  3. obs.obs_data_set_default_double(settings, "pattern_gamma", 1.0)
  4. obs.obs_data_set_default_string(settings, "palette_path", "")
  5. obs.obs_data_set_default_double(settings, "palette_gamma", 1.0)

For the properties, the function obs.obs_properties_add_path is used to let the user select a path. Because we want to be able to fall back to a formula (without texture), we provide a button for each texture to reset the path to an empty string (with an inline function that returns true to force the refresh of the properties widget, and uses a kept reference in data.settings, see below). Add the following lines in source_info.get_properties:

  1. obs.obs_properties_add_float_slider(props, "offset", "Perturbation offset", -2.0, 2.0, 0.01)
  2. obs.obs_properties_add_path(props, "pattern_path", "Pattern path", obs.OBS_PATH_FILE,
  3. "Picture (*.png *.bmp *.jpg *.gif)", nil)
  4. obs.obs_properties_add_float_slider(props, "pattern_gamma", "Pattern gamma exponent", 1.0, 2.2, 0.2)
  5. obs.obs_properties_add_button(props, "pattern_reset", "Reset pattern", function()
  6. obs.obs_data_set_string(data.settings, "pattern_path", ""); data.pattern = nil; return true; end)
  7. obs.obs_properties_add_path(props, "palette_path", "Palette path", obs.OBS_PATH_FILE,
  8. "Picture (*.png *.bmp *.jpg *.gif)", nil)
  9. obs.obs_properties_add_float_slider(props, "palette_gamma", "Palette gamma exponent", 1.0, 2.2, 0.2)
  10. obs.obs_properties_add_button(props, "palette_reset", "Reset palette", function()
  11. obs.obs_data_set_string(data.settings, "palette_path", ""); data.palette = nil; return true; end)

Another helper function is added to manage the reading of the image file, the initialization of the texture inside, and the freeing of allocated memory when necessary. Note that this is only possible in the graphics thread (this is the reason for the call to obs_enter_graphics), and that the function gs_image_file_init_texture is separated from reading the image:

    -- Returns a new texture from the given path, and frees the current texture if loaded
    function load_texture(path, current_texture)
      obs.obs_enter_graphics()

      -- Frees any existing image
      if current_texture then
        obs.gs_image_file_free(current_texture)
      end

      -- Loads and inits image for texture
      local new_texture = nil
      if string.len(path) > 0 then
        new_texture = obs.gs_image_file()
        obs.gs_image_file_init(new_texture, path)
        if new_texture.loaded then
          obs.gs_image_file_init_texture(new_texture)
        else
          obs.blog(obs.LOG_ERROR, "Cannot load image " .. path)
          obs.gs_image_file_free(new_texture) -- Frees the image that failed to load
          new_texture = nil
        end
      end

      obs.obs_leave_graphics()
      return new_texture
    end

Finally, reading the image files is triggered in the update function, i.e. when the data settings are changed by the user or at OBS startup. Special variables are defined to keep the path of an image file previously loaded, such that it can be freed when another file needs to be loaded. Note that it is necessary to keep a reference in data.settings for the inline callback functions of the buttons:

    -- Keeps a reference on the settings
    data.settings = settings

    data.offset = obs.obs_data_get_double(settings, "offset")

    local pattern_path = obs.obs_data_get_string(settings, "pattern_path")
    if data.loaded_pattern_path ~= pattern_path then
      data.pattern = load_texture(pattern_path, data.pattern)
      data.loaded_pattern_path = pattern_path
    end
    data.pattern_gamma = obs.obs_data_get_double(settings, "pattern_gamma")

    local palette_path = obs.obs_data_get_string(settings, "palette_path")
    if data.loaded_palette_path ~= palette_path then
      data.palette = load_texture(palette_path, data.palette)
      data.loaded_palette_path = palette_path
    end
    data.palette_gamma = obs.obs_data_get_double(settings, "palette_gamma")

After adding the code, restart OBS and the properties should appear in the Filters dialog window:

filter halftone properties textures

The pattern and palette textures are not yet functional.

Bitmap-based dithering with seamless patterns

The naive algorithm we use is actually a form of ordered dithering, which typically relies on a “Bayer matrix” tiled over the picture: each number from the matrix is added to the color of the related pixel, and then the closest color is determined. On average, the mix of neighbor color dots reproduces the original color. Note that we keep this simple algorithm, while other algorithms for dithering with an arbitrary palette exist.

In its mathematical form, a Bayer matrix contains levels between 0 and 1 distributed over the matrix area. Bayer matrices can also be represented as bitmap images with gray levels, like in this repository of Bayer PNG textures.

The picture of the basic 2x2 Bayer matrix is a square of 2 pixels by 2 pixels, which makes for a really tiny image:

Bayer matrix texture with 4 levels: filter halftone bayer 2x2

After a zoom:

filter halftone bayer 2x2 big

Please note the 4 gray levels 0, 1/4, 2/4, 3/4 increasing along a loop like the Greek letter “alpha” (top-right, then bottom-left, then top-left, then bottom-right). This construction ensures that the pattern can be tiled infinitely in each direction without visible border artifacts.

Bayer matrices of higher orders can be constructed recursively by replacing each pixel by the whole matrix and adding the levels. As an example, for a 4x4 matrix starting from the 2x2 matrix, the top-right pixel (level 0, pure black) is replaced directly with the 2x2 matrix, then the bottom-left pixel (level 1/4) becomes the 2x2 matrix with a quarter of 1/4 (i.e. 1/16) added to its levels, etc. At the end, the 4x4 Bayer matrix looks like this (note the top-right 2x2 block, which is the same as the 2x2 Bayer matrix):

Bayer matrix texture with 16 levels: filter halftone bayer 4x4

After a zoom:

filter halftone bayer 4x4 big
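This recursive construction is easy to reproduce programmatically. Below is a minimal Lua sketch of the doubling step described above, with levels normalized to the range [0,1) (an illustration only, not part of the filter):

    -- 2x2 base matrix with the "alpha loop" levels 2/4, 0, 1/4, 3/4
    local base = {{2/4, 0/4},
                  {1/4, 3/4}}

    -- Doubles a Bayer matrix m: each base pixel becomes a full copy of m,
    -- shifted by the base level divided by the number of levels in m
    function bayer_double(m)
      local n = #m
      local result = {}
      for bi = 1, 2 do
        for i = 1, n do
          result[(bi-1)*n + i] = {}
          for bj = 1, 2 do
            for j = 1, n do
              result[(bi-1)*n + i][(bj-1)*n + j] = m[i][j] + base[bi][bj]/(n*n)
            end
          end
        end
      end
      return result
    end

    local bayer4x4 = bayer_double(base) -- 16 levels from 0/16 to 15/16

Starting from base, one call produces the 4x4 matrix (its top-right block is base itself, since the base top-right level is 0), and further calls produce the 8x8, 16x16, etc. matrices.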

An interesting variation with Bayer matrices is to use a non-rectangular shape itself arranged to make a seamless pattern. For example with a cross of 5 pixels (hence 5 levels), arranged with no holes, resulting in a minimal 5x5 matrix:

Bayer matrix texture with 5 levels: filter halftone bayer cross 5 levels

After a zoom:

filter halftone bayer cross 5 levels big

Now we have some patterns to play with, time to modify the effect file.

Usually, in order to find coordinates in a tiled pattern, given the coordinates in the full area, the solution would be to use a “modulo” operation (remainder of the division by the size of the pattern). Here we will just let the GPU do it for us by using a “wrap” texture address mode. A new sampler_state (redefined as SamplerState by macro to be more HLSL compliant) is defined for that in the effect file:

    SamplerState linear_wrap
    {
        Filter   = Linear;
        AddressU = Wrap;
        AddressV = Wrap;
    };
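For reference, the manual modulo alternative inside the shader would be something like the hypothetical line float2 tiled_uv = frac(pattern_uv / scale); (frac returns the fractional part of each component), but the Wrap address mode lets the sampler perform this wrapping implicitly.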

With this definition, we re-define the function get_perturbation:

    float4 get_perturbation(float2 position)
    {
        if (pattern_size.x > 0)
        {
            float2 pattern_uv = position / pattern_size;
            float4 pattern_sample = pattern_texture.Sample(linear_wrap, pattern_uv / scale);
            float3 linear_color = decode_gamma(pattern_sample.rgb, pattern_gamma, 0.0);
            return float4(2.0*(linear_color-0.5), pattern_sample.a);
        }
        else
            return float4((cos(PI*position.x/scale/4.0) * cos(PI*position.y/scale/4.0)).xxx, 1.0);
    }

Please note that:

  • If the X size of the pattern is negative, i.e. if no pattern file was selected by the user or the file could not be loaded, then the function falls back to the cosine formula.
  • It is necessary to compute UV coordinates in pattern_uv to locate the pixel in the pattern texture, given by position / pattern_size. But because position (pixel coordinates in the source picture) has values bigger than pattern_size, the resulting UV coordinates have values between 0.0 and much more than 1.0, and that is where the “Wrap” mode is important.
  • The function is expected to return RGB values from -1.0 to 1.0; that is why the values are normalized as 2.0*(linear_color-0.5)
  • Using a “Linear” sampling (as in linear_wrap) is an arbitrary choice here, and means that if the pattern is scaled, colors from the pattern are interpolated (maybe not the expectation with a Bayer matrix). A “Point” filter could be used too, depending on the kind of scaling that is desired, in conjunction with the Scale Filtering setting.

Add the code, restart OBS, then download and select the 4x4 Bayer matrix above (the tiny one), set the scale to 1.0, the Scale Filtering to Bicubic and you should obtain something like this with 2 color levels:

filter halftone lena bayer 4x4

Such an image may bring back memories of the 8-bit or 16-bit computer era, depending on how old you are. This is almost pixel art!

The texture based on the cross-formed pattern with 5 levels gives a quite different style to the dithering:

filter halftone lena bayer cross 5 levels

Creating seamless patterns

The effect is now strongly driven by the pattern used as input. A “seamless” pattern is necessary and it is not trivial to create one. There are special tools around to do that, or there is a quite simple method that works with any paint tool (e.g. paint.net).

Actually, the issue here is that any global transformation applied to a complete picture (e.g. a Gaussian blur) would assume that the outside of the picture has a constant color (e.g. white), and would produce border artifacts that are only visible when the picture is tiled.

The simple method to avoid this is to “pre-tile” the picture on a 3x3 grid (9 copies), then apply the global transformation, then select and crop the central part of the picture. This way the border artifacts are limited to the surrounding copies and the central picture can be tiled seamlessly:

filter halftone seamless pattern

Using a blur filter just creates the different levels expected in a dithering pattern. This is the resulting star pattern:

filter halftone pattern star blur

The filtered picture shows stars with different sizes, which correspond to the different levels created by the blur operation (a better effect would be reached with more blurring in the pattern; do your own experimentation):

filter halftone lena stars

The same kind of technique was used to blur a texture from the hexagonal tiling page on Wikipedia:

filter halftone pattern hexagons blur

On this pattern the RGB channels are separated. The blurring was done in one operation that kept the RGB separation. The result brings us back to producing a halftone picture, this time with colors. Look at the many little discs of different sizes, resulting from blurring the hexagons in the texture:

filter halftone lena eyes hexagons blur

Finally, another method to produce a nice pattern is just to compute it and grab it as a screenshot. As an example, this shader on GLSL sandbox was used to produce this pattern, which reproduces the shape of printed paper, i.e. with an orthogonal raster for each RGB channel, and an orientation of 0° for Red, 30° for Green, 60° for Blue (the pattern repeats after 30 oscillations; this is why the picture is so big):

filter halftone pattern print raster

The resulting picture is very similar to the one produced with the blurred hexagons:

filter halftone lena eyes print raster

Actually, printed paper uses a CMYK color model (Cyan-Magenta-Yellow-Black) with a subtractive blend mode, while our simple approach is classically additive with the RGB components. The real computation with CMYK would require a modification of the perturbation formula. Nevertheless this simple computation produces very convincing results reproducing the style of printed paper. This is especially true for comics or pop art (after tweaking the amplitude and offset):

filter halftone pattern pop art

Even if our filter does not produce the typical CMYK print raster, it works very well in 8 colors, because mixing the primary colors Red, Green and Blue with an additive blend directly gives the primary colors of a subtractive blend: Cyan (Green+Blue), Magenta (Red+Blue) and Yellow (Red+Green). In the end, the resulting set of colors is the same with additive RGB or subtractive CMYK, and this is why the general impression produced by the output of the filter is reminiscent of printed paper.

Arbitrary palette

Until now the color quantization is based on finding the closest discrete level on each RGB component. The operation is simple and quick.

The goal now is to read the different colors from a texture and find the closest one. The operation will imply, for each single rendered pixel, a comparison with a potentially large number of other colors. Of course the operation will not be quick and the palette cannot be too big.

The web site Lospec is a good source for palettes. It proposes curated palettes with usage examples in different formats. For our filter the best format is one pixel per color, with all pixels in a single row.

As an example, with this format, the original palette of the Gameboy would look like:

Palette of the Gameboy: filter halftone palette gameboy (zoomed: filter halftone palette gameboy big)

Coming to the code: first, to be sure that the colors do not get interpolated, a new sampler state using the “Point” filter is necessary in the effect file:

    SamplerState point_clamp
    {
        Filter   = Point;
        AddressU = Clamp;
        AddressV = Clamp;
    };

Then the get_closest_color function is modified to include a color search. Comparing two colors is based on a simple distance function, the same as the usual distance between 3D points, applied to the RGB components. There are many sophisticated color comparison methods; the usual Euclidean distance will be good enough for our purpose here.

The new function replaces get_closest_color in the effect file:

    float4 get_closest_color(float3 input_color)
    {
        float4 result;
        if (palette_size.x > 0)
        {
            float min_distance = 1e10;
            float2 pixel_size = 1.0 / min(256, palette_size);

            // Walks through the palette texture, one sample per pixel center
            for (float u = pixel_size.x/2.0; u < 1.0; u += pixel_size.x)
                for (float v = pixel_size.y/2.0; v < 1.0; v += pixel_size.y)
                {
                    float4 palette_sample = palette_texture.Sample(point_clamp, float2(u, v));
                    float3 linear_color = decode_gamma(palette_sample.rgb, palette_gamma, 0.0);

                    // Keeps the palette color closest to the input color
                    float distance_to_color = length(linear_color - input_color);
                    if (distance_to_color < min_distance)
                    {
                        min_distance = distance_to_color;
                        result = float4(linear_color, palette_sample.a);
                    }
                }
        }
        else
            result = float4(round((number_of_color_levels-1)*input_color)/(number_of_color_levels-1), 1.0);

        return result;
    }

Note that the number of palette pixels scanned in each direction is capped at 256 by the min operation, and that the function falls back to the rounding operation when no palette texture is selected (negative palette_size). Add the code, restart OBS and select a palette texture in the one-pixel-per-color format: the colors of the filtered picture are now picked from the palette.

Organizing the properties

The number of properties grew considerably, so as a final improvement we group them and show or hide them depending on the context, e.g. the number of color levels is meaningless when a palette texture drives the color selection. First, a callback function determines the visibility of the properties from the texture paths currently set in the data settings:

    -- Sets the visibility of the properties depending on the selected textures
    function set_properties_visibility(props, property, settings)
      local pattern = string.len(obs.obs_data_get_string(settings, "pattern_path")) > 0
      local palette = string.len(obs.obs_data_get_string(settings, "palette_path")) > 0
      obs.obs_property_set_visible(obs.obs_properties_get(props, "pattern_reset"), pattern)
      obs.obs_property_set_visible(obs.obs_properties_get(props, "pattern_gamma"), pattern)
      obs.obs_property_set_visible(obs.obs_properties_get(props, "number_of_color_levels"), not palette)
      obs.obs_property_set_visible(obs.obs_properties_get(props, "palette_reset"), palette)
      obs.obs_property_set_visible(obs.obs_properties_get(props, "palette_gamma"), palette)
      return true
    end

Then we re-define the source_info.get_properties function as follows, with some renaming:

    -- Gets the property information of this source
    source_info.get_properties = function(data)
      local props = obs.obs_properties_create()

      local gprops = obs.obs_properties_create()
      obs.obs_properties_add_group(props, "input", "Input Source", obs.OBS_GROUP_NORMAL, gprops)
      obs.obs_properties_add_float_slider(gprops, "gamma", "Gamma encoding exponent", 1.0, 2.2, 0.2)
      obs.obs_properties_add_float_slider(gprops, "gamma_shift", "Gamma shift", -2.0, 2.0, 0.01)

      gprops = obs.obs_properties_create()
      obs.obs_properties_add_group(props, "pattern", "Dithering Pattern", obs.OBS_GROUP_NORMAL, gprops)
      obs.obs_properties_add_float_slider(gprops, "scale", "Pattern scale", 0.01, 10.0, 0.01)
      obs.obs_properties_add_float_slider(gprops, "amplitude", "Dithering amplitude", -2.0, 2.0, 0.01)
      obs.obs_properties_add_float_slider(gprops, "offset", "Dithering luminosity shift", -2.0, 2.0, 0.01)
      local p = obs.obs_properties_add_path(gprops, "pattern_path", "Pattern texture", obs.OBS_PATH_FILE,
                                            "Picture (*.png *.bmp *.jpg *.gif)", nil)
      obs.obs_property_set_modified_callback(p, set_properties_visibility)
      obs.obs_properties_add_float_slider(gprops, "pattern_gamma", "Pattern gamma exponent", 1.0, 2.2, 0.2)
      obs.obs_properties_add_button(gprops, "pattern_reset", "Reset pattern texture", function(properties, property)
        obs.obs_data_set_string(data.settings, "pattern_path", ""); data.pattern = nil;
        set_properties_visibility(properties, property, data.settings); return true; end)

      gprops = obs.obs_properties_create()
      obs.obs_properties_add_group(props, "palette", "Color palette", obs.OBS_GROUP_NORMAL, gprops)
      obs.obs_properties_add_int_slider(gprops, "number_of_color_levels", "Number of color levels", 2, 10, 1)
      p = obs.obs_properties_add_path(gprops, "palette_path", "Palette texture", obs.OBS_PATH_FILE,
                                      "Picture (*.png *.bmp *.jpg *.gif)", nil)
      obs.obs_property_set_modified_callback(p, set_properties_visibility)
      obs.obs_properties_add_float_slider(gprops, "palette_gamma", "Palette gamma exponent", 1.0, 2.2, 0.2)
      obs.obs_properties_add_button(gprops, "palette_reset", "Reset palette texture", function(properties, property)
        obs.obs_data_set_string(data.settings, "palette_path", ""); data.palette = nil;
        set_properties_visibility(properties, property, data.settings); return true; end)

      return props
    end

The change with the groups is always the same: first a new obs_properties object is created and added as a group to the main obs_properties object, then the new properties object is filled with property objects.

Add the code and reload the script; the properties should now be displayed in named groups, and some parameters are displayed only when necessary:

filter halftone final properties

Single file delivery

As a final operation on this script, we want to provide a single Lua file to ease deployment. OBS features the gs_effect_create function to compile an effect from a string instead of a file.

First we need to copy the complete content of the effect file into one big string:

    EFFECT = [[
    // !! Copy the code of the effect file here !!
    ]]

Then change the compilation in source_info.create to the following (note that the second parameter is supposed to be some file name and cannot be nil; the value may be used in error messages):

    data.effect = obs.gs_effect_create(EFFECT, "halftone_effect_code", nil)

And modify the error message line to remove the mention of the effect file path:

    obs.blog(obs.LOG_ERROR, "Effect compilation failed")

Normally there is no visible effect with this change; it is just a matter of putting everything in one file.

Conclusion

Wow, that was a long tutorial! The code of the second part is available as well.

Many aspects were covered, from writing shaders and Lua scripts to video processing basics. The most incredible thing is the simplicity of the basic algorithm and the effects it produces.

There is still room for further experiments: real CMYK transformation, better color comparison, transparency effects, seamless pattern design, etc. This is left as an exercise for the reader!

That’s all folks!