To create the property list dictionary or json object that sets up a core image filter in a filter chain, you need to know the list of keys for each filter and, for each key, the range of values that can be assigned. To get the filter description you can get the "imagefilterattribute" property from the "imagefilterchain" type. The information can be returned in one of three ways: as a json string (see the example below), or saved to a json or a plist file using the -jsonfile or -plistfile option. When saving to a file, the option must be followed by a file path.
To get the CIDroste filter attributes in ruby:
To get the CIDroste filter attributes on the command line:
Both these methods return the compact form of the json string. To view a human-friendly version, copy the result and paste it into either JSONLint or JSON Editor Online.
Setting up the properties for a filter
All but a few of the core image filters require inputs. All the numerical inputs for the filters have default values, which means that for a quick and dirty test of a filter you don't need to assign them. Other input types, like the CIImage class, are required because they have no default, while input types like CIVector and CIColor have default values that generally need to be overridden.
Each filter in a filter chain is described by the filter name, a name identifier and a list of properties to be assigned. The filter name is the core image name of the filter to be created, and the name identifier is used to identify the filter when chaining filters together to build a filter chain. Filters later in the chain can reference earlier filters by their name identifier and assign any of their input images to the output image of an earlier filter.
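The original example image is not reproduced here, but the following sketch reconstructs the unsharp mask filter definition discussed below from the keys described in this documentation. The "cifiltername" key name and the specific numeric input values are assumptions:

```json
{
  "cifiltername": "CIUnsharpMask",
  "mifiltername": "com.yvs.documentation.renderingfilterchain.unsharpmask",
  "cifilterproperties": [
    { "cifilterkey": "inputRadius", "cifiltervalue": 10 },
    { "cifilterkey": "inputIntensity", "cifiltervalue": 0.7 },
    {
      "cifilterkey": "inputImage",
      "cifiltervalueclass": "CIImage",
      "cifiltervalue": { "objectreference": 0 }
    }
  ]
}
```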
In the above json example, the core image filter to be created is a Core Image Unsharp Mask filter. The filter is given the identifier "com.yvs.documentation.renderingfilterchain.unsharpmask" and takes three inputs: the input radius, which takes a numeric value; the input intensity, which also takes a numeric value; and an input image, which takes a dictionary/json object describing how to obtain the image.
When an input is an image, the input value is a json object that describes how to obtain the image. The image can be sourced from the output image of a previous filter in the filter chain; in that case the object contains either the key "cifilterindex", whose value is the index of an earlier filter in the chain, or the key "mifiltername", whose value is a string holding the name identifier of the earlier filter. If the image is sourced from a bitmap context then the key in the object is "objectreference", as in the above example, which refers to a base object with reference 0. The image can also come from an image importer object, in which case the "objectreference" key is also used but an "imageindex" key should be supplied as well, specifying the index of the image in the image file. If "imageindex" is not specified and the object reference refers to an image importer object then "imageindex" defaults to 0.
When a filter takes a numeric input, the "cifiltervalueclass" key is not needed, but for all other input types specifying the value class is required; in the above example "CIImage" is demonstrated. Other "cifiltervalueclass" values are "CIVector", "CIColor" and "NSString". For both CIVector and CIColor the type of the value for the key "cifiltervalue" is a string; the purpose of the "cifiltervalueclass" key is to inform Moving Images what object type the string needs to be converted into before it can be assigned to the filter.
In the following example the json object represents the information needed for creating a Core Image Radial Gradient filter. It demonstrates defining filter inputs with the value classes "CIVector" and "CIColor". The format of the strings defining these objects must be followed strictly. In this example the centre of the radial gradient is specified as a coordinate pair of two numbers, x and y. For specifying a color the string format is slightly different, with no square brackets. Each color is specified by four numbers representing the three color components and the alpha value: Red Green Blue Alpha. The input "inputColor0" is assigned an opaque white color while "inputColor1" is assigned an opaque black color.
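A sketch of what such a radial gradient definition could look like, following the conventions described above. The "cifiltername" key name, the name identifier and the numeric values are illustrative assumptions; the CIVector string with square brackets and the bracket-free CIColor string follow the formats just described:

```json
{
  "cifiltername": "CIRadialGradient",
  "mifiltername": "radialgradient",
  "cifilterproperties": [
    {
      "cifilterkey": "inputCenter",
      "cifiltervalueclass": "CIVector",
      "cifiltervalue": "[150 150]"
    },
    { "cifilterkey": "inputRadius0", "cifiltervalue": 10 },
    { "cifilterkey": "inputRadius1", "cifiltervalue": 150 },
    {
      "cifilterkey": "inputColor0",
      "cifiltervalueclass": "CIColor",
      "cifiltervalue": "1 1 1 1"
    },
    {
      "cifilterkey": "inputColor1",
      "cifiltervalueclass": "CIColor",
      "cifiltervalue": "0 0 0 1"
    }
  ]
}
```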
The following json object is everything needed to define the filter chain managed by an "imagefilterchain" base object. The render destination is specified using the "cirenderdestination" key. The value for the "cifilterlist" key is a list of filter objects, each one describing one filter in the filter chain. The order of the filters in this list is important for a couple of reasons. The output image of the last filter in the filter chain is the one that renders to the render destination. And since a filter can refer to an earlier filter by its filter index ("cifilterindex"), the order needs to be correct; alternatively, if you refer to filters by their name identifier, you can still only refer to filters earlier in the list.
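As a sketch of the overall shape of such an object, here is a minimal two-filter chain, a radial gradient followed by a crop that takes its output, assuming the key conventions used in this documentation. The "cifiltername" key name, the object reference and the rectangle values are illustrative, and the radial gradient's own properties are omitted for brevity:

```json
{
  "cirenderdestination": { "objectreference": 1 },
  "cifilterlist": [
    {
      "cifiltername": "CIRadialGradient",
      "mifiltername": "radialgradient",
      "cifilterproperties": []
    },
    {
      "cifiltername": "CICrop",
      "mifiltername": "cropfilter",
      "cifilterproperties": [
        {
          "cifilterkey": "inputImage",
          "cifiltervalueclass": "CIImage",
          "cifiltervalue": { "mifiltername": "radialgradient" }
        },
        {
          "cifilterkey": "inputRectangle",
          "cifiltervalueclass": "CIVector",
          "cifiltervalue": "[0 0 300 300]"
        }
      ]
    }
  ]
}
```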
The filter graph for the above filter chain looks like:
I'm going to show what the output looks like at each of the stages of the above filter chain so you can see how the final result is built up from the intermediate steps. The radial gradient filter generates an image that has infinite extent. To produce a usable image it is followed by the crop filter. The result of the two filters is:
The input mask image provided to the CIHeightFieldFromMask filter is below, followed by the output image of the CIHeightFieldFromMask filter:
The shaded material core image filter takes two images. The inputImage is the height field image, and the inputShadingImage is the generated radial gradient image. The next image is the application of the shaded material filter with an inputScale value of 16.0, and that is followed by the image generated from the bump distortion filter:
After this I took advantage of other features of Moving Images. I generated multiple images with a varying input for the bump distortion filter. Each of these images was added to an image exporter object, and when all images were generated the image sequence was saved as a gif animation. This is all done using the embossmask script. Here is the resulting gif animation:
Providing input images
An input image for a filter can come from either the output image of an earlier filter in the filter chain or from a base object. Both of these ways of supplying an input image to a filter have been demonstrated above.
If you source an input image from an output image of an earlier filter in the filter chain then you can do it in one of two ways:
Method 1 uses the name identifier of an earlier filter; in this case the filter supplying the image is the one with the name identifier "heightfieldmask".
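Using the keys described earlier, an input image sourced this way might look like the following sketch of a single filter property object:

```json
{
  "cifilterkey": "inputImage",
  "cifiltervalueclass": "CIImage",
  "cifiltervalue": { "mifiltername": "heightfieldmask" }
}
```

Method 2 instead refers to the earlier filter by its position in the filter list, replacing the value with an object such as { "cifilterindex": 0 } to refer to the first filter in the chain.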
If the input image for a filter comes from a base object like a bitmap context or an image importer object then either the "objectreference" key is required, whose value is a base object reference, or alternatively the two keys "objecttype" and "objectname" are required. The value for the "objecttype" key will be "bitmapcontext", "imageimporter" or "nsgraphicscontext". The value for "objectname" is the name given to the base object when it was created.
If the base object reference identifies an "imageimporter" object, or if the "objecttype" is "imageimporter", then an "imageindex" key should also be supplied, which refers to the index of the image in the image file. If the "imageindex" key is not supplied then the image index defaults to 0. Image indexes in an image file start at 0, so in the example below the image referred to is the second image in the image file.
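A sketch of such an input image value, using the keys described above; the object name "sourceimage" is a hypothetical base object name:

```json
{
  "objecttype": "imageimporter",
  "objectname": "sourceimage",
  "imageindex": 1
}
```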
The bitmapcontext and nsgraphicscontext (window) objects are the only ones that can be render destinations for the image filter chain. The rendering destination is specified as a plist dictionary/json object. This object can be defined in one of two ways: by specifying a value for the key "objectreference", which is a reference to the base object that is the destination for the image filter chain render, or by setting the value for the key "objecttype" (for now "bitmapcontext") and the value for the key "objectname", which is the name given to the base object when it was created. There will be more possible values for the "objecttype" key than "bitmapcontext" in the future, so do not assume that this value will remain invariant.
The render destination is specified as part of the creation of the image filter chain base object.
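A sketch of a render destination specification, where the object name "destinationbitmap" is a hypothetical base object name:

```json
{
  "cirenderdestination": {
    "objecttype": "bitmapcontext",
    "objectname": "destinationbitmap"
  }
}
```

Alternatively, as described above, the destination value can be an object with a single "objectreference" key whose value is the base object reference.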
Modifying filter properties when rendering
At render time you can modify the properties of the filters in the filter chain. To modify their values, the render filter chain command can take a json object with a "cifilterproperties" key. The value for this key is an array of objects, each containing three members: the "mifiltername" key, which identifies the filter in the filter chain whose property you want to change, plus the keys "cifilterkey" and "cifiltervalue". The "cifilterkey" specifies the property key of the filter attribute you want to change, and "cifiltervalue" is the value you want to assign to that property.
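A sketch of such a render-time json object, changing the radius of the unsharp mask filter defined earlier; the value 20 is illustrative:

```json
{
  "cifilterproperties": [
    {
      "mifiltername": "com.yvs.documentation.renderingfilterchain.unsharpmask",
      "cifilterkey": "inputRadius",
      "cifiltervalue": 20
    }
  ]
}
```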
All properties except the source for input images can be changed. This does not mean that input images do not change between renders of the filter chain. If the input image is the output image of an earlier filter in the chain, then any change to that output image is captured by the input image. If the input image is sourced from a "bitmapcontext" base object, then whenever the contents of the bitmap context change the filter chain object knows it needs to update the image it uses the next time the chain is rendered.
Setting the source and destination rectangles when rendering
When the filter chain is rendered to the destination, a source and a destination rectangle can be specified; both are optional. If the source rectangle is not specified then the render command uses the extent of the output image of the image filter chain. This may not be the source rectangle you want; for instance the blur filters extend the bounds of the image by the size of the blur in each direction, and you are likely not to want the extended bounds. Instead you can supply a source rectangle, which is likely to be the rectangle of the filter's input image.
The following example supplies a source rectangle which is the dimensions of the input image.
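The key name "sourcerectangle" and the origin/size representation in this sketch are assumptions for illustration, assuming an input image of 1000 by 800 pixels; the --verbose option of the scripts shows the exact form Moving Images generates:

```json
{
  "sourcerectangle": {
    "origin": { "x": 0, "y": 0 },
    "size": { "width": 1000, "height": 800 }
  }
}
```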
If the destination rectangle is not specified, then the rectangle drawn to is the dimensions of the destination object. If we have created a bitmap context with width 1000 pixels and height 800 pixels then the rendering of the image filter chain will be drawn to fill the bitmap context. This can be overridden by specifying the destination rectangle. For example to render only to the right half of a bitmap context then our json object would look like:
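The key name "destinationrectangle" and the origin/size representation in this sketch are assumptions for illustration. For the 1000 by 800 pixel bitmap context described above, rendering to the right half means an origin of x = 500, y = 0 and a size of 500 by 800:

```json
{
  "destinationrectangle": {
    "origin": { "x": 500, "y": 0 },
    "size": { "width": 500, "height": 800 }
  }
}
```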
The examples for using the image filter chain scripts assume that the scripts are in a folder that is in your PATH.
There are four scripts that come with Moving Images for applying core image filters to images: coreimageblend, simplesinglecifilter, dotransition and embossmask. All these scripts take the command line switch --help, which prints information about how to use the script.
There is also the --verbose option, which is helpful when debugging scripts but can also be useful when you want to grab the generated json objects; that is how I used the embossmask script when I needed json objects for this documentation.
The coreimageblend script takes two input images, combines them together using the selected Core Image blend filter and produces an output image:
The simplesinglecifilter script allows you to apply a single core image filter to an image and save the result as a new image file. It only allows you to apply a filter with relatively simple inputs, meaning a filter that takes a single input image and two or fewer numerical inputs. Filters that are not available are ones that take vector or color inputs, or ones that take an image assigned using a key other than "inputImage".
To get a list of filters that simplesinglecifilter can apply to an image:
The "cifilterproperties" key has an array of filter properties as its value. The order is important: the first item in the array relates to input 1, and the second to input 2. The min, max, and default keys are informational only, letting you know what range of values is appropriate to assign to the filter's inputs.
The option "inputvalue1" relates to the "inputRadius" property of the CIUnsharpMask filter while "inputvalue2" relates to the "inputIntensity" of the CIUnsharpMask filter.
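As an illustration of the shape of that information for the CIUnsharpMask filter, here is a sketch; the exact key names, and the min, max and default values shown, are assumptions:

```json
{
  "cifiltername": "CIUnsharpMask",
  "cifilterproperties": [
    { "cifilterkey": "inputRadius", "min": 0, "max": 100, "default": 2.5 },
    { "cifilterkey": "inputIntensity", "min": 0, "max": 1, "default": 0.5 }
  ]
}
```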
The dotransition script allows you to use any of the Core Image transition filters to transition from one image to another. This script is a bit more involved than the others, with more command line options, so reading the output of the "--help" command line option is sensible. The dotransition script produces multiple images for each transition; you can specify how many with the "--count" command line option.
The embossmask script is more a demonstration script than a general purpose one, and is the basis for how I have demonstrated many of the features of the image filter chain object and how to chain filters together as described in the documentation. Nevertheless the script has a number of inputs, making it possible to see how changing input values affects the output.
The YVSChromaKey filter
As well as the built in Core Image filters I have added another filter for use with Moving Images. It is called YVSChromaKeyFilter and allows you to make parts of an image transparent based on a selected color. The filter has three inputs: a chroma key color "inputColor", which is defined as a CIVector; a distance number "inputDistance", which is the distance in a color space within which the image is made fully transparent; and a slope number "inputSlopeWidth", which is also a distance in a color space and is the width of the slope. The smaller the width, the steeper the slope.
cr = chroma color red component
cg = chroma color green component
cb = chroma color blue component
pr = pixel color red component
pg = pixel color green component
pb = pixel color blue component
redDiffSquared = (cr - pr) * (cr - pr)
greenDiffSquared = (cg - pg) * (cg - pg)
blueDiffSquared = (cb - pb) * (cb - pb)
colorDistance = sqrt(redDiffSquared + greenDiffSquared + blueDiffSquared)
In the following diagram the colorDistance is on the horizontal axis. Pixels with colors similar to the chroma color become transparent, whilst colors further away are either semi-transparent or fully opaque.
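Putting the distance formula and the slope behaviour together, here is a minimal sketch in ruby (not MovingImages code) of how the output alpha could be computed for one pixel. The function name and argument order are mine; the clamping of the slope to the 0.0 to 1.0 range is my reading of the description above:

```ruby
# Compute an alpha value for a pixel from its distance to the chroma
# key color. Colors are [r, g, b] arrays with components in 0.0..1.0.
def chroma_alpha(pixel, chroma, input_distance, slope_width)
  # Euclidean distance between the pixel color and the chroma color,
  # matching the colorDistance formula above.
  distance = Math.sqrt(
    (chroma[0] - pixel[0])**2 +
    (chroma[1] - pixel[1])**2 +
    (chroma[2] - pixel[2])**2)
  # Fully transparent (0.0) within input_distance, then a linear ramp
  # of width slope_width up to fully opaque (1.0).
  t = (distance - input_distance) / slope_width
  [[t, 0.0].max, 1.0].min
end
```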
The following shows a json object representation of the properties needed to define a YVSChromaKeyFilter as a filter in a filter chain:
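A sketch of what that json object might look like, following the conventions used earlier in this documentation. The "cifiltername" key name, the identifier, the green chroma color, the numeric values and the four-component vector string for "inputColor" are all illustrative assumptions:

```json
{
  "cifiltername": "YVSChromaKeyFilter",
  "mifiltername": "chromakey",
  "cifilterproperties": [
    {
      "cifilterkey": "inputColor",
      "cifiltervalueclass": "CIVector",
      "cifiltervalue": "[0 1 0 1]"
    },
    { "cifilterkey": "inputDistance", "cifiltervalue": 0.2 },
    { "cifilterkey": "inputSlopeWidth", "cifiltervalue": 0.1 },
    {
      "cifilterkey": "inputImage",
      "cifiltervalueclass": "CIImage",
      "cifiltervalue": { "objectreference": 0 }
    }
  ]
}
```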
The minimum value for both "inputDistance" and "inputSlopeWidth" is 0.0 and the maximum for both is 1.0. Explicitly assigning the filter's default values to these inputs is not actually necessary, precisely because they are the defaults.
Like many core image filters, the YVSChromaKeyFilter can be part of a larger filter chain, including a filter chain of multiple YVSChromaKeyFilters. This allows transparency based on multiple chroma key colors to be achieved providing flexibility for creating images with transparency.