Presented by Thomas Tucker, A10755095.

Assignment 1

Download link has been removed so that GitHub stops complaining.

3.2 Basic Operations

Image::Brighten

This may be run through image -brightness <val>. This was implemented by scaling each color channel of each pixel by the given factor, clamping the values to remain within the range [0, 255]. Example given below:


The image on the left was calculated using a brightness factor of 2.4. The right image used 0.4.
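
The scale-and-clamp step can be sketched per channel as follows (a minimal standalone sketch; the helper names are mine, not the assignment's actual Image::Brighten code):

```cpp
#include <algorithm>
#include <cstdint>

// Scale one 8-bit channel by a brightness factor, then clamp to [0, 255].
uint8_t brightenChannel(uint8_t channel, double factor) {
    double v = channel * factor;
    return static_cast<uint8_t>(std::clamp(v, 0.0, 255.0));
}
```

Applying this to every channel of every pixel reproduces the operation described above.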

Image::ChangeContrast

This may be run through image -contrast <val>. This was implemented by first calculating the average luminance of the image, followed by linearly interpolating between each pixel color and that average luminance. Example given below:


The left image was calculated using a contrast value of -1.3; the center image was calculated with 2.5; the right image was calculated with 0.3.
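
The interpolation step can be sketched per channel as below, assuming the average luminance has already been computed over the whole image (the helper name and per-channel formulation are mine):

```cpp
#include <algorithm>
#include <cstdint>

// Lerp between the image's mean luminance and the pixel channel:
// c = 1 leaves the pixel unchanged, c = 0 flattens everything to the
// mean, c > 1 exaggerates contrast, and c < 0 inverts about the mean.
uint8_t contrastChannel(uint8_t channel, double avgLum, double c) {
    double v = avgLum + c * (channel - avgLum);
    return static_cast<uint8_t>(std::clamp(v, 0.0, 255.0));
}
```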

Image::ChangeSaturation

This may be run through image -saturation <val>. This was implemented by linearly interpolating each pixel with a greyscale version of itself, clamping values to legal ranges. Examples given below:


The left image was generated using a saturation value of 2; the right was generated with 0.2.
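
The per-pixel lerp can be sketched as below. The luminance weights are the common Rec. 601 values, which is an assumption about the assignment's greyscale conversion:

```cpp
#include <algorithm>
#include <cstdint>

// Grey value of a pixel (assumed Rec. 601 luminance weighting).
double greyOf(uint8_t r, uint8_t g, uint8_t b) {
    return 0.299 * r + 0.587 * g + 0.114 * b;
}

// Lerp a channel toward the pixel's grey value: s = 1 keeps the
// original, s = 0 fully desaturates, s > 1 oversaturates.
uint8_t saturateChannel(uint8_t channel, double grey, double s) {
    double v = grey + s * (channel - grey);
    return static_cast<uint8_t>(std::clamp(v + 0.5, 0.0, 255.0));
}
```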

Image::ChangeGamma

This may be run through image -gamma <val>. This was implemented by converting the discrete color values into the continuous range [0, 1], and then raising each to the power of the inverse of the value provided. Examples provided below:


The left image used a value of 0.5; the middle is 1; the right is 2.
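
The normalize, power, and denormalize steps can be sketched per channel as follows (helper name mine; the rounding behavior is an assumption):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Map the 8-bit channel into [0, 1], apply the power curve v^(1/gamma),
// and map back. gamma > 1 brightens midtones; gamma < 1 darkens them.
uint8_t gammaChannel(uint8_t channel, double gamma) {
    double v = channel / 255.0;
    double corrected = std::pow(v, 1.0 / gamma);
    return static_cast<uint8_t>(std::clamp(corrected * 255.0 + 0.5, 0.0, 255.0));
}
```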

Image::Crop

This may be run through image -crop <x> <y> <width> <height>. This is done by creating a new image containing only the pixels inside the requested rectangle and dropping the rest. This allows a photo from an old CSE 167 assignment that revealed a bug to have the bug conveniently removed, as such:


The image on the left reveals the bug; the image on the right hides it.

3.3 Quantization and Dithering

Image::Quantize

This may be run through image -quantize <nbits>. This works by first reducing each pixel channel to the requested number of bits, then inflating that value back into the [0, 255] range expected in the bitmap. Examples are given below:


The image on the left is quantized to 2 bits; the image on the right is quantized to 4.
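
The reduce-then-inflate step can be sketched per channel as follows (helper name mine):

```cpp
#include <cstdint>

// Keep the top nbits of the channel, then stretch the surviving
// levels back across [0, 255].
uint8_t quantizeChannel(uint8_t v, int nbits) {
    int levels = (1 << nbits) - 1;     // e.g. 3 nonzero levels for 2 bits
    int q = v >> (8 - nbits);          // reduce to nbits of precision
    return static_cast<uint8_t>(q * 255 / levels);  // inflate back
}
```

For 2 bits this collapses every channel to one of 0, 85, 170, or 255, which produces the visible banding above.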

Image::RandomDither

This may be run through image -randomDither <bits>. This works by adding noise to the image prior to quantizing, in the hope of breaking up the sharp contours created by quantization. Examples are given below:


The image on the left is quantized with random dither to 2 bits; the image on the right is quantized with random dither to 4 bits.
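
The add-noise-then-quantize step can be sketched per channel as below. The noise range of half a quantization step in either direction is an assumption; the assignment's magnitude may differ:

```cpp
#include <algorithm>
#include <cstdint>
#include <random>

// Plain quantization: keep the top nbits, stretch back across [0, 255].
uint8_t quantizeChannel(uint8_t v, int nbits) {
    int levels = (1 << nbits) - 1;
    return static_cast<uint8_t>((v >> (8 - nbits)) * 255 / levels);
}

// Perturb the channel by up to +/- half a quantization step before
// quantizing, so banding contours dissolve into noise.
uint8_t randomDitherChannel(uint8_t v, int nbits, std::mt19937& rng) {
    double step = 255.0 / ((1 << nbits) - 1);
    std::uniform_real_distribution<double> noise(-step / 2.0, step / 2.0);
    double noisy = std::clamp(v + noise(rng), 0.0, 255.0);
    return quantizeChannel(static_cast<uint8_t>(noisy), nbits);
}
```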

Image::FloydSteinbergDither

This may be run through image -FloydSteinbergDither <bits>. This works by spreading the error created by quantization to neighboring pixels, with the intent that the result will be perceived as closer in color to the original. Examples are given below:


The image on the left is quantized with Floyd-Steinberg dither to 2 bits; the image on the right is similarly quantized to 4 bits. Note that while the quantization is harder to see due to the error spreading and the shrunken image, the fault lines created by a substandard implementation are quite visible.
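
The error-spreading pass can be sketched on a single-channel image as below, using the classic 7/16, 3/16, 5/16, 1/16 neighbor weights (the buffer layout and names are mine, not the assignment's code):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Quantize each pixel in scan order and push its rounding error onto
// the not-yet-visited neighbors: right (7/16), below-left (3/16),
// below (5/16), below-right (1/16).
std::vector<uint8_t> floydSteinberg(std::vector<double> img,
                                    int w, int h, int nbits) {
    int levels = (1 << nbits) - 1;
    auto quant = [&](double v) {
        int q = static_cast<int>(std::clamp(v, 0.0, 255.0) / 255.0 * levels + 0.5);
        return q * 255.0 / levels;
    };
    std::vector<uint8_t> out(img.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            double old = img[y * w + x];
            double q = quant(old);
            double err = old - q;
            out[y * w + x] = static_cast<uint8_t>(q);
            if (x + 1 < w)              img[y * w + x + 1]       += err * 7 / 16;
            if (y + 1 < h && x > 0)     img[(y + 1) * w + x - 1] += err * 3 / 16;
            if (y + 1 < h)              img[(y + 1) * w + x]     += err * 5 / 16;
            if (y + 1 < h && x + 1 < w) img[(y + 1) * w + x + 1] += err * 1 / 16;
        }
    return out;
}
```

Getting the four neighbor offsets and bounds checks right is exactly where the "fault lines" mentioned above tend to come from.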

3.4 Basic Convolution and Edge Detection

Image::Blur

This may be run through image -blur <n>. First, pixel weights are calculated in one dimension from the Gaussian function. These are then expanded into a two-dimensional Gaussian matrix, which is fed through a convolution function. Examples are shown below:


The image on the left was blurred with a matrix size of 3; the image on the right was blurred with a matrix size of 13.
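
The kernel construction can be sketched as below. The choice of sigma as n/6 is an assumption; the assignment's value may differ:

```cpp
#include <cmath>
#include <vector>

// Build a normalized n-tap 1-D Gaussian, then take its outer product
// to get the n x n matrix fed through the convolution routine.
std::vector<std::vector<double>> gaussianKernel(int n) {
    double sigma = n / 6.0;  // assumed sigma; tune to taste
    int half = n / 2;
    std::vector<double> k1(n);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double d = i - half;
        k1[i] = std::exp(-d * d / (2.0 * sigma * sigma));
        sum += k1[i];
    }
    for (double& v : k1) v /= sum;  // normalize so the taps sum to 1
    std::vector<std::vector<double>> k2(n, std::vector<double>(n));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            k2[i][j] = k1[i] * k1[j];  // separable outer product
    return k2;
}
```

Because the 2-D kernel is the outer product of a normalized 1-D kernel, it automatically sums to 1, so the blur does not change overall brightness.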

Image::Sharpen

This may be run through image -sharpen. The sharpness amount is hardcoded; a 3x3 convolution matrix is utilized which gives heavy weight to the center pixel and negative weights to its neighbors. Examples are shown below:


The image on the left is the image prior to modification; the image on the right has been sharpened. Notice the increased definition of its fur and nose features.
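
One common choice for such a kernel is center weight 9 with -1 neighbors, sketched below; the exact hard-coded weights in the assignment may differ:

```cpp
#include <algorithm>
#include <cstdint>

// Apply a 3x3 sharpen kernel (center 9, neighbors -1) at the center of
// a patch. The weights sum to 1, so flat regions pass through unchanged
// while local detail is exaggerated.
uint8_t sharpenCenter(const uint8_t p[3][3]) {
    int acc = 0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            acc += (i == 1 && j == 1 ? 9 : -1) * p[i][j];
    return static_cast<uint8_t>(std::clamp(acc, 0, 255));
}
```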

Image::EdgeDetect

This may be run through image -edgeDetect <threshold>. This uses a Sobel filter, passed through a convolution function, to detect gradients. The gradients are then sent through a sieve which uses the provided threshold to determine the gradient magnitude required to count as an edge. Examples are shown below:


The image on the left is the base image used. The center image has a threshold of 50; the right image has a threshold of 125.
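
The Sobel-plus-threshold step can be sketched at a single pixel as follows (greyscale patch and helper name are mine):

```cpp
#include <cmath>
#include <cstdint>

// Sobel gradient magnitude at the center of a 3x3 greyscale patch,
// mapped to white (edge) or black against the caller's threshold.
uint8_t sobelEdge(const uint8_t p[3][3], double threshold) {
    static const int gx[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static const int gy[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};
    int sx = 0, sy = 0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            sx += gx[i][j] * p[i][j];
            sy += gy[i][j] * p[i][j];
        }
    double mag = std::sqrt(double(sx) * sx + double(sy) * sy);
    return mag >= threshold ? 255 : 0;
}
```

A higher threshold demands a steeper gradient, which is why the right image above keeps fewer edges than the center one.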

3.5 Antialiased Scale and Shift

Image::Scale

This may be run through image [-sampling <num>] -size <x> <y>, with valid sampling numbers being 0 for nearest neighbor and 1 for hat. The selected filter is first applied along the x axis for stretching or shrinking, and then along the y axis. The Mitchell filter had a late-emerging bug which has prevented its inclusion entirely; the hat filter had a similarly late-emerging bug, although its output is still vaguely presentable. Examples are given below:


The image on the left has been expanded; the image in the center has been shrunk; and the image on the right has been squished (nearest neighbor).
The image on the left has been expanded; the image in the center has been shrunk; and the image on the right has been squished (hat). Notice that there is a bug.
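
The nearest-neighbor pass along one axis can be sketched as below; the full operation runs this along x, then again along y (names and the pixel-center sampling convention are mine):

```cpp
#include <cstdint>
#include <vector>

// Resample one scanline to a new width by picking, for each output
// pixel center, the nearest source pixel.
std::vector<uint8_t> resampleNearest(const std::vector<uint8_t>& row, int newW) {
    std::vector<uint8_t> out(newW);
    for (int i = 0; i < newW; ++i) {
        int src = static_cast<int>((i + 0.5) * row.size() / newW);
        if (src >= static_cast<int>(row.size()))
            src = static_cast<int>(row.size()) - 1;
        out[i] = row[src];
    }
    return out;
}
```

The hat filter replaces the single nearest sample with a triangle-weighted average of nearby samples, which is where the late-emerging bug mentioned above crept in.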

Image::Shift

This was not implemented... sadness fills the land.

3.6 Fun nonlinear filters

This was not implemented...