It took me quite a long time to get familiar with OpenGL ES shaders. I finally got some code running and produced results that looked nice. The main area I want to explore is using shaders for general-purpose GPU computation (GPGPU). Here I summarize some image manipulation examples from the very popular textbook Graphics Shaders: Theory and Practice by Bailey and Cunningham. The examples are made to run on an Android phone.
Texture mapping using OpenGL ES 2.0
Here is a good online tutorial to get started with OpenGL ES 2.0 for Android. A simple texture mapping project is created: it maps a bitmap image onto a square that fits the window size of the device. The fun things I am going to do start from this stage.
Basic Concepts
GLSL deals with images by treating them as textures and using texture access and manipulation to set the color of each pixel in the color buffer. A texture can be treated as an image raster by working with each texel individually. Since OpenGL texture coordinates range from 0.0 to 1.0, the center of the image is at vec2(0.5, 0.5), and you can increment a texture coordinate by 1./ResS or 1./ResT to move from one texel to the next horizontally or vertically, where ResS and ResT are the sizes of the displayed image in pixels.
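As a quick illustration (ResS, ResT, u_Texture, and v_TexCoord are placeholder names that the application must set up), here is a fragment shader that samples the texel one step to the right of the current one, which shifts the whole image left by one texel:

```glsl
precision mediump float;

uniform sampler2D u_Texture;   // the image, bound as a texture
uniform float ResS;            // displayed image width in pixels
uniform float ResT;            // displayed image height in pixels
varying vec2 v_TexCoord;       // texture coordinate from the vertex shader

void main() {
    vec2 stp = vec2(1.0 / ResS, 1.0 / ResT);      // one-texel step in s and t
    vec2 right = v_TexCoord + vec2(stp.s, 0.0);   // coordinate of the right-hand neighbor
    gl_FragColor = texture2D(u_Texture, right);   // shifts the image left by one texel
}
```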
Working with Fragment Shaders
The fragment shader is responsible for the per-texel processing; the manipulation of the image pixels is handled entirely in fragment shaders. Here is a simple fragment shader that just displays the texture (a minimal sketch; the uniform and varying names are placeholders for whatever your setup code binds):
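```glsl
precision mediump float;

uniform sampler2D u_Texture;   // the bitmap bound by the application
varying vec2 v_TexCoord;       // interpolated texture coordinate

void main() {
    gl_FragColor = texture2D(u_Texture, v_TexCoord);   // just copy the texel color
}
```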
Color/Brightness/Hue
Luminance: luminance is defined as a linear combination of red, green, and blue. The weight vector is a constant: vec3(0.2125, 0.7154, 0.0721).
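Converting to grayscale is then just a dot product with this weight vector; a minimal sketch with the same placeholder names:

```glsl
precision mediump float;

uniform sampler2D u_Texture;
varying vec2 v_TexCoord;

const vec3 W = vec3(0.2125, 0.7154, 0.0721);   // luminance weights for R, G, B

void main() {
    vec3 color = texture2D(u_Texture, v_TexCoord).rgb;
    float luminance = dot(color, W);              // weighted sum of the channels
    gl_FragColor = vec4(vec3(luminance), 1.0);    // replicate into a gray color
}
```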
Hue shifting: the RGB color is first converted into the HSV color space; the hue shift is then just an adjustment of the H value in degrees. The code is a bit long; check it out in the source code directory.
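For reference, here is a sketch using a compact RGB/HSV conversion pair that circulates in the shader community (this is not the longer version from the source directory; u_HueShift, the shift in degrees, is a placeholder uniform):

```glsl
precision mediump float;

uniform sampler2D u_Texture;
uniform float u_HueShift;   // hue shift in degrees
varying vec2 v_TexCoord;

// Compact RGB<->HSV conversions; h, s, v are all in [0, 1]
vec3 rgb2hsv(vec3 c) {
    vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
    vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
    float d = q.x - min(q.w, q.y);
    float e = 1.0e-10;
    return vec3(abs(q.z + (q.w - q.x) / (6.0 * d + e)), d / (q.x + e), q.x);
}

vec3 hsv2rgb(vec3 c) {
    vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
    vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
    return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
}

void main() {
    vec3 hsv = rgb2hsv(texture2D(u_Texture, v_TexCoord).rgb);
    hsv.x = fract(hsv.x + u_HueShift / 360.0);   // rotate the hue, wrapping at 360 degrees
    gl_FragColor = vec4(hsv2rgb(hsv), 1.0);
}
```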
Negative: the negative is obtained by subtracting the color of each pixel from white.
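This one is nearly a one-liner (same placeholder names):

```glsl
precision mediump float;

uniform sampler2D u_Texture;
varying vec2 v_TexCoord;

void main() {
    vec3 color = texture2D(u_Texture, v_TexCoord).rgb;
    gl_FragColor = vec4(vec3(1.0) - color, 1.0);   // subtract each channel from white
}
```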
Brightness: mixing the color with black brightens or darkens the image. Brightness, contrast, and saturation all come down to the same mix() idiom; see the combined sketch after the saturation item below.
Contrast: use a gray image as the base and mix it with the color image. The mix can move the color components either toward the gray or away from it; that is how the contrast is adjusted.
Saturation: mixing the color image with the grayscale image from the luminance example adjusts the saturation.
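Here is a minimal sketch combining the three adjustments. The control uniforms are placeholders, and I use a constant mid-gray of 0.5 as the contrast base; in each case 1.0 leaves the image unchanged, values below 1.0 move toward the base, and values above 1.0 extrapolate away from it (GLSL's mix() does not clamp its third argument):

```glsl
precision mediump float;

uniform sampler2D u_Texture;
varying vec2 v_TexCoord;

// Placeholder control uniforms: 1.0 means "no change" for each
uniform float u_Brightness;  // 0.0 = black, > 1.0 = brighter
uniform float u_Contrast;    // 0.0 = flat gray, > 1.0 = higher contrast
uniform float u_Saturation;  // 0.0 = grayscale, > 1.0 = oversaturated

const vec3 W = vec3(0.2125, 0.7154, 0.0721);   // luminance weights

void main() {
    vec3 color = texture2D(u_Texture, v_TexCoord).rgb;
    color = mix(vec3(0.0), color, u_Brightness);             // mix with black
    color = mix(vec3(0.5), color, u_Contrast);               // mix with mid-gray
    color = mix(vec3(dot(color, W)), color, u_Saturation);   // mix with the gray version
    gl_FragColor = vec4(clamp(color, 0.0, 1.0), 1.0);
}
```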
Warping/Distortion
The twirl transformation rotates the image around a given anchor point (xc, yc) by an angle that varies across the image: it has a value alpha at the center and decreases linearly with the radial distance, reaching zero at a limiting radius r. There are lots of other transformations worth trying.
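A minimal sketch of the twirl, with the anchor point, center angle, and limiting radius hard-coded as constants (in the app these would be uniforms):

```glsl
precision mediump float;

uniform sampler2D u_Texture;
varying vec2 v_TexCoord;

const vec2  CENTER = vec2(0.5, 0.5);   // anchor point (xc, yc) in texture coordinates
const float ALPHA  = 3.0;              // rotation angle at the center, in radians
const float R_MAX  = 0.5;              // limiting radius of the effect

void main() {
    vec2 d = v_TexCoord - CENTER;
    float r = length(d);
    if (r < R_MAX) {
        // Angle falls off linearly from ALPHA at the center to 0 at R_MAX
        float beta = ALPHA * (R_MAX - r) / R_MAX;
        float s = sin(beta);
        float c = cos(beta);
        d = vec2(c * d.x - s * d.y, s * d.x + c * d.y);   // rotate the offset
    }
    gl_FragColor = texture2D(u_Texture, CENTER + d);
}
```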
Image processing/vision
Edge detection: a Sobel filter is used for edge detection, which requires access to neighboring pixels. Here we need to know the exact width and height of the displayed texture in pixels; however, I haven't figured out how to query this, so I cheated a bit and just used the screen height (720 px in this case), since the image is assumed to fit the window, which is not exactly true.
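A sketch of the Sobel magnitude shader. GLSL ES 1.00 offers no built-in query for the texture size, so the usual fix is to pass it in; u_TexelSize below is a placeholder uniform holding vec2(1./ResS, 1./ResT), set from the Java side:

```glsl
precision mediump float;

uniform sampler2D u_Texture;
uniform vec2 u_TexelSize;   // placeholder: vec2(1./ResS, 1./ResT) from the application
varying vec2 v_TexCoord;

const vec3 W = vec3(0.2125, 0.7154, 0.0721);

// Luminance of the texel at the given offset (in texel units)
float lum(vec2 offset) {
    return dot(texture2D(u_Texture, v_TexCoord + offset * u_TexelSize).rgb, W);
}

void main() {
    // Horizontal and vertical Sobel kernels over the 3x3 neighborhood
    float gx = -lum(vec2(-1., -1.)) - 2. * lum(vec2(-1., 0.)) - lum(vec2(-1., 1.))
             +  lum(vec2( 1., -1.)) + 2. * lum(vec2( 1., 0.)) + lum(vec2( 1., 1.));
    float gy = -lum(vec2(-1., -1.)) - 2. * lum(vec2(0., -1.)) - lum(vec2(1., -1.))
             +  lum(vec2(-1.,  1.)) + 2. * lum(vec2(0.,  1.)) + lum(vec2(1.,  1.));
    float mag = length(vec2(gx, gy));       // gradient magnitude
    gl_FragColor = vec4(vec3(mag), 1.0);    // bright where edges are strong
}
```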
Blurring: a 3x3 Gaussian filter is used for blurring in this case. The result is not very obvious since a small radius is used.
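A sketch of the 3x3 Gaussian blur, reusing the placeholder u_TexelSize uniform:

```glsl
precision mediump float;

uniform sampler2D u_Texture;
uniform vec2 u_TexelSize;   // placeholder: vec2(1./ResS, 1./ResT)
varying vec2 v_TexCoord;

// Color of the texel at the given offset (in texel units)
vec3 tap(float dx, float dy) {
    return texture2D(u_Texture, v_TexCoord + vec2(dx, dy) * u_TexelSize).rgb;
}

void main() {
    // 3x3 Gaussian kernel: (1 2 1; 2 4 2; 1 2 1) / 16
    vec3 sum =      tap(-1., -1.) + 2. * tap(0., -1.) +      tap(1., -1.)
             + 2. * tap(-1.,  0.) + 4. * tap(0.,  0.) + 2. * tap(1.,  0.)
             +      tap(-1.,  1.) + 2. * tap(0.,  1.) +      tap(1.,  1.);
    gl_FragColor = vec4(sum / 16.0, 1.0);
}
```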
Artistic Effects
Embossing: the embossed image is obtained by applying edge detection to the luminance image and highlighting pixels differently depending on the edge's angle.
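I haven't reproduced the angle-dependent highlighting here; a simpler sketch that embosses by taking the luminance difference along one diagonal looks similar:

```glsl
precision mediump float;

uniform sampler2D u_Texture;
uniform vec2 u_TexelSize;   // placeholder: vec2(1./ResS, 1./ResT)
varying vec2 v_TexCoord;

const vec3 W = vec3(0.2125, 0.7154, 0.0721);

void main() {
    float here = dot(texture2D(u_Texture, v_TexCoord).rgb, W);
    float diag = dot(texture2D(u_Texture, v_TexCoord + u_TexelSize).rgb, W);
    // The luminance difference along the diagonal lights edges facing one way
    // and darkens those facing the other; +0.5 recenters flat areas to gray
    float c = clamp(here - diag + 0.5, 0.0, 1.0);
    gl_FragColor = vec4(vec3(c), 1.0);
}
```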
Toon-like image: the steps are to calculate the luminance of each pixel and apply the Sobel filter to get an edge magnitude; if the magnitude is larger than a threshold, color the pixel black, otherwise quantize the pixel's color.
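A sketch combining the Sobel magnitude from above with color quantization (the threshold and number of levels are values I picked by eye):

```glsl
precision mediump float;

uniform sampler2D u_Texture;
uniform vec2 u_TexelSize;   // placeholder: vec2(1./ResS, 1./ResT)
varying vec2 v_TexCoord;

const vec3  W = vec3(0.2125, 0.7154, 0.0721);
const float EDGE_THRESHOLD = 0.5;   // edge magnitude cutoff, picked by eye
const float LEVELS = 4.0;           // quantization levels per channel

float lum(vec2 offset) {
    return dot(texture2D(u_Texture, v_TexCoord + offset * u_TexelSize).rgb, W);
}

void main() {
    float gx = -lum(vec2(-1., -1.)) - 2. * lum(vec2(-1., 0.)) - lum(vec2(-1., 1.))
             +  lum(vec2( 1., -1.)) + 2. * lum(vec2( 1., 0.)) + lum(vec2( 1., 1.));
    float gy = -lum(vec2(-1., -1.)) - 2. * lum(vec2(0., -1.)) - lum(vec2(1., -1.))
             +  lum(vec2(-1.,  1.)) + 2. * lum(vec2(0.,  1.)) + lum(vec2(1.,  1.));
    if (length(vec2(gx, gy)) > EDGE_THRESHOLD) {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);   // strong edge: draw a black outline
    } else {
        vec3 color = texture2D(u_Texture, v_TexCoord).rgb;
        gl_FragColor = vec4(floor(color * LEVELS) / LEVELS, 1.0);   // flat "toon" bands
    }
}
```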
Closure
These are all simple image manipulations that could easily be done in software like Photoshop. The interesting thing is that they now run on the GPU of an embedded device. The advantages of GPGPU may not be obvious from these examples, but I will try to do some efficiency evaluations and continue with my "serious" research to see how it helps. Source Code on Github