Occasionally I receive emails asking how to render camera frames using OpenGL ES on Android. I had lazily used some code from the open source project InstaCam without fully understanding it, until recently I reviewed some fundamentals of OpenGL ES and re-implemented a demo app for camera frame rendering using shaders. This post discusses some key aspects of the implementation and shares the demo code.
CameraRenderer
Drawing with OpenGL is implemented by rendering on a GLSurfaceView. The common approach is to subclass GLSurfaceView and implement the GLSurfaceView.Renderer interface; the rendering tasks are performed in the interface's callbacks.
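A minimal sketch of this setup is below, assuming a renderer class named CameraRenderer (the names and method bodies here are placeholders, not necessarily what the demo app uses):

```java
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class CameraRenderer implements GLSurfaceView.Renderer {
    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        // Compile the shader program and generate the camera texture here.
    }

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {
        // React to surface size (and orientation) changes here.
    }

    @Override
    public void onDrawFrame(GL10 unused) {
        // Latch the latest camera frame and draw the screen quad here.
    }
}
```

The renderer is attached with glSurfaceView.setEGLContextClientVersion(2) followed by glSurfaceView.setRenderer(new CameraRenderer()); using GLSurfaceView.RENDERMODE_WHEN_DIRTY lets us redraw only when a new camera frame arrives.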
SurfaceTexture
The SurfaceTexture class provided by the Android SDK (API Level 11+) has made our lives much easier when dealing with image streams from either the Camera or the MediaPlayer. A SurfaceTexture is bound to an OpenGL texture id at instantiation (mCameraTexture, discussed later, generates the OpenGL texture handle):
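Something like the following, where getTextureId() is an assumed helper on mCameraTexture that returns the handle from glGenTextures, not an Android API:

```java
// The texture handle comes from mCameraTexture (see below);
// getTextureId() is an assumed accessor on that object.
int textureHandle = mCameraTexture.getTextureId();
SurfaceTexture mSurfaceTexture = new SurfaceTexture(textureHandle);

// Hand the SurfaceTexture to the camera as its preview target.
try {
    mCamera.setPreviewTexture(mSurfaceTexture);
    mCamera.startPreview();
} catch (java.io.IOException e) {
    e.printStackTrace();
}
```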
A listener can be registered via SurfaceTexture.setOnFrameAvailableListener to be notified whenever a new frame is streamed in. Here the camera renderer is registered to listen for these updates: whenever a new frame arrives, the renderer is asked to draw a new frame on the surface. mSurfaceTexture.updateTexImage() is then used to fetch the most recent frame on the stream.
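Concretely, the pattern looks something like this (assuming the render mode is RENDERMODE_WHEN_DIRTY, so requestRender() triggers a redraw):

```java
// Request a redraw whenever the camera pushes a new frame.
mSurfaceTexture.setOnFrameAvailableListener(
        new SurfaceTexture.OnFrameAvailableListener() {
            @Override
            public void onFrameAvailable(SurfaceTexture surfaceTexture) {
                mGLSurfaceView.requestRender();
            }
        });
```

Then, at the start of onDrawFrame, mSurfaceTexture.updateTexImage() latches the latest frame into the bound OpenGL texture.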
The texture updated by SurfaceTexture can only be bound to the GL_TEXTURE_EXTERNAL_OES target rather than the GL_TEXTURE_2D target. Therefore a texture handle is generated by the mCameraTexture object mentioned above using the following implementation (bound with GLES11Ext.GL_TEXTURE_EXTERNAL_OES):
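A sketch of that texture setup (the filtering and wrap parameters follow the standard requirements for external textures):

```java
import android.opengl.GLES11Ext;
import android.opengl.GLES20;

// Generate a texture handle and bind it to the external OES target.
int[] handles = new int[1];
GLES20.glGenTextures(1, handles, 0);
int textureHandle = handles[0];
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureHandle);

// External textures do not support mipmaps and only allow clamped wrapping.
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
```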
In the fragment shader, when this texture is bound, the extension directive has to be inserted as the first line, and the texture must be sampled through a samplerExternalOES uniform rather than a sampler2D:
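A minimal fragment shader might look like this (the uniform and varying names, sTexture and vTextureCoord, are my own choices):

```java
// GLSL source kept as a Java string constant; the #extension directive
// must come first, and the sampler type must be samplerExternalOES.
private static final String FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "uniform samplerExternalOES sTexture;\n" +
        "varying vec2 vTextureCoord;\n" +
        "void main() {\n" +
        "    gl_FragColor = texture2D(sTexture, vTextureCoord);\n" +
        "}\n";
```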
Draw Screen Quad
Using shaders, a simple screen quad can be easily drawn.
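A minimal sketch: four vertices in normalized device coordinates drawn as a triangle strip (aPositionHandle is assumed to come from GLES20.glGetAttribLocation on the linked program):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import android.opengl.GLES20;

// Full-screen quad: bottom-left, bottom-right, top-left, top-right.
private static final float[] QUAD = {
        -1f, -1f,
         1f, -1f,
        -1f,  1f,
         1f,  1f,
};
private final FloatBuffer mQuadVertices = ByteBuffer
        .allocateDirect(QUAD.length * 4)        // 4 bytes per float
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();

// In onSurfaceCreated: fill the buffer once.
mQuadVertices.put(QUAD).position(0);

// In onDrawFrame, after glUseProgram: feed the quad and draw it.
GLES20.glVertexAttribPointer(aPositionHandle, 2, GLES20.GL_FLOAT,
        false, 0, mQuadVertices);
GLES20.glEnableVertexAttribArray(aPositionHandle);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
```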
Three issues are considered here:

- A transformation matrix is queried using mSurfaceTexture.getTransformMatrix(float[]). This matrix transforms traditional 2D OpenGL ES texture coordinate column vectors of the form (s, t, 0, 1), where s and t are on the inclusive interval [0, 1], to the proper sampling location in the streamed texture.
- Changing the phone's orientation changes the Surface dimensions (width and height are swapped) but has no effect on the camera frame size (its width remains greater than its height at all times). This is handled by passing an orientation matrix to the shader to adjust the orientation of the frame whenever the phone's orientation changes.
- The screen dimensions (the SurfaceView dimensions) and the camera frame dimensions might not be the same, so to maintain a proper width/height ratio, a scaling factor should be passed to the shader to resize the screen quad.
The code below passes these three parameters to the shader. Note that uTransformM is updated every frame as required, uOrientationM is updated whenever the orientation of the phone changes, and ratios is also updated on orientation changes, since the width/height ratio changes whenever the actual dimensions do. The latter two are updated in the onSurfaceChanged(GL10, int width, int height) method.
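A hedged sketch of the hand-off; the uniform handles (uTransformMHandle and so on) are assumed to come from GLES20.glGetUniformLocation on the linked program, and mTransformM, mOrientationM, and mRatios are float arrays held by the renderer:

```java
import android.opengl.GLES20;

// In onDrawFrame, with the shader program in use:
GLES20.glUseProgram(mProgram);
mSurfaceTexture.updateTexImage();
mSurfaceTexture.getTransformMatrix(mTransformM);   // refreshed every frame
GLES20.glUniformMatrix4fv(uTransformMHandle, 1, false, mTransformM, 0);

// mOrientationM and mRatios are recomputed in onSurfaceChanged and
// simply re-uploaded here.
GLES20.glUniformMatrix4fv(uOrientationMHandle, 1, false, mOrientationM, 0);
GLES20.glUniform2fv(uRatiosHandle, 1, mRatios, 0);
```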
In the vertex shader, uTransformM, uOrientationM, and ratios work together to make sure the frame's texture coordinates fit into the phone window:
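One plausible version of such a vertex shader (the attribute and varying names are my own; the demo's shader may differ in detail): rotate the quad coordinates with uOrientationM, remap them from clip space [-1, 1] to texture space [0, 1], run them through uTransformM, and scale the quad itself by ratios:

```java
// GLSL source as a Java string constant; a sketch, not the demo's exact shader.
private static final String VERTEX_SHADER =
        "attribute vec2 aPosition;\n" +
        "uniform mat4 uTransformM;\n" +
        "uniform mat4 uOrientationM;\n" +
        "uniform vec2 ratios;\n" +
        "varying vec2 vTextureCoord;\n" +
        "void main() {\n" +
        // Scale the quad to preserve the frame's aspect ratio on screen.
        "    gl_Position = vec4(aPosition * ratios, 0.0, 1.0);\n" +
        // Rotate, remap [-1, 1] to [0, 1], then apply the stream transform.
        "    vec4 texCoord = uOrientationM * vec4(aPosition, 0.0, 1.0);\n" +
        "    vTextureCoord = (uTransformM * ((texCoord + 1.0) * 0.5)).xy;\n" +
        "}\n";
```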
Closure
Some details are not covered in this post, and it might be confusing to look at the code fragments above in isolation. What I want to do here is note the critical steps in my implementation for my own record. It will be much more helpful to go through the complete implementation of the demo app (which is quite concise, only a few hundred lines of code). Want something more interesting? Try replacing the fragment shader with some of the fun shaders in Instagram Filter and Shader Filter, to see interesting filters applied in real time to the camera view.