1. Team Members:
Han Li (CGGT), Qing Sun (CGGT)
2. Project Description:
We are going to implement a post-process depth-of-field effect on mobile devices using the GPU. We think this is meaningful because it allows users to make photos taken with cellphones look as if they were taken with a large-aperture DSLR!
a) The input of our system consists of two parts.
The first part is the original image the user wants to process.
The second part is the depth information. Since this is a post-process, we cannot get depth information from any geometry. Our approach is to let the user "draw" the depth map. We provide the user with brushes of different radii and hardness. Using these brushes, the user can paint over the original image in grayscale, where white indicates closest to the camera and black means farthest.
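As an illustration of the brush model, here is a minimal Python sketch of stamping one soft circular stroke onto a grayscale depth map. The falloff model (a solid core of `hardness * radius`, then a linear fade to the rim) is our assumption for this sketch; the function name `stamp_brush` and the parameters are hypothetical, not part of any existing API.

```python
import math

def stamp_brush(depth, x0, y0, radius, hardness, value):
    """Paint one circular grayscale brush stamp onto a 2D depth map.

    depth    -- 2D list of floats in [0, 1] (1 = white/near, 0 = black/far)
    hardness -- in [0, 1): fraction of the radius that is fully opaque,
                with a linear falloff outside it (an assumed model).
    """
    h, w = len(depth), len(depth[0])
    for y in range(max(0, y0 - radius), min(h, y0 + radius + 1)):
        for x in range(max(0, x0 - radius), min(w, x0 + radius + 1)):
            d = math.hypot(x - x0, y - y0)
            if d > radius:
                continue  # outside the brush circle
            if d <= hardness * radius:
                alpha = 1.0  # solid core of the brush
            else:
                # linear falloff from the core edge to the rim
                alpha = 1.0 - (d - hardness * radius) / ((1.0 - hardness) * radius)
            # alpha-blend the brush value over the existing depth
            depth[y][x] = depth[y][x] * (1 - alpha) + value * alpha

# Example: one white (near) stamp on an all-black (far) 9x9 map
depth = [[0.0] * 9 for _ in range(9)]
stamp_brush(depth, 4, 4, 4, 0.5, 1.0)
```

A full stroke would just repeat the stamp along the finger's path.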
b) After this, we let the user specify the focus point of the camera.
c) The next stage is the implementation of the depth-of-field effect as GLSL shaders using OpenGL ES 2.0.
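The per-pixel math the fragment shader needs can be sketched outside GLSL. Below is a simplified model we assume for this sketch: a circle-of-confusion radius that grows with the depth difference from the focal plane, used to blend between the sharp image and a pre-blurred copy. The function names and the `max_radius` parameter are our own illustration, not the final shader.

```python
def coc_radius(depth, focus_depth, max_radius=8.0):
    """Circle-of-confusion radius for one pixel, as a simplified model:
    it scales linearly with the distance from the focal plane, clamped
    to max_radius. The GLSL version would compute this per fragment."""
    return max_radius * min(1.0, abs(depth - focus_depth))

def dof_blend(sharp, blurred, depth, focus_depth, max_radius=8.0):
    """Lerp one channel between the sharp and pre-blurred image using
    the normalized circle of confusion as the blend factor."""
    t = coc_radius(depth, focus_depth, max_radius) / max_radius
    return sharp * (1 - t) + blurred * t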
We are going to use Monodroid as our tool. It is based on C# and can be deployed to Android cellphones. Our target platform is Android 2.2, which is popular and has good support for OpenGL ES 2.0.
The main challenges of this project fall into four parts:
b) OpenGL ES programming. Even though we have learned some GLSL, there are still many differences on a mobile system. We will have to explore how to port algorithms from desktop OpenGL to OpenGL ES 2.0.
c) DOF algorithms. There are many algorithms for depth of field, and the key word in all of them is "blur". Carrying out a decent yet fast blur on a mobile device will be a challenge for us.
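One standard way to make the blur fast enough for mobile is a separable Gaussian: two 1D passes (horizontal, then vertical) need O(k) texture taps per pixel per pass instead of O(k^2) for a full 2D kernel. A minimal Python sketch of the idea on a grayscale image, assuming clamp-to-edge borders:

```python
import math

def gaussian_kernel(radius, sigma):
    """Normalized 1D Gaussian weights over [-radius, radius]."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_1d(row, kernel):
    """Convolve one row with the kernel, clamping indices at the borders."""
    r = len(kernel) // 2
    n = len(row)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), n - 1)  # clamp-to-edge
            acc += row[idx] * w
        out.append(acc)
    return out

def separable_blur(img, radius=2, sigma=1.0):
    """Horizontal pass, then vertical pass via transpose: the separable
    trick that replaces one O(k^2) 2D kernel with two O(k) 1D kernels."""
    kernel = gaussian_kernel(radius, sigma)
    tmp = [blur_1d(row, kernel) for row in img]            # horizontal
    cols = [blur_1d(list(col), kernel) for col in zip(*tmp)]  # vertical
    return [list(row) for row in zip(*cols)]
```

On the GPU the two passes would be two render-to-texture draws with the same 1D kernel in the shader.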
d) Depth map generation. This application fully depends on the depth map we generate. Since this piece of data is largely left to user control, we should find the best way to make it work: it must be easy for users to draw, and we must process the inputs and extract a usable depth map from them. We believe the algorithm will need a tuning period before it outputs qualified results for the later passes.
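As one example of such processing, the painted values may not span the full grayscale range, which would waste focus-slider resolution. A minimal cleanup pass, under the assumption that a simple min-max stretch is enough (the real pipeline might also smooth strokes or fill unpainted regions):

```python
def normalize_depth(depth):
    """Stretch painted grayscale values to cover the full [0, 1] range,
    so the focus control maps onto the whole painted scene. Returns an
    all-zero map when the input is constant (nothing to stretch)."""
    flat = [v for row in depth for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [[0.0 for _ in row] for row in depth]
    return [[(v - lo) / (hi - lo) for v in row] for row in depth]
```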