Tuesday, April 26, 2011

Android Hand Draw & Canvas Details: Creating the DOF Map

In the previous section we mentioned that we can use the Canvas class to display Bitmaps on the screen, and that we can get a Bitmap from the Gesture layout or from hand touch events.

Here we are going to look at Android hand-draw touch events and the Canvas class, to see how we will create the DOF map.

The Canvas class is like a container on the screen. When we create a Canvas we can bind a Bitmap to it. A Canvas with a Bitmap bound to it can draw things, including the Bitmap itself.

So for the DOF map, we first "put" an empty screen-sized Bitmap on our Canvas using Canvas.drawBitmap(Bitmap, 0, 0, null);
"0, 0" here means the left/top corner of the Bitmap is drawn at (0, 0). The last parameter is a Paint instance.
A Paint instance specifies how we draw things, a bit like a shading function, and it is important for getting the "blurred" DOF map.
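
Here is a minimal sketch of that setup (the class and field names are our own, not from the SDK): an off-screen Bitmap bound to its own Canvas inside a custom View, blitted onto the screen Canvas in onDraw().

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.View;

// A custom View holding the DOF map: the user paints into dofBitmap through
// dofCanvas, and onDraw() copies the result onto the screen.
public class DofMapView extends View {
    private Bitmap dofBitmap;   // the DOF map being painted
    private Canvas dofCanvas;   // draws into dofBitmap

    public DofMapView(Context context) {
        super(context);
    }

    @Override
    protected void onSizeChanged(int w, int h, int oldw, int oldh) {
        super.onSizeChanged(w, h, oldw, oldh);
        dofBitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);  // screen-sized
        dofCanvas = new Canvas(dofBitmap);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // left/top corner of the Bitmap drawn at (0, 0); null Paint = defaults
        canvas.drawBitmap(dofBitmap, 0, 0, null);
    }
}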

As we mentioned, the Canvas is also able to draw other things onto the Bitmap, such as a circle.
We use
Canvas.drawCircle(x, y, radius, Paint);
to get a styled circle on the Bitmap, which can be placed at the touch point/stroke point of the user's hand drawing.
Just drawing a solid circle is not enough for us; we would like the user to paint the depth gradually: the more they touch a place, the closer/farther that point becomes to the eye point.
We can do this with a RadialGradient shader in the Paint instance assigned to each drawCircle.

The RadialGradient basically takes a start color at the center and an end color at the circle boundary, and interpolates between them (linearly or smoothly, according to our call) inside the circle.

With these, we will be able to draw something like this with a finger touch,
with an assigned alpha channel too.
The gradual change to black along the radius should allow the user to get a better blur along the edges.
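
A rough sketch of such a gradient brush, written as a helper we might add to the DofMapView sketch above (the name drawBrush and the color choices are our own): the center starts opaque and fades toward transparent black at the circle boundary, so repeated strokes build up depth gradually.

// Needs imports: android.graphics.Color, android.graphics.Paint,
// android.graphics.RadialGradient, android.graphics.Shader
private void drawBrush(Canvas canvas, float x, float y, float radius) {
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    RadialGradient gradient = new RadialGradient(
            x, y, radius,
            Color.argb(255, 255, 255, 255),   // center: opaque white (near)
            Color.argb(0, 0, 0, 0),           // boundary: transparent black
            Shader.TileMode.CLAMP);
    paint.setShader(gradient);                // interpolation handled by the shader
    canvas.drawCircle(x, y, radius, paint);   // Canvas.drawCircle(x, y, radius, Paint)
}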


Hand Touch: 
Now we turn back and see how we can get the user's touch point on the screen.
In our View class, we can override the onTouchEvent(MotionEvent event) function to respond to the user's touch events on the screen.
To decide what the user's action is (touch down, touch up, touch cancel), we can get an action code (int) from the passed-in event parameter, very much like the GLUT mouse callbacks.

To walk through each stroke the user has made, along with its historical information, we directly use event.getX(int index) and event.getY(int index) to access the stroke positions between a touch-down and touch-up event pair, as well as the pressures.

With the pressure mapped to the radius of our drawCircle and the (x, y) position we got, we will be able to sketch on the Canvas.
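
A sketch of the touch handling, written as a method inside the DofMapView sketch above (the pressure-to-radius scaling is an arbitrary choice of ours):

@Override
public boolean onTouchEvent(MotionEvent event) {   // needs android.view.MotionEvent
    switch (event.getAction()) {                    // like a GLUT mouse callback
        case MotionEvent.ACTION_DOWN:
        case MotionEvent.ACTION_MOVE: {
            float x = event.getX();
            float y = event.getY();
            float radius = 20.0f + 60.0f * event.getPressure();  // pressure -> radius
            drawBrush(dofCanvas, x, y, radius);   // paint into the off-screen Bitmap
            invalidate();                         // trigger onDraw() to show the result
            return true;
        }
        case MotionEvent.ACTION_UP:
        case MotionEvent.ACTION_CANCEL:
            return true;
    }
    return super.onTouchEvent(event);
}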

Android Entries: Activity

To launch an Android application with all the events we mentioned previously, we will be using the Android Activity API.

Activity
To do this, we extend our entry class from the Android Activity class, which has an onCreate() function for us to override. onCreate() will be called when our application is loaded and launched, and acts as the initializer.
But the more important reason we use the Activity is that we can grab many resources and handles through it.
For example, the View classes.

View
In Android the View class deals with all the drawing on the screen, the display and hardware information: pixel DPI and so on. Extending the View class allows us to use Canvas, GLSL shading, OpenGL and all kinds of customized drawing/rendering techniques. And in the Activity we can set our View instance as the current content view, which feels a bit like loading different GLSL shaders in the same application.

Activity's Functions:
The Activity also allows us to request window features with requestWindowFeature(), or query device information with getDeviceConfigurationInfo() before we launch different Views, as sketched below.
This information can tell us which version of OpenGL ES our current device supports, what the window size is and what the screen DPI is, etc.
On the Android platform we will encounter a huge number of different devices. With the Activity class we will be able to deal with them.
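
For example, a sketch of those queries, to be run from inside the Activity (variable names are ours; note that getDeviceConfigurationInfo() is reached through the ActivityManager system service):

// Needs imports: android.app.ActivityManager, android.content.pm.ConfigurationInfo,
// android.util.DisplayMetrics
ActivityManager am = (ActivityManager) getSystemService(ACTIVITY_SERVICE);
ConfigurationInfo info = am.getDeviceConfigurationInfo();
boolean supportsEs2 = info.reqGlEsVersion >= 0x20000;   // OpenGL ES 2.0 or above?

DisplayMetrics metrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(metrics);
int screenWidth  = metrics.widthPixels;    // window/screen size in pixels
int screenHeight = metrics.heightPixels;
int screenDpi    = metrics.densityDpi;     // screen DPI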

Now that we know our application entry point and how to display what we want, the steps are quite simple:
create an instance of our customized View (or any default View class) using the parameters we get from the Activity,
then launch our View with setContentView(view);
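
Putting these steps together, a minimal entry point might look like this (DofActivity is our own name; DofMapView is the custom View sketched earlier):

import android.app.Activity;
import android.os.Bundle;
import android.view.Window;

public class DofActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        requestWindowFeature(Window.FEATURE_NO_TITLE);   // one example window feature
        DofMapView view = new DofMapView(this);          // built with the Activity as Context
        setContentView(view);                            // launch our View
    }
}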

Android Modules

An important step of this development is to get all necessary Android phone resources into our hands.
According to our needs, we have to handle the Camera, finger touches, Bitmap/display and the GUI layout.

The Camera Module 
In Android the taken photo is delivered through the PictureCallback event, with the implementing function:
public abstract void onPictureTaken (byte[] data, Camera camera)
This function will be called when the picture data is available after the photo is taken, handing us the byte[] data for further processing.
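
A rough sketch of how this could be wired up (the field name jpegCallback and the decode step are ours; mCamera is an assumed Camera instance obtained elsewhere, e.g. with Camera.open()):

// Inside our camera-handling class; needs imports android.hardware.Camera,
// android.graphics.Bitmap, android.graphics.BitmapFactory.
Camera.PictureCallback jpegCallback = new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        // data holds the encoded picture; decode it for further processing
        Bitmap photo = BitmapFactory.decodeByteArray(data, 0, data.length);
        camera.startPreview();   // resume the preview after the shot
    }
};

// Usage (shutter and raw callbacks left null):
// mCamera.takePicture(null, null, jpegCallback);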

The Hand Draw Module
There are several APIs that allow the user to draw sketch images. They are all based on event callbacks.
The first possible choice is using the GestureOverlayView in the Android layout:

<android.gesture.GestureOverlayView
    android:id="@+id/gestures"
    android:layout_width="fill_parent"
    android:layout_height="0dip"
    android:layout_weight="1.0" >
    <GestureOverlayView android:id="@+id/gestureOverlayView1">
    </GestureOverlayView>
</android.gesture.GestureOverlayView>

Add these to res/layout/your_choice.xml and load this layout in the Android application, and we will get an area that allows us to draw gestures on it.
Then we can implement a callback function to get the user's hand-drawn information, such as strokes, history points and even predictions.
In addition, we must fetch this resource from our layout and register a listener for the event:


GestureOverlayView gestures = (GestureOverlayView) findViewById(R.id.gestures);
gestures.addOnGesturePerformedListener(this);


The following callback function will then be called
public abstract void onGesturePerformed (GestureOverlayView overlay, Gesture gesture)
whenever the user draws in our GestureOverlayView layout area.
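
A sketch of what that callback could do (assuming our Activity implements GestureOverlayView.OnGesturePerformedListener): walking the strokes of the reported Gesture and pulling out the points.

// Needs imports: android.gesture.Gesture, android.gesture.GestureOverlayView,
// android.gesture.GestureStroke
@Override
public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
    for (GestureStroke stroke : gesture.getStrokes()) {
        float[] points = stroke.points;          // packed as x0, y0, x1, y1, ...
        for (int i = 0; i + 1 < points.length; i += 2) {
            float x = points[i];
            float y = points[i + 1];
            // feed (x, y) into the depth-map drawing code
        }
    }
}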



Another way is to create a canvas for the user to manipulate, which is even better for our implementation, except that the GUI is more primitive and requires extra effort. In my next diary entry I will explain how to use the Canvas and touch events instead of the GestureOverlayView.

The Bitmap Module & Display
All of our work is based on processing image buffers. In Android the wrapper is the Bitmap class.
A Bitmap instance can be created from a byte[] buffer or from the IntBuffer/FloatBuffer classes in the Android SDK.
It can also be converted back into any Buffer type, which we can then bind as a texture.

Taking advantage of these qualities of the Bitmap class, we can modify it easily as byte[] arrays as well as display it in at least two ways (both sketched below):
The 1st way is using an android.graphics.Canvas. A Canvas can be activated in the View and displayed; it can draw() anything inside its area, e.g. a Bitmap or a circle.
The 2nd way is using an OpenGL texture. Since we can bind the Bitmap to a texture, we will be able to draw it easily.
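
A sketch of the second path plus the Buffer round trip (the variable names are ours; bitmap is an existing Bitmap, and the GL calls are assumed to run on a GL thread, e.g. inside a GLSurfaceView.Renderer):

// Bitmap <-> Buffer, for CPU-side editing of the pixel data.
// Needs imports: java.nio.ByteBuffer, android.opengl.GLES20, android.opengl.GLUtils
ByteBuffer pixels = ByteBuffer.allocate(bitmap.getRowBytes() * bitmap.getHeight());
bitmap.copyPixelsToBuffer(pixels);     // Bitmap -> Buffer
pixels.rewind();
bitmap.copyPixelsFromBuffer(pixels);   // Buffer -> Bitmap (after modifying the pixels)

// Bitmap -> OpenGL ES 2.0 texture, for display or GLSL processing.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);   // upload the Bitmap pixels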

Conclusion
With the modules above, the required functionality for our application is mostly covered.
One input byte[] comes from the Camera. Another input byte[] comes from the hand drawing. We can directly manipulate these byte[] arrays, and we can bind them as textures and then modify them in GLSL.
After that we can create Bitmaps for display, and more importantly for saving to files, since OpenGL can display the texture before it is read back into a Bitmap on the host.

Monday, April 11, 2011

Progress report and problems

We have constructed the development environment for Android.
At first, we agreed to use C# instead of Java because we are more familiar with Visual Studio. We installed Mono for Android and completed the Hello World program. It seemed smooth. But a problem arose when we tried to write an OpenGL ES 2.0 program. We wrote a simple triangle shader, but it crashed for no apparent reason on the Android emulator, no matter which version we were using.
After that, we decided to switch to Java instead. So we took some time installing Eclipse and its Android components. We also translated our simple GLES 2.0 program into Java. But the program still crashed on the emulator.
After struggling for two days, I found this link online: http://stackoverflow.com/questions/4455783/does-the-android-emulator-support-opengl-es-2-0, which means OpenGL ES 2.0 is NOT supported on the existing emulator! This is the worst news for us since we don't have an Android cellphone.
So we decided to borrow an Android cellphone from our classmates and make sure all the shaders work correctly. Then we can concentrate on coding the GLSL shader for depth of field.

Tuesday, March 29, 2011

Project Proposal v0.5

1.         Team Members:
Han Li (CGGT), Qing Sun (CGGT)

2.         Project Description:
We are going to implement a post-process depth of field effect on mobile devices using the GPU. We think this is meaningful because it allows users to make photos taken with cellphones look like they were taken with a large-aperture DSLR!

3.         Approach:
a)         The input of our system consists of two parts.
The first part is the original image the user wants to process.
The second part is the depth information. Since this is post-processing, we cannot get depth information from any geometry. Our method is to let the user “draw” the depth map. We provide the user with some brushes of different radii and hardness. Using these brushes, the user can draw on the original image in grey scale, where white indicates closest to the camera and black means farthest.
b)         After this, we will let the user specify the focus point of the camera.
c)         The next stage is the implementation of depth of field in GLSL using OpenGL ES 2.0.

4.         Platform:
We are going to use Monodroid (Mono for Android) as our tool. It is based on C# and can be deployed on Android cellphones. Our target platform is Android 2.2, which is popular and has good support for OpenGL ES 2.0.

5.         Challenges:
The main challenges of this project fall into four parts:
a)         UI development. In terms of user interaction, we want to support some gestures for zooming in/out when painting the depth map. We also want to support direct photo capture using the camera. This might be a problem since we have no such experience.
b)         OpenGL ES programming. Even though we have learned some GLSL, there are still many differences when moving to a mobile system. We have to explore how to transfer algorithms from OpenGL to OpenGL ES 2.0.
c)         DOF algorithms. There are many algorithms for DOF, in which the key word is “blur”. How to carry out a decent and fast blur on a mobile device will be a challenge for us.
d)         Depth map generation. This application fully depends on the depth map we generate. As this piece of data is largely left to user control, we should find the best way to make it work: easy for users to use, while processing the inputs and extracting a usable depth map from them. We believe the algorithm needs a learning period before it outputs qualified results for our next passes.