UROP

 

Speech + Mobility Group: Hand Gesture Recognition & Augmented Reality

 

 

Introduction

 

        Currently, I am involved in a project for a depth camera. My task is to implement a hand gesture recognition system for a smart glass with an embedded mini depth camera. Until the smart glass becomes available, I am working with a Microsoft Kinect depth camera. Using this hand gesture system, I am designing an augmented reality application.

 

 

Project Idea

 

         I was given the task of creating a furniture organization application for the depth camera. Because a device like a smart glass has no physical input system, I implemented a hand gesture recognition system. Using hand gestures, users can pick a piece of furniture from a projected interface, manipulate its size and orientation, and virtually place it in the space. This is illustrated in the images below.

 

          The user can highlight (outline) an object in the camera view, crop that part out of the scene, and virtually place it on one of the layers. The user can also point at a color and draw images or text onto one of the layers with that color. The user can drag the different layers of images around as well. Once the user is satisfied with the arranged layers of images, (s)he can merge them. This is illustrated in the short video below.
Figure 1
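
         To make the layer model concrete, below is a minimal Python sketch of the crop-and-merge steps. It is an illustration only, not the actual implementation: the Layer class, the NumPy image representation, and the function names are placeholder assumptions of mine.

import numpy as np

class Layer:
    """One image layer: a crop, its opacity mask, and an (x, y) offset."""
    def __init__(self, image, mask, offset=(0, 0)):
        self.image = image          # H x W x 3 uint8 crop from the camera view
        self.mask = mask            # H x W bool, True where the crop is opaque
        self.offset = offset        # top-left position of the crop on the canvas

def crop_layer(frame, mask):
    """Cut the outlined region out of a camera frame as a new layer."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return Layer(frame[y0:y1, x0:x1].copy(),
                 mask[y0:y1, x0:x1].copy(),
                 offset=(x0, y0))

def merge_layers(canvas, layers):
    """Flatten the layers back-to-front onto a copy of the canvas."""
    out = canvas.copy()
    for layer in layers:            # later layers paint over earlier ones
        x, y = layer.offset
        h, w = layer.mask.shape
        region = out[y:y + h, x:x + w]
        region[layer.mask] = layer.image[layer.mask]
    return out

         In this model, dragging a layer just changes layer.offset, and drawing with a picked color edits layer.image and layer.mask before the final merge.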

Application


         In a group setting, people can instantly create layers of images, manipulate them according to their needs, and share them with people near their workspace. As shown in Figures 2 through 4, one user can simply pick a layer and swipe it over to the person next to him or her. In a learning environment such as a classroom, students will be more inclined to engage with their devices and to share their work with other students. The interface will also be easier for both very young and elderly people to learn, because the inputs to the device are hand gestures, which we use in our daily lives.

Figure 2
Figure 3
Figure 4

Other Possible Applications


         There are some other possible application ideas that I have in mind. As I am interested in designing an entertaining interface for people, my second idea involves playing musical notes or instruments through hand gestures at different depths. A single hand gesture at one depth can produce a certain sound, and the same gesture at a different depth can produce a different one, allowing the player to create music unconstrained by the number of inputs available.
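
         A minimal sketch of how gesture plus depth could select notes; the depth bands, gesture names, and MIDI note numbers are illustrative assumptions, not a finished design:

# Map (gesture, depth band) pairs to MIDI-style notes.
# Depth is in millimeters, as a Kinect-class camera reports it.
DEPTH_BANDS = [(500, 900), (900, 1300), (1300, 1700), (1700, 2100)]

# One note per depth band for each gesture (C major here, an assumption).
NOTE_TABLE = {
    "open_hand": [60, 64, 67, 72],  # C4, E4, G4, C5
    "fist":      [62, 65, 69, 74],  # D4, F4, A4, D5
}

def note_for(gesture, depth_mm):
    """Return the MIDI note for a gesture at the given depth, or None."""
    notes = NOTE_TABLE.get(gesture)
    if notes is None:
        return None
    for band, (near, far) in enumerate(DEPTH_BANDS):
        if near <= depth_mm < far:
            return notes[band]
    return None                     # hand is outside the playable range

print(note_for("open_hand", 600))   # 60: C4
print(note_for("open_hand", 1000))  # 64: same gesture, one band deeper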

         Another idea, for a useful tool, is to automate CAD programs for 3-D modelers. Because CAD programs create objects on the grid plane (or at the origin), 3-D modelers have to move each object to its desired position every time. If they could give the system hand gestures at a certain depth, and the program could create objects according to that depth information, the amount of clicking and dragging would be minimized, making the modelers' work much easier.
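
         For example, the mapping from a depth reading to a model-space coordinate could be as simple as the sketch below; the calibration constants and the box-creation stub are hypothetical:

# Turn the hand's depth reading into a model-space z coordinate so a
# new object is created at that depth instead of at the origin.
NEAR_MM, FAR_MM = 500.0, 2500.0     # assumed usable camera depth range
Z_MIN, Z_MAX = 0.0, 10.0            # corresponding model-space z range

def depth_to_z(depth_mm):
    """Linearly map a depth sample (mm) onto the model's z axis."""
    t = (depth_mm - NEAR_MM) / (FAR_MM - NEAR_MM)
    return Z_MIN + max(0.0, min(1.0, t)) * (Z_MAX - Z_MIN)

def create_box_at(x, y, depth_mm, size=1.0):
    """Stub: create a box at the cursor's (x, y) and the hand's depth."""
    return {"type": "box", "origin": (x, y, depth_to_z(depth_mm)), "size": size}

print(create_box_at(2.0, 3.0, 1500.0))  # box placed at z = 5.0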


         My supervisor had the idea of a shared augmented reality space between two mobile device users. I am particularly fascinated by this idea because I want to create multiuser environments for my future applications. The idea, as illustrated below, is that if one user moves an object (the cup in the images), the second user is updated with the cup's new position, so that the two devices share a virtual space between them. After I finish implementing the furniture organization application, I hope to explore this concept.
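
         The synchronization itself could start out very simply, with each device broadcasting an object's new pose to the other. The sketch below is a hypothetical minimal protocol; the peer address, port, and message format are all assumptions:

import json, socket

PEER = ("192.168.0.42", 9000)       # the other device (assumed address)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_move(object_id, position, rotation):
    """Tell the peer that an object moved in the shared space."""
    msg = {"event": "move", "id": object_id,
           "pos": position, "rot": rotation}
    sock.sendto(json.dumps(msg).encode(), PEER)

def on_message(raw):
    """Apply a peer's update to the local scene (scene code omitted)."""
    msg = json.loads(raw)
    if msg["event"] == "move":
        print("update", msg["id"], "->", msg["pos"])

send_move("cup", [0.3, 0.0, 1.2], [0, 45, 0])   # user A moves the cup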


3-D Layer Image Processing Tool

 

         I find depth cameras very fascinating because they allow people to use the space in front of the camera to interact with digital devices. Among the many application ideas I considered for the depth camera, the 3-D layer image processing tool appeared the most interesting and useful. Because the camera can measure precise depth information from a scene, the user can take pictures and layer them virtually in the space in front of the camera (as shown in Figure 1).
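
         Since every pixel comes with a depth value, splitting a captured frame into virtual layers can be as direct as thresholding the depth image. A small illustrative sketch, with arbitrary depth cut-offs:

import numpy as np

def split_by_depth(color, depth_mm, cuts=(900, 1500, 2200)):
    """Return one (image, mask) layer per depth band, near to far."""
    edges = [0, *cuts, np.inf]
    layers = []
    for near, far in zip(edges, edges[1:]):
        mask = (depth_mm >= near) & (depth_mm < far)
        layers.append((np.where(mask[..., None], color, 0), mask))
    return layers

# Tiny synthetic frame: a 4 x 4 image with depths from 600 to 2500 mm.
color = np.full((4, 4, 3), 200, np.uint8)
depth = np.linspace(600, 2500, 16).reshape(4, 4)
for image, mask in split_by_depth(color, depth):
    print(int(mask.sum()), "pixels in this layer")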

 

Current Progress

 

Step One.

         My main task was to implement the hand gesture system so that it can manipulate 3-D furniture objects. I started by writing a program that controls the mouse and automates PowerPoint. Opening all five fingers brought up one of my PowerPoint presentations. With two fingers open, swiping to the right switched the view to 'sorter view' and swiping to the left switched it back to the 'default view.'
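
         The dispatch logic behind this step boils down to mapping a (finger count, swipe direction) pair to a command. The Python sketch below mirrors that mapping; the finger counting, swipe detection, and PowerPoint calls are stubbed out, since the real program talks to the Kinect and to PowerPoint directly:

# Stub actions standing in for the real mouse/PowerPoint automation.
def open_presentation():      print("open presentation")
def switch_to_sorter_view():  print("sorter view")
def switch_to_default_view(): print("default view")

def dispatch(fingers, swipe=None):
    """Map a (finger count, swipe direction) pair to an action."""
    if fingers == 5:                        # all five fingers open
        open_presentation()
    elif fingers == 2 and swipe == "right":
        switch_to_sorter_view()
    elif fingers == 2 and swipe == "left":
        switch_to_default_view()

dispatch(5)             # -> open presentation
dispatch(2, "right")    # -> sorter view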

Step Two.

         Then I applied the hand gesture recognition system to a 3-D object in OpenGL. My goal was to drag the 3-D object around in a video view; the video below shows the result. If all five fingers are open, the object disappears. When three or four fingers are open, the object stays still. When one finger is open, the object sticks to the fingertip if the fingertip touches it, and then follows the fingertip's movement.
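
         The rules from this step look roughly like the following sketch; the touch radius and the 2-D positions are simplifying assumptions, and the real version acts on the OpenGL object:

import math

TOUCH_RADIUS = 0.05                 # assumed pick-up distance (world units)

class Scene:
    def __init__(self):
        self.object_pos = (0.0, 0.0)
        self.visible = True
        self.grabbed = False

    def update(self, fingers, fingertip=None):
        if fingers == 5:            # all fingers open: object disappears
            self.visible, self.grabbed = False, False
        elif fingers in (3, 4):     # object stays still
            self.visible, self.grabbed = True, False
        elif fingers == 1 and fingertip is not None:
            self.visible = True
            if not self.grabbed:    # grab only if the fingertip touches it
                self.grabbed = math.dist(fingertip, self.object_pos) < TOUCH_RADIUS
            if self.grabbed:        # then follow the fingertip
                self.object_pos = fingertip

scene = Scene()
scene.update(1, (0.02, 0.03))       # close enough: the object is grabbed
scene.update(1, (0.5, 0.5))         # ...and follows the fingertip
print(scene.object_pos, scene.grabbed)  # (0.5, 0.5) True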

Step Three.

         Currently, I am working on getting a number of different pieces of furniture into the scene. My goal in the near future is to be able to create and pinch the furniture of my choice, then resize and rotate it, all using hand gestures. Updates will be posted as soon as I have it working.
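
         A sketch of how the planned pinch, resize, and rotate gestures could be derived from tracked hand positions; the pinch threshold and the two-hand mapping are assumptions about a design that is not implemented yet:

import math

PINCH_MM = 30.0                     # assumed thumb-to-index pinch threshold

def is_pinch(thumb_tip, index_tip):
    """A pinch: thumb tip and index tip close together."""
    return math.dist(thumb_tip, index_tip) < PINCH_MM

def scale_and_angle(left_hand, right_hand):
    """Derive a scale factor and a yaw angle from the two hand positions."""
    dx = right_hand[0] - left_hand[0]
    dy = right_hand[1] - left_hand[1]
    return math.hypot(dx, dy) / 100.0, math.degrees(math.atan2(dy, dx))

print(is_pinch((0, 0), (10, 10)))           # True: pinch detected
print(scale_and_angle((0, 0), (100, 100)))  # scale ~1.41, angle 45 degrees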

Videos: Step 1, Step 2, Step 3


* Shared virtual environment between two users
