It has been quite difficult to research facial recognition technology for AS3. I came across a library that someone had developed that uses the Eigenface method for detecting and recognising faces: https://code.google.com/p/face-recognition-library-as3/
There is little documentation online about how to implement it in the way I intend to. Looking at other options, there is the possibility of using fingerprint scanning instead; however, a quick search online suggested that there are no affordable fingerprint scanners small enough for the ZipperBan system:
The above image would not work well attached to a zipper. The tutor suggested using the fingerprint scanner built into the latest iPhones; however, I found it difficult to get access to one for the duration of the project, since I do not own an iPhone. I also looked at fingerprint scanning apps on Android. One in particular gets you to hold your index finger up to the camera and take several photos of it to register the fingerprint; it will then unlock your phone by scanning your finger. I decided it would be too difficult and impractical to incorporate so many physical devices that would need to interact with each other, so I went back to researching facial recognition.
First I went through this tutorial (http://code.tutsplus.com/tutorials/automatically-tag-photos-with-the-as3-face-recognition-library--active-9033), which introduces the facial recognition library by Oskar Wicha (mentioned above) and sets up an application in Flash that allows the user to upload an image; it automatically detects the faces in the image and inserts a square tag around each face:
After replicating the app through this tutorial, the next step was to work out how to use it to recognise the identity of the people in the picture, not just detect where their faces are. The library contains two sub-libraries: one for facial detection (which is used in this tutorial) and one for facial recognition. There are almost no resources out there that explain how to use the facial recognition library. The author explains that the code for the facial recognition shown in this video is available to download on the library website; however, it is not in a format that I am able to open.
It was now up to me to decipher the library's documentation and figure out where to make certain function calls in order to retrieve the identity of faces in an image. After a lot of digging around, I found a discussion thread where the author outlines the order of function calls:
The first step involves training the system to recognise faces. I used the example training images from the download section of the library website and created two new arrays: one to hold the file names, and the other to hold the corresponding classifications (i.e. names). These need to be in the same order. The following example shows the format to be used.
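The screenshot showing this format has not survived, so here is a minimal sketch of the two parallel arrays. The file names and people's names are placeholders, and the library may expect a different collection type than a plain Array:

```actionscript
// Two parallel arrays: index i of trainingFaces corresponds
// to index i of classifications. All values are placeholders.
var trainingFaces:Array = ["faces/izzy1.jpg", "faces/izzy2.jpg", "faces/sam1.jpg", "faces/sam2.jpg"];
var classifications:Array = ["Izzy", "Izzy", "Sam", "Sam"];
```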
After setting up the arrays, I used the loadTrainingFaces function from the recognizer class to load the faces. Under the hood, it creates new Face objects (defined in the library) that contain the bitmap of the face, the classification (name) and other attributes. Once that is finished, it trains the system with the images and then begins the detection process.
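As a rough sketch of that call, assuming the two arrays from the previous step (the class name FaceRecognizer is my assumption; only the loadTrainingFaces function is named in the library discussion, and its exact signature may include further parameters):

```actionscript
var recognizer:FaceRecognizer = new FaceRecognizer();
// Loads the training faces: internally this creates Face objects
// holding the bitmap, the classification and other attributes,
// then trains the system on them.
recognizer.loadTrainingFaces(trainingFaces, classifications);
```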
At this point I tried to use the probe function on an image; the probe function is the one that returns the classification of the face recognised in the image. I tried the following code:
recognizer.probe('izzy.jpg');
However, this would bring up an error:
So it appears that the argument needs to be a Face object, which takes an image as an argument. I tried using the probe function with a Face object like this:
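The original screenshot of that attempt is gone; reconstructed from the description (a Face constructed straight from the image URL), it was along these lines:

```actionscript
// Face accepts a URL to an image as an argument,
// so this looked like the obvious next attempt.
recognizer.probe(new Face('izzy.jpg'));
```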
But I would get this error:
With hardly any information about exactly where this error occurred, I had to dig through the source files in the library to debug what was going on. When the faces are loaded at the beginning (loadTrainingFaces), a Picture attribute is set for each face. Although a URL to an image is an acceptable argument for Face, the constructor does not convert it into the format that needs to be set as the Picture attribute, which is what the probe function uses. In the end, the following code did the trick:
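That screenshot is also missing; based on the description here and the function names mentioned later in the post, the working code was roughly the following (the result variable and its type are my assumptions; only probe "returning the classification" is stated above):

```actionscript
// bmpData is a BitmapData captured from the camera.
var newFace:Face = new Face();
// Sets the face's Picture attribute in the format probe expects.
// (This call was later swapped for loadFaceImageFromBitmap, as
// described at the end of the post.)
newFace.detectFaceInPicture(bmpData);
var classification:String = recognizer.probe(newFace);
```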
which probes a newly created face from an image (bmpData) taken from the camera.
There were also some formatting issues in converting the image to a bitmap, which required the external library containing the JPGEncoder function.
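For reference, JPGEncoder comes from Adobe's as3corelib and encodes a BitmapData into JPEG bytes; a minimal usage sketch (the quality value is arbitrary):

```actionscript
import com.adobe.images.JPGEncoder;
import flash.utils.ByteArray;

// Encode the captured BitmapData as JPEG bytes (quality 0-100).
var encoder:JPGEncoder = new JPGEncoder(85);
var jpegBytes:ByteArray = encoder.encode(bmpData);
```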
Now that it was working as it should, I made some adjustments to further improve the prototype. I created a basic interface with the necessary buttons/inputs required for the facial recognition system: name, take photo, register face, etc. The images below show screenshots of the different screens that appear, in order:
I also created an external file for the training data, rather than hardcoding it into the source code. This is much more efficient, as the file can also store new faces for future use. Each line of the file contains the URL to an image and its corresponding classification:
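The screenshot of the file is gone; the format described is one image URL and its classification per line, so it looked something like this (the comma separator and all values are assumptions):

```
faces/izzy1.jpg,Izzy
faces/izzy2.jpg,Izzy
faces/sam1.jpg,Sam
```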
Then I created a function to parse the file and add it to the two arrays:
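The parser itself was not preserved, but a function like the one described, which loads the file and splits each line into the two arrays, could be sketched as follows (the file name, comma separator, and array names are assumptions):

```actionscript
import flash.net.URLLoader;
import flash.net.URLRequest;
import flash.events.Event;

var trainingFaces:Array = [];
var classifications:Array = [];

// Load the training file and split each "url,name" line
// into the two parallel arrays.
function loadTrainingFile(path:String):void {
    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, function(e:Event):void {
        var lines:Array = String(URLLoader(e.target).data).split("\n");
        for each (var line:String in lines) {
            line = line.replace("\r", "");
            if (line.length == 0) continue;   // skip blank lines
            var parts:Array = line.split(",");
            trainingFaces.push(parts[0]);     // image URL
            classifications.push(parts[1]);   // corresponding name
        }
    });
    loader.load(new URLRequest(path));
}

loadTrainingFile("training.txt");
```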
Creating this prototype has been quite challenging. Although the library provided most of the facial recognition technology, the challenge for me was being able to use and debug another person's code. A lot of the time Flash's console would give minimal information about the exact location of errors; however, through this I have improved my debugging skills by using the debugger and stepping through the stack trace.
I came across some big hurdles, particularly where it was not recognising the correct identity of the person and kept giving false classifications. It was difficult to know what was going on, as it was not raising an error per se. I almost gave up on this method after getting close to successfully implementing the functionality; however, I stuck with it, and what solved the issue was changing one line: newFace.detectFaceInPicture(source:Bitmap) to newFace.loadFaceImageFromBitmap(source:Bitmap). I'm still not sure why this works, but it was this change that got the prototype working.