In the final week of semester (first week of exams) I tested the facial recognition aspect of ZipperBan. It consists of a Flash movie that:
asks to register your face as the owner, and enter a name
when registering your face, it takes 5 photos in order for the Eigenface method to work properly
it is also connected to my second interactive prototype with the Makey Makey, and so prompts the user to begin unzipping the bag
unzipping the bag triggers the camera to take a snapshot of the user, and after comparing with the face of the owner, either states “ACCESS GRANTED” or “THIEF ALERT”
The final interface can be seen below:
The testing session was also useful for assessing the efficacy of the facial recognition system under different lighting conditions and with different people. Overall, people seemed to enjoy using the facial recognition technology, and more often than not it correctly detected whether the person was the registered owner or not.
The testing process occurred as follows:
the subject sat down and was briefed on the concept of the prototype
they followed the prompts - first by entering their name
they then took 5 photos of their face for training and went on to unzip the bag
the console printed out that the person identified in the photo was indeed the registered owner
after the system identified the owner correctly, I then tried to unzip the bag to test whether the opposite was true, and sure enough, it recognised that I was not the registered owner and output ‘THIEF ALERT’
Other feedback received was to transfer the information shown in the console onto the actual prototype itself, so the user can see what is happening at every step.
It has been quite difficult to research facial recognition technology for AS3. I came across a library that someone had developed which uses the Eigenface method for detecting and recognising faces: https://code.google.com/p/face-recognition-library-as3/
There is little documentation online about how to implement it in the way I intend to. Looking at other options, there is the possibility of using fingerprint scanning instead; however, a quick search online suggested that there are no affordable fingerprint scanners small enough for the ZipperBan system:
The above image would not work well attached to a zipper. The tutor suggested using the fingerprint scanner built into the latest iPhones, however I found it difficult to get access to one for the duration of the project, since I do not own an iPhone. I also looked at fingerprint scanning apps on Android. One in particular gets you to hold your index finger up to the camera, take several photos of it to register the fingerprint, and then it unlocks your phone by scanning your finger. I decided it would be too difficult and impractical to incorporate so many physical devices that would need to interact with each other, so I went back to researching facial recognition.
After replicating the app from this tutorial, the next step was to work out how to use it to recognise the identity of people in a picture, not just detect where their faces are. The library used contains two sets of sub-libraries: one for facial detection (which is used in the tutorial) and one for facial recognition. There are close to no resources out there that explain how to use the facial recognition library. The author explains that the code for the facial recognition shown in this video is available to download from the library website, however it is not in a format that I am able to open.
It is now up to me to decipher the documentation of the library, and figure out where to make certain function calls and retrieve the identity of faces in an image. After a lot of digging around, I find a discussion thread where the author outlines the order of function calls:
The first step involves training the system to recognise faces. I use the example training images from the download section on the library website and create two new arrays: one to hold the file names, and the other to hold the corresponding classifications (i.e. names). These need to be in the same order. The following example shows the format to be used.
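Something along these lines, where the file names and classifications are placeholders rather than the actual training set:

// two parallel arrays - the classification at index i describes the image at index i
var trainingFiles:Array = ["faces/person1_a.jpg", "faces/person1_b.jpg",
                           "faces/person2_a.jpg", "faces/person2_b.jpg"];
var classifications:Array = ["Person1", "Person1", "Person2", "Person2"];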
After setting up the arrays, I then used the loadTrainingFaces function from the recognizer class to load the faces. Under the hood, it creates new Face objects (defined in the library) which contain the bitmap of the face, the classification (name) and other attributes. Once that's finished, it trains the system with the images and then begins the detection process.
At this point I try to use the probe function on an image - the probe function is the one that returns the classification of the face recognised in the image. I tried the following code:
recognizer.probe('izzy.jpg');
However this would bring up an error:
So it appears that the argument will need to be a Face object, which will take an image as an argument. I tried using the probe function with a Face object such as this:
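It looked roughly like this (the exact Face constructor arguments are my best guess from the documentation):

recognizer.probe(new Face('izzy.jpg'));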
But would get this error:
With hardly any information about exactly where this error occurs, I had to dig through the source files in the library to debug what was going on. Basically, when loading the faces at the beginning (loadTrainingFaces), it sets a Picture attribute for each face. Although a URL to an image is an acceptable argument for Face, it does not convert this into the appropriate format that needs to be set as the Picture attribute, which is what the probe function uses. In the end, the following code does the trick:
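A rough reconstruction of that snippet - the variable names are mine and the exact call order is an approximation of what worked at this stage:

var newFace:Face = new Face();
newFace.detectFaceInPicture(bmpData); // bmpData: the bitmap captured from the camera
recognizer.probe(newFace);            // returns the classification of the recognised face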
which probes a newly created face from an image (bmpData) taken from the camera.
There were also some formatting issues in converting the image to a bitmap, which required the external library containing the JPGEncoder class.
Now that it is working as it should, I made some adjustments to further improve the prototype. I created a basic interface with the necessary buttons/input required for the facial recognition system: name, take photo, register face etc. The below images show screenshots of the different screens that appear in order:
I also created an external file for the training data, rather than hardcoding it into the source code. This is much more flexible, as it is also able to store and add new faces for future use. Each line of the external file contains the URL of an image and its corresponding classification:
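For example, each line might look something like this (the image paths and names are placeholders, and I'm assuming a comma as the separator):

faces/person1_a.jpg,Person1
faces/person1_b.jpg,Person1
faces/person2_a.jpg,Person2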
Then I created a function to parse the file and add its contents to the two arrays:
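A sketch of what this parsing could look like, assuming the comma-separated format above and a URLLoader (the function and variable names are mine):

import flash.net.URLLoader;
import flash.net.URLRequest;
import flash.events.Event;

function loadTrainingFile(url:String):void {
    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, onTrainingFileLoaded);
    loader.load(new URLRequest(url));
}

function onTrainingFileLoaded(e:Event):void {
    // split the file into lines, then each line into "imageURL,classification"
    var lines:Array = URLLoader(e.target).data.split("\n");
    for each (var line:String in lines) {
        if (line.length == 0) continue;      // skip blank lines
        var parts:Array = line.split(",");
        trainingFiles.push(parts[0]);        // image URL
        classifications.push(parts[1]);      // corresponding name
    }
}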
Creating this prototype has been quite challenging. Although the library provided most of the facial recognition technology, the challenge for me was to be able to use and debug another person’s code. A lot of the time Flash’s console would give minimal information about the exact location of errors, however through this I have improved my debugging skills by using the debugger and stepping through the stack trace.
I came across some big hurdles - particularly where it was not recognising the correct identity of the person, and kept giving false classifications. It was difficult to know what was going on, as it was not raising an error per se. I almost gave up on this method after getting close to successfully implementing the functionality, however I stuck with it, and what solved the issue was changing one line: newFace.detectFaceInPicture(source:Bitmap) to newFace.loadFaceImageFromBitmap(source:Bitmap). I’m still not sure why this works, however this change was necessary to get the prototype working.
The prototype was finally tested this week and it was interesting to see how people interacted with it. There were some issues with the foil making contact with the zipper, however people seemed to understand the concept and provided me with some feedback.
In conducting the testing, I slipped a $5 note into the pencil case - it was the tester’s job to unzip the bag and take the $5 without setting off the alarm. Having a physical item to grab inside the pencil case better recreates the circumstances of pickpocketing. Unzipping, opening, inserting the hand and grabbing the item are all physical interactions that impact how the prototype responds. The purpose of testing is to see how different sensitivities affect the thief’s ability to take items from inside a bag. A higher sensitivity would make it more difficult for a thief to unzip, insert their hand and take out items without setting off the alarm, particularly if the ‘active point’ on the zipper rail creates a gap too small for a human hand. In reality, the sensitivity of this kind of concept would always be set to high. The reason for having multiple sensitivity levels is to test the threshold - at which point on the zipper rail it becomes impossible to steal anything.
After observing people testing the prototype, several things were noted. Firstly, there were a lot of issues with the zipper making contact with the foil-covered wires. This meant users were not able to experience the different levels of security, or had to purposefully hold the foil against the zipper. This does not represent the real-life physical interaction of unzipping, however it was necessary. It was also noted that people were actually quite skilled at retrieving the note through just a small gap. This meant that in order for this prototype to be improved, it should be taken into account that people can still pull things out with their fingers (smaller than hands), provided the item being pulled out is not too big. In summary, stealing things from a backpack involves a variety of physical interactions that need to be taken into account, and they cannot all be reduced to one fixed set of interactions.
The questions asked were:
How do you find the physical interface overall?
Did you find it more difficult to retrieve the money as sensitivity increased?
Most of the people who tested the prototype indicated that it was intuitive and easy to understand. Unfortunately, due to the physical connections being volatile, this aspect was not tested as thoroughly as desired; however, overall people could grasp the idea as they were made to force the alarm to go off (through vigorous zipping). Some also offered suggestions on how to improve the prototype. One suggestion in particular helped improve it: having a ‘thumb-pad’ made of foil on the outside of the bag, so the user won’t need to be clipped to another wire to be grounded. This also better replicates the human interaction with a bag: one hand to pull the zipper, the other to pinch the side.
This week was also spent brainstorming ideas for the final prototype. It might be worth trying to add a facial recognition system, however I’m not sure how difficult this will be to implement. Facial recognition will test another aspect of the interactivity of ZipperBan - the detection process.
The second interactive prototype requires us to incorporate Makey Makey into our designs. I’ve been testing out different ways to use it and familiarising myself with the technology. The plan for this prototype is that the Makey Makey will be connected to a bag with a zipper, and as the user unzips/zips the bag, this will be represented in the companion digital prototype (which sets off the alarm accordingly).
Since the main physical interface of ZipperBan is a zipper, I decided to test out different interactions using the Makey Makey connected to the zip of an extra pair of jeans I had at the time. I hooked up the ground to my other hand, and inserted a wire as close as possible to the zipper rail, connected to the space bar on the Makey Makey. The challenge was to make sure the pin on the wire makes contact with the zipper as it passes through, in order to complete the circuit.
I even tried playing some music on Spotify and zipped up/down in order to play/pause the music. A video demo can be seen here:
How to incorporate this with ZipperBan?
Problems/challenges:
The motion of a zipper is continuous, yet there is no ‘continuous input’ option on the Makey Makey, i.e. the values on the Makey Makey are either ‘pressed’ or ‘not pressed’ - how do we use these values to measure the displacement of the zipper?
If all the areas of the zipper rail were connected to a single input, e.g. the space bar, the system would know to act whenever the space bar is triggered - e.g. if the zipper moves and makes contact with a connected area, move the zipper on screen to the right. There is, however, no information about the placement of the zipper in space - there is no state indicating exactly where it is along the rail. For instance, if the user moves the zipper forward and it makes contact with connected areas 3 times, the zipper on screen will move to the right 3 times. But if the user zips backwards, the connected areas will still trigger the same action. There is no way to distinguish between the connected areas.
Solution: divide the zipper rail into 6 points, attach a wire to each of these points and connect each wire to a different letter input on the Makey Makey. This ensures that as the zipper touches each point, the system has information about its location and can better determine the behaviour. This solution still has discrete steps, and the movement will be staggered, however a smooth motion between points is not the focus of this prototype.
Algorithm: Zipper passes point connected to letter ‘a’. ‘Point-is-reached’ event is triggered. If ‘a’ is the point reached, move zipper to location (x,y).
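A minimal sketch of how this maps onto a keyboard listener in AS3 - the key codes, coordinates and instance names below are placeholders for whichever keys and points the wires end up mapped to:

import flash.events.KeyboardEvent;

stage.addEventListener(KeyboardEvent.KEY_DOWN, onPointReached);

function onPointReached(e:KeyboardEvent):void {
    switch (e.keyCode) {
        case 65: zipper.x = 100; zipper.y = 200; break; // ‘a’ pressed - point 1 reached
        case 83: zipper.x = 160; zipper.y = 185; break; // ‘s’ pressed - point 2 reached
        // ... one case per connected letter, up to point 6
    }
}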
Here is a sketch of how the system will be connected:
The pictures below illustrate the iterations of producing the physical interface:
Positioning the wires:
Sticking foil on the zipper to increase chances of making contact with wires:
Sticking the wires into place:
Covering the wire ends in foil to increase surface area and attach the wires better. Previously, the wire pins kept falling out when simply threaded through the material of the pencil case. I attached them as close as possible to the zipper rail.
The main focus of testing for this prototype will be to see how different levels of sensitivity affect the alarm system and the behaviour of the thief. There will be three levels of sensitivity: weak, medium and high. Weak sensitivity means the alarm will be set off once the zipper has reached the end of the rail. High sensitivity means the alarm will be set off after only the second point has been reached. Medium will set off the alarm at point 4. This will test the ability of the thief to retrieve belongings from a bag at different sensitivity levels.
The red circles indicate ‘active spots’ - points on the zipper rail that will trigger an alarm.
Steps for incorporating into the digital prototype:
Divide the zipper rail on the digital prototype into 6 points. Position the zipper element on each point and record the corresponding coordinates.
Create 3 functions in main that set off the alarm at different points, i.e. at points 2, 4 and 6.
Add a switch statement in the main constructor, corresponding to the sensitivity button, that executes the appropriate function, e.g. for sensitivity=high, the alarm would be set off at point 2.
Create buttons for the sensitivity levels
In terms of the differences between the previous digital prototype and the digital prototype companion for the Makey Makey, most of the functionality that requires interfacing with the user (e.g. clicking and dragging the zipper) has been removed, as this interfacing has been replaced by the Makey Makey prototype. Also, for this prototype the user chooses to be either the owner or a thief; this option is no longer randomised. The basic idea is:
User chooses a sensitivity level and a ‘thief status’. Default is sensitivity=weak, thief=false.
It listens for a keyboard event
Based on the key that was pressed, the zipper object is moved to the corresponding location on the zipper rail
Depending on the sensitivity level chosen, the alarm will be set off at the corresponding point
First we create and add event listeners to the ‘toggleSensitivity’ and ‘toggleThief’ buttons. We also add an event listener to the stage to listen for a keyboard event, which will trigger the default action zipperMoveWeak:
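Roughly along these lines - the handler names are my guesses based on the description:

import flash.events.MouseEvent;
import flash.events.KeyboardEvent;

toggleSensitivity.addEventListener(MouseEvent.CLICK, onToggleSensitivity);
toggleThief.addEventListener(MouseEvent.CLICK, onToggleThief);

// sensitivity defaults to weak, so the stage starts off listening with zipperMoveWeak
stage.addEventListener(KeyboardEvent.KEY_DOWN, zipperMoveWeak);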
The toggleSensitivity button is implemented - it cycles through the sensitivity settings and listens for the appropriate event:
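A sketch of that cycling behaviour, continuing from the snippet above - the bookkeeping (currentHandler, the sensitivity strings, the zipperMoveMedium name) is my own approximation:

var sensitivity:String = "weak";
var currentHandler:Function = zipperMoveWeak;

function onToggleSensitivity(e:MouseEvent):void {
    // swap out the current keyboard handler for the next sensitivity level
    stage.removeEventListener(KeyboardEvent.KEY_DOWN, currentHandler);
    if (sensitivity == "weak") {
        sensitivity = "medium"; currentHandler = zipperMoveMedium;
    } else if (sensitivity == "medium") {
        sensitivity = "strong"; currentHandler = zipperMoveStrong;
    } else {
        sensitivity = "weak";   currentHandler = zipperMoveWeak;
    }
    stage.addEventListener(KeyboardEvent.KEY_DOWN, currentHandler);
}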
Depending on the sensitivity level, different actions will be taken. This is a snippet of the zipperMoveStrong function, which sets off the alarm as soon as the second point is reached. As can be seen in the following code, each point also sets the zipper element’s new coordinates to represent its movement along the zipper rail:
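Something along these lines - the coordinates are placeholders and the key-to-point mapping is assumed, with the ‘setOffAlarm’ event name carried over from the earlier digital prototype - but it shows the shape of the function:

function zipperMoveStrong(e:KeyboardEvent):void {
    switch (e.keyCode) {
        case 65:                                   // point 1 - just move the zipper element
            zipper.x = 100; zipper.y = 200;
            break;
        case 83:                                   // point 2 - move, and set off the alarm
            zipper.x = 160; zipper.y = 185;
            if (!isAlarming) dispatchEvent(new Event("setOffAlarm"));
            break;
        case 68:                                   // point 3 - also alarms, in case point 2 was missed
            zipper.x = 220; zipper.y = 175;
            if (!isAlarming) dispatchEvent(new Event("setOffAlarm"));
            break;
        // ... points 4 to 6 follow the same pattern
    }
}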
The reason it dispatches an event at every point after the second, rather than just at the second point, is the volatility of the physical prototype. Because, in reality, the zipper may not touch every point, it should still set off an alarm at any point past the second.
Previously, the code did not include the ‘if (!isAlarming)’ checks for each dispatch. This raised an error, since there was an issue with dispatching multiple events when the alarm was already sounding. This prompted the dispatches to be wrapped in the check, to ensure an alarm is only set off if it is not already active.
Overall it seems like the prototype is progressing quite well, despite a few challenges and hurdles along the way. Next week it will be interesting to test the prototype and see how it responds to different users’ input.
What is the existing experience (Restaurant Dining)? From different stakeholder P.O.V.?
Sit down with friends/family, scan menu, order food, wait for the food, eat the food, ask/wait for the bill and pay.
What external/internal factors impact on the experience?
External: weather, financial situation, social situation, education level etc
Internal: the wait for the food, the menu and how easy it is to read/understand, the ambience
What aspects of the existing experience could be enhanced/augmented/supported with technology?
Pressing a button on the table to seek attention from restaurant staff (this is already implemented in some restaurants), interactive menus, technology to enhance splitting bills more effectively.
How would introducing technology into this context change the experience?
It would make the process of ordering->eating->paying for food more efficient. For instance, being able to split the bill with technology and reduce payment time and ease financial pressure (people won’t have to owe others money).
What experience scenarios might you test with the technology?
Get a group of friends to eat at a restaurant together, order different items each and make the order complicated and not easy to split.
Continuation of basic interface prototype:
Following on from last week, I also managed to implement the alarm system for the prototype. This is outlined below:
When the zipper reaches the end of the zip, this triggers an alarm event. Before it alarms, it decides if the person is the owner or a thief. For the purpose of this prototype, this decision is made randomly:
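Something as simple as this is enough (the variable name is mine):

// 50/50 chance of being treated as the owner or a thief for this run
var isThief:Boolean = Math.random() < 0.5;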
Once the event is triggered (setOffAlarm), it initialises a timer for 0.5 seconds, and initialises other variables such as the beeping sound. Then it listens for when this timer ends:
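A minimal sketch of how this could look - the handler name, the light and beep member names (BeepSound, light) and the frame labels on the light clip are all assumptions:

import flash.events.Event;
import flash.events.TimerEvent;
import flash.utils.Timer;
import flash.media.Sound;

var alarmTimer:Timer;
var beepSound:Sound;
var lightIsRed:Boolean = false;

addEventListener("setOffAlarm", onSetOffAlarm); // the alarm event dispatched when the zipper reaches the end

function onSetOffAlarm(e:Event):void {
    alarmTimer = new Timer(500, 1);              // a 0.5 second timer
    beepSound = new BeepSound();                 // assumed: a beep sound exported from the library
    alarmTimer.addEventListener(TimerEvent.TIMER_COMPLETE, onAlarmTimerComplete);
    alarmTimer.start();
}

function onAlarmTimerComplete(e:TimerEvent):void {
    // alternate the light between black and red, beeping whenever it turns red
    lightIsRed = !lightIsRed;
    light.gotoAndStop(lightIsRed ? "red" : "black"); // assumed frame labels on the light MovieClip
    if (lightIsRed) beepSound.play();
    alarmTimer.reset();
    alarmTimer.start();                          // keep flashing until the alarm is dismissed
}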
After 0.5 seconds the light object alternates between black and red (to simulate flashing), with a beep sound every time the light is red.
Working demo of alarm system and the final prototype:
This week we tested our prototypes in the workshops to gain some feedback. The testing involved getting users to interact with the prototype, directly observing their behaviour and asking questions afterwards. Through direct observation, it was noted that people interact with the zipper differently: some dragged the zipper quite quickly and others more slowly. This reflects the different ways people open physical zips, and means that in constructing a physical prototype, the system would need to be able to detect movements of the zipper at different speeds. The alarm was also quite loud, and I observed that testers were slightly startled when it sounded - this is a desired outcome, as the point is to attract attention and perhaps even surprise.
Some of the questions that were asked at the end were:
How easy did you find it to use the prototype?
Do you think it effectively encompasses the product’s interactions?
How could the prototype be improved?
Overall the feedback indicated that the prototype was intuitive and easy to understand, that it would deter a thief, and that a companion app works well with the rest of the system. Some suggested making the alert more piercing to further deter thieves, however this should also be tested in situ.
Looking back on the previous user testing session, a lot of the feedback received was more qualitative than quantitative, and not as specific or objective as it could have been. One of the questions asked was whether they understood the concept of ZipperBan from watching the video, and although many answered ‘yes’, it was unclear to what extent they understood it. Perhaps they thought they understood the concept, but their idea of it was different from how I wanted to portray it. The feedback in this case would need to be more specific in order to be quantified and then acted upon for improvement, e.g. “on a scale of 1 - 10, how easy was the concept to grasp?” etc.
This week I had continued to work on the prototype and simulating the zipper motion in Flash. At the moment the biggest challenge is to create a slider in AS3 that allows the user to drag an element along an arc, to simulate the arc of a backpack zipper. A demo of what I want to achieve can be found here: http://evolve.reintroducing.com/_source/flas/as3/DragAlongArc/
The concept is similar to drag-and-drop, so I researched how to click and drag an element around the screen. The challenge is to constrain the motion so that it follows a defined path no matter where the cursor is. Basic drag and drop with a zipper element:
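A minimal version of that basic drag-and-drop, assuming the zipper element is a MovieClip on the stage with the instance name zipper:

import flash.events.MouseEvent;

zipper.addEventListener(MouseEvent.MOUSE_DOWN, startZipperDrag);
stage.addEventListener(MouseEvent.MOUSE_UP, stopZipperDrag);

function startZipperDrag(e:MouseEvent):void {
    zipper.startDrag();   // follows the cursor freely - no path constraint yet
}

function stopZipperDrag(e:MouseEvent):void {
    zipper.stopDrag();
}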
After some more research, I found some external libraries that seem to do what it is I need for the prototype. The result of this can be found at http://snorkl.tv/dev/pathDrag/:
The problem with this is that I would need to use an external library and I would prefer to try to recreate this without the use of third-party plugins.
I started to look deeper into how a motion like this would work: as the user drags an element left to right, the vertical displacement follows a defined path; as element.x changes, element.y = a predefined function of x. This led me to think about using an x^2 function to create a parabolic shape. I had to revise basic parabola maths for this:
y = ax^2 + bx + c
First I sketched where the arc (zipper path) would be in relation to the rest of the screen to get pixel values that I could plug into an equation to obtain a formula for y. It takes 3 points to uniquely define a parabola. Note that because the origin is at the top-left corner, the y-values are negative:
A sketch of using parabolas as a solution is shown below:
The equation found from the second sketch is y = -0.01x^2 + 6x - 1000. This was incorporated into the prototype, however the positions of the elements were adjusted slightly, which changed the equation. In order to incorporate this in AS3, I had to create some nested listeners: one on the zipper element to listen for a click, and, once this happens, one on the stage to listen for the mouse coordinates.
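A sketch of those nested listeners, using the parabola above (the coefficients would need adjusting to match the final element positions, and the sign flip converts from the sketch's upward y-axis back to Flash's downward one):

import flash.events.MouseEvent;

zipper.addEventListener(MouseEvent.MOUSE_DOWN, onZipperDown);

function onZipperDown(e:MouseEvent):void {
    // only while the mouse is held down do we track its coordinates on the stage
    stage.addEventListener(MouseEvent.MOUSE_MOVE, onZipperMove);
    stage.addEventListener(MouseEvent.MOUSE_UP, onZipperUp);
}

function onZipperMove(e:MouseEvent):void {
    var x:Number = stage.mouseX;
    zipper.x = x;
    zipper.y = -(-0.01 * x * x + 6 * x - 1000); // constrain y to the parabolic zipper path
    e.updateAfterEvent();
}

function onZipperUp(e:MouseEvent):void {
    stage.removeEventListener(MouseEvent.MOUSE_MOVE, onZipperMove);
    stage.removeEventListener(MouseEvent.MOUSE_UP, onZipperUp);
}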