Wednesday, May 22, 2013

Week 10 - Conflict

Team Name: DCLD
Team Members: Laleh, Daniel, Chinatsu and David
Wiki Page:               

Clarity of the oral presentation

The presentation was much like every other: it was not engaging, as everyone read from a piece of paper or straight from the screen. It was the same problem again: I was being fed information I could have read on another website, and nothing was told from the group's own perspective.

Clarity of the written presentation

The slides were text-heavy in some sections but well done in others, where lists were used. With the lists they provided dot points, so I was not lost in a wall of text and was able to pick out the key points.

Distinctiveness and specificity of the examples:

The examples given linked to the real world, and the group provided a list of how conflict has been evident in their project. The examples were clear, but it would have been better if they had provided more of them.

Referencing

They provided references at the bottom of their slides, but I was unable to tell what they were referencing. This has been a common problem throughout all the presentations.

The conceptual context

The group seems to be on a clear path with their project. They have encountered many different types of conflict, and I feel they have learnt a lot from this presentation.

The still images:


Some of the images provided were very relevant to their project, but unfortunately the others looked like generic clip art.

Friday, May 10, 2013

Week 8 - Gesture Control

Steve provided me with the basic code for gesture control in Crysis and I was able to build on it. By extending the code I created a working gesture-control script for Crysis incorporating hand wave right, hand wave left, hand wave up and hand wave down. We created a HUD message for each gesture to make sure it was working (fig 1). So when we go into the game, face the Kinect and perform the gestures (video 1), you can see the messages on the left-hand side of the screen confirming that hand wave left/right/up/down are indeed working.

Below is the code working with Crysis HUD Messages:

Figure 1

Figure 2

Video 1
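As a rough illustration of the kind of direction test involved (the actual code is the Crysis-side script shown in the figures above; the function name and threshold here are hypothetical, not from that code):

```python
# Hypothetical sketch of wave-gesture classification from tracked Kinect
# hand positions. Names and the 0.3 m threshold are illustrative only,
# not taken from the actual Crysis script.

def classify_wave(positions, threshold=0.3):
    """Classify a hand wave from a list of (x, y) hand positions
    (metres, Kinect skeleton space). Returns 'wave_left', 'wave_right',
    'wave_up', 'wave_down', or None if no clear movement."""
    if len(positions) < 2:
        return None
    dx = positions[-1][0] - positions[0][0]
    dy = positions[-1][1] - positions[0][1]
    # Use the dominant axis of movement and ignore small jitters.
    if abs(dx) >= abs(dy):
        if dx > threshold:
            return "wave_right"
        if dx < -threshold:
            return "wave_left"
    else:
        if dy > threshold:
            return "wave_up"
        if dy < -threshold:
            return "wave_down"
    return None
```

In the game, each recognised gesture would then trigger the corresponding HUD message, which is exactly what the figures and video above are checking.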




Thursday, May 9, 2013

Week 8 - Intellectual Property

Group One

Team Name: Parametric Architecture
Team Members: 
Wiki Page:

Clarity of the oral presentation

The presentation was unprepared and wasn't engaging, as the group members read straight from their papers/palm cards and the screen. This suggested they did not have a sound understanding of the content, since they were unable to explain what they had researched in their own words.

Clarity of the written presentation

The Prezi presentation was well put together and the slides flowed nicely. Unfortunately, the slides either had too much text or images too small to view clearly. The content on the slides was not fully thought through: referencing was there, but it was unclear what was being referenced.

Distinctiveness and specificity of the examples:

The group showed a sound understanding of the topic, giving specific examples of how intellectual property is used in the industry. Unfortunately they didn't link intellectual property back to their own project and how it affected them. Without that link, it did not seem they understood how intellectual property could apply to their project.

Referencing

References were given, but they were poorly positioned and it was not clear what they referred to. They were also quite small and hard to read.

The still images:

The group provided quite a few images that were well linked to their topic compared to other presentations. However, some of the images were quite small and hard to view. Even though most of the images reinforced the topic, quite a few were unnecessary: they did not add to the presentation but rather detracted from it.


Group 2

Team Name: Geriambience
Team Members: Steven Best, Dan Zhang, Jing Liu, Matthew Kruik and Siyan Li
Wiki Page: http://geriambience.wikia.com/wiki/Geriambience_wiki

Clarity of the oral presentation

Like all the groups, this group read from their papers or straight from the screen. This led to an inconsistent speaking pace. It was unclear whether they were nervous or simply not prepared enough to speak without guidance from paper or the screen.

Clarity of the written presentation

The group had quite a sound understanding of their topic, provided very good examples, and I felt I learnt a lot from their presentation. They put the right amount of information on each slide, so I wasn't stuck reading and could concentrate on what they were saying. Unfortunately, I feel the visual side of the Prezi could have had a bit more work put into it to make it more appealing.

Distinctiveness and specificity of the examples:

The examples the group provided were closely related to their project, which was very good. This helped me understand the topic better, as the examples were something I could relate to.

Referencing

Referencing was provided.

The conceptual context:

It was interesting to see how two groups perceived the same information and interpreted it in different ways. The group seems to be on a very good track with their project, but it seems some members may be pulling more of the weight than others.

The still images:

Unfortunately they did not provide the 7 images asked for, and I felt there could have been better images used to support their argument. I feel the first group's images enhanced their presentation more than this group's did.

Saturday, May 4, 2013

My Individual Milestone Submission


My contribution to Kinecting the Blocks:

  • Helped manage the team
  • Helped create a schedule to keep our team on track
  • Researched different types of existing mechanical storage systems
  • Helped design our storage system so it will work
  • Created a draft drawing of our storage system to hand over to the visualisers to mock up a model
  • Helped edit the wiki page
  • Researched facial recognition
  • Helped set up the Kinect for Crysis
  • Experimented with the Kinect and a webcam for programming purposes

My Focus

For my individual milestone I focused on facial recognition. I researched the difference between face detection and face recognition, the different approaches to facial recognition, and which would be best for our project. Facial recognition is an integral part of our project, as it provides us with user recognition.


My Research

You can find my research into facial recognition in a separate post called My Individual Milestone Research, or follow the link: http://januarch1392.blogspot.com.au/2013/04/my-individual-milestone-research.html

My Final Product

I have been focusing on facial recognition using PCA (Principal Component Analysis) with eigenfaces. To begin with, my goal was just to distinguish between a male and a female face. After a lot of research I got a working example of facial recognition up and running that goes beyond that and recognises almost everyone as an individual.

Problems Encountered

Currently I am facing two problems. The first is that the program is having trouble differentiating between my mother and me (Figure 1), as we look similar. Because our facial features are alike, if I'm facing the camera at a slight angle or moving around, the program struggles to identify who it is correctly. To fix this I will add a counter: once it reaches 5 consecutive identical identifications, the program will confirm that person and move on to the next stage.
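The counter idea described above could be sketched like this (a minimal illustration; the class and method names are my own, not from the actual code):

```python
# Sketch of the "5 consecutive identical identifications" stabiliser
# described above. Names here are illustrative, not from the real code.

class IdentityStabiliser:
    """Only confirm an identity once the recogniser has returned the
    same label for a required number of frames in a row."""

    def __init__(self, required=5):
        self.required = required
        self.last_label = None
        self.count = 0

    def update(self, label):
        """Feed one per-frame recognition result. Returns the confirmed
        label once it is stable, otherwise None."""
        if label == self.last_label:
            self.count += 1
        else:
            # A different result resets the streak.
            self.last_label = label
            self.count = 1
        if self.count >= self.required:
            return label
        return None
```

This way a single misidentified frame (me vs. my mum) resets the streak instead of triggering the next stage.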

 
Figure 1 – Unable to distinguish between my mother and me

If I'm facing the camera front-on (Figure 2), where it's able to pick up every feature of my face, I get a better result.

Figure 2 – Facing camera front on

The second problem I have been facing is that differences in lighting make a big difference to how fast and how accurately a person is identified. This is a risk when using web cameras, as they tend to depend on ambient lighting that is not always uniform. As you can see in video 1 below, the video was taken outside in natural light and I have a shadow across my face. Because of the shadow the lighting is not uniform, so when I rotate my head the recognition drops out and it cannot recognise my face at a slight angle; I must be looking straight at the camera. Video 2 was taken in daylight in a room where the light was more uniform. You can see the difference between the first and second videos on the right-hand side: the program identifies the face a lot faster. Video 3 was taken at night with the light on. Comparing videos 2 and 3: in video 2 I can rotate my head and the program still picks up my face, whereas if I rotate my head in video 3, it either picks up my mum or drops out.

Video 1 – Natural light - Exterior


Video 2 – Natural Light - Interior - Bright

Video 3 – Artificial Light - Interior - Night

To fix this problem I may need to switch from the web camera to the Kinect. I will test the output from both and see which is the better option to work with for the final project.
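Another option often paired with PCA recognisers to reduce lighting sensitivity is histogram equalisation of each face crop before training and recognition. This is a generic numpy sketch of that idea, assumed as a possible mitigation rather than taken from the project's EmguCV code:

```python
# Histogram equalisation spreads an image's intensity values over the
# full 0-255 range, which can reduce the effect of uneven lighting on
# a PCA face recogniser. Generic numpy sketch, illustrative only.
import numpy as np

def equalise(gray):
    """Histogram-equalise an 8-bit greyscale image (2-D uint8 array)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_masked = np.ma.masked_equal(cdf, 0)  # ignore empty bins
    # Stretch the cumulative distribution to the full 0-255 range.
    cdf_scaled = (cdf_masked - cdf_masked.min()) * 255 / (
        cdf_masked.max() - cdf_masked.min())
    lut = np.ma.filled(cdf_scaled, 0).astype(np.uint8)
    return lut[gray]  # apply the lookup table per pixel
```

Applying the same normalisation to every training image and every probe frame means the recogniser compares face structure rather than lighting conditions.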


Reflection

After numerous weeks of working on this section of the project, I feel I could have accomplished it earlier and started working on the next section. It feels great to have it finished and out of the way, and I'm very happy with the outcome. More than anything, it was the amount of research needed to figure out the best method to move forward with our project.

References

To learn more about EmguCV I first went through the tutorials on the page below:
http://fewtutorials.bravesites.com/tutorials

After that I found another site that provided a more accurate, enhanced facial-recognition implementation. I went through its facial-recognition tutorials to enhance the code.
http://www.codeproject.com/Articles/261550/EMGU-Multiple-Face-Recognition-using-PCA-and-Paral

Tuesday, April 30, 2013

Week 7 - Problems encountered with Crysis

Below are a few images of problems we encountered while trying to debug our code. We thought it might be a user-access issue, so we ran it in Administrator mode and changed our security settings, but unfortunately we were still encountering the same problem. We also thought it might be because my laptop was running Windows 8, so I changed the compatibility mode to Windows 7, but we were still getting the errors. After all these issues we went back to basics and double-checked all our paths for the editor under Properties > Debugging. After about a week of errors, Steve explained the cause:

"Unfortunately CryEngine doesn't officially support Windows 8. The reason for this is that Windows 8 has differently-named and / or differently-functioning DLLs to Windows 7, so when CryEngine goes to find a DLL it needs and crashes, the DLL in question is either not there (as is the case here) or doesn't function the way CryEngine thinks it should (this is usually the case when you get a stack trace when crashing). From what I understand, you'll need to manually put copies of each DLL CryEngine says is missing into the Bin32 folder. That should let it compile." Steve



                                      



Saturday, April 27, 2013

Week 7 - Communication


Team Name: Shades of Black
Team Members: Daniel Rickard, Alex Lorenzelli, Andrea Bong, Ben Filler, Shaun Weisbrodt and Rebecca Araullo
Wiki Page: shadesofblack.wikispaces.com

Clarity of the oral presentation:

Each member presented a lengthy speech, but I got a bit lost in all the information provided. If they had focused on the key points instead of regurgitating everything they had researched, it would have been better and easier to follow. Some of the group members were clear and concise about what they were talking about, but others I felt were just reading off their pieces of paper. That said, it was well rehearsed and the presentation had a flow to it.

Clarity of the written presentation:

The Prezi presentation was clear and concise, with only the key points on the slides. Unfortunately I got a bit lost, as the slides weren't being advanced in time with what was being said. It felt as if this part was unrehearsed across the team.

Distinctiveness and specificity of the examples:

The examples used were very useful and helped me understand the topics.

Referencing:

Harvard style referencing was clearly used in their presentation and on their wiki.

The conceptual context:

I feel they have only just started to find their way around their project. As they are brand new to everything they are learning, it has taken a while for them to research and understand everything they are doing. I feel they now have a better grasp on what is ahead and what needs to be done. They seem to be working well as a group and, as stated in their presentation, have very good communication skills.

The still images:

The images used, I felt, did not enhance their presentation. Images were lacking, and the ones provided came from clip art and had no real meaning behind them. The video was not utilised to their advantage: it was just of them talking, which could have been done in written format. I would have liked to see more of them disassembling the bike and talking over what they were doing.

What information I learnt that will be beneficial to my project:

My group has not been communicating as much as I would like. Since the presentation we have started using more ways to get through to each other.

Tuesday, April 23, 2013

Week 6 and 7 - My Individual Milestone Research

What I hope to research into and learn more about:

  • The difference between facial recognition and facial detection
  • The different types of approaches to facial recognition
  • Which one I have chosen and why

The difference between facial recognition and facial detection

Face detection: 
Face detection is a computer vision technology that determines the locations and sizes of human faces in arbitrary (digital) images. It detects facial features and ignores everything else, such as buildings, trees and bodies. Face detection can be regarded as a specific case of object-class detection, where the task is to find the locations and sizes of all objects in a digital image that belong to a given class.

Face Recognition:
Face recognition is biometric identification performed by scanning a person's face and matching it against a library of known faces.

The different approaches to recognise a face:

PCA Principal Components Analysis
PCA, commonly referred to as the use of eigenfaces, is the technique pioneered by Sirovich and Kirby in 1987. With PCA, the probe and gallery images must be the same size and must first be normalized to line up the eyes and mouth of the subjects within the images. The PCA approach is then used to reduce the dimension of the data by means of data compression basics, revealing the most effective low-dimensional structure of facial patterns. This reduction in dimensions removes information that is not useful and precisely decomposes the face structure into orthogonal (uncorrelated) components known as eigenfaces. Each face image may be represented as a weighted sum (feature vector) of the eigenfaces, which is stored in a 1D array. A probe image is compared against a gallery image by measuring the distance between their respective feature vectors. The PCA approach typically requires the full frontal face to be presented each time; otherwise it performs poorly. The primary advantage of this technique is that it can reduce the data needed to identify an individual to 1/1000th of the data presented.


Figure 1: Standard Eigenfaces: Feature vectors are derived using eigenfaces.

MIT Media Laboratory Vision and Modeling Group, “Photobook/Eigenfaces Demo”  25 April 2013  <http://vismod.media.mit.edu/vismod/demos/facerec/basic.html>.
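The pipeline described above (mean subtraction, eigen-decomposition, feature vectors, nearest-neighbour matching) can be sketched as a toy numpy example. This is an illustration of the maths only, not the EmguCV implementation used in the project:

```python
# Toy eigenface sketch: flatten images to vectors, subtract the mean
# face, take the top eigenvectors of the covariance as eigenfaces, and
# match a probe to the gallery image with the nearest feature vector.
import numpy as np

def train_eigenfaces(gallery, n_components=2):
    """gallery: (n_images, n_pixels) array of flattened face images."""
    mean = gallery.mean(axis=0)
    centred = gallery - mean
    # "Snapshot" trick: eigenvectors of the small (n x n) matrix
    # L = A A^T give the eigenfaces without forming the huge
    # pixel-by-pixel covariance matrix.
    L = centred @ centred.T
    vals, vecs = np.linalg.eigh(L)                 # ascending order
    order = np.argsort(vals)[::-1][:n_components]  # top components
    eigenfaces = (centred.T @ vecs[:, order]).T    # (k, n_pixels)
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    weights = centred @ eigenfaces.T               # 1D feature vectors
    return mean, eigenfaces, weights

def recognise(probe, mean, eigenfaces, weights):
    """Return the index of the gallery image closest to the probe."""
    w = (probe - mean) @ eigenfaces.T
    dists = np.linalg.norm(weights - w, axis=1)
    return int(np.argmin(dists))
```

The key point is the one made above: each face is reduced to a tiny feature vector (here just 2 numbers), and recognition is simply a distance comparison between those vectors.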

LDA: Linear Discriminant Analysis 
LDA is a statistical approach for classifying samples of unknown classes based on training samples with known classes (Figure 2). This technique aims to maximize between-class (i.e., across users) variance and minimize within-class (i.e., within user) variance. In Figure 2, where each block represents a class, there are large variances between classes but little variance within each class.


Figure 2: Example of Six Classes Using LDA
Juwei Lu, “Boosting Linear Discriminant Analysis for Facial Recognition,” 2002. 
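The "maximize between-class, minimize within-class variance" idea can be made concrete with the two scatter matrices LDA works with. This is a hedged numpy sketch for illustration, not code from any of the cited systems:

```python
# The two scatter matrices at the heart of LDA: within-class scatter
# S_W (variance inside each user's images) and between-class scatter
# S_B (variance across users). LDA seeks projection directions w that
# maximize (w^T S_B w) / (w^T S_W w). Illustrative numpy only.
import numpy as np

def scatter_matrices(samples, labels):
    """samples: (n, d) feature vectors; labels: (n,) class ids."""
    samples = np.asarray(samples, dtype=float)
    labels = np.asarray(labels)
    overall_mean = samples.mean(axis=0)
    d = samples.shape[1]
    S_W = np.zeros((d, d))
    S_B = np.zeros((d, d))
    for c in np.unique(labels):
        cls = samples[labels == c]
        mu = cls.mean(axis=0)
        S_W += (cls - mu).T @ (cls - mu)           # spread within class c
        diff = (mu - overall_mean)[:, None]
        S_B += len(cls) * (diff @ diff.T)          # spread of class means
    return S_W, S_B
```

In a good LDA setup, like Figure 2, S_B is large (classes far apart) while S_W is small (each class tightly clustered).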

EBGM:  Elastic Bunch Graph Matching
EBGM relies on the concept that real face images have many non-linear characteristics that are not addressed by the linear analysis methods discussed earlier, such as variations in illumination (outdoor lighting vs. indoor fluorescents), pose (standing straight vs. leaning over) and expression (smile vs. frown).


Figure 3: Elastic Bunch Map Graphing.
Laurenz Wiskott, “Face Recognition by Elastic Bunch Graph Matching,” <http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/computerVision/graphMatching/identification/faceRecognition/contents.html>

References:
  1. L. Sirovich and M. Kirby, "A Low-Dimensional Procedure for the Characterization of Human Faces," J. Optical Soc. Am. A, 1987, Vol. 4, No.3, 519-524.  
  2. M. A. Turk and A. P. Pentland, "Face Recognition Using Eigenfaces," Proc. IEEE, 1991, 586-591.  
  3. D. Bolme, R. Beveridge, M. Teixeira, and B. Draper, “The CSU Face Identification Evaluation System: Its Purpose, Features and Structure,” International Conference on Vision Systems, Graz, Austria, April 1-3, 2003. (Springer-Verlag) 304-311. 
  4. “Eigenface Recognition” <http://et.wcu.edu/aidc/BioWebPages/eigenfaces.htm>.

Which one I have chosen and why:

I have chosen to use PCA (Principal Component Analysis) with eigenfaces. I chose this method because it's a good place for a beginner like me to start: it allows me to load existing classes and call their functions, so I won't need to code the algorithm from scratch.

After choosing PCA with eigenfaces, I watched the YouTube videos below to gain a deeper understanding of PCA.

What is PCA:

How PCA Recognises Faces - Algorithm in Simple Steps 1 of 3
http://www.youtube.com/watch?feature=player_embedded&v=n3sDhHH5tFg

How PCA Recognises Faces - Algorithm in Simple Steps 2 of 3

How PCA Recognises Faces - Algorithm in Simple Steps 3 of 3