Tuesday, May 28, 2013

Week 11 - Remuneration

Team Name: Vivid Pirates

Clarity of the oral presentation

The presentation wasn’t well prepared; the group was all over the place. They kept switching between different screens and seemed confused within their group about where they were up to in the presentation.

Clarity of the written presentation

The information they provided was everything you could find online about the subject. It would have been more interesting if they had applied their own touch to it, and I guess this is where the examples play a really big role.

Distinctiveness and specificity of the examples

I feel they lacked examples that related to their project and told us something we didn’t already know. There were plenty of more interesting examples available that would have been better suited to their topic.

Referencing

Their referencing was evident in a few places but not in all their slides.

The conceptual context:

I felt they were progressing quite well, as I saw their coding progress in class. The disadvantage for them is that they need to prototype all their code and have it working perfectly before implementing it in the real world.

The still images:

There were many images and videos on their wiki. Some of the images they used were actually quite specific to their presentation, e.g. Felix’s payslip.

Thursday, May 23, 2013

Week 10 - Laser Cutting

After the initial prototype we decided we needed to use another material that slides a lot more easily and looks a lot cleaner when constructing the prototype. Ben made a small plastic rail so we were able to see how well plastic slid against plastic. We decided to have our laser cutting cut out of 3mm white plastic. Unfortunately we didn't have someone double-check the file being sent through, so the first lot we had laser cut was not to the right measurements. Luckily the laser cutters agreed to get the second lot, with the correct measurements, back to us within a day so we would have enough time to put the prototype together.

Below is an image of the second laser cutting file we sent through.


Wednesday, May 22, 2013

Week 9 - 12 - Facial Recognition to Crysis

Unfortunately, because I needed my facial recognition to be in C++ and not in C#, I needed to convert my C# code to C++. After some reading I found that converting between the languages is possible, though difficult: the two differ in approach, the framework calls need porting to different libraries, and the code is often not a good candidate for a direct translation. I therefore decided to start re-writing my code in C++, but I still encountered problems, and because I am very new to coding it took me a very long time to understand the syntax and not get confused between the two languages. I then decided to start fresh with another facial recognition code in C++, using this tutorial I found online: http://www.shervinemami.info/faceRecognition.html.

The facial recognition in his code runs as follows:

1. Grab a frame from the camera 

// Grab the next camera frame. Waits until the next frame is ready, and
// provides direct access to it, so do NOT modify or free the returned image!
// Will automatically initialize the camera on the first frame.
IplImage* getCameraFrame(CvCapture* &camera)
{
 IplImage *frame;
 int w, h;

 // If the camera hasn't been initialized, then open it.
 if (!camera) {
  printf("Acessing the camera ...\n");
  camera = cvCreateCameraCapture( 0 );
  if (!camera) {
   printf("Couldn't access the camera.\n");
   exit(1);
  }
  // Try to set the camera resolution to 320 x 240.
  cvSetCaptureProperty(camera, CV_CAP_PROP_FRAME_WIDTH, 320);
  cvSetCaptureProperty(camera, CV_CAP_PROP_FRAME_HEIGHT, 240);
  // Get the first frame, to make sure the camera is initialized.
  frame = cvQueryFrame( camera );
  if (frame) {
   w = frame->width;
   h = frame->height;
   printf("Got the camera at %dx%d resolution.\n", w, h);
  }
  // Wait a little, so that the camera can auto-adjust its brightness.
  Sleep(1000); // (in milliseconds)
 }

 // Wait until the next camera frame is ready, then grab it.
 frame = cvQueryFrame( camera );
 if (!frame) {
  printf("Couldn't grab a camera frame.\n");
  exit(1);
 }
 return frame;
}

2. Convert the colour frame to greyscale

 // If the image is color, use a greyscale copy of the image.
 detectImg = (IplImage*)inputImg;
 if (inputImg->nChannels > 1) {
  size = cvSize(inputImg->width, inputImg->height);
  greyImg = cvCreateImage(size, IPL_DEPTH_8U, 1 );
  cvCvtColor( inputImg, greyImg, CV_BGR2GRAY );
  detectImg = greyImg; // Use the greyscale image.
 }

3. Detect a face within the greyscale camera frame

// Perform face detection on the input image, using the given Haar Cascade.
// Returns a rectangle for the detected region in the given image.
CvRect detectFaceInImage(IplImage *inputImg, CvHaarClassifierCascade* cascade)
{
 // Smallest face size.
 CvSize minFeatureSize = cvSize(20, 20);
 // Only search for 1 face.
 int flags = CV_HAAR_FIND_BIGGEST_OBJECT | CV_HAAR_DO_ROUGH_SEARCH;
 // How detailed should the search be.
 float search_scale_factor = 1.1f;
 IplImage *detectImg;
 IplImage *greyImg = 0;
 CvMemStorage* storage;
 CvRect rc;
 double t;
 CvSeq* rects;
 CvSize size;
 int i, ms, nFaces;

 storage = cvCreateMemStorage(0);
 cvClearMemStorage( storage );


 // Detect all the faces in the greyscale image.
 t = (double)cvGetTickCount();
 rects = cvHaarDetectObjects( detectImg, cascade, storage,
   search_scale_factor, 3, flags, minFeatureSize);
 t = (double)cvGetTickCount() - t;
 ms = cvRound( t / ((double)cvGetTickFrequency() * 1000.0) );
 nFaces = rects->total;
 printf("Face Detection took %d ms and found %d objects\n", ms, nFaces);

 // Get the first detected face (the biggest).
 if (nFaces > 0)
  rc = *(CvRect*)cvGetSeqElem( rects, 0 );
 else
  rc = cvRect(-1,-1,-1,-1); // Couldn't find the face.

 if (greyImg)
  cvReleaseImage( &greyImg );
 cvReleaseMemStorage( &storage );
 //cvReleaseHaarClassifierCascade( &cascade );

 return rc; // Return the biggest face found, or (-1,-1,-1,-1).
}

4. Crop the frame to just the detected face, using cvSetImageROI() and cvCopyImage().
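
My notes don't include the code for this step, but a minimal sketch of the crop using those two calls might look like this (assuming faceRect is the CvRect returned by detectFaceInImage() above):

 // Crop the detected face region out of the full frame.
 cvSetImageROI(inputImg, faceRect); // restrict operations to the face rectangle
 IplImage *faceImg = cvCreateImage(cvSize(faceRect.width, faceRect.height),
   IPL_DEPTH_8U, inputImg->nChannels);
 cvCopyImage(inputImg, faceImg); // copy just the ROI pixels
 cvResetImageROI(inputImg); // restore the full frame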

5. Pre-process the face image

// Either convert the image to greyscale, or use the existing greyscale image.
IplImage *imageGrey;
if (imageSrc->nChannels == 3) {
 imageGrey = cvCreateImage( cvGetSize(imageSrc), IPL_DEPTH_8U, 1 );
 // Convert from RGB (actually it is BGR) to Greyscale.
 cvCvtColor( imageSrc, imageGrey, CV_BGR2GRAY );
}
else {
 // Just use the input image, since it is already Greyscale.
 imageGrey = imageSrc;
}

// Resize the image to be a consistent size, even if the aspect ratio changes.
IplImage *imageProcessed;
imageProcessed = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 1);
// Make the image a fixed size.
// CV_INTER_CUBIC or CV_INTER_LINEAR is good for enlarging, and
// CV_INTER_AREA is good for shrinking / decimation, but bad at enlarging.
cvResize(imageGrey, imageProcessed, CV_INTER_LINEAR);

// Give the image a standard brightness and contrast.
cvEqualizeHist(imageProcessed, imageProcessed);

 // ... use 'imageProcessed' for the face recognition step ...

if (imageGrey)
 cvReleaseImage(&imageGrey);
if (imageProcessed)
 cvReleaseImage(&imageProcessed);

6. Recognise the person in the image
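
My notes don't include the code for this last step, but based on the tutorial's PCA approach a minimal sketch might look like the following. The training-data variables (nEigens, eigenVectArr, pAvgTrainImg, projectedTrainFaceMat, nTrainFaces) are placeholders assumed to have been loaded from the training stage, and imageProcessed is the pre-processed face from step 5:

 // Project the pre-processed face onto the PCA subspace.
 float *projectedTestFace = (float *)cvAlloc( nEigens * sizeof(float) );
 cvEigenDecomposite( imageProcessed, nEigens, (void *)eigenVectArr,
   0, 0, pAvgTrainImg, projectedTestFace );

 // Find the training face with the smallest squared Euclidean distance.
 int iNearest = -1;
 double leastDist = DBL_MAX; // from <float.h>
 for (int iTrain = 0; iTrain < nTrainFaces; iTrain++) {
  double dist = 0;
  for (int i = 0; i < nEigens; i++) {
   float d = projectedTestFace[i]
     - projectedTrainFaceMat->data.fl[iTrain * nEigens + i];
   dist += d * d;
  }
  if (dist < leastDist) {
   leastDist = dist;
   iNearest = iTrain; // index of the best-matching training person
  }
 }
 cvFree( &projectedTestFace );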

After talking to Steve, he told me to use a TCP server to connect the output of my facial recognition to CryEngine. He told me to do the following to link his TCP server code to the facial recognition:

"Your main() loop from the C++ should go into the Update() function. Any variables declared outside the main() loop should become member variables (ie, declared in the .h file, then optionally instantiated in the constructor in the .cpp file). Any #include lines you have in your face detection code should go in the .h file, *below* the existing #include lines." Steve

Unfortunately I didn't have a .h file; everything was in my .cpp. As for my main() loop, it was structured incorrectly, as the actual facial recognition wasn't inside that loop. I went to Steve to fix this problem and he said the easiest solution was to use the TCP server with the old C# code. Please follow the link below to see the facial recognition code connecting to Crysis through a TCP server.

Week 10 - Conflict

Team Name: DCLD
Team Members: Laleh, Daniel, Chinatsu and David
Wiki Page:               

Clarity of the oral presentation

The presentation was the same as every other presentation: it was not engaging, as everyone read from their piece of paper or straight from the screen. It was the same problem again; I was being fed information I could go and read on another website, and I was not being told anything from their perspective.

Clarity of the written presentation

Some slides in their presentation were crammed, but the ones built around lists were well done. With the lists they provided dot points, so I wasn't lost in a wall of text and was able to understand the key points.

Distinctiveness and specificity of the examples:

The examples given linked to real-world cases, and they provided a list of how conflict has been evident in their project. The examples were clear, but it would have been better if they had provided more of them.

Referencing

They provided referencing at the bottom of their slides, but I was unable to tell what they were referencing. This has been a common problem throughout all the presentations.

The conceptual context

The group seems to be on a clear path for their project. They have encountered many different types of conflict and I feel they have learnt a lot from this presentation.

The still images:


Some of the images provided were very relevant to their project but unfortunately the others were like images you could find in clipart.

Friday, May 10, 2013

Week 8 - Gesture Control

Steve provided me with the basic code for gesture control for Crysis and I was able to add to it. By extending the code I created working gesture control ready for Crysis, incorporating hand wave right, hand wave left, hand wave up and hand wave down. We created HUD messages for each gesture to make sure it was working (fig 1). So when we go into the game, face the Kinect and perform the gestures (video 1), we are able to see the messages on the left-hand side of the screen confirming that hand wave left/right/up/down are indeed working.
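
The gesture code itself builds on Steve's base, but the wave checks we added boil down to something like this sketch (the joint-position inputs and the HUD call are placeholders, not the actual CryEngine API):

 enum Gesture { GESTURE_NONE, WAVE_LEFT, WAVE_RIGHT, WAVE_UP, WAVE_DOWN };

 // Compare the Kinect hand-joint position against the previous frame;
 // a large enough jump in one direction counts as a wave.
 Gesture DetectWave(float prevX, float prevY, float handX, float handY)
 {
  const float threshold = 0.15f; // metres the hand must move in one step
  float dx = handX - prevX;
  float dy = handY - prevY;

  if (dx >  threshold) return WAVE_RIGHT;
  if (dx < -threshold) return WAVE_LEFT;
  if (dy >  threshold) return WAVE_UP;
  if (dy < -threshold) return WAVE_DOWN;
  return GESTURE_NONE;
 }

 // Each frame, e.g.:
 //  if (DetectWave(prevX, prevY, x, y) == WAVE_LEFT)
 //   ShowHudMessage("hand wave left"); // hypothetical HUD helper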

Below is the code working with Crysis HUD Messages:

Figure 1

Figure 2

Video 1




Thursday, May 9, 2013

Week 8 - Intellectual Property

Group One

Team Name: Parametric Architecture
Team Members: 
Wiki Page:

Clarity of the oral presentation

The presentation was unprepared and wasn't engaging, as the group members read straight from their papers/palm cards and the screen. This showed they did not have a sound understanding of the content, as they were unable to explain what they had researched in their own words.

Clarity of the written presentation

The Prezi presentation was well put together and the slides had a nice flow to them. Unfortunately, the slides either had too much text on them or the images were too small to view clearly. The content of the slides was not thought through: referencing was there, but it was unclear what was being referenced.

Distinctiveness and specificity of the examples:

The group showed a sound understanding of the topic, as they gave specific examples of how intellectual property is used in the industry. Unfortunately they didn't link intellectual property to their own project and how it affected them. Without doing this, it did not seem they understood how intellectual property could have applied to their project.

Referencing

References were given, but they were not positioned properly and it was not clear what they were referencing. The references were also quite small and hard to read.

The still images:

The group provided quite a few images that were well linked to their topic compared to other presentations. However, some of the images were quite small and hard to view. Even though most of the images reinforced their topic, quite a few were not needed, as they did not add to the presentation but rather took away from it.


Group Two

Team Name: Geriambience
Team Members: Steven Best, Dan Zhang, Jing Liu, Matthew Kruik and Siyan Li
Wiki Page: http://geriambience.wikia.com/wiki/Geriambience_wiki

Clarity of the oral presentation

Like all the groups, this group read from their papers or straight from the screen. This led to them not having a consistent speaking pace. It was unclear whether they were nervous or just not prepared enough to speak without guidance from paper or the screen.

Clarity of the written presentation

The group had quite a sound understanding of their topic, as they provided very good examples, and I felt I was able to learn quite a lot from their presentation. They put the right amount of information on each slide, so I wasn't stuck reading and could mainly concentrate on what they were saying. Unfortunately, I feel the visual side of the Prezi presentation could have had a bit more work put into it to make it more appealing.

Distinctiveness and specificity of the examples:

The examples the group provided were more related to their project, which was very good. This allowed me to understand the topic better, as there were examples I could relate to.

Referencing

Referencing was provided.

The conceptual context:

It was interesting to see how two groups perceived the same information and interpreted it in different ways. The group seems to be on a very good track with their project, but it seems that some members may be pulling more of the weight than others.

The still images:

Unfortunately they did not provide the 7 images asked for, and I felt there could have been better images used to support their argument. I feel the first group's images enhanced their presentation more than this group's did.

Saturday, May 4, 2013

My Individual Milestone Submission


My contribution to Kinecting the Blocks:

  • Helped manage the team
  • Helped create a schedule to keep our team on track
  • Researched different types of existing mechanical storage systems
  • Helped design our storage system so it will work
  • Created a draft drawing of our storage system to hand over to the visualisers to mock up a model
  • Helped edit the wiki page
  • Researched facial recognition
  • Helped set up the Kinect for Crysis
  • Experimented with the Kinect and a webcam for programming purposes

My Focus

For my individual milestone I focused on facial recognition. I researched the difference between face detection and face recognition, the different types of facial recognition, and which would be the best for our project. Facial recognition is an integral part of our project as it provides us with user recognition.


My Research

You can find my research into facial recognition in a separate post called My Individual Milestone Research, or follow this link: http://januarch1392.blogspot.com.au/2013/04/my-individual-milestone-research.html

My Final Product

I have been focusing on facial recognition using PCA (Principal Component Analysis) with eigenfaces. To begin with, my goal was being able to distinguish between a male and a female. After a lot of research I was able to get a working example of the facial recognition up and running that went beyond that goal and recognised almost everyone as individuals.

Problems Encountered

Currently I am facing two problems. The first is that the program is having trouble differentiating between my mother and me (Figure 1), as we look similar. Since our facial features are similar, if I'm facing the camera at a slight angle, or moving around slightly, the program has trouble recognising correctly who it is. To fix this problem I will be putting in a counter, so that when it reaches 5 consecutive identical identifications it will identify that person and move on to the next stage; a sketch of the idea is below.
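
A minimal sketch of that counter (the names and the callback are placeholders, not my actual code):

 // Only accept an identity after 5 consecutive identical identifications.
 int g_lastId = -1;
 int g_consecutive = 0;

 void OnFrameRecognised(int personId) // called once per recognised frame
 {
  if (personId == g_lastId)
   g_consecutive++;
  else {
   g_lastId = personId;
   g_consecutive = 1;
  }

  if (g_consecutive >= 5) {
   // Confident match: identify this person and move on to the next stage.
   g_consecutive = 0;
  }
 }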

 
Figure 1 – Unable to distinguish between my mother and me

If I'm facing the camera front-on (Figure 2), where it's able to pick up every feature of my face, I get a better result.

Figure 2 – Facing camera front on

The second problem I have been facing is that differences in lighting make a big difference to how fast and how accurately the program identifies a person. This is a risk when using web cameras, as they tend to depend on natural lighting, which is not always uniform. As you can see below in Video 1, the video was taken outside in natural light and I have a shadow across my face. Because of the shadow the lighting is not uniform, so when I rotate my head the recognition drops out and the program is not able to recognise my face at a slight angle; I must be looking straight at the camera. In Video 2, the video was taken in daylight in a room where the light was more uniform. You can see the difference between the first and second videos on the right-hand side, as the program is able to identify the face a lot faster. Video 3 was taken at night with the light on. There you can see the difference from Video 2: in Video 2 I am able to rotate my head and the program still picks up my face, whereas if I rotate my head in Video 3 it either picks up my mum or drops out.

Video 1 – Natural light - Exterior


Video 2 – Natural Light - Interior - Bright

Video 3 – Natural Light - Interior - Dark

To fix this problem I may need to switch from the web camera to the Kinect. I will need to test the output from both and see which would be the better option for the final project.


Reflection

After numerous weeks of working on this section of the project, I feel I could have accomplished it earlier and started working on the next section. It feels great to have this finished and out of the way, and I'm very happy with the outcome. More than anything, I think it was the amount of research needed to figure out which method would be the best to move forward with in our project.

References

To learn more about EmguCV I first went through the tutorials on the page below:
http://fewtutorials.bravesites.com/tutorials

After that I found another site that provided a more enhanced, more accurate facial recognition. I went through its facial recognition tutorials to enhance my code.
http://www.codeproject.com/Articles/261550/EMGU-Multiple-Face-Recognition-using-PCA-and-Paral