Monday, March 5, 2007

Bad Results

We finished implementing our behavior descriptor based solely on displacement lines. We tested it with cross-validation, holding out one set for testing and training on the others. Unfortunately, the descriptor hasn't performed as expected: most of our sleeping dataset is mislabeled as exploring. We suspect a bug in our code, but we haven't found exactly where it is yet. Intuitively, sleeping clips shouldn't produce cuboids: there's no movement taking place during those clips, so the response function doesn't pick up anything. Since there's no movement, the displacement graph for that type of clip is essentially all zeros. Conversely, there's no reason an exploring clip would have an all-zero displacement graph.

Meanwhile, we've also been looking at different metrics to use besides Euclidean distance; we're currently considering chi-squared and Mahalanobis in addition to Euclidean.
We're inclined towards the Mahalanobis distance because it takes the covariance among the variables into account when calculating distances, which corrects for the scale and correlation problems of Euclidean distance. With Euclidean distance, the set of points equidistant from a given location is a sphere; the Mahalanobis distance stretches this sphere to correct for the respective scales of the different variables and to account for correlation among them.



from http://matlabdatamining.blogspot.com/2006/11/mahalanobis-distance.html
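For reference, here's a minimal NumPy sketch of the three candidate metrics (Python rather than the Matlab we're actually using, and illustrative toy data rather than our descriptors):

```python
import numpy as np

def euclidean(x, y):
    return np.sqrt(np.sum((x - y) ** 2))

def chi_squared(x, y, eps=1e-10):
    # intended for nonnegative, histogram-like descriptors
    return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))

def mahalanobis(x, y, cov_inv):
    # cov_inv: inverse covariance matrix estimated from training data
    d = x - y
    return np.sqrt(d @ cov_inv @ d)

# Toy data: two correlated features on very different scales
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2)) @ np.array([[5.0, 0.0], [2.0, 0.5]])
cov_inv = np.linalg.inv(np.cov(data, rowvar=False))

x, y = data[0], data[1]
print("euclidean:", euclidean(x, y), "mahalanobis:", mahalanobis(x, y, cov_inv))
```

The Mahalanobis version reduces to Euclidean when the covariance is the identity, which is why it can be seen as the "stretched sphere" correction described above.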

We'll be testing other distance metrics once we've fixed this bug.


Update: we're currently implementing k-NN to do line comparisons; hopefully this will solve the problem.

Wednesday, February 21, 2007

Detecting Movement cont.

We tried using blob detection to approximate the movement of the mouse. We took the centroid of the blobs to represent the mouse's center of mass, so that the change in the centroid's position would approximate the change in the mouse's position. However, using blobs this way turned out to be chaotic: we detected more noise than actual mouse movement.

To resolve this issue, we tried using the cuboids instead. We computed a binary image of the cuboids that appear and then calculated a centroid from that image. The centroid serves the same purpose as before: approximating the mouse's center of mass. Here are the unfiltered graphs of the mouse movement:

Unfiltered x and y displacement for exploring

Unfiltered x and y displacement for grooming
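The centroid-and-displacement step above amounts to very little code. Here's a minimal NumPy sketch (again Python rather than our Matlab, with a hypothetical list of binary masks standing in for the real cuboid images):

```python
import numpy as np

def centroid(mask):
    # mask: binary image where nonzero pixels mark detected cuboids
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # no response at all, e.g. a sleeping clip
    return xs.mean(), ys.mean()

def displacements(masks):
    # frame-to-frame change of the centroid in x and y
    cents = [centroid(m) for m in masks]
    return [(b[0] - a[0], b[1] - a[1])
            for a, b in zip(cents, cents[1:]) if a and b]
```

Note the `None` case: frames with no cuboid response contribute nothing, which is consistent with the all-zero displacement graphs we see for sleeping clips.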

We noted a distinctive pattern between exploring and grooming: the x and y displacement while the mouse is exploring is greater than while it is grooming, and the displacement during grooming tends to stay around zero. This is intuitive, because the mouse does not move around as much when it is grooming as when it is exploring. We want to add this result as a feature to Piotr's cuboids, but we need to filter out more noise first. We used median and average filtering to obtain the following results:

Filtered x displacement for exploring

Filtered y displacement for exploring

Filtered x displacement for grooming

Filtered y displacement for grooming

Next steps:
- Create displacement graphs for the entire training set
- Scale the graphs to 100 frames for consistency
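Both the filtering and the 100-frame rescaling are simple 1-D signal operations. A NumPy sketch of what we have in mind (illustrative Python, not our actual Matlab; kernel size and interpolation scheme are our choices, not fixed by the data):

```python
import numpy as np

def median_filter(x, kernel=5):
    # simple 1-D median filter with edge padding to suppress spikes
    pad = kernel // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + kernel]) for i in range(len(x))])

def resample(x, n_out=100):
    # linearly interpolate a displacement curve onto a fixed 100-frame grid
    t_old = np.linspace(0.0, 1.0, len(x))
    t_new = np.linspace(0.0, 1.0, n_out)
    return np.interp(t_new, t_old, x)
```

The median filter knocks out isolated noise spikes without smearing real motion, and resampling to a common length lets us compare clips of different durations directly.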

Monday, February 12, 2007

Detecting movement

After last week's class, we implemented a way of detecting movement suggested by Serge: by subtracting the average background from each frame and binarizing the result, we get a representation of movement.
We still need to tune the threshold and clean up the blobs a bit, but it looks promising.
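The background-subtraction step can be sketched in a few lines (a NumPy toy version rather than our Matlab; the threshold value here is arbitrary and is exactly the knob we still need to tune):

```python
import numpy as np

def movement_masks(frames, threshold=100):
    # frames: (T, H, W) grayscale stack
    # background: per-pixel mean over time; moving pixels differ from it
    frames = np.asarray(frames, dtype=float)
    background = frames.mean(axis=0)
    return np.abs(frames - background) > threshold
```

With a mostly static scene, pixels the mouse passes through deviate strongly from the temporal mean, so thresholding the absolute difference yields the blobs.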

Here is the clip of blobs obtained from explore001 from set00:


Here are the blobs from groom001 from set00:


Current work:
1. Clean up our binary images, adjust the threshold to keep the number of blobs small.
2. Use each blob's centroid to create a displacement graph with respect to time in X and Y directions.
3. Create displacement prototypes based on these graphs by clustering them together.
4. Add this information to our behavior descriptor.
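Step 3 above is essentially k-means on fixed-length displacement curves. A minimal Python sketch under that assumption (our actual clustering code may differ; the toy curves below just stand in for real clips):

```python
import numpy as np

def cluster_prototypes(curves, k=2, iters=20, seed=0):
    # plain k-means: prototypes are the cluster means of the curves
    curves = np.asarray(curves, dtype=float)
    rng = np.random.default_rng(seed)
    centers = curves[rng.choice(len(curves), size=k, replace=False)].copy()
    labels = np.zeros(len(curves), dtype=int)
    for _ in range(iters):
        # distance of every curve to every current prototype
        dists = np.linalg.norm(curves[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = curves[labels == j].mean(axis=0)
    return centers, labels
```

Each resulting prototype is an "average" displacement graph; a new clip can then be described by which prototype its graph is closest to.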

Monday, February 5, 2007

Most commonly mislabeled behavior

We solved the memory problem by saving and clearing the workspace after work on a specific clip is done; unfortunately, this makes the code run a bit slower (5+ hrs). For now, we'll only be using a subset of the whole dataset: set00–set03. One of the behaviors most commonly mislabeled by the cuboids code is grooming, which is most often labeled as exploring. Grooming is characterized by the movement of the mouse's paws across its face, or of its face across its legs, while the mouse stays in the same place. Drinking is also commonly mislabeled as exploring. We believe the main difference between these behaviors is whether the mouse moves from one place to another. By keeping track of where cuboids are detected and incorporating that data into the behavior descriptor, we believe we can improve the accuracy for these behaviors.

The following videos show the cuboids as they're detected by the response function.
(still waiting for processing by google video)

Here we have a sample clip for grooming where the mouse stays in the same spot and mostly just moves his paws and face:


A grooming clip where the mouse moves around a bit:



A sample clip for drinking:


A clip for exploring:

Monday, January 29, 2007

Current work

Now that we've gotten familiar with Piotr's code, our next step is to run it on the entire smart vivarium dataset. We'll look at the results to find which video clips were labeled incorrectly and figure out how positional information can help prevent those errors.

cuboids!

We've been going over Piotr's matlab code for cuboids and feel comfortable using it now. We first ran the recognition demo on the face dataset; afterwards, we modified the demo to run on the mouse behavior dataset. Due to memory constraints, we couldn't finish running it, but we'll fix this by tonight.

Here's a sample clip from the smart vivarium dataset, drink02.avi from set00:



Here are the cuboids obtained from that video clip set to loop 10 times, each cuboid lasts approximately 1 second:


Here we have a sample clip of the cuboids clustered together by prototypes from the smart vivarium dataset:




copyright info:
This database is Copyright © 2005 The Regents of the University of California. All Rights Reserved. Permission to use, copy, modify, and distribute this database and its documentation for educational, research and non-profit purposes, without fee, and without a written agreement is hereby granted, provided that the above copyright notice, this paragraph and the following three paragraphs appear in all copies. Permission to incorporate this database into commercial products may be obtained by contacting:

Technology Transfer Office
9500 Gilman Drive,
Mail Code 0910
University of California La Jolla,
CA 92093-0910
(858) 534-5815
invent@ucsd.edu

This database and documentation are copyrighted by The Regents of the University of California. The database and documentation are supplied "as is", without any accompanying services from The Regents.

IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS DATABASE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE DATABASE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

Wednesday, January 24, 2007

Cuboids Code

Piotr gave us access to his cuboids package. We will start learning how the code works and will show the cuboids on Monday.