Uncertainty and Video Analysis

This is for commenter JimP. How do you take into account uncertainty when using video analysis? A great question. The first thing to think about is where the uncertainty comes from. My first guess would be that it comes from the user. Where does the user click? Is it right on the object in each frame? Is the scale set correctly? I guess there could be other sources of error - maybe there are repeated frames left over from the encoding. Maybe there are interlaced video frames.

Well, what to do? I will just look at one motion in particular and do the analysis several times. I am going to analyze the following video:

Horizontal Projectile Motion from Rhett Allain on Vimeo.

Note that you can download the video from Vimeo (in the lower right of the video page). By repeatedly analyzing this video, I get an idea of the uncertainty due to the human input. Actually, I did this analysis 5 times before I realized a mistake I was making. So, let me show you the data I have from that. This is the velocity in the horizontal direction and the acceleration in the vertical direction that I obtained from fitting functions to the horizontal and vertical position data.
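If you want to see how the fits themselves work, here is a minimal sketch in Python. The position numbers below are made up for illustration (a ball moving at 2 m/s horizontally and in free fall, sampled at 30 frames per second) - the real data comes out of the video analysis program.

```python
import numpy as np

# Hypothetical position data (meters) at 30 frames per second.
# These numbers are invented for illustration, not from the video.
t = np.arange(6) / 30.0
x = np.array([0.0000, 0.0667, 0.1333, 0.2000, 0.2667, 0.3333])
y = np.array([1.0000, 0.9946, 0.9782, 0.9510, 0.9129, 0.8639])

# Horizontal motion: fit a line x(t) = v_x * t + x_0
vx, x0 = np.polyfit(t, x, 1)

# Vertical motion: fit a parabola y(t) = (a/2) t^2 + v_y0 t + y_0
coeffs = np.polyfit(t, y, 2)
a_y = 2 * coeffs[0]  # quadratic coefficient is a/2

print(f"horizontal velocity = {vx:.3f} m/s")
print(f"vertical acceleration = {a_y:.3f} m/s^2")
```

The slope of the linear fit is the horizontal velocity, and twice the quadratic coefficient of the parabolic fit is the vertical acceleration - the same quantities reported in the table below.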

The "RMS" is the root-mean-square deviation of the data from the fitted function; it is reported by Tracker, the video analysis program. If I use the standard error as the uncertainty, I get the following for the horizontal velocity and vertical acceleration:
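Computing a standard error from repeated runs is simple enough to sketch. The acceleration values below are made up for illustration; the actual five results are in my table above.

```python
import numpy as np

# Hypothetical fitted vertical accelerations (m/s^2) from five
# repeated analyses of the same video - invented values, not mine.
a_runs = np.array([-9.2, -9.6, -8.9, -9.4, -9.1])

mean_a = a_runs.mean()
# standard error = sample standard deviation / sqrt(N)
sem = a_runs.std(ddof=1) / np.sqrt(len(a_runs))

print(f"a = {mean_a:.2f} +/- {sem:.2f} m/s^2")
```

The nice part is that this treats the whole analysis - clicking, scaling, fitting - as one repeated measurement, so all the error sources get lumped into one number.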

[Equation image: horizontal velocity and vertical acceleration, each with its standard-error uncertainty]

So, I am not getting the expected value for the vertical acceleration. Maybe this is because in this shot, there is a small number of data points. Or it could possibly be due to some parallax or scaling issues. Either way, I still like this method of finding the uncertainty in these values. What is causing the uncertainty? Is it clicking error, or scaling error? It doesn't actually matter. This just gives me the uncertainty.

But what if I want to look at the uncertainty due to clicking on data points? For this next set, I did not scale the video (because that would add another layer of error). Here are the pixel values of the ball for the 8 times I did it. I have included all the data in case you want to play with it.

And here is the trajectory of the ball (in pixels vs. pixels) with error bars.

[Figure: trajectory of the ball (pixels vs. pixels) with error bars]

So, what can I say now? First, it seems like the uncertainty from clicking is less than 1% of the value. I don't think clicking uncertainty is going to be the problem. More likely, the uncertainty will come from other things - like where is the object? This is especially true for extended objects - like people.
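Here is the kind of calculation behind that "less than 1%" claim, with made-up pixel values standing in for my table of 8 trials. The spread in repeated clicks on the same frame, divided by the position itself, gives the relative clicking uncertainty.

```python
import numpy as np

# Hypothetical pixel x-positions of the ball in ONE frame, clicked
# 8 separate times. Invented numbers - the real table is above.
x_clicks = np.array([412.0, 413.5, 411.8, 412.6, 413.1, 412.2, 412.9, 412.4])

mean_x = x_clicks.mean()
spread = x_clicks.std(ddof=1)  # spread of a single click, in pixels

print(f"mean = {mean_x:.1f} px, std = {spread:.2f} px")
print(f"relative uncertainty = {100 * spread / mean_x:.2f}%")
```

With a spread of a pixel or so on a position of several hundred pixels, the relative uncertainty from clicking comes out well under 1%.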

If a student were doing a lab report with video analysis, probably the best thing to do would be to repeat the video analysis a few times, find the standard error, and use that as the uncertainty.
