This last experiment is about image processing using the frames of a video-captured scene. A video is simply a sequence of still images captured at a certain frequency. That frequency is still governed by the Nyquist criterion if the motion is to appear smooth and unaliased to our eyes, and it is more commonly known as the frame rate, or frames per second (fps), in digital electronics.
With that in mind, this activity taps into the capability of video to capture time-dependent phenomena. Our experiment is a basic one: record a colored ball as it falls to the floor and, using video analysis, experimentally determine g. To do this, we first have to extract the individual still frames of the video. We used Stoik Video Converter 2.0 to adjust the raw video (i.e., decrease the frame size, standardize the fps, and remove the audio) before chopping it into still frames with VirtualDub 1.6.19.
Our selected time frame consisted of 14 still images.
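Purely as an illustration, the same frame-splitting step can also be done programmatically; the short OpenCV sketch below is an alternative to the GUI tools above (OpenCV was not used in the original work), and the input file name is hypothetical.

```python
import cv2  # OpenCV: an alternative, scriptable way to split a video into frames

cap = cv2.VideoCapture("ball_drop.avi")   # hypothetical input file
frame_index = 0
while True:
    ok, frame = cap.read()                # grab the next frame, if any
    if not ok:
        break                             # end of the video
    cv2.imwrite(f"frame_{frame_index:02d}.png", frame)
    frame_index += 1
cap.release()
```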
Figure 1. Sample of a still frame from our video
Fig. 1 shows a sample frame. We used an orange ball against a white wall to make the color segmentation easier later on. The drop height was ~1 m.
Finally, these 14 images were processed in Scilab 4.1.2 using the non-parametric segmentation method from Activity 14 - Color Segmentation. The colored ball is then readily identifiable, and by averaging the coordinates of the segmented pixels we locate the approximate center of the ball (its centroid).
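The actual processing was done in Scilab, but the idea of the step is easy to outline. Below is a minimal NumPy/PIL sketch of a histogram-backprojection style of non-parametric segmentation in normalized r-g chromaticity space, followed by the centroid computation; the file names, bin count, and threshold are illustrative assumptions, not the actual values used.

```python
import numpy as np
from PIL import Image

NBINS = 32  # illustrative bin count for the r-g chromaticity histogram


def rg_chromaticity(img):
    """Return the normalized r and g chromaticity coordinates of an RGB array."""
    rgb = img.astype(float)
    s = rgb.sum(axis=2)
    s[s == 0] = 1.0  # avoid division by zero on pure-black pixels
    return rgb[:, :, 0] / s, rgb[:, :, 1] / s


# 1) Learn the color distribution from a small patch cropped on the ball
patch = np.array(Image.open("ball_patch.png").convert("RGB"))  # hypothetical file
rp, gp = rg_chromaticity(patch)
hist, _, _ = np.histogram2d(rp.ravel(), gp.ravel(),
                            bins=NBINS, range=[[0, 1], [0, 1]], density=True)

# 2) Backproject the histogram onto a frame and take the centroid of the mask
frame = np.array(Image.open("frame_01.png").convert("RGB"))    # hypothetical file
r, g = rg_chromaticity(frame)
ri = np.clip((r * NBINS).astype(int), 0, NBINS - 1)
gi = np.clip((g * NBINS).astype(int), 0, NBINS - 1)
likelihood = hist[ri, gi]                   # how "ball-colored" each pixel is
mask = likelihood > 0.5 * likelihood.max()  # illustrative threshold
ys, xs = np.nonzero(mask)
xc, yc = xs.mean(), ys.mean()               # centroid of the ball, in pixels
print(f"centroid at ({xc:.1f}, {yc:.1f}) px")
```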
Figure 2. Centroid locations extracted from the 14 images
Plotting these in a single image gives us Fig. 2. The distance between consecutive points increases with time because of the gravitational acceleration. By converting the coordinates (specifically the y-coordinate) to actual lengths through the method we previously used in Activity 1, we can now plot the height vs. time curve of our experiment.
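For concreteness, a minimal sketch of that conversion (again in NumPy rather than the Scilab actually used) is shown below; the pixels-per-meter calibration and the centroid rows are hypothetical placeholders for the measured values.

```python
import numpy as np

FPS = 30.0                               # frame rate of the video
t = np.arange(14) / FPS                  # time stamp of each of the 14 frames

# Hypothetical Activity-1-style calibration: a known 1.00 m length in the
# scene spans 480 pixel rows (placeholder numbers)
pixels_per_meter = 480.0 / 1.00

# Placeholder centroid rows from the segmentation step (image rows grow downward)
yc_pixels = np.linspace(100.0, 580.0, 14)

# Height above the lowest recorded position, in meters
height_m = (yc_pixels.max() - yc_pixels) / pixels_per_meter
```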
Figure 3. Height vs. Time plot of the experiment
Fig. 3 shows the quadratic trend of our data. The time interval between frames follows directly from the video's frame rate of 30 fps, which gives a gap of 1/30 ≈ 0.033 s per frame. The trend line's quadratic coefficient, 5.767, corresponds to half of our experimental g, by virtue of the simple kinematic equation:
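s = v₀t + (1/2)at², with a = −g for free fall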
Since we apply no initial velocity to the ball, only the second term survives, relating the distance (s) and time (t) to the acceleration (a) of the system. The negative sign indicates the downward motion. The expected coefficient is therefore about 4.903 (half of g). Compared with our experimental value of 5.767, this corresponds to a 17% experimental error.
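To make the fitting step explicit, the short self-contained sketch below recovers g from a quadratic fit on synthetic free-fall data (not our measured points); np.polyfit stands in for the fitting routine actually used, and its leading coefficient plays the role of the 5.767 above.

```python
import numpy as np

FPS = 30.0
t = np.arange(14) / FPS                  # 14 frames, 1/30 s apart

# Synthetic check: ideal free fall from 1 m with g = 9.81 m/s^2
g_true = 9.81
height_m = 1.0 - 0.5 * g_true * t**2

# For a drop from rest, h(t) = h0 - (1/2) g t^2, so g is minus twice the
# quadratic coefficient of the fit
coeffs = np.polyfit(t, height_m, 2)      # [a2, a1, a0] for h = a2*t^2 + a1*t + a0
g_fit = -2.0 * coeffs[0]
print(f"recovered g = {g_fit:.2f} m/s^2")   # ~9.81 on this synthetic data
```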
In summary, we have successfully used a video to extract still images and perform color segmentation on them. This enabled us to obtain the coordinates of our colored ball and to measure g experimentally by converting these coordinates to actual lengths and using the frame rate of the video as the time step. The large error can be attributed to parallax in the recording; the small drop distance (~1 m) also limits the resolution of this experiment.
Self-Assessment: 10/10