algorithm in OpenCV. As described in [16], the focal length in [15] is defined by the multiplication of the pixels per inch of the image and the actual focal length of the camera. Its value can also be determined by the camera calibration process in OpenCV. When a calibration rig is not available, we have to apply the cvFindFundamentalMat function in OpenCV to obtain the fundamental matrix of the camera pair, from which the intrinsic matrix of the camera, and finally the focal length, can be obtained. However, the cvFindFundamentalMat function assumes that the camera used in the two views has the same focal length and other parameters, i.e. that it is the same camera. This is not a property that holds in this project. Note further that the image points are hard to match between the two views. Considering these factors, this method is not used in this paper.

An alternative heuristic method to obtain the focal length can be created using the Posit algorithm, by observing that the focal length has a direct impact on the value of Tz among the head distance values introduced in Section 3.2. For example, the relationship between the input focal length and the result of the head pose using the sample person can be shown in the following chart (the rows continue below, where the caption is given):

Focal length (px) | Error | Tz
100 | 0.045 | 128.191
200 | 0.001 | 238.288
300 | 0.010 | 342.948
400 | 0.015 | 446.083
500 | 0.017 | 548.65
600 | 0.019 | 650.958
700 | 0.021 | 753.126
Figure 25: The process of the selection of the image point using the right ear model
Figure 26: Automatic image point searching option in MotionTracker with the usage of the template method
Figure 27: The process of editing image points in MotionTracker
Figure 28: Example of output of head pose values in MotionTracker
Figure 29: Representation of roll and yaw values in 2D Cartesian space in MotionTracker
Figure 30: Representation of pitch values in 2D Cartesian space in MotionTracker
Figure 31: Example output of head distance values in MotionTracker
Figure 32: Interpolated head velocity along three axes in video 1
Figure 33: Rotation of the yaw value from 45 degrees to 345 degrees
Figure 34: Front view of the sample person, with the image points that are used to construct the head model
Figure 35: Side view of the sample person, with the image points that are used to construct the head model

Tables

Table 1: Listing of boxing match movies to be analyzed
Table 2: Functionality implemented in MotionTracker
Table 3: Model Point Dictionary
Table 4: Image Point Dictionary
Table 5: Rotation and translation from OCS to CCS
Table 6: Assigned model
The NSMutableString class in the Cocoa library enables the concatenation of formatted strings onto the Logger console.

1.5 Structure of thesis

- Chapter 1 introduces the basic information of the thesis.
- Chapter 2 covers the theory and method of image preprocessing of the research material.
- Chapter 3 covers the theory and method of kinematic analysis of the research material.
- Chapter 4 explains how the kinematic information can be obtained using the software developed.
- Chapter 5 illustrates the accuracy of the motion analysis performed and makes a comparison to related studies.
- Chapter 6 concludes the work undertaken and reveals the delimitations found in the process of the study.

2 VirtualDub video capture and OpenCV image preprocessing

2.1 Objective

Q: What are the definitions of the objects that we are going to analyse?

To understand the research problem, a clear definition of the objects we are going to analyse is essential. A movie, or footage, or video, is a file in the computer system which contains a set of sound tracks and video tracks. The video tracks of a movie can be exported to a set of images which represent the video content of the movie. The frames per second, or FPS, is a crucial variable for time measurement; it is defined as the number of images that can be extracted from a video track of the movie in one second. The image, from the perspective of the computer, is a file which co
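As a worked note on the FPS definition above (my own illustration, not taken from the original text), the timestamp of the n-th image extracted from a video track follows directly from the frame rate:

    t_n = n / FPS

For instance, the 30th frame of a 29.97 FPS movie corresponds to t = 30 / 29.97, which is approximately 1.001 s. This is the conversion relied on later when per-frame pose differences are turned into velocities.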
800 | 0.022 | 855.211
900 | 0.023 | 957.245
1000 | 0.024 | 1059.243
1100 | 0.024 | 1161.217
1200 | 0.025 | 1263.173
1300 | 0.025 | 1365.116
1500 | | 1568.973
1600 | | 1670.891
1700 | | 1772.804
1800 | | 1874.713
1900 | | 1976.618
2000 | | 2078.520
2100 | | 2180.420
2200 | | 2282.317
2300 | | 2384.213
2400 | | 2486.107

Table 17: Error of Tz with different input of focal length

We found that the value of Tz in the output grows approximately linearly with the increasing focal length. This is not beyond our expectation, because the focal length reflects the distance between the object and the camera: when the input focal length is changed, the estimated distance changes accordingly.

It is also interesting to find that the output rotation angles and the Tx, Ty distances demonstrate unstable values when the input focal length is smaller than 200 pixels. This result reflects the fact that the algorithm requires a focal length which is large enough that the internal depth of the object is small compared to the distance between the camera and the object.

Method

Given the fact that the value of Tz in the output grows almost linearly with the increasing focal length, an estimation can be conducted when the real distance between the object and the camera is already known. Assume the distance from the origin of the OCS to the origin of the CCS has a real depth value of S, in metres. Assume we have a sample person image for which the model points have been assigned, and the sample person has a tilt, pitch, roll value w
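A minimal sketch of this estimation procedure in C, using the OpenCV 1.x API that the project is built on. This is an illustration under stated assumptions: the known depth S is assumed to be converted into the units of the model points (millimetres here), the candidate range and step are my own choices, and error handling is omitted.

    #include <math.h>
    #include <cv.h>

    /* Scan candidate focal lengths and keep the one whose Posit Tz output
     * best matches the known camera-to-object depth (in model units). */
    static double estimate_focal_length(CvPoint3D32f *model_pts, int n_pts,
                                        CvPoint2D32f *image_pts,
                                        double known_depth_mm)
    {
        CvPOSITObject *posit = cvCreatePOSITObject(model_pts, n_pts);
        CvTermCriteria crit = cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER,
                                             100, 1.0e-5);
        float rot[9], trans[3];
        double best_f = 0.0, best_err = 1.0e30;

        /* 200 px lower bound: below that the pose output is unstable (Table 17) */
        for (double f = 200.0; f <= 3000.0; f += 10.0) {
            cvPOSIT(posit, image_pts, f, crit, rot, trans);
            double err = fabs(trans[2] - known_depth_mm);  /* |Tz - S| */
            if (err < best_err) { best_err = err; best_f = f; }
        }
        cvReleasePOSITObject(&posit);
        return best_f;
    }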
UPPSALA UNIVERSITET
IT 12 046
Examensarbete 30 hp (degree project, 30 credits)
September 2012

Video analysis of head kinematics in boxing matches using OpenCV library under Macintosh platform
How can the Posit algorithm be used in head kinematic analysis?

Liyi Zhao

Institutionen för informationsteknologi
Department of Information Technology

Teknisk-naturvetenskaplig fakultet, UTH-enheten (Faculty of Science and Technology)
Visiting address: Ångströmlaboratoriet, Lägerhyddsvägen 1, Hus 4, Plan 0
Postal address: Box 536, 751 21 Uppsala
Phone: 018 471 30 03
Fax: 018 471 30 00
Web page: http://www.teknat.uu.se/student

Abstract

Video analysis of head kinematics in boxing matches using OpenCV library under Macintosh platform

Liyi Zhao

The division of Neuronic Engineering at KTH focuses its research on head and neck biomechanics. Finite Element (FE) models of the human neck and head have been developed to study the neck and head kinematics, as well as injurious loadings of various kinds. The overall objective is to improve injury prediction through accident reconstruction. This project aims at providing an image analysis tool which helps analyzers build models of the head motion, making good estimates of head movements, rotation speed and velocity during a head collision. The applicability of this tool is a predefined set of boxing match videos. The methodology, however, can be extended to the analysis of different kinds of moving hea
newdatas = spline(t, datas, newt);

Finally, the graph of the splined function and the interpolated data is saved into the file system. The spline new interval, as assigned in the third parameter of the spline function, is 100 in this project. After the data has been interpolated, the roll, pitch and yaw angular velocities can be depicted as in the following picture for the video instance.

[Figure 32: Interpolated head velocity along three axes in video 1; panels Head Rotation X, Head Rotation Y and Head Rotation Z, with velocity in rad/s plotted against time in s.]

Using the interpolated data, the peak velocity during the impact can be described as:

Video ID | Peak roll velocity | Peak pitch velocity | Peak yaw velocity
? | 13.7835 | 38.5748 | 52.3415
? | 11.3362 | 29.2913 | 20.1618
? | 23.2900 | 17.6000 | 20.9400

Table 14: Peak values during impact of the analyzed videos, in radians per second (the video ID column is lost in this copy)

The peak values are obtained by observing the output matrix of the spline function. The absolute values of all the numbers in the matrix are considered, and the original value of the maximal one of them is shown in the table above. Research points out that the output that is calculated using the spline function with the
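The peak-reading step described above, taking the signed value of the sample with the largest magnitude, is simple enough to sketch in C. This is an illustration of the described procedure, not code from the thesis, which performs the equivalent step on the Matlab spline output:

    #include <math.h>

    /* Return the signed value of the sample with the largest magnitude,
     * which is how the peak velocities of Table 14 are read from the
     * interpolated (splined) velocity data. */
    static float signed_peak(const float *v, int n)
    {
        float peak = 0.0f;
        for (int i = 0; i < n; ++i)
            if (fabsf(v[i]) > fabsf(peak))
                peak = v[i];
        return peak;
    }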
(continuation of the preceding velocity table; row alignment lost in this copy: 16.6, 36.9, 6.7, 0.0, 3.7, 1.3, 6.9, 7.8, 3.2, 4.3, 3.4, 11.4)

Video 2 (first-second frame; roll, pitch, yaw velocities in rad/s; translational velocity in m/s)
0-1 | 0.6 | 2.4 | 5.9 | 9.7
1-2 | 5.5 | 0.7 | 3.7 | 23.1
2-3 | 0.1 | 3.2 | 3.1 | 3.6
3-4 | 19.2 | 18.0 | 7.6 | 4.9
4-5 | 5.7 | 1.5 | 0.3 | 19.6
5-6 | 4.7 | 12.7 | 6.3 | 8.9
6-7 | 4.8 | 1.6 | 3.3 | 3.7

Video 3 (first-second frame; roll, pitch, yaw velocities in rad/s; translational velocity in m/s)
0-1 | 0.0 | 0.0 | 0.0 | 0.0
1-2 | 0.0 | 0.0 | 0.0 | 0.0
2-3 | 1.3 | 3.7 | 3.3 | 3.5
3-4 | 2.7 | 17.4 | 19.5 | 19.2
4-5 | 13.4 | 5.2 | 10.1 | 17.4
5-6 | 6.0 | 3.5 | 8.1 | 7.3

Video 4 (first-second frame; roll, pitch, yaw velocities in rad/s; translational velocity in m/s)
0-1 | 6.4 | 1.3 | 2.3 | 12.2
1-2 | 4.1 | 0.3 | 2.7 | 8.5
2-3 | 0.0 | 0.0 | 0.0 | 0.0
3-4 | 7.6 | 16.6 | 15.6 | 4.5
4-5 | 16.4 | 3.4 | 1.4 | 6.5
5-6 | 11.4 | 0.5 | 7.1 | 3.9
6-7 | 16.5 | 2.9 | 13.3 | 2.7

Video 5 (first-second frame; roll, pitch, yaw velocities in rad/s; translational velocity in m/s)
0-1 | 3.4 | 1.5 | 2.1 | 5.2
1-2 | 0.5 | 2.0 | 1.9 | 1.5
2-3 | 10.0 | 16.9 | 0.2 | 22.6
3-4 | 6.8 | 2.7 | 2.6 | 7.0
4-5 | 3.8 | 15.9 | 13.8 | 4.6
5-6 | 2.5 | 8.0 | 1.6 | 8.0
6-7 | 0.0 | 0.0 | 0.0 | 0.0

Video 8 (first-second frame; roll, pitch, yaw velocities in rad/s; translational velocity in m/s)
0-1 | 0.1 | 3.4 | 1.0 | 17.6
1-2 | 0.5 | 1.8 | 1.5 | 6.8
2-3 | 0.7 | 0.1 | 0.8 | 0.8
3-4 | 1.7 | 2.7 | 1.1 | 1.2
4-5 | 0.0 | 0.0 | 0.0 | 0.0
5-6 | 0.0 | 0.0 | 0.0 | 0.0
6-7 | 0.0 | 0.0 | 0.0 | 0.0
7-8 | 0.0 | 0.0 | ...
(continuation of a velocity table from a preceding page; same columns)
7-8 | 33.4 | 51.6 | 99.3 | ?
14-15 | 0.0 | 0.0 | 0.0 | 0.0
15-16 | 9.2 | 45.8 | 24.7 | 24.2
16-17 | 0.0 | 0.0 | 0.0 | 0.0
17-18 | 11.6 | 12.2 | 22.6 | 34.0
18-19 | 0.0 | 0.0 | 0.0 | 0.0
19-20 | 9.9 | 25.2 | 16.4 | 22.2
20-21 | 0.0 | 0.0 | 0.0 | 0.0
21-22 | 1.5 | 65.8 | 19.5 | 51.1
22-23 | 0.0 | 0.0 | 0.0 | 0.0
23-24 | 2.6 | 19.0 | 2.7 | 43.6
24-25 | 0.0 | 0.0 | 0.0 | 0.0
25-26 | 107.1 | 11.1 | 106.0 | 13.8
26-27 | 0.0 | 0.0 | 0.0 | 0.0

Video 20 (first-second frame; roll, pitch, yaw velocities in rad/s; translational velocity in m/s)
3-4 | 0.1 | 2.5 | 0.4 | 1.6
4-5 | 6.5 | 8.0 | 25.6 | 40.2
5-6 | 0.5 | 62.6 | 1.7 | 85.3
6-7 | 12.5 | 18.2 | 20.3 | 3.2
7-8 | 25.9 | 36.4 | 6.8 | 19.5
8-9 | 34.9 | 28.8 | 6.0 | 51.5
9-10 | 63.9 | 15.2 | 46.5 | 13.0
10-11 | 0.0 | 0.0 | 0.0 | 0.0

Video 21 (first-second frame; roll, pitch, yaw velocities in rad/s; translational velocity in m/s)
0-1 | 6.6 | 1.7 | 17.3 | 10.4
1-2 | 4.9 | 8.1 | 5.1 | 10.9
2-3 | 0.0 | 0.0 | 0.0 | 0.0
3-4 | 1.3 | 38.4 | 51.9 | 25.2
4-5 | 5.7 | 1.7 | 6.1 | 3.6
5-6 | 13.7 | 31.0 | 3.2 | 39.7
6-7 | 0.3 | 7.0 | 0.5 | 7.5

Video 23 (first-second frame; roll, pitch, yaw velocities in rad/s; translational velocity in m/s)
8-9 | 0.0 | 7.7 | 0.6 | 0.3
9-10 | 0.4 | 5.4 | 20.0 | 8.8
10-11 | 10.5 | 29.1 | 9.7 | 13.9
11-12 | 9.5 | 7.6 | 10.3 | 2.3
12-13 | 7.3 | 6.3 | 3.3 | 3.9
13-14 | 3.8 | 9.7 | 3.4 | 9.5
14-15 | 8.9 | 20.5 | 7.9 | 15.3
15-16 | 1.3 | 7.6 | 8.2 | 8.9
16-17 | 3.5 | 6.5 | 4.2 | 6.7
17-18 | 4.6 | 2.2 | 9.0 | 13.8
18-19 | 3.1 | 7.8 | 0.0 | 10.3
19-20 | 0.0 | 0.0 | 0.0 | 0.0

Video 24 (first-secon
(continuation of a velocity table from a preceding page; same columns)
... 0.0 | 0.0 | 0.0
8-9 | 10.2 | 19.1 | 2.5 | 5.2
9-10 | 15.4 | 20.2 | 12.7 | 41.4
10-11 | 2.3 | 16.3 | 15.9 | 2.8
11-12 | 2.5 | 5.2 | 7.5 | 31.7
12-13 | 4.4 | 5.6 | 0.1 | 38.9
13-14 | 0.2 | 0.7 | 0.6 | 4.0

Video 9 (first-second frame; roll, pitch, yaw velocities in rad/s; translational velocity in m/s)
0-1 | 0.0 | 0.0 | 0.0 | 0.0
1-2 | 0.0 | 0.0 | 0.0 | 0.0
2-3 | 0.0 | 0.0 | 0.0 | 0.0
3-4 | 0.0 | 0.0 | 0.0 | 0.0
4-5 | 0.0 | 0.0 | 0.0 | 0.0
5-6 | 4.2 | 4.3 | 7.5 | 12.2
6-7 | 2.9 | 14.0 | 11.9 | 18.1
7-8 | 6.9 | 36.0 | 2.8 | 52.8
8-9 | 1.8 | 0.8 | 0.5 | 28.0
9-10 | 11.6 | 2.1 | 5.3 | 1.8
10-11 | 0.0 | 0.0 | 0.0 | 0.0

Video 11 (first-second frame; roll, pitch, yaw velocities in rad/s; translational velocity in m/s)
0-1 | 2.3 | 0.2 | 1.8 | 21.3
1-2 | 7.2 | 5.0 | 2.4 | 14.9
2-3 | 7.8 | 0.1 | 9.4 | 33.3
3-4 | 0.2 | 8.6 | 4.7 | 26.5
4-5 | 9.8 | 1.3 | 1.7 | 25.5
5-6 | 0.0 | 0.0 | 0.0 | 0.0
6-7 | 0.0 | 0.0 | 0.0 | 0.0
7-8 | ? | ? | ? | ?
8-9 | 0.0 | 0.0 | 0.0 | 0.0
9-10 | 16.6 | 0.3 | 6.3 | 21.6
10-11 | 15.9 | 12.5 | 1.6 | 22.8
11-12 | 0.0 | 0.0 | 0.0 | 0.0
12-13 | 0.0 | 0.0 | 0.0 | 0.0
13-14 | 0.0 | 0.0 | 0.0 | 0.0

Video 12 (first-second frame; roll, pitch, yaw velocities in rad/s; translational velocity in m/s)
0-1 | 0.0 | 1.3 | 0.4 | 6.4
1-2 | 2.1 | 0.9 | 6.5 | 17.6
2-3 | 8.3 | 0.3 | 0.5 | 23.8
3-4 | 2.1 | 2.7 | 1.7 | 23.1
4-5 | 0.7 | 0.8 | 0.0 | 17.4
5-6 | 10.8 | 4.5 | 6.9 | 25.5
6-7 | 0.8 | 2.9 | 0.5 | 2.8
7-8 | 0.5 | 2.7 | 0.7 | 26.5
8-9 | 0.0 | 0.0 | 0.0 | 0.0
9-10 | 0.0 | 0.0 | 0.0 | 0.0
10-11 | 32.1 | 12.5 | 16.0 | 87.2
11-12 | 4.0 | 15.1 | 1.4 | 0.9
12-13 | 2.3 | 8.9 | 3.5 | 50.0
13-14 | 1.1 | 7.9 | 0.6 | 11.0
14-15 | 0.0 | 0.0 | 0.0 | 0.0
15-16 | 0.0 | 0.0 | 0.0 | 0.0

Video 13 (first-second frame; roll, pitch, yaw velocities in rad/s; translational velocity in m/s)
0-1 | 0.0 | 0.0 | 0.0 | 0.0
1-2 | 3.4 | 1.4 | 2.5 | 1.9
2-3 | 1.7 | 3.5 | 4.7 | 13.4
3-4 | 3.0 | 2.2 | 5.1 | 6.6
4-5 | 0.0 | 0.0 | 0.0 | 0.0
5-6 | 0.0 | 0.0 | 0.0 | 0.0
6-7 | 15.8 | 36.3 | 27.6 | 13.9
7-8 | 5.3 | 8.5 | 14.3 | 25.1
8-9 | 0.0 | 0.0 | 0.0 | 0.0
9-10 | 0.4 | 10.8 | 14.3 | 26.3
10-11 | 1.7 | 0.8 | 14.3 | 8.1

Video 14 (first-second frame; roll, pitch, yaw velocities in rad/s; translational velocity in m/s)
0-1 | 1.3 | 0.7 | 4.8 | 2.9
1-2 | 4.0 | 1.4 | 4.2 | 2.2
2-3 | 0.0 | 0.0 | 0.0 | 0.0
3-4 | 0.0 | 0.0 | 0.0 | 0.0
4-5 | 37.3 | 11.3 | 1.6 | 13.3
5-6 | 6.5 | 3.5 | 1.8 | 15.4
6-7 | 2.5 | 4.6 | 5.7 | 8.9
7-8 | 11.3 | 17.7 | 0.6 | 2.6
8-9 | 7.9 | 17.5 | 3.1 | 19.5
9-10 | 6.7 | 7.3 | 6.1 | 17.6

Video 16 (first-second frame; roll, pitch, yaw velocities in rad/s; translational velocity in m/s)
0-1 | 0.0 | 0.0 | 0.0 | 0.0
1-2 | 0.0 | 0.0 | 0.0 | 0.0
2-3 | 0.9 | 0.0 | 4.4 | 3.7
3-4 | 139.4 | 8.1 | 142.2 | 34.9
4-5 | 0.0 | 0.0 | 0.0 | 0.0
5-6 | 3.1 | 16.8 | 6.8 | 6.2
6-7 | 9.9 | 20.5 | 25.0 | 24.2
7-8 | 6.8 | 8.9 | 28.8 | 17.7
8-9 | 18.7 | 5.0 | 27.6 | 10.5

Video 18 (first-second frame; roll, pitch, yaw velocities in rad/s; translational velocity in m/s)
10-11 | 0.0 | 0.0 | 0.0 | 0.0
11-12 | 0.0 | 0.0 | 0.0 | 0.0
12-13 | 0.0 | 0.0 | 0.0 | 0.0
13-14 | ...
is created using the method in Appendix II. The focal length is measured using the method in Appendix I, utilizing the created head model. The result of the estimation errors for the head poses is illustrated in the following table.

[Table 16, part 1 of 2: real and estimated head pose values and their errors for the sample person; the numeric body of this part of the table is not recoverable from this copy. The table continues, and its caption is given, further below.]
2 models: one is applicable when the left ear is visible, the other is applicable when the right ear is visible, such as what is shown in Table 9. We refer to them as the right ear model and the left ear model. In this project, the left ear model and the right ear model are used as the input M array of the Posit algorithm.

[Table 9: The left ear and the right ear model used in MotionTracker; each model maps model keys to 3D model points. Most coordinate entries are garbled in this copy; recoverable values include the triples (70.5, 39.3, 33.3), (40.5, 39.3, 33.3) and (75.0, 7.1, 142.8), in millimetres.]

The left ear and the right ear model can also be illustrated in the following Matlab plots. Compared to the fine-grained head model in Figure 11, this model has been simplified significantly.

[Table 10: The left ear model and the right ear model, plotted in Matlab.]

3.4 The persistence of head model

In order to facilitate the process of pose estimation in MotionTracker, the model points created in the previous section should be saved into the file system for future use. This can normally be done in the Xcode property list editor. A property list is an XML structured file used to store structured data. It can store dictionaries, numbers and arrays, which makes it very suitable for the storage of model points. When the property list file is created in Xcode, the model points should be edited i
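As an illustration of such a file, a minimal property list might look as follows. This is a sketch only: the key names and the schema are assumptions made for illustration (the coordinate values echo the recoverable entries of Table 9), not the actual format used by MotionTracker.

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
      "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <!-- one entry per model key; x, y, z in millimetres (OCS) -->
        <key>Nose</key>
        <array><real>0.0</real><real>0.0</real><real>0.0</real></array>
        <key>RightEye</key>
        <array><real>33.3</real><real>40.5</real><real>39.3</real></array>
        <key>RightEar</key>
        <array><real>75.0</real><real>7.1</real><real>142.8</real></array>
    </dict>
    </plist>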
For example, the level of noise should be reduced and the interlaced pattern should be deinterlaced.

Q: Where is the image loaded for analysis? Should there be a platform for the motion analysis?

A tool or platform must be constructed for the motion analysis of the head kinematics. The loading of image sequences and the traversal of image sequences should also be implemented.

Q: Figure 1 demonstrates a set of images describing a concussion in a boxing match. Given this set of images, how can the 3D head motion be captured using a computer vision library?

[Figure 1: Example of boxing match image sequence.]

The process behind the analysis of head kinematic information could include the construction of head models, the representation of the head motions using a set of features in the images, and the setup of these features.

1.3 Previous studies

Enrico described a method of performing head kinematic analysis using the Skillspector software. The process of Enrico's method is based on videos in DVD collections captured from television recordings. In his analysis, two camera angles of the boxing matches are captured and calibrated into the same timeline. In order to undertake the head kinematic analysis, one calibration rig was created for each of the two views, and a body model should also be defined. The process of the analysis includes finding the human body joints in a sequential order for every image in the video
[Figure 13: Interface of the MotionTracker tool, showing controls such as Kernel Size, Kernel Shape, Show Morphological Image, Image Point, Head Breadth (mm), Create Left Ear Model, Create Right Ear Model, Supposed Distance (m), Estimate Posit Focal Length, Frame Per Second, Generate Pose, Focal Length, Show Result and Clear Log.]

Q: What is the functionality of the image analysis tool used in the head kinematic analysis?

The functionality of MotionTracker can be summarized in the following table:

Image Loading: Load a path of images into memory.
Image Traversing: Traverse image sequences using the slider or the mouse wheel.
Drag and Drop: A folder containing a set of images can be dragged directly into the interface for processing.
Previous Image: Select the previous image in the image sequence.
Next Image: Select the next image in the image sequence.
Box Blur: Convolve the image with a variable-sized box kernel; image noise is suppressed.
Gaussian Blur: Convolve the image with a variable-sized Gaussian kernel; image noise is reduced and the image is sharper than with the simple Box Blur.
Bilateral Blur: Convolve the image with a variable-sized bilateral kernel; image noise is reduced and the image has a painting effect after applying this filter.
Median Blur: Convolve the image with a variable-sized median kernel; image noise is reduced and edge patterns are better preserved after applying this filter.
Breaks narrow ist
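The four blur operations in the table above map directly onto OpenCV's cvSmooth call. A minimal sketch with the OpenCV 1.x C API follows; this is my own illustration rather than the tool's actual source, and the bilateral parameters are example values (their exact meaning varies slightly between OpenCV versions).

    #include <cv.h>

    /* The four MotionTracker blur operations expressed with cvSmooth.
     * kernel_size corresponds to the "Kernel Size" control in Figure 13
     * and should be odd. Each call overwrites dst; they are shown
     * together only for brevity. */
    static void demo_blurs(IplImage *src, IplImage *dst, int kernel_size)
    {
        cvSmooth(src, dst, CV_BLUR,     kernel_size, kernel_size, 0, 0); /* box blur      */
        cvSmooth(src, dst, CV_GAUSSIAN, kernel_size, kernel_size, 0, 0); /* Gaussian blur */
        cvSmooth(src, dst, CV_MEDIAN,   kernel_size, 0, 0, 0);           /* median blur   */
        cvSmooth(src, dst, CV_BILATERAL, 9, 9, 50, 50);                  /* bilateral     */
    }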
boxing match videos gives hints on how to simplify the model to meet our needs. The criteria behind the simplification of the model include the following:

- More than 4 points should be included in the model for the running of the Posit algorithm.
- The model points should be easy to select in the images of the video.
- The model points should be as few as possible, to simplify the selection of image points in MotionTracker.

After examining the boxing match videos, it can be observed that components of the head such as the nose, eyes and ears are comparatively easier to recognize than other points on the head. The sample head model created using Appendix II is shown in Table 8.

[Table 8: Simplified head model of the sample person; model keys and model objects in millimetres. The entries of the table are garbled in this copy.]

The model points defined in Table 8 are regarded as the initial selection of the head model. For each of the model points defined on the face, the corresponding image point must be found in the image. For example, if the nose is selected as a model point, the nose position of the observed person must be found in the image. In a skewed view of the face, however, such as what Table 9 shows, either the left ear or the right ear can be invisible from the camera view. This situation raises difficulty in the selection of image points in the image sequence. In order to solve this problem, the initial selection can be decomposed into
can be expected that the interlaced video would have a strong negative effect on the overall processing and performance of the head kinematic analysis. Techniques should be involved to address this problem.

Q: How to deinterlace the video? What are the things to be concerned about when deinterlacing the video?

In this project, a video deinterlacing technique called progressive scan is used to deinterlace the video. The progressive scan technique scans the two fields of a frame and deinterlaces the field regions on demand. When a 25 FPS movie is served as input, the output is a 50 FPS movie which has the same frame size as the input movie. There are advantages and disadvantages to this method: it produces fluid motion in images with moving objects, and the resolution is kept in quiet scenes. The drawback is that it usually produces large image files, because the image size is kept [12].

One procedure for deinterlacing an AVI video under the Windows platform using progressive scan has the following steps:

- Install VirtualDub for frame and field manipulation.
- Install DivX 7 for movie encoding and decoding.
- Install AviSynth for field extraction and parity manipulation.
- Create an AviSynth script (txt) file with the following format:

    AVISource("FileName")
    SeparateFields()

where FileName is the path of the AVI movie to deinterlace. The AviSynth command SeparateFields separates each
object and image object for every image in the image sequence
Table 7: The procedure of the Posit algorithm
Table 8: Simplified head model of the sample person
Table 9: The left ear and the right ear model used in MotionTracker
Table 10: The left ear and the right ear model plotted in Matlab
Table 11: The way the template image is compared to the sliding window in the template method
Table 12: Head rotation and translation velocity in analyzed videos using a focal length equal to 1600
Table 13: Example output of head distance value
Table 14: Peak values during impact of analyzed videos
Table 15: Peak value of the L2 norm of the velocity during impact of analyzed videos, in radians per second
Table 16: Error of the head pose values
Table 17: Error of Tz with different input of focal length
Table 18: x and y components of the left ear model
Table 19: z component of the left ear model

This paper is based on [7]: Enrico Pellegrini, Kinematic evaluation of traumatic brain injuries in boxing, 2011.

1 Introduction

1.1 Background and motivation

Head injuries of different kinds are one of the main causes of disabilities and deaths in the world. Traumatic brain injury, referred to as TBI, is one category of head injury which occurs when external forces traumatically injure the brain. The report from the World Health Organization (WHO) estimates that 2.78% of all deaths in the countries within the WHO region are related to car incidents and unintentional falling. A high portion of the
of all of the points on the rigid body relative to the reference point; find the orientation of the rigid body relative to a reference frame.

To obtain the position and orientation of 3D rigid objects, two coordinate systems are introduced. The first coordinate system is where the 3D modelling of the rigid object takes place. The 3D model of the rigid object describes its structure. The picture below, for instance, describes a 3D model of the head; each intersection of the meshes on the head is a point on the 3D head model. We call the space where the 3D head model is defined the object coordinate system (OCS).

To analyse the motion of the object, it is necessary to introduce a reference coordinate system (RCS) which is fixed to the ground. The physical position of the head object is measured relative to the RCS. The RCS defines and represents the world coordinates, where the motion of the object can be measured by locating the model points in the RCS.

[Figure 14: Example of the 3D model of the head. The points on the head live in the space that we call the object coordinate system (courtesy of www.google.com).]

One category of methods for performing rigid body kinematic analysis is to discover the relationship between the point coordinate values living in the RCS and the OCS. For example, one may prepare for the analysis of a head object by first constructing a 3D head model of a person in the OCS, followed by finding the changes of
of the frame into fields: frame 0 is separated into field 0 and field 1, frame 1 is divided into field 2 and field 3, and so on.

- Open the script file in VirtualDub.¹ The script file is executed and the first field of the video is shown in the window, as in the figure below. Each field contains only half the number of rows of the original video, because a field is obtained by grabbing interleaved lines of the original video.

¹ This can only be performed under the Windows platform, since VirtualDub is not available on Mac OS X.

[Figure 9: VirtualDub software; the window shows the first field of the video (frame 0, 0:00:00.000).]

- Add the Deinterlace Smooth [13] filter from the filter menu of VirtualDub, leaving the filter parameters unchanged. The video is deinterlaced and the frame number is doubled after this operation. This change is noticeable when counting the frame timeline at the bottom of the VirtualDub main window.
- Moving the frame sliders, the movie should be shown deinterlaced. In some cases, however, the resultant movie can be jumpy, with the objects in the movie swaying back and forth. This phenomenon happens when the first field of one frame is taken temporally earlier than the second field of the frame. If that happens, add the co
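For reference, a complete script for this workflow might look as follows. This is a sketch under stated assumptions: the input file name is hypothetical, and the ComplementParity line is my suggestion for the jumpy-playback case described above (it swaps the assumed field parity), not a command quoted from the thesis.

    # Progressive-scan deinterlacing: split every frame into its two
    # fields, doubling the frame rate (e.g. 25 fps becomes 50 fps).
    AVISource("match01.avi")      # hypothetical input movie
    # ComplementParity()          # uncomment if motion sways back and forth
    SeparateFields()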
sure that the above thresholds are maintained. These skills could involve the understanding of the relative position of different feature points on the head object; they deserve future study and research.

6 Delimitation and conclusion

In this section, the delimitations and the summary of the project are discussed. Each delimitation is labelled with the pattern X.Y, which means it is the Y-th delimitation in Chapter X.

6.1 Delimitation

Delimitation 3.1
The head model used in the project is simplified to the left ear model or the right ear model, with only 4 points needed to find the head pose values we need. It is not surprising to find noticeable values in the error measurement in some cases, because of the very simplified model. It is not easy for a human being to pick out the feature points in low resolution images, so we can only rely on some dominant features, such as the eye position and nose position, for easier operation. The manually picked image points also raise accuracy issues for the algorithm, although some measures have been taken to speed up the selection of points. The method to overcome this issue is not addressed in this project. Furthermore, we have to point out that the model used for the boxing match analysis is created using the sample person in [20], not the actual boxer. It is practically difficult to find the head model for the boxer unless we have high resolution images of the boxer's head.

Delimitation
the mouse. Move the mouse to the second key in the model and do the same mouse clicking as before. Repeat the same steps until all the keys in the model have been marked. For example, for the following picture and the right ear model attached:

[Screenshot: /Users/zhaoliyi/evaluation/new/LmodelGenerator]

the marked image would be:

[Screenshot: /Users/zhaoliyi/evaluation/new/LmodelGenerator]

How to automate feature point selection

MotionTracker provides functionality to undertake automatic feature point detection during the feature selection phase of the motion analysis. This means the feature points can be automatically selected in the image. There are prerequisites for using this method:

- The current image must not be the first image in the image sequence.
- The image previous to the current image must have its feature points fully selected according to the model keys.
- The current image must not have its feature points fully selected.

To perform the automatic feature selection:

1. Open MotionTracker.
2. Load the image folder.
3. Use the mouse wheel or the image slider in the MotionTracker panel to traverse to the image whose feature points need to be found. Make sure the current image fulfils the aforementioned requirements.
4. Click into the Image Point tab.
5. Select Clear; this clears the feature points in the current image.
6. Select Automatic Search; this selects the feature points according to the clues from the previous image.

A
these point coordinate values in the 2D image plane of the RCS. When the head object is moving in the scene, the point coordinate values in the RCS change. The observation of the motion in the RCS is described in the following picture.

[Figure 15: The camera space is a typical selection of RCS. See next section.]

In the picture, we get to know that the boxer's hand is moving by observing that the hand coordinates in the camera space are changing.

The rigid property of the analyzed object simplifies the kinematic analysis. In 2D kinematic analysis, for example, only three non-collinear points on the head are needed to recover the position and orientation of the entire head structure [1]. In 3D kinematic analysis, at least four non-coplanar points are needed [16].

Q: How is the rigid body kinematic analysis related to the video image sequence of the moving rigid objects?

A typical selection of the RCS is the camera space. The pinhole camera model defines the relationship between a point in 3D space and the mapped point in the 2D image plane. Specifically speaking, this model defines a mapping of a point from the OCS to the camera coordinate system (CCS). When the CCS is fixed in space, it can be used as the RCS, according to the aforementioned requirement. Combining the CCS with the 3D model of the rigid head object, the pinhole camera model can be regarded as a good choice for the head kinematics analysis. In this project the head points, when thei
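For completeness, my own summary of the standard pinhole relationship referred to above (not a formula quoted from the thesis): a point (X, Y, Z) expressed in the CCS projects onto the image plane as

    x = f * X / Z
    y = f * Y / Z

where f is the focal length. This is the relationship that the Posit algorithm effectively inverts when it recovers the pose from image points.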
[Table 16, part 2 of 2: the remaining rows; recoverable fragments include real yaw values between roughly 326 and 359 degrees, with errors ranging from under 1 degree up to about 70 degrees.]

Table 16: Error of the head pose values (R = real, P = estimated, E = error)

The table is sorted by the real roll values. The roll values described in a set include 90, 60, 30, 15, 0, 345, 300 and 270, in degrees.

- For each roll value in the real roll value set, the estimated yaw, pitch and roll errors all tend to reach their peak value when the real pitch angle comes close to 90 or -90 degrees. The peak is sometimes sharp.
- For each roll value in the real roll value set, the estimated pitch error usually hits its minimum value when the real pitch angle comes close to 0 degrees. The minimum values are usually less than 10 degrees. The error is small (less than 5 degrees) when the real pitch value lies between -30 and 30 degrees, and medium (around 10 degrees) between -45 and 45 degrees.
- For each roll value in the real roll value set, the esti
3.2
In this project, a fixed focal length is used for the Posit algorithm in the analysis of the boxing matches. The reasons we are not able to carry out the focal length measurement of Appendix I for the boxing match analysis include:

- The distance between the boxer and the camera is not known. Although there are documents saying that a standard boxing ring is between 4.9 and 7.6 m to a side between the ropes, the clue is not strong enough for the determination of the actual distance.
- It is hard to find the front view of the face, as required by the process introduced in Appendix I. Even if we did find one, the face tends to be too small for an accurate calculation of Tz.

However, the focal length should not be taken as a great concern, since the nature of the Posit algorithm produces stable head pose values and unstable head distance values regardless of the focal length chosen, as long as the object is far away from the camera. This fact is obtained from the observations in Appendix I, which can be summarized as: when the focal length is large enough, varying it has little impact on the head pose values and on the Tx, Ty components of the head distance values.

Delimitation 4.1
When we are using the left ear model, we make the assumption that the left ear is visible to the observer for the picking up of the image points. This is not always true, because the feature points can sometimes be occluded, although the left ear model is apparently better tha
326(4), 1996.
Wikipedia: Boxing ring. http://en.wikipedia.org/wiki/Boxing_ring
Rafael C. Gonzalez, Richard E. Woods. Digital Image Processing. Prentice Hall. ISBN 7-5053-7798-1, pp. 528-532.
[7] has some degree of inaccuracy. The observations from this comparison do not provide a reason behind the incomparable results between the two studies; this deserves future study.

5.3 Accuracy from real pose: an evaluation

In order to evaluate the accuracy of the stated algorithm, the error between the real head poses and the estimated head poses should be observed.

The real head pose database used in this project is obtained from the result of [20]. This database contains 2790 monocular face images of 15 persons; one of them is used to evaluate the Posit algorithm in this project. According to the Euler angle definitions in this project, the database contains head poses whose tilt angles range over a discrete set of values, whose pan angles range over a similar set, and which contain the single roll value of 0, in degrees (Delimitation 5.2). The singleton roll value means the head is not rotating about the z axis in the database. There are in total 93 face pictures for a single person.

The error of the head poses is calculated as the difference between the real head pose values and the estimated head pose values:

    error(roll)  = estimated head pose(roll)  - head pose(roll)
    error(pitch) = estimated head pose(pitch) - head pose(pitch)
    error(yaw)   = estimated head pose(yaw)   - head pose(yaw)

The head model used for evaluation
Tracker represents the rotation speed and translation speed of the head.

The process of head pose estimation in MotionTracker includes the following steps:

- Load the images in the image sequence.
- Load the model points in the property list file.
- Load the image points in the OpenCV XML files of the images.
- Edit the image points using the MotionTracker model creator.
- Compute the head pose according to the model and image points using the Posit algorithm.
- Represent the result of the head motion.

In this chapter, the steps of head pose estimation in MotionTracker are discussed in detail.

4.2 The loading of inputs of the Posit algorithm with the drag and drop operation

Chapter 2, Section 2.3 described the method to obtain the image sequence of a video. The image sequence is represented as a list of PNG files in the file system. In order to make the video analysis easier, these image files are put into one folder. Before the image folder is loaded into MotionTracker, the property list file of the model (Section 3.4) should also be added to the image folder.

As mentioned in Section 3.3, the head models used in this project are either the right ear model or the left ear model. They are represented by two property list files created by Xcode. To determine whether the left ear model or the right ear model should be used, the image sequence should be inspected to see if the right or the left ear is occluded from the camera. It is o
aking the drag and drop operation triggers events during different stages of the drag session: when the mouse is entering, moving over, and dropping onto the drag and drop destination. The image path names are passed into the event handlers of these events, and the image files can then be loaded.

- Preferences

The NSUserDefaults class in the Cocoa library helps save the user preferences of the motion analysis tool into the system preference database. The user preferences include the kernel size of the box filter in OpenCV, the block size of the adaptive thresholding, the operation type of the image morphology operation, the maximum level of hierarchy in the cvFindContours function, and so on.

- OpenCV Image Processing Facility

In order to make the image analysis possible, an understanding of the image file and the operations that can be performed on these files is necessary. The OpenCV library enables the understanding of image files by providing image loading and image saving operations. Each image is represented by a C structure, which is essentially a multi-dimensional matrix that saves the pixel data. The data can be either single channel or multi channel, which represents grayscale images and color images respectively. The functionality of the OpenCV image processing facility includes the image blur operations, image morphology operations, image thresholding operations, the Sobel and Canny operators, etc. The motion analysis tool created in this p
an; 1 is a perfect match, -1 is a perfect mismatch, 0 is no correlation.

[Table 11: The way the template image is compared to the sliding window in the template method.]

The squared difference method is used in this project. The pseudo code of the template method can be described as:

    set index = 0
    for index from 0 to the number of image points in the current image:
        find the image point P with index "index"
        find the window rectangle roi_rect around P, with width and height selected as 20
        obtain the template image of the current image using roi_rect
        match the template image against the next image, obtaining the comparison matrix CM
        obtain the location of the point in the next image where CM has the minimum value
        index = index + 1

To select the image points automatically, we completely select the image points in one image of the image sequence, then choose the next image using either the mouse wheel or the image slider. After that, we can press the Search button in the MotionTracker panel, asking it to select the image points of the second image for us.

[Figure 26: Automatic image point searching option (the Search button in the Operation panel, next to Posit and Automatic Search) in MotionTracker, with the usage of the template method.]

The capability to edit the image points is a crucial convenience method for the creation of image points in MotionTracker. The editing of image points can be performed easily by selecting the created image points on the image screen and dragging the point around the ima
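A minimal C sketch of one iteration of the pseudo code above, using the OpenCV 1.x API. This is my own illustration, not the MotionTracker source; the 20-pixel window follows the pseudo code, and the helper name is hypothetical.

    #include <cv.h>

    /* Find, in the next frame, the point that best matches a 20x20 template
     * cut around image point p in the current frame (squared-difference method). */
    static CvPoint track_point(IplImage *cur, IplImage *next, CvPoint p)
    {
        CvRect roi = cvRect(p.x - 10, p.y - 10, 20, 20);
        IplImage *templ = cvCreateImage(cvSize(roi.width, roi.height),
                                        cur->depth, cur->nChannels);
        cvSetImageROI(cur, roi);
        cvCopy(cur, templ, NULL);
        cvResetImageROI(cur);

        /* comparison matrix CM: one score per candidate window position */
        CvSize rs = cvSize(next->width - roi.width + 1,
                           next->height - roi.height + 1);
        IplImage *cm = cvCreateImage(rs, IPL_DEPTH_32F, 1);
        cvMatchTemplate(next, templ, cm, CV_TM_SQDIFF);

        CvPoint min_loc, max_loc;
        double min_val, max_val;
        cvMinMaxLoc(cm, &min_val, &max_val, &min_loc, &max_loc, NULL);

        cvReleaseImage(&cm);
        cvReleaseImage(&templ);
        /* for CV_TM_SQDIFF the minimum is the best match; return the window centre */
        return cvPoint(min_loc.x + 10, min_loc.y + 10);
    }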
[Figure 2: Calibration rig (left) and human joints, which should be created and assigned for Skillspector motion analysis (courtesy of [7]).]

In the picture above, for example, the calibration rig is assigned for the camera in the view on the left side. The human body points are assigned corresponding to a certain kind of model defined in Skillspector; they are shown on the right side. When all the feature points are assigned for every picture of the video, Skillspector is ready for the motion analysis. The result of the analysis is the rotational velocity, translational velocity, rotational acceleration and translational acceleration of the head object and the hand object. One example is shown below.

[Figure 3: Result of Skillspector, showing the 3D head acceleration with respect to time, in radians per second (from [7]).]

As mentioned in the discussion section of Enrico's paper, the drawbacks of this method are:

- The selection of feature points is difficult.
- The video is interlaced.
- The angular velocity about each axis is not defined clearly in the Skillspector software.
- Lack of an evaluation method.

In this paper we try to overcome the disadvantages of the method in Enrico's paper, while also taking advantage of the captured and calibrated videos from Enrico's work.

Daniel described an algorithm where the head kinematic information, such as the orientation and position, can be ex
as taking the symmetric property of the head model into consideration. The model is shown in Section 3.3. We will not discuss the process behind the internal shifting and scaling of the model points.

Appendix III: MotionTracker User Manual

The MotionTracker manual is the place where instruction is given on how to use this software. This manual is an excerpt from the MotionTracker help information.

How to load a sequence of images into MotionTracker

MotionTracker enables you to load a set of images for motion analysis. The folder that contains the set of images you want to analyze is called the image folder. The image folder can contain the following content:

- Required: a PNG format image sequence labelled as AAAA_BBBB.png, where AAAA identifies the contents of the image sequence and BBBB defines the order of the image sequence.
- Optional: created by MotionTracker, the image point list for each of the images in the image sequence.
- Optional: the plist file containing the model of the object for analysis in the image sequence. This is required if you need to perform motion analysis.

1. Open MotionTracker if it is not open.
2. Open Finder.
3. In Finder, find the image folder on which you want to perform motion analysis.
4. Drag the image folder onto the MotionTracker panel. It is easier to do so when Finder and MotionTracker are both visible.

How to create a simplified object model for motion analysis

MotionTracker enables motion anal
ationship between the transition matrices calculated using different models.

The creation and selection of the model is based on the properties of the image for analysis. In the simplified models, such as the left and right ear model, the one that is going to be chosen as the input for the algorithm depends on the accessibility of the corresponding points in the image. Specifically speaking, when the left ear is accessible in the image, the left ear model is used; when the right ear is accessible, the right ear model is used.

4 Posit head pose estimation in MotionTracker

4.1 Objective

The MotionTracker Posit module is the place where the head pose estimation is actually carried out. MotionTracker helps the running of the Posit algorithm in the following ways:

- MotionTracker can load an image folder which contains a set of ordered images, using the drag and drop operation.
- MotionTracker can load the 3D head model from the model property list and store it in the M array.
- MotionTracker creates files for each image in the image sequence for storing image points, and stores them in the I array.
- MotionTracker supports the visual editing of the I array on the image.
- MotionTracker helps the assignment of the focal length and FPS of the movies used for the motion calculation.
- MotionTracker performs the Posit algorithm.
- MotionTracker represents the result of the R mat and T mat describing head orientation and position.

Motion
bject is given in the world coordinate space, and the corresponding image points of the object are given in the camera coordinate space; how can the pose of the object be obtained?

The interface in OpenCV for head pose estimation in the C language is:

    void cvPOSIT(CvPOSITObject* modelPoints, CvPoint2D32f* imagePoints,
                 double focalLength, CvTermCriteria criteria,
                 CvMatr32f rotationMat, CvVect32f translationMat);

Here modelPoints is the input M dictionary, imagePoints is the input I dictionary, focalLength is the input focal length, and rotationMat and translationMat are the output R mat and T mat.

Q: How can the head pose values and the head distance values (the definitions of both can be found in Section 3.2) be obtained using the C interface of the Posit algorithm?

When the R mat is obtained, the head pose values can be obtained by the following lines of C code:

    yaw   = asinf(rotationMat[6]);
    pitch = atan2f(rotationMat[7] / cosf(yaw), rotationMat[8] / cosf(yaw));
    roll  = atan2f(rotationMat[3] / cosf(yaw), rotationMat[0] / cosf(yaw));

while the head distance values can be illustrated by:

    tx = translationMat[0];
    ty = translationMat[1];
    tz = translationMat[2];

Q: How does the selection of the model relate to the usage of the Posit algorithm?

R mat and T mat are calculated according to the input of the algorithm. When different models are used, different results are given. It is generally a bad idea to find the rel
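Putting the pieces together, a minimal end-to-end sketch of one pose computation might look as follows. This assumes four model/image point pairs are already available; it is an illustration built on the listings above, not the actual MotionTracker code.

    #include <math.h>
    #include <stdio.h>
    #include <cv.h>

    static void compute_pose(CvPoint3D32f model_pts[4], CvPoint2D32f image_pts[4],
                             double focal_length)
    {
        float rot[9], trans[3];
        CvPOSITObject *obj = cvCreatePOSITObject(model_pts, 4);
        CvTermCriteria crit = cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER,
                                             100, 1.0e-5);
        cvPOSIT(obj, image_pts, focal_length, crit, rot, trans);

        /* head pose values (Euler angles), as in the listing above */
        float yaw   = asinf(rot[6]);
        float pitch = atan2f(rot[7] / cosf(yaw), rot[8] / cosf(yaw));
        float roll  = atan2f(rot[3] / cosf(yaw), rot[0] / cosf(yaw));

        /* head distance values, in the units of the model points (mm here) */
        printf("pose: %f %f %f  distance: %f %f %f\n",
               roll, pitch, yaw, trans[0], trans[1], trans[2]);

        cvReleasePOSITObject(&obj);
    }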
bvious that the right ear model should be chosen if the left ear is occluded, and vice versa. In the analysed videos it is uncommon, but possible, that both of the struck boxer's ears are occluded from the camera; we make a compromise in the process here, so that the less occluded model is chosen. After the appropriate model is chosen, the property file of that model is put into the image folder.

To load the image sequence and the corresponding model files into MotionTracker, the image folder should be located in the file system and drag-and-dropped onto the MotionTracker interface. The picture below shows the concept of this action. Notice that the mouse cursor is converted into a cross cursor, indicating that we are performing a drag and drop operation.

[Figure 20: Drag and drop operation enables fast loading of image sequences and model files into MotionTracker.]

Only the Portable Network Graphics (PNG) image format is supported in MotionTracker. When loading the images, MotionTracker checks the extension of the images to see if it
c dynamics in the image sequence of the sports video. A computer software used for this purpose is designed and implemented. The functionality of this software includes the capture of the video footage, the creation of head model points, the improvement of the video images, and focal length estimation. The result of the project, or the output of the software, is a representation of the head kinematics in the analyzed video, taking advantage of the knowledge from the computer vision library. The motion analysis tool is developed under Mac OS X using Xcode. The main library used for the motion analysis is OpenCV.

1.2 The research questions

The incentive of the project can be demonstrated by asking the research questions we are facing in the project.

Q: Given a sports video, what is the intended video format for the motion analysis or image processing?

It is very important to get the right material for research. Given a motion video of the head object, it is not convenient or possible to perform head kinematic analysis directly on the video. A platform for capturing the image sequence from the video should be implemented.

Q: Given an image sequence, is the quality of the images satisfiable? How should we improve the quality of the images?

Ordinary TV footage usually has lower resolution compared to high definition videos. The interlaced pattern is another major quality flaw in TV images. The quality of the videos should be improved in some way
can be opened. Furthermore, it looks for the property list file in the folder. This process fails if no image files can be opened or if the property list file is not found in the folder.

MotionTracker creates files for the storage of image points in the file system. When the image sequence is first loaded into MotionTracker, OpenCV XML files are created for every image in the image sequence in the containing folder. When the same folder is loaded again, the image points are loaded automatically by MotionTracker, along with the model property file and the images themselves. The image point files are created and loaded in the same folder as the image folder. The name of each image point file is the same as that of the corresponding image file. Here is what the image point files might look like in the Finder:

    SHSV0000.txt  SHSV0000.png
    SHSV0001.txt  SHSV0001.png
    SHSV0002.txt  SHSV0002.png
    SHSV0003.txt  SHSV0003.png
    SHSV0004.txt  SHSV0004.png

[Figure 21: Image points are saved along with the image sequence.]

4.3 The image point visual editing and the template method

This section describes the MotionTracker model creation module and demonstrates how the OpenCV template method can be used to facilitate the image point selection.

When the image sequence is loaded into MotionTracker, the image screen shows the first image in the image sequence to the user.

[Screenshot: /Users/zhaoliyi/tracking/Matches Analysi
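One plausible way to persist such per-image point files with OpenCV's C persistence API is sketched below. The file and node names are hypothetical, and the thesis does not list its actual storage code; this only illustrates the kind of XML round trip described above.

    #include <cxcore.h>

    /* Save n image points for one frame into an OpenCV XML file. */
    static void save_image_points(const char *path, CvPoint2D32f *pts, int n)
    {
        CvMat m = cvMat(n, 2, CV_32FC1, pts);  /* view the points as an n x 2 matrix */
        cvSave(path, &m, "image_points", NULL, cvAttrList(0, 0));
    }

    /* Load them back; returns NULL if the file is missing.
     * The caller owns the returned matrix (release with cvReleaseMat). */
    static CvMat *load_image_points(const char *path)
    {
        return (CvMat *)cvLoad(path, NULL, "image_points", NULL);
    }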
city compared with the related results of [7] can be illustrated in the following table:

Video ID | Frame rate (fps) | Peak L2 velocity | Peak L2 velocity from [7]
? | 29.97 | 29.1645 | 29.8900
? | 29.97 | 27.4542 | 52.8400
(intermediate rows not recoverable in this copy)
? | 89.9 | 26.9141 | 83.1600

Table 15: Peak value of the L2 norm of the velocity during impact of the analyzed videos, in radians per second

On the whole, the order of magnitude of the results in the two studies matches. Furthermore, the L2 velocity appears to be larger with the increase of the frame rate, in both of the results. Research in [7] has also pointed out this phenomenon, and the undersampling at the lower frame rates is regarded as the main reason for it.

There are some cases, in video numbers 1, 12 and 18 for example, where the differences between the two studies are coherent. Meanwhile, there are some cases, in video numbers 2, 8 and 16 for example, where the differences are incoherent.

Research in [7] regards some cases where the result of the velocity is especially high as outlier cases. The author also pointed out that this could be related to the low frame rate of some videos. He discarded the analysis of the results for videos below 90 fps in frame rate, because it could result in higher error in his analysis. For example, the video 16. The estimation of both of the studies performed this and
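For clarity, my own note on the quantity tabulated above (the thesis uses it without writing it out at this point): the L2 velocity is the Euclidean norm of the three angular velocity components,

    ||omega||_2 = sqrt(omega_roll^2 + omega_pitch^2 + omega_yaw^2)

evaluated per frame pair, with the peak taken over the impact.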
click the points you want to edit; the points turn red after being selected. Re-click a point to deselect it. You can perform one of the following operations:

- Use the arrow keys to move the selected feature points.
- Use mouse dragging to move the selected feature points.

How to perform basic image processing operations in MotionTracker

You can perform some basic image processing operations in MotionTracker:

1. Open MotionTracker.
2. Load the image folder.
3. Use the mouse wheel or the image slider in the MotionTracker panel to traverse to the image on which to perform image processing.

After that, you might perform the following operations.

To perform image thresholding:
1. Click the Thresholding tab.
2. Make a selection of the different options and see the result in the image viewer.
3. Click the Show Threshold Image button and see the result in the image viewer.

To perform image smoothing:
1. Click the Blurring tab.
2. Make a selection of the different options and see the result in the image viewer.
3. Click the Show Blurred Image button and see the result in the image viewer.

To perform image morphology:
1. Click the Morphology tab.
2. Make a selection of the different options and see the result in the image viewer.
3. Click the Show Morphological Image button and see the result in the image viewer.

To obtain contours and lines in the image:
1. Click the Contours tab.
2. Make a selection of the different options and see the result in the image viewer.
3. Click the Show Hough Line or Show Contours but
ct for every image in the image sequence.

As mentioned in the previous section, there should be at least 4 keys (model points) to use the Posit algorithm. Generally speaking, the more keys are used, the more accurate the result will be. However, adding coplanar model points does not help improve the accuracy of the algorithm [18].

After the focal length has also been determined (Appendix I), the Posit algorithm is ready to do its work. The T mat and R mat can then be obtained for each of the images in the image sequence. To conclude, the workflow of the Posit algorithm can be illustrated in the following diagram:

    For a video performing head kinematic analysis, create the M array    -\
    For every image in the video, create the I array                      --->  Posit algorithm  --->  T mat and R mat
    For a video performing head kinematic analysis, find the focal length -/

Table 7: The procedure of the Posit algorithm

The estimation and evaluation of the head kinematic information is described in Chapter 5.

3.3 Model simplification criterion and model division

Q: How to create the head model used for the Posit algorithm? How to simplify it?

It is ideal to create a fine-grained head model for each person's head we are going to analyze. Because of the low resolution of the images we have, this idea is hard to implement. Considering this, a simplified head model using a sample person is created using the method in Appendix II (Delimitation 3.1). The property of the
ction of HoTr_0012.png, for example, can be performed in the following step flow:

1. Selection of the nose.
2. Selection of the left eye.
3. Selection of the right eye.
4. Selection of the right ear.

[Figure 25: The process of the selection of the image points using the right ear model.]

Manually selecting the image points in every image of the image sequence could be a tedious task. In order to make the work easier, the template method in OpenCV is used for automatic point selection. The template method helps us answer the question: given a picture in which the image points have been selected, what will the image points be in the picture next to it?

Consider the nose point in HoTr_0012.png, for example. The template method selects a small image area around the nose point and regards it as the template image. In HoTr_0013.png, which is next to HoTr_0012.png, it sets up a look-up window with the same size as the template image and slides it through the whole image area. It computes the difference between the template image and the sliding window, and finds the one that is closest to the template image according to some method. The available comparison methods include:

CV_TM_SQDIFF: The squared difference is computed between the template and the image; 0 for a perfect match.
CV_TM_CCORR: The multiplicative product of the template against the image is used; a larger value means a better match.
CV_TM_CCOEFF: The template relative to its mean is matched against the image relative to its me
d objects. The user of the analysis tool should have basic ideas of how the different functionalities of the tool work and how to handle them properly. This project is a computer programming work which involves the study of the background, the study of the methodology, and a programming phase which gives the result of the study.

Supervisor (Handledare): Svein Kleiven
Subject reviewer (Ämnesgranskare): Svein Kleiven
Examiner (Examinator): Lisa Kaati
IT 12 046
Printed by (Tryckt av): Reprocentralen ITC

Contents

1 Introduction
1.1 Background and motivation
1.2 The research questions
1.3 Previous studies
1.4 Methodology
1.5 Structure of thesis
2 VirtualDub video capture and OpenCV image preprocessing
2.1 Objective
2.2 Video information extraction
2.3 Video deinterlacing, boosting and image sequence export using VirtualDub
2.4 Image preprocessing using OpenCV
3 Head kinematic analysis with head pose estimation using Posit algorithm
3.1 Head pose estimation and head kinematics
3.2 Head pose estimation and the Posit algorithm
3.3 Model simplification criterion and model division
3.4 The persistence of head model
3.5 Fast pose estimation using ear models
4 Posit head pose estimation in MotionTracker
4.1 Objective
4.2 The loading of inputs of Posit algorithm with drag and drop operation
4.3 The image point visual editing and the template method
4.4 Euler angle representation and its transformation into rotation velocity
4.5 Translation representation and its transformatio
d frame; roll, pitch, yaw velocities in rad/s; translational velocity in m/s)
2-3 | ? | ? | ? | ?
3-4 | ? | ? | ? | ?
4-5 | 9.1 | 11.6 | 7.1 | 98.7
5-6 | 0.0 | 0.0 | 0.0 | 0.0
6-7 | 4.0 | 17.6 | 4.4 | 25.7
7-8 | 19.2 | 12.2 | 8.9 | 32.7
8-9 | 8.7 | 14.5 | 20.9 | 17.4

Table 12: Head rotation and translation velocity in the analyzed videos, using a focal length equal to 1600

It is noticeable that not all of the 25 videos in Table 1 are included in the result; this is because some videos only contain the back view of the struck boxer. As required by the algorithm, the image points should be visible for the algorithm to run. The videos which do not fulfil this requirement had to be discarded.

Even in stable scenes during the boxing matches, i.e. before the actual impact, the depth component of the head distance value appears to be unstable. Consider the following image sequence and the calculated head distance values:

[Image sequence of four consecutive frames] Tz: 5457.15 | 5758.54 | 6760.84 | 7122.22

Table 13: Example output of head distance value

We found that the values of Tx and Ty follow smoothly with the movement of the origin of the OCS in the image plane, in this case the nose position of the head. However, the depth of the nose from the camera, as understood by the algorithm, is 1000 mm larger in the later two images than in the first two images. This should not be true in the real scene. Considering the first two images only, where the image points look almost unchanged, the value of Tz stil
d the pattern in the image that is closest to the template image used
Image deinterlacing should be used to improve the image quality of the TV images
Example of a sequence of head concussion images in a boxing match footage
AVI files taken from a PAL or NTSC camcorder, which show an interlaced pattern at odd and even lines
VirtualDub software; the window shows the first field of the video
Deinterlace Smooth filter control panel, which is used to fine control the video deinterlace process
HSV filter in VirtualDub can be used to boost the saturation of the image sequences
Image before and after the deinterlacing and color boosting operations
Interface of MotionTracker tool
Example of the 3D model of the head; the points on the head live in the space that we call the object coordinate system
Camera space is a typical selection of RCS
Mapping between the model points in OCS (on the left) and image points in RCS (on the right)
Rotation matrix and translation matrix transform the OCS
Yaw, pitch and roll in form of head rotation
Saving of head model into property list files using Xcode
Drag and drop operation enables fast loading of image sequence and model files into MotionTracker
Image points are saved along with the image sequence
Image screen in MotionTracker demonstrates an image in the image sequence of the video
Image slider in MotionTracker
Mouse cursor represents where the image point is going to be selected
the position and orientation of the head can be revealed by these two matrices.

In this project a set of boxing match videos of head concussions is studied. The head kinematics of the struck boxers' heads is measured. To fulfil the aforementioned requirement, we would like to assume that the CCS is used as a fixed reference frame. It is noteworthy that this assumption is normally not true in TV sports match recordings, because they usually involve movement of the camera. In the boxing match videos, however, it is generally a reasonable assumption, since the camera normally moves slowly over a short period of time.

Figure 17 Rotation matrix and translation matrix transform the OCS, which defines the object's position and orientation in the world space, into CCS (courtesy of [16])

3.2 Head pose estimation and the Posit algorithm

Q: How is the head pose estimation method selected? What are the input and output of the Posit algorithm?

(Footnote 3: OCS represents the 3D object. When the object is translating and rotating in 3D space, OCS moves accordingly.)

A survey [17] has been conducted on the different methodologies of head pose estimation. One category of these methods is carried out by linking the human eye gazes with the head poses through visual gaze estimation. The gaze of the eye is characterized by the direction and focus of the eyes. It should be remembered, however, that
The table demonstrates that there are cases when the result in this paper is coherent with the one it is based on [7]. However, there are also cases where unstable and large variations of the results do exist between the studies.

The piece of information that is revealed in the evaluation section of this paper appears to be a nice verification of the property of the Euler angle: the error of MotionTracker can be assumed to have a larger error estimation when the real pitch or roll value of the sample person is greater than a certain limit.

In this study we have not focused on finding out the reason behind the difference in results between the two studies. However, this also means that the comparison of these two reports could be an interesting area for future study.

This study is expected to be extensible by supplying different research materials into MotionTracker.

To put it together, this study created an effective way to obtain the head kinematic information using the OpenCV library under the Macintosh platform. The order of magnitude of some of the results of this study is comparable with that of the related study which this project is based on [7]. When carrying out a motion study applying the software in this project, the researcher should be aware of the property of the Euler angle which is demonstrated in the evaluation part of this paper.

Appendix I OpenCV focal length prediction using the Posit algorithm: hints

Theory

As described in Section 3.2, the focal length should be determined to use
Tx, Ty and Tz will be referred to as the head distance values in this report.

It is found that the unit of the translation vector revealed by the Posit algorithm is the same as the unit of the model points in OCS. This implies that the unit of the translation vector can be manually defined. It is obvious that keeping in mind the units of the T-mat is crucial in understanding the order of magnitude of the head translational movement.

Q: How to use the Posit algorithm in a REAL application? What are the steps?

The first step to use the Posit algorithm in a real head pose estimation application is to establish the 3D model of the head. The head model can usually be created by 3D modeling tools such as MeshLab and Blender. The points of the 3D model serve as the model points in the algorithm described above. They are stored in the M array. In this project a simplified head model is created using the method described in Appendix II.

In the second step, the image points corresponding to every model point are found in the image plane. They are stored in the I array for every image in the image sequence created in Chapter 2. In this project the image points are selected using the Model creation module in MotionTracker; the process will be discussed in Chapter 4.

When step one and step two are done, the M and I arrays can be illustrated in the table below:

Image Sequence | Model Object | Image Object
Image 1 | ... | ...
Table 6 Assigned model objects and image objects
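To make these two steps concrete, the following is a minimal sketch using the OpenCV 1.x POSIT interface (cvCreatePOSITObject and cvPOSIT, documented in [16]). The numeric point values are hypothetical placeholders, not measured data, and the image points are assumed to be given relative to the image center:

#include <opencv/cv.h>
#include <vector>

void estimateHeadPose()
{
    // Step 1: the model points (the M array), in millimeters, nose at the origin.
    std::vector<CvPoint3D32f> modelPoints;
    modelPoints.push_back(cvPoint3D32f(0.0f, 0.0f, 0.0f));       // Nose
    modelPoints.push_back(cvPoint3D32f(-31.1f, 33.1f, 45.9f));   // Left eye (hypothetical)
    modelPoints.push_back(cvPoint3D32f(31.1f, 33.1f, 45.9f));    // Right eye (hypothetical)
    modelPoints.push_back(cvPoint3D32f(-71.2f, -4.4f, 122.4f));  // Left ear (hypothetical)

    // Step 2: the corresponding image points (the I array), in pixels,
    // relative to the image center.
    std::vector<CvPoint2D32f> imagePoints;
    imagePoints.push_back(cvPoint2D32f(20.0f, -5.0f));
    imagePoints.push_back(cvPoint2D32f(-13.0f, 28.0f));
    imagePoints.push_back(cvPoint2D32f(34.0f, 32.0f));
    imagePoints.push_back(cvPoint2D32f(-60.0f, 10.0f));

    // Build the POSIT object from the model points and run the iteration.
    CvPOSITObject* posit = cvCreatePOSITObject(&modelPoints[0], (int)modelPoints.size());
    float rotation[9];       // the 3x3 R-mat, row major
    float translation[3];    // the 3x1 T-mat (Tx, Ty, Tz), in model units (mm)
    double focalLength = 1600.0;  // in pixels, the fixed value used in Chapter 5
    CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 100, 1e-5);

    cvPOSIT(posit, &imagePoints[0], focalLength, criteria, rotation, translation);
    cvReleasePOSITObject(&posit);
}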
person is named alphabetically before the side view picture; for example, 0.png stands for the front view and 1.png stands for the side view.
1. Load the image folder into MotionTracker.
2. Mark the left ear model keys for the front view and the side view of the analysed person.
3. Assign in the panel the head breadth of the person, for example 150 millimeters.
4. Click the Create Left Ear Model button. The model plist file is then created inside the image folder with the name NewLeftEar.plist.

How to mark feature points in the image sequence

MotionTracker makes it intuitive to mark feature points in an image manually. The feature points can be automatically saved into the file system on a one-to-one basis, which means there is one feature point record for every image in the image sequence. When the image is loaded, the feature point record is reloaded and ready for use in the software.

To mark the feature points:
1. Open MotionTracker if it is not open.
2. Load the image folder. The image viewer is shown when the image is loaded.
3. Use the mouse wheel or the image slider in the MotionTracker panel. Traverse to the image whose feature points need to be marked.
4. When the mouse is hovering over the image view, the cursor turns into an image marker. With the cursor in this state, mark the keys sequentially according to the order of the model keys in the plist file. This process can be decomposed into: move the mouse onto the first key in the model, hold the Alt key and click
This property is determined by the value of abs(a2 - a1) and the sign of a2 - a1 altogether, where the rotation is always defined in the direction with an absolute delta value smaller than pi [delimitation 4.3].

The angular velocity of the yaw angle can be calculated in a similar way to the roll angle; substituting the alpha values with the gamma values is sufficient. The calculation for the pitch angle is easier, because the direction of rotation is determined only by the delta of the pitch values.

Suppose the pitch angle is b1 in the i-th image and b2 in the (i+1)-th image; the angular velocity of the pitch angle from the i-th image to the (i+1)-th image can be computed in pseudo code as follows:

find the absolute delta of b1, b2, so delta = abs(b1 - b2)
the angular velocity V along the y axis is calculated as delta / (1 / FPS)
if b2 - b1 > 0:
    the head is moving around the y axis clockwise at the velocity V
else:
    the head is moving around the y axis counter-clockwise at the velocity V

The result of the angular velocity calculation has the following pattern of output in MotionTracker:

rotation speed
0.000000 0.000000 0.000000 7.317708 0.447545 25.566202 1.675840 20.322647 6.825848 5.965726 46.508179 0.000000 0.000000

We can see from the output that it represents the angular rotation velocity around the z axis. From the 7th image to the 8th image, for example, it is estimated that the head is rotating around the z axis at a velocity of 46.508179 rad/s.
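The wrap-around logic of the roll and yaw pseudo code, and the translational formula of Section 4.5, can be summarised in two small routines. The following is a minimal C++ sketch of this reasoning; the function names are illustrative and not part of MotionTracker:

#include <cmath>

const double PI = 3.14159265358979323846;

// Signed angular velocity between two Euler angles a1, a2 (radians), always
// taking the smaller rotation (delta < pi), as in Section 4.4. A positive
// result stands for clockwise, a negative one for counter-clockwise rotation.
double angularVelocity(double a1, double a2, double fps)
{
    double delta = std::fabs(a1 - a2);
    bool clockwise = (a2 - a1 > 0.0);
    if (delta > PI) {                 // wrap around: take the smaller rotation
        delta = 2.0 * PI - delta;
        clockwise = !clockwise;       // the shorter way goes the other direction
    }
    double v = delta * fps;           // delta / (1 / FPS)
    return clockwise ? v : -v;
}

// Translational velocity between two head distance values, following the
// formula of Section 4.5 (including its division by 100).
double translationalVelocity(double tx1, double ty1, double tz1,
                             double tx2, double ty2, double tz2, double fps)
{
    double dx = tx1 - tx2, dy = ty1 - ty2, dz = tz1 - tz2;
    return std::sqrt(dx * dx + dy * dy + dz * dz) / 100.0 * fps;
}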
They represent the roll, pitch and yaw values of the struck boxer's head pose in the n-th image.

The range of the roll and yaw angles is [0, 2*pi). They can be represented in 2D cartesian coordinates as follows:

(polar plot with angle labels from 195 to 345 degrees and radius ticks 0.4, 0.8)
Figure 29 Representation of roll and yaw values in 2D cartesian space in MotionTracker

The range of the pitch angles is [-0.5*pi, 0.5*pi]. They can be represented in 2D cartesian coordinates as follows:

Figure 30 Representation of pitch values in 2D cartesian space in MotionTracker

Suppose the roll angle is a1 in the i-th image and a2 in the (i+1)-th image; the angular velocity of the roll angle from the i-th image to the (i+1)-th image can be computed in pseudo code as follows:

find the absolute delta of a1, a2, so delta = abs(a1 - a2)
if delta is bigger than pi, delta = 2*pi - delta
the angular velocity V along the x axis is calculated as delta / (1 / FPS)
if a2 - a1 > 0 and the delta is smaller than pi:
    the head is moving around the x axis clockwise at the velocity V
else:
    the head is moving around the x axis counter-clockwise at the velocity V

The concern regarding the delta of a1 and a2 is raised because it is hard for MotionTracker to tell whether the head is rotating clockwise or counter-clockwise referring only to the Euler angles.
• The goal of image preprocessing is to create an image processing tool that gives the opportunity for fine-tuning the quality of the image, which makes motion analysis easier.

Figure 7 Example of a sequence of head concussion images in a boxing match footage

2.2 Video information extraction

Q: What videos are we going to export image sequences from? What are the parts of the movies we are interested in? What are the properties of the movies? How do these properties affect the research decisions?

The list of the boxing match videos is shown in Table 1. The QuickTime player and the VLC player are used to extract the FPS and the resolution of the videos. The basic information related to the boxing matches, such as the striking and struck boxer, is also shown in the table.

The images in the movie that are going to be analysed are just a portion of the image sequence of the whole movie. The images of the severe impact and head concussion are the part of the image sequence we are interested in within this project. Videos are trimmed so that only the interesting part is contained for future analysis. In this project the VirtualDub software is used for this purpose. When that is done, the Frame Count property in Table 1 tells the number of images in the movie after the trimming of the movies.

ID | Striking Boxer | Struck Boxer | Injury | FPS | Frame Count | Resolution
(individual rows omitted; e.g. the Arguello match, 29.97 FPS, 640 x 480, outcome TKO)
image screen to the desired position. The same operation can also be done by pressing the arrow keys on the keyboard, when they are available to use. A typical usage of this operation can be illustrated in the following picture:

(Move the cursor to the point to edit; select the point by a mouse click; move the point by mouse dragging; after moving, the points are edited.)
Figure 27 The process of editing image points in MotionTracker

When the image points are assigned, at this stage the Pose button can be pressed to perform the Posit algorithm [delimitation 4.2]. The image points need not be fully selected for every picture. This is reasonable, because the face points might be occluded from view. An image for which the image points are not fully selected is assumed to have the same image points as the previous image. The representation of the head pose is discussed in the next section.

4.4 Euler angle representation and its transformation into rotation velocity

The head pose values in MotionTracker have output of the following format:

****2 R|T****
39.74  39.74  0.00

Figure 28 Example output of head pose values in MotionTracker

The "n R|T" portion surrounded by the asterisks represents that this is the n-th image in the image sequence of the movie. Three Euler angles are shown in the next line.
To estimate the focal length (see Appendix I):
• We need to know the estimated real distance from the head object to the camera.
• We need to know the front view of the head object of the analysed person.
• We need to know the head model of the head object of the analysed person.

To create the head model (see Appendix II):
• We need to obtain the front view of the head object of the analysed or sample person.
• We need to obtain the side view of the head object of the analysed or sample person.
• We need to provide a head breadth for the aforementioned head object.

To create image points:
• The model keys for each image in the image sequence are expected to be fully recognized, either using the template method or manually. If some of the model keys cannot be recognized, the image is considered a failure to provide useful information. The failure rate is expected to be lower than 10 %.

Applicability of the analysis

To create a good result in a single image, we have to make sure that the real roll and pitch values of the analysed head object are constrained to real roll values between -15 and 15 degrees and real pitch values between -30 and 30 degrees.

To create an acceptable result in a single image, we have to make sure that the real roll and pitch values of the analysed head object are constrained to real roll values between -30 and 30 degrees and real pitch values between -45 and 45 degrees.

The applicability is examined by the inspection of the images in the image sequence. Expertise in the estimation of human pose values from images using other tools is required to make
which is all zero. We predict the focal length input value by iterating, i.e. testing with different values of the focal length as the input of the Posit algorithm. We calculate the difference between the value of Tz of the output from the Posit algorithm and the real depth S. The heuristic value of the focal length can be discovered by minimizing the difference between these two values. So the process can be described as: find the focal length input to the Posit algorithm such that the output Tz is closest to S.

According to the table, for example, 1900 would be a good value for the focal length, because the error bottoms out at that point.

Appendix II The creation of the simplified head model using MotionTracker

Theory

The model points that are going to be arranged in the OCS have the pattern shown in Figure 14. We would like to assign the blue axis as the z axis, the red axis as the x axis and the green axis as the y axis. In the front view of the head model, the XOY plane is shown and the z axis is occluded from the viewer; in the side view of the head model, the YOZ plane is shown and the x axis is occluded from the viewer.

Given the model keys in the M array, the creation of the simplified head model involves finding the coordinate values of these model keys.

Method

The predefined model keys, as described in Section 3.3, can be listed as Nose, Left Eye, Right Eye, Left Ear and Right Ear, in that order.
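The Appendix I search loop described above can be sketched in C++, assuming the cvPOSIT interface from Section 3.2; posit and imagePoints are assumed to have been prepared beforehand, and S is the known real depth:

#include <opencv/cv.h>
#include <cmath>

// Scan candidate focal lengths and keep the one whose POSIT output Tz is
// closest to the known real depth S (the heuristic from Appendix I).
// The scanning range and step are assumptions for illustration.
double predictFocalLength(CvPOSITObject* posit, CvPoint2D32f* imagePoints, double S)
{
    double bestFocal = 0.0, bestError = 1e30;
    float rotation[9], translation[3];
    CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 100, 1e-5);

    for (double f = 100.0; f <= 3000.0; f += 100.0) {
        cvPOSIT(posit, imagePoints, f, criteria, rotation, translation);
        double error = std::fabs(translation[2] - S);  // |Tz - S|
        if (error < bestError) { bestError = error; bestFocal = f; }
    }
    return bestFocal;  // in the sample run of Appendix I the error bottoms out near 1900
}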
isthmuses, and eliminates small islands and sharp peaks (the morphological operations Open, Close, Erode, Dilate, TopHat and BlackHat)
Image segmentation (Basic Thresholding, Adaptive Thresholding)
Ones at image boundaries using the derivative of the image function, zeros at the others (Canny Operator)
Extract line patterns in the binary image (Feature Detector: Line Detector)
Extract contour sequences in the binary image (Feature Detector: Contour Detector)
Create, load and save models created by assignment of points in the image (Model Assigner)
Estimate head or object poses using predefined head or object models; the yaw, pitch and roll values are extracted and demonstrated (MotionTracker: Posit pose estimator)
Estimate the focal length using the Posit algorithm (MotionTracker: Focal length estimator)
Create the left and right ear models of the head object (Model Creator)
The documentation that contains tutorials for the usage of the software
Table 2 Functionality implemented in MotionTracker

functionality and how MotionTracker helps to understand the head motion.

3 Head kinematic analysis with head pose estimation using the Posit algorithm

3.1 Head pose estimation and head kinematics

Q: What is rigid body kinematic analysis?

A rigid body in physics represents a body structure in which the distance between any two points on the body remains constant regardless of the physical forces performed on it [15]. The task of the rigid body kinematic analysis includes:
• finding the position
4.5 Translation representation and its transformation into translation velocity

The head distance values in MotionTracker have output of the following format:

****00001 - 00003****  15.49  250.92  2478.20
****00004 - 00006****  15.49  250.92  2478.20
****00007 - ****        0.00    0.00     0.00
****00009 ...

Figure 31 Example output of head distance values in MotionTracker

Suppose the head distance value is (tx1, ty1, tz1) in the i-th image and (tx2, ty2, tz2) in the (i+1)-th image; the translation velocity from the i-th image to the (i+1)-th image can be computed as follows:

V = sqrt((tx1 - tx2)^2 + (ty1 - ty2)^2 + (tz1 - tz2)^2) / 100 / (1 / FPS)   (m/s)

The result of the translation velocity calculation has the following pattern of output in MotionTracker:

translation speed
0.000000 0.000000 0.000000 316.414154 1.587922 40.235405 85.297241 3.200737 19.541864 51.495308 13.000901 0.000000 0.000000

5 Head rotation estimation and evaluation

5.1 Representation of head rotation and translation speed

The result of the head pose estimation of the boxing matches using a fixed focal length can be illustrated in the following tables.

Video 1
first frame | second frame | roll velocity (rad/s) | pitch velocity (rad/s) | yaw velocity (rad/s) | translational velocity (m/s)
0 | 1 | 4.2 | 2.0 | 2.8 | 8.7
1 | 2 | 1.8 | 2.5 | 0.4 | 7.8
2 | 3 | 4.8 | 0.6 | 2.5 | 9.1
3 | 4 | 0.0 | 0.0 | 0.0 | 0.0
4 | 5 | 0.0 | 0.0 | 0.0 | 0.0
5 | 6 | 1.9 | 23.6
had a depth difference of 300 mm. This should not be considered true in the real scene either. The level of inaccuracy is not acceptable from the applicability point of view.

We further observed that the instability of the Tz values is a common phenomenon among all the videos that were analysed. Since the head distance value has a direct impact on the translational velocity, we would like to regard the result of the translational velocity in Table 12 as a reference for future research only and abandon it for further discussion in this report [delimitation 5.1]. Despite that, we were not able to find unstable values or inaccuracy issues in the rotation velocity at this stage, so we would like to make the evaluation of the head pose values in Section 5.3.

We noticed a pattern in Table 12: there are a lot of values equal to zero. As noted in Section 4.3, there exist occluded points in the image sequence for which we are not able to mark the image points. Since an image point is assumed to have the same coordinate values as in the previous image if it is invisible, the velocity is zero in this case. These points will be ignored when making the data interpolation in Section 5.2.

From direct observation of the pitch velocity shown in Table 12, we found there are some incorrect assignments of the direction of velocity, after comparing with the supposed direction in the image sequence. The suspicious values are shown in Table 12 with a bold
rectangle around them. We believe this is a consequence of gimbal lock, since the incorrect values always take place when the head has a comparatively large pitch value relative to the camera. The evaluation part in Section 5.3 further proves and explains the existence of the gimbal lock issue. The table below shows one of these situations, when a struck boxer's face is hit. During this impact the head pitch velocity is expected to be positive, but the estimation given in MotionTracker is negative.

Image Sequence | Pitch Velocity
(values omitted)

5.2 The interpolation of the velocity data

In order to obtain a better representation of the result, the velocity data can be further smoothed using cubic spline data interpolation in Matlab. A Matlab function is created for this purpose:

function r = motiontrack(fps, datapath, granularity, titles, xlabels, ylabels, extent)
% MOTIONTRACK Given fps, datapath and granularity, compute the spline of the data.
%   fps is the frame rate of the video
%   datapath is the path of the data file
%   returns the array of the spline data using granularity
%   note: choose granularity larger than 100, choose extent 10 to 100
datas = load([datapath '.txt']);
switch titles
    case 'Head Rotation X'
        datas = datas(1,:);
    case 'Head Rotation Y'
        datas = datas(2,:);
    case 'Head Rotation Z'
        datas = datas(3,:);
end
t = 0 : 1/fps : (length(datas) - 1)/fps;
newt = t(1) : t(end)/granularity : t(end);
newdatas = spline(t, datas, newt);
4. Click the Posit tab and assign the frame rate of the image sequence.
5. In the Posit tab, assign the focal length of the camera.
6. Click Generate pose to get the result.

It is very important to get the frame rate right in this procedure, because it affects the order of magnitude of the result.

How to edit feature points in MotionTracker

Feature points, which are loaded from the feature point files in the image folder or marked in MotionTracker either automatically or manually, are editable in MotionTracker. To edit feature points:
1. Open MotionTracker.
2. Load the image folder.
3. Use the mouse wheel or the image slider in the MotionTracker panel. Traverse to the image whose feature points need to be edited.
4. Click the Image Point tab. After that you might perform the following operations.

To clear all the feature points in the current image:
1. Hold the Option key and click the mouse in the image viewer.

To translate all the feature points in the current image:
1. Click and hold the right mouse key, and drag the mouse.

To edit the position of a single feature point in the current image:
1. Click the point you want to edit; the point turns red after it is selected. You can perform one of the following operations:
   - use the arrow keys to move the selected feature point
   - use mouse dragging to move the selected feature point

To edit the position of multiple feature points in the current image:
1. Hold the Control key and
The following set of methods would be used to build such a motion analysis tool.

• Progressive scan deinterlacing
Using progressive scan video deinterlacing, the TV images can be deinterlaced before motion analysis.

• Cocoa Application Kit
In order to build a motion analysis tool, the user interface of the tool should be created, and the events coming from the mouse and keyboard should be handled. The Cocoa Application Kit framework provides a way to perform this job. The user interface in the tool enables the user to perform various functionalities by clicking buttons with the mouse. These functionalities include loading the images, applying box linear filters, performing dilation and erosion operations, and undertaking head pose analysis on the head object. Event handling in the tool enables the user to handle the mouse and keyboard events coming from the window server in Mac OS X. The user might load the previous and next image in the image sequence by sending a mouse wheel event. The Cocoa Application Kit makes this process possible and easier.

• C++ standard template library
This project makes extensive use of the standard libraries in C++. The most noticeable one is the usage of the std::vector class for high-performance push_back and referencing operations on the feature points in the image.

• Drag and drop operation
The drag and drop operation is a facility in the Cocoa library in Mac OS X that enables us to load images into the image analysis tool. Specifically spe
OpenCV provides low-pass filters, such as the Gaussian and median filters, which may be used to smooth the image so that the noise level of the image can be reduced.
• Binary image operations such as image thresholding, adaptive thresholding and the Canny operator are useful ways to extract edge and contour information from the image. After adopting image thresholding on the human head surfaces, for example, it is easier to locate the eye center. High-level features such as line segments and contour trees can be extracted directly using the Hough transformation and the Suzuki contour detector in OpenCV.
• OpenCV provides pose estimation algorithms that enable the estimation of head motion. It is natural to combine the feature extraction and the motion tracking together in this tool.

It is beneficial to combine the aforementioned features into a single toolbox. Motivated by the above statements, a tool called MotionTracker was developed, implementing these features using OpenCV. It was developed under Mac OS X using Xcode with the following motivations:
• There is a rich set of frameworks related to graphics under Mac OS X, which makes this platform a very good environment for image processing.
• It is hard to find an OpenCV image processing tool that combines head motion tracking under Mac OS X and is preferably easy to use.

(MotionTracker interface panel, with a "Drag images to the Panel" area and Thresholding, Blurring and Contours operation controls)
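As an illustration of how these building blocks chain together, the following is a minimal sketch using the OpenCV C++ interface (GaussianBlur, threshold, Canny, findContours); the file name and all parameter values are assumptions for demonstration:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Load one image of the exported sequence (hypothetical file name).
    cv::Mat image = cv::imread("HoTr_0000.png", 0);  // 0 = load as grayscale
    if (image.empty()) return 1;

    // Low-pass filtering to reduce the noise level.
    cv::Mat blurred;
    cv::GaussianBlur(image, blurred, cv::Size(5, 5), 1.5);

    // Binary image operations: thresholding and the Canny edge detector.
    cv::Mat binary, edges;
    cv::threshold(blurred, binary, 128, 255, cv::THRESH_BINARY);
    cv::Canny(blurred, edges, 50, 150);

    // High-level features: contour sequences from the binary image.
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    return 0;
}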
in the image. The difference in the result using different image points as the input is noticeable, even if the image points are adjacent to each other in pixels. The sensitivity of the result to the image points can be found not only in the yaw, pitch and roll of the head pose values, but more obviously when we look at the value of Tz of the output. This observation implies that the Posit algorithm is not a very suitable method for the calculation of object depth information using small and noisy pictures. See Section 5.1.

Delimitation 5.2 The evaluation undertaken in Chapter 5 uses a database which is based on high-quality, high-resolution images. This evaluation demonstrates the accuracy of the Posit algorithm without considering the yaw variations in the images, because of the lack of yaw variation in the database itself.

6.2 Summary and future studies

In this study, a computer program was developed to extract the head kinematic information from a set of boxing match videos. Specifically speaking, the head distance values and the head pose values of the struck boxers' heads are obtained once the model points and the image points are assigned. The database, which contains 25 videos of knock-out boxing matches, is analyzed. Methods for the preprocessing of the videos using the progressive scan video deinterlacing technique are carried out to make the research material more suitable for video analysis. In this project the Posit algorithm is used to extract the head motion
The estimated error for the roll angles tends to fluctuate around a value. The value is large (above 15 degrees) when the real roll values are around 90, 60, 300 and 270 degrees, medium (10 to 15 degrees) when they are around 30 and 330 degrees, and small (below 10 degrees) when they are around 0, 15 and 345 degrees. For each of the roll value sets, the estimated yaw error is normally smaller than 5 degrees, except in some cases when the real pitch values come close to 90 or -90 degrees.

After overall consideration, the important result that can be reflected from the evaluation is that the head pose estimation in MotionTracker results in:
• Good estimation when the real roll value of the head relative to the camera lies between -15 and 15 degrees and the real pitch value of the head relative to the camera lies between -30 and 30 degrees, when both of them are converted to the sequential domain.
• Acceptable estimation when the real roll value of the head relative to the camera lies between -30 and 30 degrees and the real pitch value of the head relative to the camera lies between -45 and 45 degrees, when both of them are converted to the sequential domain.
• Unacceptable estimation in the other domains.

Words such as good and acceptable mainly serve as a recommendation for the usage scenario of MotionTracker.

From the information we could obtain, the sources of errors could be:
• The input of the algorithm, namely the errors in the M array, I array
complementParity command in the Avisynth script file; this changes how Avisynth interprets the order of the fields:

AVISource("FileName")
ComplementParity()
SeparateFields()

(Filter settings: Deinterlace - smooth; color code; Blend instead of interpolate; Alternate field order; Interlace threshold 24; Edge detect 20; Static threshold 35; Static averaging 80; Log frames with > 0 % interlaced area)
Figure 10 Deinterlace Smooth filter control panel, which is used to fine-control the video deinterlacing process

• By adjusting the Saturation value, the chromatic regions of the image can be boosted. The chromatic regions can serve as hints of the possible existence of contours or skeletons of objects. To perform this operation, the HSV filter is added and the saturation parameter of this filter is adjusted to a larger value, for example 200 %.

Figure 12 Image before and after the deinterlacing and color boosting operation

• Because of the complex environment of the image capturing devices, some videos might contain both interlaced and non-interlaced frames. For the non-interlaced frames, both the field separation process and the deinterlacing process described above produce duplicate sequential images. Before the image sequence can be exported, a visual introspection of the deinterlaced image frames should be performed.
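This introspection can also be assisted programmatically. The following is a minimal C++ sketch, under the assumptions of grayscale frames and an arbitrarily chosen difference threshold, that flags near-identical consecutive frames as duplicate candidates:

#include <opencv2/opencv.hpp>

// Flag a pair of consecutive exported frames as duplicate candidates when
// almost no pixel differs; such pairs appear where non-interlaced frames
// were split into two identical fields.
bool isDuplicateCandidate(const cv::Mat& a, const cv::Mat& b)
{
    cv::Mat diff;
    cv::absdiff(a, b, diff);                    // per-pixel absolute difference
    int changed = cv::countNonZero(diff > 10);  // pixels differing by more than 10 levels
    double ratio = (double)changed / (diff.rows * diff.cols);
    return ratio < 0.001;                       // assumed threshold: below 0.1 percent
}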
into translation velocity
5 Head rotation estimation and evaluation
5.1 Representation of head rotation and translation speed
5.2 The interpolation of the velocity data
5.3 Accuracy from real pose: an evaluation
5.4 Summary of requirements and applicability of the method
6 Delimitation and conclusion
6.1 Delimitation
6.2 Summary and future studies
Appendix I OpenCV focal length prediction using the Posit algorithm: hints
Appendix II The creation of the simplified head model using MotionTracker
Appendix III MotionTracker User Manual
References

Figures

Figure 1 Example of boxing match image sequence
Figure 2 The calibration rig (left) and human joints should be created and assigned for Skillspector motion analysis
Figure 3 Result of Skillspector shows the 3D head acceleration with respect to time in radians per second
Figure 4 Head pose can be estimated using only seven image points using the solvePnP function
Figure 5 Given an image on the left and a template image in the middle, the template method can be used to find
in the right ear model anyway. The effect of the occluded points is discussed in Section 4.3.

Delimitation 4.2 The occluded feature points in the image boost the difficulties in the selection of the image points. As discussed in Section 4.3, we assume in this project that if there is an unacceptable number of occluded feature points among those we are going to select, the image is discarded for the computation of the head kinematics. The error imposed by this should not be underestimated, because some pieces of information are lost when we discard an image.

Delimitation 4.3 Look at the yaw value palette on the right, and suppose the yaw value of the head is 45 degrees in the first image and 345 degrees in the second image. The head rotation is determined by finding the smallest possible rotation that would make the head rotate from the first pose to the second pose. The picture on the right demonstrates that the rotation of the yaw angle from 45 degrees to 345 degrees is a counter-clockwise rotation, and a negative value would be assigned as the rotation quantity. Because of the heuristic nature of this determination of the rotation direction, this is considered a delimitation in this project.

Figure 33 The rotation of the yaw value from 45 degrees to 345 degrees

Delimitation 5.1 The result of the Posit algorithm is sensitive to the input of the image points. In this project a pixel-based method is used for marking the image points
in the structure like the picture below.

(Xcode property list editor, LeftEar.plist)
Key: Left Ear Model, Type: Array, 4 items
Item 0: Dictionary, 3 items: x = 0, y = 0, z = 1
Item 1: Dictionary, 3 items: x = 31.1, y = 33.1, z = 45.9
Item 2: Dictionary, 3 items: x = 31.2, y = 33, z = 45.8
Item 3: Dictionary, 3 items: x = 71.2, y = 4.4, z = 122.4

Figure 19 Saving of the head model into property list files using Xcode

The structure of the property list in Mac OS X is tree-based. The root item, the model we are creating, has the type NSArray. The items in this array contain a dictionary with the keys "x", "y" and "z", representing the three coordinates of the model points. In the picture above, for example, the name of the model is the Left Ear Model. It has four model points, in the order 0, 1, 2, 3. They have the coordinates (0, 0, 1), (31.1, 33.1, 45.9), (31.2, 33, 45.8) and (71.2, 4.4, 122.4). As described in Section 3.2, the unit of the model points is the millimeter.

To perform the Posit algorithm in MotionTracker, for each of the models created from Appendix II, the property list representing that model should be created.

3.5 Fast pose estimation using ear models

If a set of model points of an o
Furthermore, since the model points are initially unit-less, the values of the model point coordinates should be scaled to the unit of millimeters. In this paper the left ear to the right ear is assumed to have a length of 150 mm for the scaling. When these processes are done, the x and y coordinates of the head structure can be represented as:

(x and y values per model key omitted)
Table 18 x and y components of the left ear model

That is two thirds of the story of the model creation process. In order to obtain the z coordinates of the internal structure of the head, the side view of the face should be loaded into MotionTracker. The side view of the face is defined as the head image which has a head pose pitch value equal to 90 degrees and 0 for the other two values.

Figure 35 Side view of the sample person, selected with the image points that are used to construct the head model

Similar to the front view case, we pick up the points on the side view of the head with respect to the model keys. The picture above shows the interface of MotionTracker before and after the selection of the model points in the side view. We should pick up the model points of the nose, the left eye and the left ear, the same as in the front view case. Likewise, the model points must also be shifted and scaled for a correct representation of the data. The z coordinates of the head structure can be represented as:

(z values per model key omitted)
Table 19 z component of the left ear model

The final head model coordinates can be created by combining the values shown in Table 18 and Table 19, as well
and the focal length assignment. We have done most of the work in this project to make these more accurate.
• The gimbal lock effect, from which the accuracy of the Euler angles inevitably suffers [19]. The evaluation part demonstrates that there are comparatively larger errors of estimation when the real roll and pitch angles are large. This is the phenomenon that results when gimbal lock is taking effect. It could also be related to the person's real pose errors in the database itself.
• The inaccuracy from the Posit algorithm.

5.4 Summary of requirements and applicability of the method

The summary of the requirements and the applicability of the method become clear and suitable for discussion after the evaluation in the previous section.

Summary of requirements for the analysis

Firstly, the following set of tools should be prepared for the analysis:
• MotionTracker
• VirtualDub
• DivX
• AviSynth

Research material:
• We need to obtain a video containing the head object and head motion.
• Make sure that the internal depth of the head object is small compared to its distance from the camera.

To perform the Posit algorithm:
• We need to provide the estimated focal length, or a focal length which is large enough so that its value has a tiny impact on the output head pose values.
• We need to provide the head model of the person in the video.
• We need to provide the image points of the person in the video.
contains a two-dimensional array of brightness information. The array is a mapping from a point P in the space Ω to an intensity value, denoted by I(x, y). The space Ω is called the domain, or size, of the image. Since the image size in a movie is usually fixed, we can also call Ω the resolution of the movie.

Q: What is the objective of video capture and image preprocessing?

Video footages of different sports activities are fine materials for motion analysis. These footages have different resolutions, noise levels and frame rates.

This project makes an analysis of a database which contains a set of boxing match videos that were captured in different locations and at different times. The content of these videos contains the knock-out hits between two boxers. The head concussions and the impacts on the head object during the hits are what we are going to analyze.

Before the image analysis of motions can be carried out, the image sequence must be captured from the sports video, and the quality of the images must be high enough for the motion analysis. These two preparation steps are called the video capture and image preprocessing processes. The objectives of these processes can be further described as follows:

• The goal of video capture in this project is to obtain the image sequence from videos of different formats, deinterlace the TV image if necessary, and try to improve the image quality during image deinterlacing.
In the following picture, for example, the cat in the middle is given as the template image. The picture on the left is the image we are going to search using the template image. The pattern of the template image is found in the picture on the right side.

Figure 5 Given an image on the left and a template image in the middle, the template method can be used to find the pattern in the image that is closest to the template image used (from http://en.wikipedia.org/wiki/Template_matching)

The video deinterlacing technology is very important for TV image enhancement. Progressive scan [12] is a good method of video deinterlacing for TV images. The majority of the images and videos in this project do require video deinterlacing, since they are captured using PAL/NTSC camera recorders. The picture below shows an instance where the image on the left is interlaced and the one on the right is deinterlaced.

Figure 6 Image deinterlacing should be used to improve the image quality of the TV images; by the transition from the left image to the right, we provide a better image for motion analysis (courtesy of www.100fps.com)

1.4 Methodology

In this paper a motion analysis tool is created for the head kinematic analysis on the Macintosh platform.
The following picture illustrates the transition of these two coordinate systems.

Table 5 Rotation and translation from OCS to CCS

Q: How is the R-mat related to the head poses?

When the R-mat is decomposed into three ordered and separate rotations around the three principal axes, it has a fixed representation. Consider the OCS that is rotated by α radians counter-clockwise around the z axis, β radians counter-clockwise around the y axis, and γ radians counter-clockwise around the x axis; the R-mat has the form of

R_{ij} = R_z(\alpha)\,R_y(\beta)\,R_x(\gamma) =
\begin{pmatrix}
\cos\alpha\cos\beta & \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma & \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \\
\sin\alpha\cos\beta & \sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma \\
-\sin\beta & \cos\beta\sin\gamma & \cos\beta\cos\gamma
\end{pmatrix}

The R-mat that is represented after performing the rotation of the axes in the order of the Z, Y and X axes is called the R-mat in Z-Y-X convention. When different orders are used, the representation of the R-mat has to change. The Z-Y-X convention is used in this project.

The α, β and γ values described above can also be called the yaw, pitch and roll angles of the head pose, which are traditionally defined as the head pose values. Yaw, pitch and roll can be depicted in the form of head motion in the following picture:

Figure 18 Yaw, pitch and roll in the form of head rotation (courtesy of [17])

A process for calculating the head pose values from the R-mat in Z-Y-X convention has been given in [18].
A brief code piece in Objective-C can be described as:

if (R31 != 1 && R31 != -1)
    b1 = -asin(R31);    b2 = M_PI - b1;
    g1 = atan2(R32 / cos(b1), R33 / cos(b1));
    g2 = atan2(R32 / cos(b2), R33 / cos(b2));
    a1 = atan2(R21 / cos(b1), R11 / cos(b1));
    a2 = atan2(R21 / cos(b2), R11 / cos(b2));

Notice that there are two possible combinations of the head poses, (a1, b1, g1) and (a2, b2, g2); they represent the same head pose but with different directions of rotation around the y axis. The first combination is used in this project for clarity and simplicity. To avoid gimbal lock [19], the value of R31 is constrained to not be equal to 1 or -1, which means the pitch value is defined to not be equal to 90 or -90 degrees.

Q: How is the T-mat related to the head poses?

The T-mat is applied to the origin of OCS after its rotation. The 3-by-1 T-mat has the form of

T = (Tx, Ty, Tz)^T = vector OC

Define the origin of CCS as O and the origin of OCS as C; the T-mat represents the vector OC. Tx, Ty and Tz are the translational components of the origin along the 3 principal axes of the camera coordinate system. The T-mat illustrates how far the head object is from the camera. For example, when the head is moving towards the center of the image space, Tx and Ty get closer to 0. When the head is getting nearer to the camera, the value of Tz is expected to shrink, since the T-mat describes distance information.
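The same decomposition can be written compactly; the following is a minimal C++ sketch following the pseudo code above, returning only the first of the two combinations (it assumes a row-major 3x3 matrix and that R31 is not +1 or -1):

#include <cmath>

// Extract the Z-Y-X Euler angles (alpha = yaw, beta = pitch, gamma = roll)
// from a row-major 3x3 rotation matrix R. Indices: R[0] = R11, R[3] = R21,
// R[6] = R31, R[7] = R32, R[8] = R33.
void eulerFromRotation(const float R[9], double& alpha, double& beta, double& gamma)
{
    beta  = -std::asin(R[6]);                 // beta from R31
    double c = std::cos(beta);
    gamma = std::atan2(R[7] / c, R[8] / c);   // gamma from R32, R33
    alpha = std::atan2(R[3] / c, R[0] / c);   // alpha from R21, R11
}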
the TV images could fail to provide enough resolution for the detection of the human eye focus, as is the case for the footages in this project. This situation makes the methods of this category impractical in this project.

The Projection from Orthography and Scaling with ITerations (Posit) algorithm in the OpenCV library is another head pose estimation method, which is based on feature selection. The inputs and output of the algorithm make it suitable for the head pose estimation in this project.

To use the Posit algorithm, two input arrays are needed. One of them is a pre-defined 3D model of the head; this is described with a vector of the head model points in OCS, denoted by M. The other is the one-to-one mapped image points in CCS, denoted by I.

From the point of view of the computer language, the M and I vectors can be defined using the class NSDictionary in the AppKit framework in Mac OS X. The keys of these two dictionaries are descriptions of the points, whereas the objects of these dictionaries are the coordinate values of the points corresponding to the keys. One example of the M and I arrays can be described by the following tables:

Model Keys | Model Objects
Left Eye | (12, 0, 13)
Right Eye | (12, 0, 13)
Table 3 Model Point Dictionary

Image Keys | Image Objects
Left Eye | (87, 28)
Right Eye | (34, 32)
Table 4 Image Point Dictionary

Apart from the M and I arrays, the camera focal length should also be determined in order to use the Posit algorithm.
their coordinates are measured in OCS are called the model points of the 3D head structure, while the same points, when their coordinates are measured in CCS, are called the image points of that structure. The mapping between these two sets of points can be illustrated in the following picture:

Figure 16 Mapping between the model points in OCS on the left and the image points in RCS on the right

On the right side, the image points in the CCS reveal the physical information of the head object, while the model points in the OCS on the left side define the structure of it.

Q: What is head pose estimation? How is it related to the head kinematic analysis?

Head pose estimation is the process of inferring the position and orientation of the head object after analyzing the head point coordinate values in OCS and CCS. The result of the head pose estimation is the rotation matrix (R-mat) and the translation matrix (T-mat). The rotation matrix and the translation matrix have two functionalities:
• For a point in space, they transform its coordinate values in OCS into the corresponding values in CCS.
• When the rotation matrix is applied to the axes of OCS and the translation matrix is applied to the origin of OCS, they transfer OCS to CCS (see the picture below). That is also to say, the T-mat and R-mat define the position and orientation of OCS relative to CCS.

Since OCS holds the head object's kinematic property in the 3D scene,
information from these videos. Various techniques are used to prepare the input of the Posit algorithm, namely the image points, the model points and the focal length. A motion analysis software is created to load the created image points and model points into the software automatically. The Posit algorithm can be carried out in this software, and the output can be presented in the logger of the software for further analysis and evaluation. The simplified model creation and the automatic feature point matching make the selection of the image points easier. Normally it takes less than five minutes to make one video clip ready for motion analysis.

In order to smooth the data representation, the interpolation of the velocity data is undertaken in Matlab. This process also enables us to produce useful peak velocity values that can be used to make a comparison with the same quantities in the related studies.

In this study, compared to the one it is based on [7], there are several minor improvements:
• the selection of image points is made easier by the template method in image processing;
• the video deinterlacing is performed using VirtualDub;
• the angular velocity is represented along 3 separate axes, taking advantage of the algorithm;
• an evaluation is shown that identifies the accuracy of this method.

The comparable results of the study in this project are shown in Table 14.
project combines these functionalities into a single unified user interface which the analyser can use for different purposes.

• OpenCV algorithms
The Posit algorithm in OpenCV extracts the position and orientation of the object in the image after the image points and model points in the image have been assigned. The rotation and translation matrices, as the output of the Posit algorithm, reveal the head kinematic information of the object. The velocity of the object can also be captured from that piece of information. The Posit algorithm in OpenCV is the key technology used in this project; the technical details are described in the following chapters.

• Matlab spline data interpolation
Matlab gives us a set of tools for the manipulation of the data obtained from the computer vision library. It enables developers to create a function with the input and output we desire. It provides functions to load data files from the file system, either row-wise or column-wise. New variables can be created in Matlab easily, and when they are passed to the appropriate function, the output can be shown instantly to the user. In this project the spline function in Matlab is used for the spline data interpolation of the velocity data.

• Mac OS X text system
In order to provide feedback information to the user, a logging system is created in the tool. The logging system takes full advantage of a formatting syntax similar to that of the printf function in the C library.
Table 1 Listing of boxing match movies to be analyzed

2.3 Video deinterlacing, boosting and image sequence export using VirtualDub

Q: What is the quality of the video in this project? Why is the video interlaced? What is the property of the interlaced video? Why is video deinterlacing important?

After a visual inspection of the research videos, some of them have patterns that are characteristic of interlaced videos. The interlaced videos are field-based videos, which are usually captured with a PAL or NTSC camera recorder. Each of the frames it captures contains two fields, taken consecutively at times t1 and t2. The first field, taken at t1, constructs the even lines of the frame, while the second field, taken at t2, constructs the odd lines of the frame. Considering the ideal case, where the two fields of a frame are captured at the same time interval, the following equation holds for each of the frames in the movie:

t2 - t1 = 1 / (2 x FPS)

This equation reveals that the field-based counterpart of the frame-based interlaced video doubles the frame rate of the original video in this ideal situation. The interlaced video displayed on the MacBook (5,3) shows an interlaced pattern like the following:

Figure 8 AVI files taken from a PAL or NTSC camcorder, which show an interlaced pattern at odd and even lines

It
78. s newt r newdatas switch titles case Head Rotation X save datapath X newdatas figure_n 1 case Head Rotation Y save datapath Y newdatas figuren 2 case Head Rotation Z save datapath Z newdatas 44 figuren 3 end plot figure figure_n plot t datas o newt newdatas title titles xlabel xlabels ylabel ylabels axis t 1 t end min min newdatas min datas extent max max newdatas max datas extent saveas figure n datapath titles png end Firstly the data file containing the velocity data with respect to the x y z axis is created in a single file This file has the format of 45 87 23 29 9 09 4 01 19 18 8 73 5 73 7 26 11 56 17 60 12 20 14 48 19 06 6 67 7 13 4 37 8 94 20 94 The first row records roll angular velocity The second row records the pitch angular velocity The third row records the yaw angular velocity In the next step the data file 1s loaded 1nto Matlab the FPS and granularity of the output graph 1s assigned The spline function as shown below has three parameters The first parameter 1s the time frame of the velocity function This is calculated using the FPS assigned Datas are velocity val ues indicated in the velocity data file The interpolant 1s assigned as the third parameter of this func tion which defines the time frame of velocity function it wants to interpolate into
79. s KOs 03 Hopkins vs Trinidad Im1D HoTr_0000 png Figure 22 Image screen in MotionTracker demonstrates an image in the image sequence of the video The title of the image shows the path of the image folder in the file system The line immediately below the title shows the name of the image file When the user scrolls the mouse up and down MotionTracker traverses among the image sequence sequentially and shows the previous or the next image in the image sequence The traverse of the image sequence can also be achieved through the image slider in the MotionTracker panel Drag images to the Panel 31 Figure 23 Image slider in MotionTracker The selection of the image points are performed by marking the key object of the M array sequen tially on the image screen When the left ear model is used for instance the image points are marked in the order of nose left eye right eye and finally the left ear When the mouse 15 hovering over the image screen a cursor indicating the position of the marker on the image 1s shown clearly the cursor 1s used to help the user select the point on the image Users zhaoliyi tracking Matches Analysis KOs 03 Hopkins vs Ho Tr 0012 png i lom4 zc Figure 24 Mouse cursor represents where the image point is going to be selected When we are selecting the nose point in 1mage for instance we hover the mouse onto the nose of the image and alt click the mouse The point sele
The same data set with a different assignment of frame rate would result in different peak values than stated in the table. With a low value of FPS it is generally impossible to detect the peaks in the measured quantities, such as rotational acceleration and velocity. This leads to very distorted results. This observation tells us that it would be more appropriate to use videos with a high frame rate for the spline function. However, the inaccuracy resulting from a low frame rate is not measured in this report.

In order to achieve a more intuitive understanding of the results, the L2 norm of the velocity data with respect to the rotation around the three principal axes is further captured. The maximum value of the L2 norm of the velocity along the timeframe is calculated for each of the videos. This procedure also enables us to make a comparison with the previous study [7], which aims at obtaining the same motion quantities. A Matlab function is created for this purpose:

function r = calcnorm(datapath)
% CALCNORM Compute the L2 norm of the velocity.
%   datapath is the path of the data file
%   returns the L2 norm of the velocity along the x, y, z axes
dataX = load([datapath 'X.mat']);
dataY = load([datapath 'Y.mat']);
dataZ = load([datapath 'Z.mat']);
datas = [dataX.newdatas; dataY.newdatas; dataZ.newdatas];
[rows, cols] = size(datas);
for m = 1:cols
    r(m) = norm(datas(:, m));
end
end

The peak L2 norm of the head rotation velo
these death numbers are related to TBI. WHO has also projected that by 2020 traffic accidents will be the third greatest cause of the global burden of disease and injury. It can be estimated that a lot of them will be TBI cases. Furthermore, TBI was also the most common cause of disability and death for young people in the USA in 2001.

According to a report from the Department of Defense in the USA, TBI can be divided into three main categories in terms of severity: mild TBI, moderate TBI and severe TBI. The severity can also be measured by the level of the Glasgow coma scale, post-traumatic amnesia and loss of consciousness [4].

TBI can also be classified in terms of the mechanism that caused it, namely closed TBI and penetrating TBI. In sports matches, most TBI injuries are closed cerebral injuries, that is, caused in the form of a direct impact on the head. It has been shown that the type, direction, intensity and duration of the forces all contribute to the characteristics and severity of TBI. To reduce the possibility of injuries and to find new head protective measures, the mechanics of the impacts during concussion is therefore well worth studying.

In this report, research is carried out in an attempt to capture the head kinematic information through the analysis of a set of sports match videos from television which contain severe or mild level head concussions. The main objective of this project is to find the head kinemati
The method of the introspection process is to eliminate these duplicate images manually.

How to export the image sequences from the footage:
• The image sequence of the deinterlaced video can be exported after introspection. Click the Export menu item in the File menu of VirtualDub.
• Name the files in the pattern NAME_ABCD, where NAME is a four-letter sequence from A to Z identifying the content of the video, and ABCD is a 4-digit number sequence from 0000 to 9999 representing the order of the frames in the image sequence.

2.4 Image preprocessing using OpenCV

(Footnote 2: It should be noted that choosing the interesting portion of the movies is important, since what we are interested in analysing is usually a small portion of the whole movie. The trimming of the movie can be done in VirtualDub but is not described in this report.)

Q: After the image sequence has been obtained from the video, why further improve the image quality? How to further improve the images in the image sequence? How to extract the useful image features from the images?

When an image sequence with good quality has been captured from the video, it is necessary to perform image processing operations on the images. There are several reasons for this:

• There should be a way to load image sequences into memory and traverse the contents of the image sequence easily by sliding the mouse wheel back and forth.
• Some of the videos in the database have a high level of noise.
To obtain the model point coordinates, we first try to load the front view of the sample person.

Figure 34 Front view of the sample person, selected with the image points that are used to construct the head model

The front view of the head is defined as the head image which has the head pose values all equal to zero with respect to the camera. The front view of the head tells us about the x and y coordinates of the internal structure of the head. If we pick up the points on the front view of the head with respect to the model keys, the x and y coordinates of the model keys can be calculated and obtained.

In this project we regard the sample person's head as symmetric, which means we only have to pick up half of the points on the front view of the head to create the entire head model, namely the nose, the left eye and the left ear.

In the above picture, for example, the front view of the head is loaded on the left-hand side. We pick up the model points in the image plane with respect to the order of the model keys. When that is done, we see the picture on the right-hand side. The mark 0 on the picture represents the nose point, 1 represents the left eye, and 2 represents the left ear.

In order to better represent the head structure, the nose point is selected to be located at the origin of OCS. We shift all the points in the model so that the nose point has the coordinate value (0, 0).
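A minimal sketch of this shift (and of the millimeter scaling described next); the helper names are illustrative, and the 150 mm ear-to-ear head breadth of Appendix II is used as the scaling assumption:

#include <cmath>

struct Point2 { double x, y; };

// Shift a picked front-view pixel so that the nose becomes the OCS origin,
// then scale the unit-less pixel values to millimeters using the assumed
// 150 mm ear-to-ear head breadth.
Point2 frontViewModelPoint(Point2 picked, Point2 nose, double earDistancePixels)
{
    double scale = 150.0 / earDistancePixels;  // millimeters per pixel
    Point2 p;
    p.x = (picked.x - nose.x) * scale;         // the nose maps to (0, 0)
    p.y = (picked.y - nose.y) * scale;
    return p;
}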
button and see the result in the image viewer. Make sure the function is activated by clicking the check box beside the button (Show Blurred Image).

References

1. World Health Organization. Annex Table 2: Deaths by cause, sex and mortality stratum in WHO regions, estimates for 2002. The World Health Report, 2004.
2. Andrew I.R. Maas, Nino Stocchetti, Ross Bullock. Moderate and severe traumatic brain injury in adults. Volume 7, Issue 8, 2008, pp. 728-741.
3. Simon R. Finfer, Jeremy Cohen. Severe traumatic brain injury. Resuscitation, Volume 48, Issue 1, 2001, pp. 77-90.
4. Department of Defense and Department of Veterans Affairs. Traumatic Brain Injury Task Force. http://www.cdc.gov/nchs/data/icd9/Sep08TBI.pdf, 2008.
5. J. Ghajar. Traumatic brain injury. Lancet, Issue 356, 2000, pp. 923-929.
6. Yi Ma, Stefano Soatto, Jana Kosecka, Shankar S. Sastry. An Invitation to 3-D Vision, Chapter 1. Springer, 2004.
7. Enrico Pellegrini. Kinematic evaluation of traumatic brain injuries in boxing. 2011.
8. Daniel F. DeMenthon, Larry S. Davis. Model-Based Object Pose in 25 Lines of Code. Computer Vision Laboratory, University of Maryland.
9. Fischler, M.A. and R.C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Comm. ACM, vol. 24, 1981, pp. 381-395.
extracted using a similar pattern to Enrico's method. Daniel named this algorithm the Posit algorithm. The similar pattern means that the kinematic information of the head object can be obtained by finding the correspondence between the image points and the model points of the object. This conception was first described, and coined with the term Perspective-n-Point problem (PnP problem), by Fischler [9]. There are also technical reports which inspired the way we are going to refine the model in the Posit algorithm: the head model can be simplified to 4 points, which makes Posit an intuitive and feasible method for head kinematic analysis.

In the pictures below, for example, the human figure on the right is assigned only seven feature points on the head. The head pose of this person can be extracted in the picture on the left using the Posit algorithm.

Figure 4 Head pose can be estimated using only seven image points using the solvePnP function (courtesy of http://www.morethantechnical.com/2010/03/19)

The selection of points should be made easier when using PnP-related algorithms. The process of finding the feature points between images in an automatic way is then crucial. Template matching in computer vision, which is described in [10], is a nice tool for searching for template image features in images. Template matching can be used in PnP algorithms to search for image points that are close to each other in several sequential images of the video.
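As an illustration of the template matching referred to here, the following is a minimal sketch using OpenCV's matchTemplate and minMaxLoc; the file names are hypothetical:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat image = cv::imread("frame.png", 0);     // search image, grayscale
    cv::Mat templ = cv::imread("template.png", 0);  // template patch around a feature point

    // Slide the template over the image and score every position.
    cv::Mat result;
    cv::matchTemplate(image, templ, result, cv::TM_CCOEFF_NORMED);

    // The best match is the location with the highest score; maxLoc is the
    // top-left corner of the pattern in the image closest to the template.
    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);
    return 0;
}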
86. ue of the focal length is used to do the motion analysis for boxing matches (delimitation 3.2). A heuristic method that enables us to predict the input focal length of the algorithm is created, but due to delimitation 3.2 this method is not used in the motion analysis process, only in the evaluation process of this project. The focal length prediction is described in Appendix I.

After the Posit algorithm has done its work, the output is one 3-by-3 R mat and one 3-by-1 T mat. The discussion above tells us that these matrices reveal the location and orientation of the head object relative to the camera. The R mat and T mat are denoted R_ij and T_i, where i and j represent the row and column indices respectively, with 1 <= i <= 3 and 1 <= j <= 3.

Q: How can the object coordinate system and the camera coordinate system be transformed by the R mat and T mat?

The transition from OCS to CCS is ordered. It is performed first by rotating the 3 principal axes of the OCS so that each of its three axes becomes parallel to the corresponding principal axis of the CCS, followed by translating the origin of the OCS to the origin of the CCS. It is very important to notice that the image points (CCS) have the unit of image pixels, while the model points (OCS) have no initial units. When defining the model points, the actual scaling of the OCS is chosen by the user; in this project the millimeter is used as the unit for the OCS. The following picture illustrates the transition
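In symbols, a model point Xo expressed in the OCS maps to Xc = R * Xo + T in the CCS: the rotation is applied first, then the translation. A minimal sketch with OpenCV matrices (the function name is illustrative):

// Minimal sketch: applying the R mat and T mat from Posit/solvePnP to move
// one model point from the object coordinate system (OCS) into the camera
// coordinate system (CCS): Xc = R * Xo + T.
#include <opencv2/core/core.hpp>

cv::Mat ocsToCcs(const cv::Mat& R,      // 3x3 rotation, CV_64F
                 const cv::Mat& T,      // 3x1 translation, CV_64F
                 const cv::Mat& Xo)     // 3x1 model point in OCS (mm)
{
    return R * Xo + T;                  // 3x1 point in CCS, same unit as T
}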
87. [11] Roy. Quick and easy head pose estimation with OpenCV. http://www.morethantechnical.com/2010/03/19/quick-and-easy-head-pose-estimation-with-opencv-w-code/
[12] What is Deinterlacing? Facts, solutions, examples. http://www.100fps.com
[13] Gunnar Thalin. Deinterlace smooth. http://www.guthspot.se/video/deinterlacesmooth
[14] S. Suzuki and K. Abe. Topological structural analysis of digitized binary images by border following. Computer Vision, Graphics, and Image Processing, 30, 1985, pp. 32-46.
[15] Wikipedia. Rigid body. http://en.wikipedia.org/wiki/Rigid_body
[16] Gary Bradski, Adrian Kaehler. Learning OpenCV, Chapter 12. O'Reilly, ISBN 978-7-302-20993-5, 2009.
[17] Erik Murphy-Chutorian. Head Pose Estimation in Computer Vision: A Survey. Pattern Analysis and Machine Intelligence, Volume 31, Issue 4, pp. 607-626.
[18] Gregory G. Slabaugh. Computing Euler angles from a rotation matrix. http://www.gregslabaugh.name/publications, 1999.
[19] Wikipedia. Gimbal lock. http://en.wikipedia.org/wiki/Gimbal_lock
[20] Nicolas Gourier, Daniela Hall, James L. Crowley. Estimating Face Orientation from Robust Detection of Salient Facial Structures. Proceedings of Pointing 2004, ICPR International Workshop on Visual Observation of Deictic Gestures, Cambridge.
[21] Stephen Pheasant. Bodyspace: Anthropometry, Ergonomics and the Design of Work. Taylor & Francis Ltd, ISBN 978-0-748-40
88. wen the result of automatic feature selection is not satisfactory, feature points still need to be edited in a manual fashion after the automatic feature point selection.

How to estimate the focal length of the camera

There are several ways to provide the focal length information for the motion analysis in MotionTracker:
- The focal length is already known for use in the OpenCV posit function.
- Use a focal length which is large enough that it has only a tiny impact on the result; a reference value is 2000 pixels.
- Make an estimation of the focal length using the following procedure (a sketch of one possible implementation follows after these instructions):
  1. Prepare the image folder. The image folder should contain:
     o Front view of the analyzed object
     o Model plist file of the analyzed object
  2. Load the image folder into MotionTracker.
  3. Mark the model keys according to the model plist file for the front view of the analyzed object.
  4. Assign in the panel the supposed distance from the viewer to the analyzed object.
  5. Click the Estimate Posit Focal Length button; the estimated focal length is shown in the focal length text input box in the panel.

How to undertake motion analysis in MotionTracker

MotionTracker is a motion analysis tool. The process of the motion analysis contains the following steps. To perform motion analysis in MotionTracker:
1. Open MotionTracker if it is not already running.
2. Load the image folder.
3. Click the Image Point tab and mark feature points for every image in the image sequence. C
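The internals of the Estimate Posit Focal Length button are not documented here. The following is a hedged sketch of one way the heuristic could work, assuming the near-linear relationship between the input focal length and the recovered head distance Tz noted earlier: scan candidate focal lengths and keep the one whose solvePnP translation best matches the user-assigned distance. The function name, scan range, and step size are all assumptions.

// Hedged sketch (not MotionTracker's actual code): estimate the focal
// length by scanning candidate values and keeping the one whose recovered
// head distance Tz best matches the distance supplied by the user.
#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <cmath>
#include <vector>

double estimateFocalLength(const std::vector<cv::Point3f>& modelPoints,
                           const std::vector<cv::Point2f>& imagePoints,
                           cv::Point2d principalPoint,
                           double supposedDistanceMm)
{
    double bestF = 0.0, bestErr = 1e30;
    for (double f = 100.0; f <= 4000.0; f += 10.0) {
        cv::Mat K = (cv::Mat_<double>(3, 3) <<
                     f, 0, principalPoint.x,
                     0, f, principalPoint.y,
                     0, 0, 1);
        cv::Mat rvec, tvec;
        if (!cv::solvePnP(modelPoints, imagePoints, K, cv::Mat(), rvec, tvec))
            continue;
        // Tz is the third component of the translation vector.
        double err = std::abs(tvec.at<double>(2) - supposedDistanceMm);
        if (err < bestErr) { bestErr = err; bestF = f; }
    }
    return bestF;   // estimated focal length in pixels
}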
89. yzer to create two specific models for the human head. You are required only to provide two images of the analyzed person's head at particular view angles and mark the key points of the models, and MotionTracker will create the model plist file for you (a sketch of how the two marked views might be combined into 3-D coordinates is given after these instructions). The two models are called the right ear model and the left ear model. The keys for the models are the Right ear model keys and the Left ear model keys (including, for example, Left Eye and Right Eye).

To create the right ear model:
1. Prepare the image folder. The image folder should contain images of:
   o Front view of the analyzed person's head
   o Right side view of the analyzed person's head
2. Make sure that the file containing the front view of the person is named alphabetically in front of the side view picture. For example, 0.png stands for the front view and 1.png stands for the side view.
3. Load the image folder into MotionTracker.
4. Mark the right ear model keys for the front view and side view of the analyzed person.
5. Assign in the panel the head breadth of the person, for example 150 millimeters.
6. Click the Create Right Ear Model button. The model plist file is then created inside the image folder with the name NewRightEar.plist.

To create the left ear model:
1. Prepare the image folder. The image folder should contain images of:
   o Front view of the analyzed person's head
   o Left side view of the analyzed person's head
2. Make sure that the file containing the front view of the person is named alphabetically in front of the side view picture.
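How the two marked views might be combined into 3-D model points can be sketched as follows. This is an assumption-laden illustration, not MotionTracker's implementation: the function name, key indices, and the use of the side view's horizontal axis as depth are all hypothetical. It relies on the symmetry assumption stated earlier: the nose sits on the midline, so its horizontal distance to the ear on the front view corresponds to half the head breadth.

// Illustrative sketch (not MotionTracker's source): combine points marked
// on a front view (x, y) and a side view (z) into 3-D model points, scaled
// so that the model matches the assigned head breadth in millimeters.
// Writing the model plist file is omitted.
#include <opencv2/core/core.hpp>
#include <cmath>
#include <cstddef>
#include <vector>

std::vector<cv::Point3f> buildTwoViewModel(
    const std::vector<cv::Point2f>& frontPts,  // marked on the front view
    const std::vector<cv::Point2f>& sidePts,   // same keys on the side view
    std::size_t noseIdx, std::size_t earIdx,   // indices of nose and ear keys
    float headBreadthMm)
{
    // Pixels-to-millimeters scale: with a symmetric head, the nose-to-ear
    // distance in x on the front view is half the head breadth.
    float halfBreadthPx = std::abs(frontPts[earIdx].x - frontPts[noseIdx].x);
    float scale = (headBreadthMm / 2.0f) / halfBreadthPx;

    std::vector<cv::Point3f> model;
    for (std::size_t i = 0; i < frontPts.size(); ++i) {
        // x, y from the front view; depth z from the side view; everything
        // shifted so the nose point sits at the OCS origin.
        float x = (frontPts[i].x - frontPts[noseIdx].x) * scale;
        float y = (frontPts[i].y - frontPts[noseIdx].y) * scale;
        float z = (sidePts[i].x  - sidePts[noseIdx].x) * scale;
        model.push_back(cv::Point3f(x, y, z));
    }
    return model;
}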
