
Visualising the Visual Behaviour of Vehicle Drivers


Contents

1. Other data, such as speed, acceleration, steering angles, thrust and brake pressure, are visualised with the ActiveX components mentioned before. Higher order glance-based measures, i.e. the segmented clusters of data, can be visualised using small LED-like displays. The program is able to play back, pause, rewind, fast forward and jump anywhere in the loaded data set. This is supported through buttons with ordinary video recorder symbols and a time slider bar. All models in the program are exchangeable, provided that they are supplied in a format that OpenSceneGraph is capable of reading. The static displacement of the models can also be changed. This means that a car interior can quickly be replaced with the interior of a lorry, compensating for the added height of the driver cabin in the lorry. The background colour can be interactively changed, and the fog effect can be turned on and off. All these settings are stored as registry keys in the Windows registry.

Timing of playback

When visualising the data, an algorithm is necessary to ensure correct timing of the playback. The algorithm should ensure that the playback rate stays coherent with the recording rate. This means that the algorithm must take the rendering time into account. However, skipping frames is not allowed, since we do not want to lose any data. This could give a slightly slower playback of the data if the recording frequency exceeds the rendering frequency.
2. This proved to give no advantages at all and only made programming more complicated. Especially the timing routines became very hard to overview, since this algorithm had to be divided into two classes. The developed program has received good reviews from the employees at VTEC. Especially the feature to view the world from the test subject's eyes has been much appreciated.

Future work and recommendations

Although the program developed could be used in its present state, some improvements would be good to implement:
- Enable plotting of signal diagrams. This would give information regarding the history of the signals.
- Network playback. Instead of playing back recorded data, the program could be used to show the present states during a simulation, by visualising the current data received over a UDP stream.
- Binary file readers. This would remove the need to convert the logged files into tab-separated text files.
- Enable saving of screenshots and videos from the program.
- A bigger project would be to replace the infinite straight road with a more realistic one. One solution could be to use the curvature values from the lane tracker to dynamically create a road that looks more like the road where the actual test took place.

References

[1] ISO 15007-2: Road vehicles — measurement of driver visual behaviour with respect to transport information and control systems, 2001.
3. The method is not very accurate and only measures the rotation of the eyes. If the position and rotation of the head are desired, additional head tracking equipment has to be used.

Scleral search coils

This method uses small coils attached, usually with vacuum, to the eyes. Their positions, independent of head movements, are measured by placing the subject between two larger coils, which induce voltage in the small coils (see fig. 2.3). This method can be painful for the test subject, and usually the eye has to be anaesthetised externally to be able to attach the search coils. The fact that the method can be painful makes it less reliable, since the test person may change his visual behaviour because certain movements of the eye cause pain. Since the method is based on measuring small induced voltages, the test has to be performed in a controlled environment to be accurate.

Corneal reflection

In this method the test subject is exposed to infrared light. A small camera measures the amount of the infrared light that is reflected in the cornea. The amount of reflection depends on the angle of the eye.

Figure 2.3: Scleral search coils

The method is very accurate but only measures the rotation of the eyes. If the position and rotation of the head are desired, additional head tracking equipment has to be used.
4. This thesis presents the development of a tool made to visualise where the driver's attention is while driving. This includes playback of data logged to hard disk, but also methods for merging and playing back data from several different sources.

Preface

Thesis outline

This report first gives a brief introduction to methods for measuring visual demand, defining some concepts that are used throughout the thesis. Then data recording from other sources is explained, focusing on test cars and simulators. After that, the problems arising when joining two data files are explained, and a possible solution to the problem is presented. Following these chapters, the implementation of the visualisation is presented. In the end, possible improvements to the implementation are discussed.

Acknowledgment

I would like to thank a number of people who helped making this thesis possible:
- Dennis Saluäär, my supervisor at Volvo Technology Corporation
- Peter Blomqvist, for helping me with all sorts of coding problems
- Dr. Björn Gudmundsson, my examiner at Linköping University

I would also like to thank the whole Human System Integration Department at Volvo Technology Corporation for their great support.

List of Figures

1.1 Diagrams of visual behaviour
2.1 A typical video setup
5. Figure B.1: FaceLAB software loop

Appendix C: Scene Graph

[Figure C.1 is a full-page, rotated diagram of the program's scene graph: a scene graph root with static and dynamic transform nodes for the face, the car/vehicle interior, the sign line, and the road nodes with their combined lanes and intersection areas.]

Figure C.1: The scene graph of the program
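The structure in Figure C.1 is straightforward to reproduce in OpenSceneGraph. The sketch below is a minimal reconstruction, not the program's actual code: the file names and the exact node layout are assumptions, and it only shows how static and dynamic transforms for the face, the vehicle interior and the road pieces could be grouped under a common root.

```cpp
#include <osg/Group>
#include <osg/MatrixTransform>
#include <osgDB/ReadFile>

// Hypothetical reconstruction of the scene graph in Figure C.1.
osg::ref_ptr<osg::Group> buildSceneGraph()
{
    osg::ref_ptr<osg::Group> root = new osg::Group; // scene graph root

    // Static placement of the vehicle interior (e.g. car vs. lorry offset).
    osg::ref_ptr<osg::MatrixTransform> vehicleStatic = new osg::MatrixTransform;
    vehicleStatic->addChild(osgDB::readNodeFile("interior.flt"));

    // Dynamic transform for the head, updated every frame from FaceLAB data.
    osg::ref_ptr<osg::MatrixTransform> faceDynamic = new osg::MatrixTransform;
    faceDynamic->addChild(osgDB::readNodeFile("head.flt"));

    // Road pieces under one group, so the repeating road can be rebuilt cheaply.
    osg::ref_ptr<osg::Group> road = new osg::Group;
    road->addChild(osgDB::readNodeFile("roadpiece.flt"));

    root->addChild(vehicleStatic.get());
    root->addChild(faceDynamic.get());
    root->addChild(road.get());
    return root;
}
```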
6. Earlier these methods have been either very intrusive, inaccurate, or very hard to use outside a laboratory environment. But now other methods, based on computer vision, are available. These methods are neither intrusive like some of the earlier methods, nor do they require special lighting conditions, but they have until now been very noisy outside a laboratory environment. The computer vision method used in capturing the data used in this thesis uses a template matching technique. This method is robust and can be used outside a laboratory environment. A study [16] has been made that statistically verified the computer vision eye tracking method against the ISO and SAE standards, and work [7] has been done to fully automate the process described in ISO 15007-2 [1].

Motivation

The result from the tracking data is normally presented as diagrams or tables of data (see fig. 1.1). Such diagrams give good overall information about the test, but give no information related to transition patterns; neither do they give any feedback on how the visual distraction affects the driver's behaviour. Performance data from the simulator or test car is often recorded, i.e. speed, steering angle, etc. The idea of this thesis is to use that data combined with the eye tracker data to achieve even more knowledge about the driver's behaviour. The goal of this thesis is to develop a piece of software that visualises this data as graphical elements in both 2D and 3D, hence making the understanding of the data easier and better.
7. [25] http://www.sense8.com
[26] http://www.sgi.com
[27] http://www.stlport.org
[28] http://www.tooltech-software.com

Appendix A: FaceLAB Performance Table

Head tracking volume (height × width × depth): 0.2 m × 0.3 m × 0.5 m
Head position accuracy (x, y, z): 1 mm, 1 mm, 2 mm †
Head rotation accuracy (heading, pitch, roll): 2°, 2°, 1° rms ‡
Head rotation tolerance (heading, pitch, roll): ±90°, ±30°, ±30° ††
Gaze direction accuracy: 2° rms ‡‡
Working frequency: 60 Hz
Temporal latency: 33 ms

† Best case, when the head is motionless.
‡ Best case; worst case estimated to be about 5°.
†† Maximum rotation before tracking is lost; FaceLAB can recover tracking when lost.
‡‡ Worst case 15°.

Reference: [14]

Appendix B: FaceLAB Software Loop

The loop consists of three stages — 1. Initialization, 2. Face Tracking, 3. Gaze Detection — with error recovery between them. The main loop in figure B.1 runs as follows: 2D face searching with a 2D face template gives a 2D face position; 3D stereo matching gives the 3D face position; 2D projection gives 2D feature positions. If the face is found, 2D feature tracking against the 3D facial feature model gives 2D feature positions, and 3D stereo matching gives 3D feature positions. If tracking is ok, 3D model fitting gives the 3D head pose; 2D projection gives 2D feature positions; from the 3D eye model the eyeball position is calculated, iris detection gives the 2D iris position, and gaze-line detection gives the 3D gaze direction.
8. Examensarbete LITH-ITN-MT-EX--2002/19--SE

Visualising the Visual Behaviour of Vehicle Drivers

Björn Blissing, 2002-10-21

Linköpings Universitet, Department of Science and Technology (Institutionen för teknik och naturvetenskap), SE-601 74 Norrköping, Sweden.

Master's thesis (examensarbete) performed in Media Technology (Medieteknik) at Linköpings Tekniska Högskola, Campus Norrköping.
Supervisor (handledare): Dennis Saluäär. Examiner (examinator): Björn Gudmundsson.
Norrköping, 21 October 2002.

Report category: Examensarbete. Language: English. ISRN: LITH-ITN-MT-EX--2002/19--SE.
URL for electronic version: www.ep.liu.se/exjobb/itn/2002/mt/019

Title: Visualising the Visual Behaviour of Vehicle Drivers
Titel (Swedish): Visualisering av visuellt beteende hos fordonsförare
Author (författare): Björn Blissing

Abstract
9. [11] T. Ross. Safety first. In Traffic Technology International, Transport Technology Ergonomics Center, Loughborough University, UK, Feb–Mar 2002.
[12] D. Salvucci and J. Goldberg. Identifying fixations and saccades in eye-tracking protocols. In Proceedings of the Eye Tracking Research & Applications Symposium, Palm Beach Gardens, FL, USA, 2000.
[13] Seeing Machines. FaceLAB 1.0 Frequently Asked Questions, 2001.
[14] Seeing Machines. User manual for FaceLAB v1.1, 2002.
[15] W. Van Winsum, M. Martens and L. Herland. The effects of speech versus tactile driver support messages on workload, driver behaviour and user acceptance. TNO report TM-00-C003, Soesterberg, The Netherlands, 1999.
[16] T. Victor, O. Blomberg and A. Zelinsky. Automating driver visual behaviour measurement, 2002.
[17] J. S. Wang, R. R. Knipling and M. J. Goodman. The role of driver inattention in crashes: New statistics from the 1995 Crashworthiness Data System. In Proceedings of the 40th Annual Meeting of the Association for the Advancement of Automotive Medicine, Vancouver, Canada, October 1996.
[18] http://www.bhenden.org/istool
[19] http://www.jrsoftware.org
[20] http://www.mathworks.com
[21] http://www.multigen.com
[22] http://www.opensg.org
[23] http://www.openscenegraph.org
[24] http://plib.sourceforge.net
10. Mending holes

If recording of the simulation data for some reason pauses, it does not mean that FaceLAB automatically pauses as well, and vice versa. This can give holes (see fig. 4.1(a)) in one of the datasets, which can result in problems when joining the two datasets. Four solutions were considered to mend the holes. All methods start with making sure that both datasets start and end at the same time: if one of the datasets starts before the other, the preceding data is removed. The same thing is done at the end of the datasets; the longer dataset gets trimmed, to give the datasets a common start and end time.

The simplest method considered was to omit data that did not have any data at the corresponding time in the other dataset (see fig. 4.1(b)). One problem with this method is that it could give big skips in time when playing back the data. To provide a continuous time in the joined dataset, some mending of holes or interpolation has to be applied. The second method considered involved interpolation between the last cell before the hole and the next cell after the hole (see fig. 4.1(c)). But this is not appropriate in this case, as it could easily give the illusion that the dataset has more information than it actually has. So instead a third method was suggested: the last data cell before the hole is copied into the empty cells in the hole (see fig. 4.1(d)). This would give the impression that the recording of the data freezes, but can still give the illusion that there is data that in fact does not exist.

Figure 4.1: Mending holes
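As an illustration only — the names and data layout here are invented, not taken from the program — the copy-forward method (d) and the zero-filling method (e) chosen later in this chapter might look like this, with each cell carrying a flag saying whether it was actually recorded:

```cpp
#include <vector>

// Hypothetical resampled cell: 'valid' marks cells actually recorded;
// holes are cells where valid == false.
struct Cell {
    double time;
    double value;
    bool   valid;
};

// Method (d): copy the last recorded cell forward into the hole.
void mendCopyLast(std::vector<Cell>& cells)
{
    for (std::size_t i = 1; i < cells.size(); ++i)
        if (!cells[i].valid) {
            cells[i].value = cells[i - 1].value; // playback appears frozen
            cells[i].valid = true;
        }
}

// Method (e): fill the holes with zeros, so that nothing that was never
// recorded is presented as data.
void mendZeroFill(std::vector<Cell>& cells)
{
    for (Cell& c : cells)
        if (!c.valid) {
            c.value = 0.0;
            c.valid = true;
        }
}
```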
11. Tools

In the implementation, Microsoft Visual C++ 6.0 has been used. One problem with this is that the STL version which comes with Visual C++ 6.0 is not implemented according to the standard. This is a problem for OSG, since it makes proper use of Standard C++. The solution chosen was to use the STL library from STLport Consulting [27] instead of Microsoft's STL implementation.

When implementing the program, a choice had to be made between a pure dialog-based application and the Document/View architecture. Since the program should load files (documents) and then visualise (view) them, it seemed like a sensible choice to use the Document/View architecture.

The installer program was created using a freeware install script compiler called InnoSetup [19]. For easier creation of these scripts, a GUI script editor called ISTool [18] was used.

User Interface

Even though OSG supplies a small set of its own GUI components, the choice for the GUI was to make use of MFC components, mainly so that the users of the system would quickly feel acquainted with the standard Windows GUI. This choice also gave the possibility of using the GUI editor inside Visual C++.

ActiveX Components

Some data are inappropriate to visualise in 3D; ordinary 2D meters are sometimes more accepted, i.e. speedometers and level meters. Therefore these two meter types were developed as ActiveX components and used in the program (see fig. 5.1).

Figure 5.1: The developed ActiveX components — (a) speedometer, (b) level meter

Visualization

The program developed can show three different 3D views.
12. Saccades are the rapid movements, up to 700°/s, when the eye changes the point of view. During saccadic movement our brain does not perceive any information, because the light moves too fast over the retina [4]. This means that saccadic movements of the eyes should be minimized while driving, and all user interfaces should be designed so they require a minimum of change in the point of view. Another type of fixation is smooth pursuit. This is when the eyes track a moving object, such as an approaching road sign or a pedestrian on the sidewalk. These movements of the eyes are usually slow, within 80 to 160°/s.

Ocular measures

There exist standards [1, 6] that give definitions and metrics related to measurements of ocular motion of vehicle drivers. These measures are used when analyzing the data captured during tests of driver visual behaviour.

Basic ocular measures

The most basic ocular measures are:
- Saccade: the rapid movement changing the point of view.
- Fixation: alignment of the eyes so that the image of the fixated target falls on the fovea for a given time period.
- Average fixation duration: the average time the fixations last.
- Eye closure: short eye closures indicate blinks, whereas long closures could indicate drowsiness.

Glance-based measures

The glance-based measures are considered a higher level description of eye movements. These measures reflect different properties such as time sharing, workload and visual attention.
13. But it is still preferable to losing data in the playback. The program uses an error-diffusion-like algorithm to keep the playback rate as coherent as possible with the recording rate, while ensuring that each data cell gets displayed at least once. The algorithm (see fig. 5.2) starts at the first cell in the data. It then checks the timestamp of the next cell and calculates how long the current cell should be displayed before advancing to the next cell. The accumulated render time is set to zero and the rendering of the frame is done. The time of the render is added to the accumulated render time. If the accumulated render time is less than the time the current cell should be shown, no advance is made; but if the render time is larger or equal, it advances one step in the dataset. Before rendering the new frame, the time the cell should have been shown is subtracted from the accumulated render time. This is done to ensure that the data playback does not lag behind more than necessary.

Figure 5.2 summarises this as a flowchart: start at the first cell; get the time before advancing to the next cell; set the accumulated render time to zero; render and add the render time of the last frame; if the accumulated render time is less than the time to the next cell, render again; otherwise subtract the time to the next cell, advance, and restart at the first cell once the current cell exceeds the maximum number of cells.
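A compact sketch of this error-diffusion-like timing loop is given below. It is a reconstruction from the description above, not the program's own code; the timestamp array, clock source and render callback are placeholders.

```cpp
#include <chrono>
#include <cstddef>
#include <vector>

// Minimal sketch of the playback timing algorithm described above.
// 'timestamps' holds one timestamp (in seconds) per data cell;
// renderFrame() stands in for drawing the scene for the given cell.
void playback(const std::vector<double>& timestamps,
              void (*renderFrame)(std::size_t cell))
{
    using clock = std::chrono::steady_clock;
    std::size_t cell = 0;
    double accumulated = 0.0; // accumulated render time in seconds

    while (cell + 1 < timestamps.size()) {
        // How long the current cell should stay on screen.
        const double cellTime = timestamps[cell + 1] - timestamps[cell];

        const auto t0 = clock::now();
        renderFrame(cell); // every cell is rendered at least once
        accumulated += std::chrono::duration<double>(clock::now() - t0).count();

        if (accumulated >= cellTime) {
            // Carry the overshoot forward so playback does not lag more
            // than necessary, but never skip a cell.
            accumulated -= cellTime;
            ++cell;
        }
    }
}
```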
14. 2.2 Electro-oculography
2.3 Scleral search coils
2.4 FaceLAB 3D coordinate systems
2.5 FaceLAB 2D coordinate systems
3.1 The VTEC simulator
4.1 Mending holes
5.1 The developed ActiveX components
5.2 The flowchart for the timer algorithm
5.3 Data pipelines
5.4 A screenshot from the developed program
B.1 FaceLAB software loop
C.1 The scene graph of the program

Chapter 1: Introduction

Volvo Technology Corporation

Volvo Technology Corporation (VTEC) is an innovation company that on a contract basis develops new technology, new products and business concepts within the transport and vehicle industry. The primary customers are the Volvo Group companies and Volvo Car Corporation, but also some selected suppliers. The R&D work involves a number of basic areas, e.g. transportation telematics, internet applications, databases, ergonomics, electronics, combustion, mechanics, industrial hygiene and industrial processes, using techniques such as systems engineering, multi-physical and chemical modelling, programming and simulation. Besides the R&D areas, VTEC offers specialist services in the areas of intellectual property protection, standardisation and information retrieval.
15. There also existed some problems with material attributes when loading OpenFlight models.
- OpenSceneGraph [23], a free and open source project. Supports many operating systems, object formats and image formats, although still in beta state.

Finally, OpenSceneGraph was chosen for the project, mainly because of its good support for the OpenFlight format, but also because VTEC expressed an interest in seeing what capabilities were available in OpenSceneGraph. Furthermore, VTEC wanted to obtain knowledge about how difficult it would be to use in the development of new software.

OpenSceneGraph

OpenSceneGraph is an open source initiative to try to make a platform independent scene graph package. Currently it supports IRIX, Linux, FreeBSD, Solaris and Windows, and some work has been done on HP-UX and Mac OS X. Among its features are both quad-buffered stereo and anaglyphic stereo. More interesting features are multi-texturing, cube mapping, occlusion culling, small feature culling, anisotropic texture filtering and particles. OSG has loaders which support 3D Studio object files, Lightwave objects, SGI Performer, OpenDX, TerraPage and OpenFlight. It also supports many image formats: bmp, gif, jpeg, SGI rgb, pic, png, tiff and tga.

Converting coordinates

The coordinate system in OpenSceneGraph is a right-handed system, just like FaceLAB's. The difference is that FaceLAB defines its coordinate system as X sideways, Y up and Z out.
16. There are four eyes in total in the stereo image pair; therefore four gaze directions are detected independently. However, each measurement is not sufficiently accurate, mainly due to the resolution of the input images from the cameras: the width of the eye is only about 30 pixels. Therefore it is hard to determine the gaze point in a 3D scene by calculating the intersection of the detected gaze lines. Instead, those four vectors are averaged to a single normalized gaze vector, in order to reduce the effect of noise. (See Appendix B for a flowchart of the FaceLAB algorithm.) The method used in FaceLAB is robust to occlusions, deformations and fluctuations of lighting. Other advantages are that it works in real time and that it is accurate (see Appendix A).

FaceLAB Coordinate Systems

FaceLAB makes use of five coordinate systems. Its world coordinate system has its origin between the two cameras; all position output is in world coordinates (see fig. 2.4(a)). All rotations are output as Euler angles about un-rotated axes, in the order pitch, heading, roll. The next system is the stereohead coordinate system (see fig. 2.4(b)). This coordinate system is coincident with the world coordinate system, but is attached to the stereohead, so it tilts with the stereohead. The head coordinate system (see fig. 2.4(c)) has the head as origin, with the x axis aligned with the center of the eyes. The two last coordinate systems are the screen coordinate system and the pixel coordinate system.
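The averaging step described above is simple to express in code. The sketch below is an illustration only — the vector type and names are assumptions, not FaceLAB's API:

```cpp
#include <osg/Vec3>

// Average the four independently detected gaze vectors (left and right eye
// in both stereo images) into one normalized gaze direction, reducing noise.
osg::Vec3 averageGaze(const osg::Vec3 gaze[4])
{
    osg::Vec3 sum = gaze[0] + gaze[1] + gaze[2] + gaze[3];
    sum.normalize(); // scale to unit length in place
    return sum;
}
```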
17. Typical glance-based measures are:
- Glance duration: the time from which the direction of gaze moves towards a target to the time it moves away from it.
- Glance frequency: the number of glances to a target within a predefined sample time period, or during a predefined task, where each glance is separated by at least one glance to a different target.
- Total glance time: the total glance time associated with a target.
- Glance probability: the probability of a glance at a specified target.
- Dwell time: total glance time minus the saccade initiating the glance.
- Link value probability: the probability of a glance transition between two targets.
- Time off road scene ahead: the total time between two glances to the road scene ahead which is separated by glances to non-road targets.
- Transition: a change in eye fixation location from one target location to another, i.e. the saccade initiating a glance.
- Transition time: the duration from the end of one fixation to the start of a new fixation on another target.
- Total task time: the total time of a task, defined as the time from the beginning of the first glance to the termination of the last glance during the task.

Measurement of Visual Behaviour

There exist a number of different methods to measure the visual behaviour of the driver.

Notes

The simplest method to record the driver's attention is for the test leader to manually make notes of where the driver is looking.
18. The road environments are modelled using MultiGen Creator [21]. Data can be sent from the simulator to a logging computer by UDP.

Test bench

When experimental displays or gauges are to be tested, physical models are usually not built. Instead a test bench is used. This is usually a simple mock-up where TFT screens display the gauges. The flexibility of such a mock-up is one of its advantages: new displays can easily be moved, changed, replaced or removed. Another advantage is that a test bench is very cost effective compared to physical models. The test bench can perform logging either by storing data directly on the test computer or by sending the data over UDP, as the simulator does.

Chapter 4: Combining Data

The main idea of this thesis is to visualise head and eye tracker data together with performance data from the simulator or a test car. It is obvious that there is a need for some form of merge of the data from the different sources.

The datasets

FaceLAB outputs data in either binary form or as MatLab files. The simulator outputs its data as tab-separated text files. Both FaceLAB and the simulator have the possibility to output their data as a network stream over UDP. Since FaceLAB and the simulator are two independent systems, the data output is not completely homogeneous; the only data they have in common is system time.
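Since system time is the only field the two datasets share, a reader only needs to recover timestamped rows from the separated text files. A minimal sketch follows; the assumption that the first column is the system timestamp is illustrative, not taken from the actual file format.

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// One logged sample: a system timestamp followed by the signal values.
struct Sample {
    double timestamp;
    std::vector<double> values;
};

// Read a tab- or space-separated log file where the first column is
// assumed to be system time.
std::vector<Sample> readLog(const std::string& path)
{
    std::vector<Sample> samples;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream row(line);
        Sample s;
        if (!(row >> s.timestamp))
            continue; // skip malformed or empty lines
        double v;
        while (row >> v)
            s.values.push_back(v);
        samples.push_back(std::move(s));
    }
    return samples;
}
```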
19. Figure 2.1: A typical video setup

The PDT method is a statistical method where mean reaction time and hit rates are considered. This gives good information about the overall mental workload when performing tasks, but it does not give any information on ocular measures.

Video

The method described in [1] is based on four video cameras, preferably small one-chip CMOS cameras. These cameras can be set up differently, but the most common is to have one camera viewing the driver, one viewing the road, one viewing the vehicle controls and one camera monitoring the lane position (see fig. 2.1). These images are then multiplexed to a single frame and recorded, together with a time code, on a high quality video recorder. After the test the video has to be analyzed, where interesting parts of the data are transcribed into notes. This requires that the person transcribing the video can manually determine the driver's gaze direction, which can be difficult. The work of transcribing the video can be very time consuming: one hour of video takes approximately one day to transcribe.

Figure 2.2: Electro-oculography

Factors like change in speed and lane offset are normally omitted when using this method, but an external logging system can be used to capture the data, although no combined playback of video and logged data can be performed on existing systems.

EOG

EOG (electro-oculography) is a method where the potentials induced by the small muscles which control the eyes are measured using small electrodes (see fig. 2.2).
20. Other problems, such as undetected eye closures, could also result in occasional glances towards objects that are unlikely to be focused on, e.g. the test subject's knees or the inner ceiling of the vehicle.

Signal processing of FaceLAB data

FaceLAB provides its own methods for identifying fixations and saccades, but it is an ad hoc implementation and does not concur with any standards. Work [7] has been done to implement a version of the fixation and saccade detection that follows the ISO standard. This work also includes filtering the data, since the data from FaceLAB can be noisy and contain illegal fixations, i.e. detected fixations that do not last at least 150 ms. At the same time as the filtering, the gaze data can be segmented into information for glance-based measures. That means dividing the fixations into clusters of different classes depending on glance direction, e.g. on road, off road, center stack, instrument cluster, etc.

Chapter 3: Car Data Sources

Besides the head and eye tracker data, other data sources can be used to get even more information from a driving test. The tests are performed either in a test car, on a test bench or in the VTEC simulator. Typical logged signals are speed, gear, steering angle, brake and thrust power.

Test Car

The test car is usually a standard car with a computer running xPC Target connected to the CAN bus. The CAN bus contains most of the signals in the car.
21. OpenSceneGraph, on the other hand, defines its coordinate system as X sideways, Y in and Z up; i.e. the difference is a rotation of 90° about the X axis. This difference can easily be overcome in the implementation by applying the following conversion:

FaceLAB → OpenSceneGraph
X → X
Y → Z
Z → −Y

Using Euler angles

In 3D space, rotations consist of three components: heading (sometimes called yaw), pitch and roll. When using Euler angles, the order of successive rotations is significant. A simple example to illustrate this:

- Rotate 90° about the X axis
- Rotate 90° about the Y axis
- Rotate −90° about the X axis

This gives a 90° rotation about the Z axis, even though no rotation about the Z axis was specified; whereas

- Rotate 90° about the X axis
- Rotate −90° about the X axis
- Rotate 90° about the Y axis

gives a 90° rotation about the Y axis, since the first two rotations cancel each other out. So clearly the order of the rotations matters a lot.

The rotation output from FaceLAB uses rotations about the three axes in the order X, Y, Z, i.e. pitch, heading, roll. In the implementation using OpenSceneGraph this gives the order X, Z, Y; but since OpenSceneGraph defines X sideways, Y in and Z up, this is still equivalent to the order pitch, heading, roll.
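In code, the axis swap is one line per component. The sketch below is an illustration under the conventions stated above — the function names are invented, and the quaternion composition order should be checked against the FaceLAB convention rather than taken as definitive:

```cpp
#include <osg/Vec3>
#include <osg/Quat>

// FaceLAB: X sideways, Y up, Z out.  OSG: X sideways, Y in, Z up.
// The conversion X -> X, Y -> Z, Z -> -Y maps a FaceLAB position
// (x, y, z) to the OSG position (x, -z, y).
osg::Vec3 faceLabToOsg(const osg::Vec3& fl)
{
    return osg::Vec3(fl.x(), -fl.z(), fl.y());
}

// FaceLAB Euler angles arrive in the order pitch, heading, roll (about
// X, Y, Z in FaceLAB axes). Applied about the converted OSG axes, that
// becomes X, Z, -Y. Angles are in radians; the composition order follows
// osg::Quat's multi-axis constructor and is an assumption to verify.
osg::Quat faceLabRotationToOsg(double pitch, double heading, double roll)
{
    return osg::Quat(pitch,   osg::Vec3(1.0,  0.0, 0.0),
                     heading, osg::Vec3(0.0,  0.0, 1.0),
                     roll,    osg::Vec3(0.0, -1.0, 0.0));
}
```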
22. [2] AssistWare Technology. SafeTRAC owner's manual, 1999.
[3] P. C. Burns, E. Knabe and M. Tevell. Driver behavioural adaptation to collision warning and avoidance information. In IEA/HFES International Ergonomics Association, San Diego, August 2000.
[4] R. H. S. Carpenter. Movements of the Eyes. Pion, London, 2nd edition, 1988.
[5] J. L. Harbluk and Y. I. Noy. The impact of cognitive distraction on driver visual behaviour and vehicle control. Technical report, Ergonomics Division, Road Safety Directorate and Motor Vehicle Regulation Directorate, Ministry of Transport, Canada, 2002.
[6] SAE J-2396: Definitions and experimental measures related to the specification of driver visual behaviour using video based techniques, 1999.
[7] P. Larsson. Automatic visual behavior analysis. Master's thesis, ISRN LiTH-ISY-EX-3350-2002, Department of Electrical Engineering, Linköpings Universitet, Sweden, 2002.
[8] Y. Matsumoto and A. Zelinsky. An algorithm for real-time stereo vision implementation of head pose and gaze direction measurement. In Proceedings of the IEEE Fourth International Conference on Face and Gesture Recognition, pages 499–505, Grenoble, France, March 2000.
[9] S. Olsson. Measuring driver visual distraction with a peripheral detection task. Master's thesis, ISRN LIU-KOGVET-D--0031--SE, Department of Behavioural Sciences, Linköpings Universitet, Sweden, 2000.
[10] M. A. Recarte and L. M. Nunes. Effects of verbal and spatial-imagery tasks on eye fixations while driving. Journal of Experimental Psychology: Applied, 6(1):31–42, 2000.
23. Sammanfattning

Most traffic accidents are caused by human factors. The design of the driver environment has proven very important in facilitating safe driving. With the introduction of accessories such as mobile telephones, GPS navigation systems and similar systems, the driver environment has become more and more complicated. In order to design effective, safe and simple interfaces, methods for objectively evaluating them are needed. At the moment, video is most often used to capture the driver's visual behaviour, but the analysis of these recordings takes a very long time and does not give particularly accurate results. An automated tool for analysing eye behaviour would minimise the post-processing drastically and give more time for understanding.
24. From the xPC computer, signals can be sent over UDP to a logging computer.

CAN

Controller Area Network (CAN) is a serial data communications bus for real-time applications, specified in ISO 11898. It was originally developed by Bosch for use in cars, but is now being used in many other industrial automation and control applications. Volvo has been using CAN since the release of the Volvo S80.

xPC Target

xPC Target is a system that enables a user to run an application created in Simulink or Stateflow on a real-time kernel. It is created by MathWorks [20].

Lane tracking

Lane tracking is supported with a product from AssistWare Technology called SafeTRAC [2]. This uses a small camera to register the road. From the registered picture, data such as lane offset and lane width are derived through the use of image processing. The logged data is stored on a PCMCIA card drive connected to the unit.

Figure 3.1: The VTEC simulator

The VTEC driving simulator

The VTEC simulator is a fixed-base simulator which consists of a complete Volvo S80 mock-up and a cylindrical screen with a height of 3 meters and a radius of 3.5 meters (see fig. 3.1). The visual field of view presented to the driver subject is 135 degrees. The simulations are run on a Silicon Graphics Onyx2 with 16 CPUs, each running at 250 MHz, and three graphics channels of type InfiniteReality2. The graphics is rendered in real time at a maximum of 60 frames per second using OpenGL Performer.
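Both the test car and the simulator deliver their samples to the logging computer as UDP datagrams. A minimal, hedged sketch of such a receiver using Winsock is given below; the port number and the idea of timestamping each packet on arrival are illustrative assumptions, not the actual logging setup.

```cpp
#include <winsock2.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

// Minimal UDP logging receiver; port and datagram layout are assumptions.
int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    SOCKET sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(5000); // hypothetical logging port
    bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    char buffer[1024];
    for (;;) {
        int n = recv(sock, buffer, sizeof(buffer), 0);
        if (n > 0) {
            // A real logger would stamp the packet with system time here,
            // since time is the only field shared by all data sources.
            std::printf("received %d bytes\n", n);
        }
    }
    // not reached: closesocket(sock); WSACleanup();
}
```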
25. Interpolation

Since the data from the simulator and the data from FaceLAB are sampled independently, there can be a difference in the rate of sampling and/or the exact time at which a sample is collected, mainly due to different latencies in the systems. To combine the data, some interpolation must be done. Two different solutions were considered: interpolating one of the two data files to the sample rate of the other file, or interpolating both files to a new common sample rate. Since both data streams are sampled at a non-uniform rate, it was judged best to interpolate both of them to a new common sample rate.

Nearest neighbour

The simplest interpolation is the method of nearest neighbour. This method does exactly as it says: it takes the value of the nearest neighbouring point. Nearest neighbour is suitable for interpolating Boolean data.

Linear

Linear interpolation takes one point above and one point below the searched value. The searched value is then calculated as a linear approximation from these two values.

Higher order interpolation

Higher order interpolation types, such as spline, cubic or sinc interpolation, could also be used. But the computational time and added implementation complexity needed for these do not improve the quality of the interpolation enough to motivate their use in this case. A sketch of the two simpler schemes is given below.
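The sketch below evaluates a non-uniformly sampled signal at an arbitrary time on the new common grid, using the two chosen schemes. Types and names are illustrative, not the program's; a linear search is used for clarity.

```cpp
#include <cstddef>
#include <vector>

struct Sample { double t, v; }; // time and value; sorted by time

// Linear interpolation at time t, clamped to the covered range.
double lerpAt(const std::vector<Sample>& s, double t)
{
    if (t <= s.front().t) return s.front().v;
    if (t >= s.back().t)  return s.back().v;
    std::size_t i = 1;
    while (s[i].t < t) ++i; // first sample at or after t
    const double a = (t - s[i - 1].t) / (s[i].t - s[i - 1].t);
    return s[i - 1].v + a * (s[i].v - s[i - 1].v);
}

// Nearest neighbour, suitable for Boolean-like signals.
double nearestAt(const std::vector<Sample>& s, double t)
{
    if (t <= s.front().t) return s.front().v;
    if (t >= s.back().t)  return s.back().v;
    std::size_t i = 1;
    while (s[i].t < t) ++i;
    return (t - s[i - 1].t < s[i].t - t) ? s[i - 1].v : s[i].v;
}
```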
26. The fourth method just fills the holes with zeros (see fig. 4.1(e)). This rules out that any new data is applied to the dataset. This could give somewhat discontinuous data, but it is the method most suitable for this application, since only data that has actually been recorded is presented.

Chapter 5: Implementation

The demand from VTEC was that the final program should run on a Windows PC. Since most of the visualisation models at VTEC are in OpenFlight format, there was a desire for the program to read these models without any converting. The development platform was a PC running at 1.3 GHz with a GeForce3 graphics card. The development was made using Visual C++ 6.0.

Graphics

Before starting the implementation, several graphics packages were considered. Among those considered were:
- World Toolkit from Sense8 [25]. Quickly rejected because of very high licensing cost (development license 10 000).
- Gizmo3D from ToolTech Software [28] (Swedish). Looks promising, but was not chosen because of licensing cost (development license 1 000 and runtime licence 100 per machine).
- OpenSG [22] (NOT the same as OpenSceneGraph). Not chosen because it did not have support for reading OpenFlight models; neither was Visual C++ 6.0 supported.
- PLIB [24], Steve's Portable Game Library. Supports both Visual C++ 6.0 and loading of OpenFlight models, but the view frustum setting was hard to set to get a correct perspective projection.
27. All views contain two elements: the road and the vehicle interior. The road is a small road piece model, about 20 m, that repeats infinitely whilst driving down it.

The first view, the explore view, contains both the road and the vehicle interior, but also a head model. This view lets the user zoom, pan and rotate around in 3D space. The head is shown with a normal vector pointing out in front of it. The gaze vector is visualised as a line emerging from a point between the eyes. An object–line intersection is calculated between the gaze vector and the vehicle interior, and this point is highlighted. This dramatically helps the user to see where the driver's attention is. The gaze line intersection test is done in real time; this was one of the calculations that required extensive preprocessing in the prototype.

The second view puts the user inside the driver's head. No panning, zooming or rotation is possible in this view. Instead, this view automatically pans, zooms and rotates with the driver's head, i.e. with the recorded data from FaceLAB.

The third view gives the view from the driver's eyes. Changes in heading and pitch follow the movements of the driver's eyes; other movements, such as panning, zooming and rolling, follow the movements of the head. This view can be very jerky, because of noise from FaceLAB, but also because our brain normally filters out our own microsaccades. Seeing the world from someone else's eyes is not always a pleasant experience.
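The highlighted gaze point described above can be computed with OpenSceneGraph's intersection utilities. The sketch below uses the osgUtil line-segment intersector from later OSG releases (the 2002 program predates this API), so treat it as illustrative rather than the program's own method:

```cpp
#include <osg/Node>
#include <osg/Vec3d>
#include <osgUtil/LineSegmentIntersector>
#include <osgUtil/IntersectionVisitor>

// Return (via 'hit') the first point where the gaze ray strikes the
// vehicle interior. 'eye' is the midpoint between the eyes, 'gaze' the
// normalized gaze direction; 'range' limits the tested segment.
bool gazeHit(osg::Node* interior, const osg::Vec3d& eye,
             const osg::Vec3d& gaze, double range, osg::Vec3d& hit)
{
    osg::ref_ptr<osgUtil::LineSegmentIntersector> picker =
        new osgUtil::LineSegmentIntersector(eye, eye + gaze * range);
    osgUtil::IntersectionVisitor iv(picker.get());
    interior->accept(iv);
    if (!picker->containsIntersections())
        return false;
    hit = picker->getFirstIntersection().getWorldIntersectPoint();
    return true;
}
```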
28. The department of Human System Integration works with optimising the interaction between the driver and the vehicle. The principal goal is to ensure a safe and efficient relationship between the user and the vehicle system. Human System Integration has a multidisciplinary staff of employees skilled in ergonomics and engineering sciences. The research and development activities are within the areas of user impressions and satisfaction, driving support, driver awareness support, interaction design, and driving simulation and Virtual Reality.

Background

It has been shown that inattention accounts for an estimated 25–56% of all accidents [17]. In order to design safe vehicles, it is important to be able to evaluate in-vehicle systems to determine how distracting they are while driving. Numerous studies have established the relationship between eye movements and higher cognitive processes. These studies argue that the eye movements reflect, to some degree, the cognitive state of the driver. In several studies, eye movements are also used as a direct measurement of driver attention and mental workload [5]. As the number of in-vehicle systems increases, so does the visual demand on the driver. The design of these in-vehicle systems must therefore require a minimum of visual demand, i.e. good ergonomics. In order to define good design, some objective test methods have to be used. To be able to test the visual demand, usually some kind of eye tracking has to be done.
29. Figure 5.2: The flowchart for the timer algorithm

Data input

The program reads the data from tab- or space-separated text files. The data can have its source either in FaceLAB or in the simulator, but the program will be able to give more information when both sources are combined, which was the main idea when developing the program. See fig. 5.3 for a conceptual drawing of the data pipeline; the dotted lines show possible data pipelines, while the solid line represents the preferred data pipeline.

Figure 5.3: Data pipelines (head tracker data and simulator data)

Data output

So far, the only output from the program is the rendering to the screen (see fig. 5.4).

Figure 5.4: A screenshot from the developed program. The user interface shows, among other things, eye behaviour (fixation, saccade, smooth pursuit), glance targets (off road, on road, instrument cluster, road ahead, center stack, rear mirror, other), signal meters (road, acceleration, steer angle, throttle, brake) and a time display.

Chapter 6: Final Comments and Conclusions

Conclusions

The use of OpenSceneGraph proved to be a successful choice. Although still in beta state, it proved to be stable and worked smoothly together with Windows. One regret concerning the implementation was the choice of using the Document/View architecture.
30. It is also very sensitive to changes in light, making it hard to use outside a laboratory environment.

FaceLAB

FaceLAB [13, 14] is a product for non-intrusive measuring of head pose and gaze direction. It is developed by Seeing Machines, with sponsoring from Volvo Technology. Seeing Machines is a spin-off company from the Research School of Information Sciences and Engineering at the Australian National University. This is the method used to capture all the head position and gaze data used in this thesis.

The method [8] they have developed is based on two monochromatic video cameras which give a stereo image pair. These cameras are mounted either on a steel frame, called a stereohead, or directly into the car. The two cameras register the driver's face and track a number of features in the face. Features especially suitable for tracking are the corners of the eyes and mouth. Markers on the face can be used, as they are even better features to track than ordinary facial features. Since the system uses two cameras, 3D coordinates for each feature can be calculated.

For tracking of gaze direction, the system uses a model of the eyes where the eyes are considered as perfect spheres. The gaze direction is determined based on both the pose of the head and the positions of the irises of the eyes. By calculating the position of the iris on the eye, the horizontal and vertical rotation can be estimated, i.e. the gaze direction.
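Under the spherical eye model, the gaze direction is simply the ray from the eyeball centre through the detected iris centre. The following is a hedged geometric sketch of that idea — a reading of the published description [8], not Seeing Machines' code:

```cpp
#include <osg/Vec3d>

// Spherical eye model: gaze = direction from the eyeball centre to the
// 3D iris centre, both assumed given in the same coordinate frame.
osg::Vec3d gazeDirection(const osg::Vec3d& eyeballCentre,
                         const osg::Vec3d& irisCentre)
{
    osg::Vec3d g = irisCentre - eyeballCentre;
    g.normalize();
    return g; // unit gaze vector; pitch and heading follow from its components
}
```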
31. Most traffic accidents are caused by human factors. The design of the driver environment has proven essential to facilitate safe driving. With the advent of new devices, such as mobile telephones, GPS navigation and similar systems, the workload on the driver has become even greater. There is an obvious need for tools supporting objective evaluation of such systems, in order to design more effective and simpler driver environments. At the moment, video is the most used technique for capturing the driver's visual behaviour, but the analysis of these recordings is very time consuming and only gives an estimate of where the visual attention is. An automated tool for analysing visual behaviour would minimise the post-processing drastically and leave more time for understanding the data. In this thesis, the development of a tool for visualising where the driver's attention is while driving the vehicle is presented. This includes methods for playing back data stored on a hard drive, but also methods for joining data from multiple different sources.

Keywords: 3D Visualisation, Visual Attention, Visual Behaviour, OpenSceneGraph, Vehicle Drivers
32. At Volvo Technology a prototype of a similar program had been developed, but since it was made using Performer on an IRIX workstation, there was a request for a Windows version. The prototype also lacked a GUI and required some preprocessing of the data.

Figure 1.1: Diagrams of visual behaviour — (a) fixations and transitions plotted as gaze pitch versus gaze yaw in degrees; (b) a density plot over the same axes.

Chapter 2: Measurement of Visual Behaviour

In order to define good design, some objective test methods have to be used. It is also important that those tests use common metrics, to be able to compare one test to another.

Ocular motion

Ocular motion can be divided into different categories, e.g. saccades, microsaccades, smooth pursuit, vergence, tremor, drift, etc. The understanding of all the different types is beyond this thesis. In this work a generally accepted high-level description of ocular motion is used, dividing ocular motion into two fundamental categories: saccades and fixations. There are different opinions on what defines fixations and saccades [4, 12]. In this thesis, fixations are defined as pauses over informative regions. To be a valid fixation, the pause has to be at least 150 ms at the specified region.
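For illustration, a dispersion-and-duration check in the spirit of this definition — along the lines of the dispersion-based detection discussed by Salvucci and Goldberg [12] — might look as follows. The dispersion threshold and data types are assumptions, not the thesis' detector:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// One gaze sample: time in seconds, gaze angles in degrees.
struct GazeSample { double t, yaw, pitch; };

// True if the samples in [first, last) qualify as a fixation: they must
// span at least 150 ms and stay within a small angular region.
bool isFixation(const std::vector<GazeSample>& g,
                std::size_t first, std::size_t last,
                double maxDispersionDeg = 1.0) // assumed threshold
{
    if (last <= first + 1) return false;
    if (g[last - 1].t - g[first].t < 0.150) return false; // 150 ms rule

    double minYaw = g[first].yaw,     maxYaw = g[first].yaw;
    double minPitch = g[first].pitch, maxPitch = g[first].pitch;
    for (std::size_t i = first; i < last; ++i) {
        minYaw   = std::min(minYaw, g[i].yaw);
        maxYaw   = std::max(maxYaw, g[i].yaw);
        minPitch = std::min(minPitch, g[i].pitch);
        maxPitch = std::max(maxPitch, g[i].pitch);
    }
    // Dispersion measured as the summed extent in yaw and pitch.
    return (maxYaw - minYaw) + (maxPitch - minPitch) <= maxDispersionDeg;
}
```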
33. This can be an exhausting task, and it is hard to get an accurate gaze direction; the accuracy depends highly on the ability of the test leader to make a good estimation of the gaze direction. Many factors, like change in speed and lane offset, are omitted, which may leave out data that could be important for making a good analysis. The advantage of this method is that it does not require any special hardware to be purchased and mounted in the test car.

PDT

The Peripheral Detection Task (PDT) is a method for measuring the amount of mental workload and visual distraction in road vehicles. It is a task where the drivers must respond to targets presented in their peripheral view. As drivers become distracted, they respond more slowly and miss more of the PDT targets. The PDT targets are usually a number of light emitting diodes placed on the dashboard, at approximately 11 to 23 degrees to the left of the driver's forward view. The LEDs light up with a random variation of 3–6 seconds, and the driver then has to respond within 2 seconds; otherwise it is considered a miss. The response is usually given by pressing a button that is placed on the test subject's index finger. The method records both reaction time and hit rate. It has been shown in numerous tests [3] that both reaction time and hit rate drop when the driver is distracted.
34. These are 2D systems used to measure where the person is looking on the screen or in the environment. The first three of these coordinate systems are right-handed coordinate systems. Since rotation of the eyes is given only in 2D (it is not possible to roll the eyes), it is not possible to specify this as a left- or right-handed system. FaceLAB outputs the rotation of the eyes as pitch and heading, independent of the rotation of the head. FaceLAB can also output gaze data in a screen coordinate system (see fig. 2.5(a)) and a pixel coordinate system (see fig. 2.5(b)). The screen coordinate system defines positive x as right and positive y as up, with the origin in the middle of the screen, while the pixel coordinate system defines the origin in the top left corner, with x right and y down.

Figure 2.4: FaceLAB 3D coordinate systems — (a) world, (b) stereohead, (c) head

Figure 2.5: FaceLAB 2D coordinate systems — (a) screen, (b) pixel

Problems with FaceLAB

Although FaceLAB is more robust than its competitors, it still has some problems. The biggest problem is error in tracking accuracy when the cameras are exposed to quick changes in lighting conditions, and even more so when lens flares due to direct sunlight into the cameras appear.
