The Velocity of Visual Attention in Vehicle Accidents

Imagine driving home from work while listening to a radio interview with an actor you can’t quite place. Your spouse calls you on your cell phone and you answer on your Bluetooth headset as the radio plays. Can you pick up the dry cleaning? He sounds like the action star. The blue suit. Was he in that Oscar movie last year? The cleaners by the drug store; use your GPS. Dang, the drug store you just passed. Change lanes. Crunch! How could you not have seen that truck pulling out from the stop sign?

Collision hazards are often visible, but not seen, because the driver was not attending to the hazard at the critical time or location. These are not uncommon lapses; 78% of all crashes are estimated to involve driver inattention (Dingus et al., 2006). How can inattention account for such frequent lapses in perception, especially for conspicuous and obviously visible hazards? Determining the cause of a vehicle collision requires determining what each driver saw, or could have seen, in the moments immediately preceding the collision. The purpose of this article is to review how principles of visual attention limit drivers’ ability to detect traffic hazards, and how educating judges and juries on these principles can assist the court in assigning liability.

Perception Response Time (PRT)

Although drivers may say they hit the brakes the “instant” they saw the hazard, this is impossible. Responding to even the most apparent hazard takes time. We must detect the object (e.g. oncoming car, pedestrian), identify whether it poses a hazard, decide how best to respond and, finally, respond (e.g. brake or steer). The time it takes for a driver to detect, identify, decide and respond to a hazard is called the perception response time (PRT).

The PRT of a driver who has been in a collision can often be calculated. For example, a vehicle stopped at a stop sign can be considered a hazard the moment it begins to accelerate across the lane of the oncoming vehicle. Based on the time required for the initially stopped car to accelerate across the intersection and the length of the skid mark left by the right-of-way vehicle, the oncoming driver’s PRT can be calculated to be, say, 1.7 seconds. In other words, 1.7 seconds elapsed between the instant the incurring vehicle could first be detected as a hazard and the moment the oncoming driver responded with emergency braking.
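A reconstruction of this kind reduces to simple kinematics. The sketch below is a hedged illustration: every number in it (the incurring vehicle's acceleration, the crossing distance, the friction coefficient, the skid-mark length) is an assumed value chosen for demonstration, not data from a real case, and it simplifies the event by assuming constant acceleration and a locked-wheel skid that ends at a stop just as the incurring vehicle completes its crossing.

```python
import math

# Hypothetical PRT reconstruction -- all numeric inputs are assumed values.

# Incurring vehicle: starts from rest at the stop sign, so the time to
# cross a distance d at constant acceleration a is t = sqrt(2d / a).
a_incur = 2.0    # m/s^2, assumed acceleration of the incurring vehicle
d_cross = 9.0    # m, assumed distance across the oncoming lane
t_cross = math.sqrt(2 * d_cross / a_incur)        # 3.0 s

# Right-of-way vehicle: a locked-wheel skid to a stop over length L gives
# a speed at brake onset of v = sqrt(2*mu*g*L) and skid time t = v/(mu*g).
mu, g = 0.7, 9.81     # assumed tire-road friction coefficient; gravity
skid_len = 5.8        # m, assumed measured skid-mark length
v_brake = math.sqrt(2 * mu * g * skid_len)        # ~8.9 m/s
t_skid = v_brake / (mu * g)                       # ~1.3 s of braking

# PRT: hazard first detectable (car starts moving) to onset of braking.
prt = t_cross - t_skid
print(f"estimated PRT: {prt:.1f} s")              # ~1.7 s with these values
```

With different assumed inputs (a shorter skid mark, a quicker crossing) the same arithmetic yields a different PRT, which is why the measured physical evidence drives the estimate.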

Is a 1.7-second PRT within the bounds of reasonable driving behavior? To answer this question, it is important to appreciate the variability of PRT across drivers, and even within a single driver. Some drivers respond more quickly than others, and even the same driver exposed to the same situation will have a different response time with each exposure. Sometimes experts suggest that a single PRT value, such as 1.1 seconds, should be used for all drivers and all situations. Judges and juries may then infer that drivers with PRTs longer than 1.1 seconds are negligent, failing to respond as quickly as the expert suggests drivers can respond. This inference would be incorrect.

A more accurate analysis would include the range of PRTs across drivers in similar situations and would evaluate an individual driver’s response time as a percentile across the driving population. For example, Mazze et al. (2002) recorded the PRT of drivers who drove a test course and had a foam car accelerate from a stop sign into their lane. Drivers had a range of PRTs from 0.7 to 2.4 seconds. A PRT value of 1.7 seconds in the example above represents the 78th percentile of response times for drivers in a similar experimental situation (i.e., out of 100 drivers, 78 would respond more quickly and 22 would respond more slowly). It is then up to counsel to convince the judge and jury whether this percentile is acceptable or reflects driver negligence. Arguments for or against a client’s PRT in a particular collision can be strengthened by understanding how the variation in PRT arises from fundamental properties of visual attention.
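Expressing a PRT as a percentile is a simple ranking computation. As a hedged sketch, the sample below is made up for illustration (the cited study's data are not reproduced here): the function reports what percentage of the sampled drivers responded faster than a given PRT.

```python
# Percentile rank of one driver's PRT within a sample of PRTs (seconds).
# The sample values are illustrative only, not data from the cited study.
sample_prts = [0.7, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.8, 2.0, 2.4]

def percentile_rank(sample, prt):
    """Percent of drivers in `sample` who responded faster (lower PRT)."""
    faster = sum(1 for t in sample if t < prt)
    return 100.0 * faster / len(sample)

print(percentile_rank(sample_prts, 1.7))   # 75.0 with this made-up sample
```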

Drivers attend where they look

The above example underscores a simple point: visual detection of hazards requires attention. Unfortunately, our ability to attend to hazards is sometimes undermined by our decisions to attend to other stimuli, events and thoughts. Some of these competing demands are unrelated to driving (e.g., talking on a cell phone or tuning the radio), while others are part of the driving task itself (e.g., checking the rear- and side-view mirrors, or monitoring pedestrians). Drivers cannot attend to all potential hazards at the same time; attending to one stimulus necessarily comes at the expense of attending to another.

A clear indication of where a driver is attending is given by where the driver is looking. While it may sometimes feel as if we perceive the world uniformly well across our visual field, this is not the case. Our retinas have light receptors most densely packed into the central 1-2 degrees, allowing us to see exquisite detail for tasks such as reading. Just five degrees off this central area, our acuity degrades to 40% of central vision (Olson, 1996) and our perception of detail is very poor. To compensate for our poor peripheral acuity, our eyes—and thus our attention—shift up to five times per second between stable fixations during which we perceive our visual world.

What dictates where we attend and where we look? Visual attention and gaze are guided by two factors. The first is “bottom-up” guidance: low-level visual properties of the environment, such as a bright spot of color or an area of sharp contrast. A flashing hazard light may capture attention in a bottom-up manner and cause the eye to look to the stimulus (Itti & Koch, 2001). Less conspicuous stimuli, such as a stopped car without lights, may not be attended because they do not “pop out” of the scene. Visibility experts can often take light measurements of the scene to quantify how conspicuous a hazard is, and thereby the likelihood a driver would have noticed it. However, drivers can’t simply respond to the brightest stimulus in their field of view; otherwise they would never stop staring at the sun!

The second way drivers direct their attention and gaze is through “top-down” cognitive goals and task demands, such as when changing lanes or driving on a tight curve. Both tasks require critical visual information at critical moments, and drivers typically direct their gaze to predictable key areas at predictable key times. Thus, the pre-collision environment may help a human factors expert determine where the driver was looking immediately before impact and whether the hazard was likely attended to or not.

The complexity of driving is evident when bottom-up and top-down influences compete for a driver’s attention. For example, when planning a lane change, drivers may shift their gaze from the car they are following to their rear-view and side-view mirrors to look for surrounding cars. If the lead car brakes during this time, the sudden bottom-up onset of bright brake lights competes against the top-down task of searching for surrounding cars.

Where drivers look, and thus where drivers attend, can be recorded by researchers using an eye-tracker. Eye-trackers are special devices either worn by drivers or composed of cameras mounted across the dashboard. Eye-trackers record both the driver’s field of view and where the driver is looking. Figure 1 shows a series of frames from a movie recorded while a driver was adjusting the radio. The red cross-hair indicates where the driver is looking in each frame. Note the driver alternated his gaze between the road and the radio, leaving him potentially vulnerable to an unexpected incursion from an oncoming vehicle. Eye-tracking video can be an illustrative way to demonstrate where drivers were likely looking in the moments preceding a collision, or to assess whether drivers look at, and presumably notice, certain features of the environment (e.g., a construction warning sign).


Figure 1: An example of a driver’s direction of gaze (red cross-hair) while the driver adjusts the radio. At 0.0 seconds, the driver looks to a car in the distance. After looking back and forth to the radio for 1.3 seconds, the car in the distance is now passing the driver. What if the oncoming vehicle had crossed the centerline?

Expectations drive attention and perception

Drivers respond more quickly and are more likely to detect hazards when the hazard is expected and appears at an expected location. Drivers are slower to respond to unexpected hazards, such as a darkly clad pedestrian in their path outside of a crosswalk (Roper & Howard, 1937) or stop signs placed on stretches of road between intersections (Shinoda et al., 2001).

Unexpected hazards require more conspicuity to capture our attention. In many cases, detecting unexpected hazards may require up to twice the contrast of expected hazards, or a doubling of the driver’s PRT (Ising & Green, 2009). In these situations, it would be unfair to compare a driver’s PRT to those of drivers approaching an expected hazard.

Even when drivers look directly at a hazard, they may still fail to notice it. The “looked but did not see” phenomenon is common in many contexts. In a comical and classic experiment, Simons & Chabris (1999) asked participants to watch a video in which several basketballs were passed among numerous players in white uniforms. Participants were asked to count the number of passes between players. After a few seconds of viewing, a person in a black gorilla suit walks into the scene, pauses to beat his chest, and then continues out of the scene. Half of the study participants didn’t notice the gorilla! Failing to notice the gorilla demonstrates an important principle of visual attention: when we search for or monitor objects with certain features (e.g., white uniforms), we improve our detection of objects with those features at the expense of our ability to detect and perceive objects with unattended features (e.g., a black gorilla suit).

The ability to filter out features irrelevant to the current task may impair the detection of driving hazards. For example, in one study (Most & Astur, 2007), participants driving in a simulator were instructed to follow a route indicated by either yellow or blue traffic arrow signs. After several intersections, an unexpected motorcycle, either yellow or blue, veered into the path of the driver. When the color of the motorcycle matched the color of the signs drivers were attending to (e.g., a blue motorcycle while following blue signs), drivers were 186 milliseconds faster to brake than drivers approached by a motorcycle of a mismatched color (e.g., a yellow motorcycle while following blue signs). In the context of the study, the attentional mismatch resulted in five times more motorcycle collisions. Given how frequently drivers search for signs by their color, it is striking that such a common and subtle shift in visual attention results in such an impairment in driver PRT.
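One way to make a 186-millisecond difference concrete is to convert it into distance traveled before braking begins. The travel speed below is an assumption for illustration, not a figure taken from the study:

```python
# Extra distance covered during a 186 ms braking delay at an assumed
# travel speed of 50 km/h (the study's simulator speed is not given here).
speed_kmh = 50.0
delay_s = 0.186
extra_m = (speed_kmh / 3.6) * delay_s   # convert km/h to m/s, then multiply
print(f"{extra_m:.2f} m of extra travel before the brakes are applied")
```

At higher assumed speeds the same delay translates into proportionally more distance, which is often the margin between a near miss and a collision.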


Figure 2: Drivers were five times more likely to collide with an unexpected yellow motorcycle when they were searching for traffic signs of a different color (blue) (Most & Astur, 2007; used with permission).

In-vehicle distractions

Already challenged to coordinate their attention with myriad events and potential hazards outside their vehicle, drivers are now offered a buffet of in-vehicle devices competing for their attention. In a review of 33 studies (Caird, Willness, Steel, & Scialfa, 2008), using a cell phone was found to extend the average driver’s PRT by a quarter of a second (250 ms) or longer. Use of a cell phone also increases the likelihood of entirely missing critical events, such as a changing traffic light (Hancock, Lesch, & Simmons, 2003). Despite many state laws implying otherwise, scientific studies have shown drivers to be similarly distracted when using either a hand-held or a hands-free phone (Caird, Willness, Steel, & Scialfa, 2008). This underscores the idea that the driving impairment is caused by cognitive distraction rather than by the physical task of handling the phone and driving without both hands on the steering wheel.

Drivers underestimate the degree of their attentional impairment during distracting tasks (Horrey, Lesch, & Garabet, 2008). By some measures, the degree of impairment from cell phone use is similar to that of a drunk driver (Strayer, Drews, & Crouch, 2006). Future studies may help elucidate whether new in-vehicle displays and navigational aids (e.g., GPS, rear-view cameras) also compete for driver attention and impair the detection of traffic hazards.

Conclusions

Understanding if or when a hazard could have been seen is crucial to helping judges and juries determine liability in a traffic collision. Hazard visibility is affected both by the visual properties of the hazard and the attentional state of the driver. Focusing on one driving task may impair the performance of other driving tasks, either by drawing our gaze or by making us less sensitive to stimuli not immediately relevant to our current task. The degree to which drivers are found liable for such impairment will vary depending on the context of the driving accident and on how judges and juries understand the role of attention in driving.

References

  1. Caird, J.K., Willness, C.R., Steel, P., Scialfa, C. (2008) A meta-analysis of the effects of cell phones on driver performance. Accident Analysis & Prevention. Vol 40(4). 1282-1293.
  2. Dingus, T.A., Klauer, S.G., Neale, V.L., Petersen, A., Lee, S.E., Sudweeks, J.D., Perez, M.A., Hankey, J., Ramsey, D.J., Gupta, S., Bucher, C., Doerzaph, Z.R., Jermeland, J., & Knipling, R.R. (2006) The 100-Car Naturalistic Driving Study, Phase II - Results of the 100-Car Field Experiment. (Contract No. DTNH22-00-C-07007). Washington, DC: National Highway Traffic Safety Administration.
  3. Hancock, P.A., Lesch, M., Simmons, L. (2003) The distraction effects of phone use during a crucial driving maneuver. Accident Analysis and Prevention (35) 501-514.
  4. Horrey, W.J., Lesch, M.F., Garabet, A. (2008) Assessing the awareness of performance decrements in distracted drivers. Accident Analysis and Prevention (40) 675-682.
  5. Itti, L. & Koch, C. (2001) Computational modeling of visual attention. Nature Reviews Neuroscience, Vol. 2, No. 3, pp. 194-203
  6. Most, S.B. & Astur, R.S. (2007) Feature-based attentional set as a cause of traffic accidents. Visual Cognition. 15(2), 125-132.
  7. Olson, P.L. (1996) Forensic aspects of driver perception and response. Tucson, AZ: Lawyers & Judges Publishing Company, Inc.
  8. Roper, V.J. & Howard E.A. (1937) Seeing with motor car headlamps. Thirty-first Annual Convention of the Illuminating Engineering Society: White Sulphur Springs, WV.
  9. Shinoda, H., Hayhoe, M., & Shrivastava, A. (2001). What controls attention in natural environments? Vision Research, special issue on Eye Movements and Vision in the Natural World, 41, 3535-3546.
  10. Simons, D. J., & Chabris, C. F. (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception, 28, 1059-1074.
  11. Strayer, D.L., Drews, F.A., Crouch, D.J. (2006) A comparison of the cell phone driver and the drunk driver.  Human Factors. Vol. 48(2) 381-391.
© 2015 MEA Forensic Engineers & Scientists Inc.

 
