US20020054210A1 - Method and apparatus for traffic light violation prediction and control - Google Patents


Info

Publication number
US20020054210A1
Authority
US
United States
Prior art keywords
vehicle
traffic
light phase
traffic signal
intersection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/852,487
Inventor
Michael Glier
Douglas Reilly
Michael Tinnemeier
Steven Small
Steven Hsieh
Randall Sybel
Mark Laird
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nestor Traffic Systems Inc
Original Assignee
Nestor Traffic Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nestor Traffic Systems Inc filed Critical Nestor Traffic Systems Inc
Priority to US09/852,487
Publication of US20020054210A1
Status: Abandoned

Classifications

    • G - PHYSICS
      • G08 - SIGNALLING
        • G08G - TRAFFIC CONTROL SYSTEMS
          • G08G1/00 - Traffic control systems for road vehicles
            • G08G1/16 - Anti-collision systems
              • G08G1/164 - Centralised systems, e.g. external to vehicles
            • G08G1/01 - Detecting movement of traffic to be counted or controlled
              • G08G1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
            • G08G1/07 - Controlling traffic signals
              • G08G1/08 - Controlling traffic signals according to detected number or speed of vehicles
              • G08G1/087 - Override of traffic control, e.g. by signal transmitted by an emergency vehicle
    • H - ELECTRICITY
      • H04 - ELECTRIC COMMUNICATION TECHNIQUE
        • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N7/00 - Television systems
            • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
              • H04N7/183 - Closed-circuit television [CCTV] systems for receiving images from a single remote source

Definitions

  • the present invention is related to traffic monitoring systems, and more particularly to a traffic monitoring system for detecting, measuring and anticipating vehicle motion.
  • Systems for monitoring vehicular traffic are known. For example, it is known to detect vehicles by employing inductive loop sensors. At least one loop of wire or a similar conductive element may be disposed beneath the surface of a roadway at a predetermined location. Electromagnetic induction occurs when a vehicle occupies the roadway above the loop. The induction can be detected via a simple electronic circuit that is coupled with the loop. The inductive loop and associated detection circuitry can be coupled with an electronic counter circuit to count the number of vehicles that pass over the loop.
  • However, inductive loops are subjected to harsh environmental conditions and consequently have a relatively short expected lifespan.
  • Machine vision traffic monitoring systems are generally mounted above the surface of the roadway and have the potential for much longer lifespan than inductive loop systems. Further, machine vision traffic monitoring systems have the potential to provide more information about traffic conditions than inductive loop traffic monitoring systems. However, known machine vision traffic monitoring systems have not achieved these potentials.
  • a traffic monitoring station employs at least one video camera and a computation unit to detect and track vehicles passing through the field of view of the video camera.
  • the disclosed system may be used as a traffic light violation prediction system for a traffic signal, and/or as a collision avoidance system.
  • the camera provides a video image of a section of roadway in the form of successive individual video frames.
  • Motion is detected through edge analysis and changes in luminance relative to an edge reference frame and a luminance reference frame.
  • the frames are organized into a plurality of sets of pixels.
  • Each set of pixels (“tile”) is in either an “active” state or an “inactive” state.
  • a tile becomes active when the luminance or edge values of the pixels of the tile differ from the luminance and edge values of the corresponding tiles in the corresponding reference frames in accordance with predetermined criteria.
  • the tile becomes inactive when the luminance and edge values of the pixels of the tile do not differ from the corresponding reference frame tiles in accordance with the predetermined criteria.
  • the reference frames, which represent the view of the camera without moving vehicles, may be dynamically updated in response to conditions in the field of view of the camera.
  • the reference frames are updated by combining each new frame with the respective reference frames.
  • the combining calculation is weighted in favor of the reference frames to provide a gradual rate of change in the reference frames.
  • a previous frame may also be employed in a “frame-to-frame” comparison with the new frame to detect motion.
  • the frame-to-frame comparison may provide improved results relative to use of the reference frame in conditions of low light and darkness.
  • Each object is represented by at least one group of proximate active tiles (“quanta”).
  • Individual quanta, each of which contains a predetermined maximum number of tiles, are tracked through successive video frames. The distance traveled by each quantum is readily calculable from the change in position of the quantum relative to stationary features in the field of view of the camera. The time taken to travel the distance is readily calculable since the period of time between successive frames is known.
  • Physical parameters such as velocity, acceleration and direction of travel of the quantum are calculated based on change in quantum position over time.
  • Physical parameters that describe vehicle motion are calculated by employing the physical parameters calculated for the quanta. For example, the velocities calculated for the quanta that comprise the vehicle may be combined and averaged to ascertain the velocity of the vehicle.
  • the motion and shape of quanta are employed to delineate vehicles from other objects.
  • a plurality of segmenter algorithms is employed to perform grouping, dividing and pattern matching functions on the quanta. For example, some segmenter algorithms employ pattern matching to facilitate identification of types of vehicles, such as passenger automobiles and trucks.
  • a physical mapping of vehicle models may be employed to facilitate the proper segmentation of vehicles.
  • a list of possible new objects is generated from the output of the segmenter algorithms. The list of possible new objects is compared with a master list of objects, and objects from the list of possible new objects that cannot be found in the master list are designated as new objects. The object master list is then updated by adding the new objects to the object master list.
  • At least one feature extractor is employed to generate a descriptive vector for each object.
  • the descriptive vector is provided to a neural network classification engine which classifies and scores each object.
  • the resultant score indicates the probability of the object being a vehicle of a particular type.
  • Objects that produce a score that exceeds a predetermined threshold are determined to be vehicles.
  • the traffic monitoring station may be employed to facilitate traffic control in real time. Predetermined parameters that describe vehicle motion may be employed to anticipate future vehicle motion. Proactive action may then be taken to control traffic in response to the anticipated motion of the vehicle. For example, if on the basis of station-determined values for vehicle distance from the intersection, speed, acceleration, and vehicle class (truck, car, etc.), the traffic monitoring station determines that the vehicle will “run a red light,” traversing an intersection during a period of time when the traffic signal will otherwise be indicating “green” for vehicles entering the intersection from another direction, the traffic monitoring station can delay the green light for the other vehicles or cause some other actions to be taken to reduce the likelihood of a collision.
  • Such actions may also include displaying the green light for the other vehicles in an altered mode (e.g., flashing) or in some combination with another signal light (e.g., yellow or red), or initiating an audible alarm at the intersection until the danger has passed.
  • the traffic monitoring station may track the offending vehicle through the intersection and record a full motion video movie of the event for vehicle identification and evidentiary purposes.
  • FIG. 1A is a perspective diagram of a traffic monitoring station that illustrates configuration;
  • FIG. 1B is a side view diagram of a traffic monitoring station that illustrates tilt angle;
  • FIG. 1C is a top view diagram of a traffic monitoring station that illustrates pan angle;
  • FIG. 2 is a flow diagram that illustrates the vehicle detection and tracking method of the traffic monitoring station;
  • FIG. 3 is a diagram of a new frame that illustrates use of tiles and quanta to identify and track objects;
  • FIG. 4 is a diagram of a reference frame;
  • FIG. 5 is a diagram that illustrates edge detect tile comparison to determine tile activation;
  • FIG. 6 is a diagram that illustrates adjustment of segmenter algorithm weighting;
  • FIG. 7 is a diagram that illustrates feature vector generation by a feature extractor;
  • FIG. 8 is a diagram of the traffic monitoring station of FIG. 1 that illustrates the processing module and network connections;
  • FIG. 9 is a block diagram of the video capture card of FIG. 8;
  • FIG. 10A is a diagram that illustrates use of the new frame for image stabilization;
  • FIG. 10B is a diagram that illustrates use of the reference frame for image stabilization;
  • FIG. 11 is a diagram of the field of view of a camera that illustrates use of entry and exit zones;
  • FIG. 12 is a block diagram of traffic monitoring stations networked through a graphic user interface;
  • FIG. 13 is a flow diagram that illustrates station to station vehicle tracking;
  • FIG. 14 is a diagram of an intersection that illustrates traffic control based on data gathered by the monitoring station.
  • a traffic monitoring station 8 includes at least one camera 10 and a computation unit 12 .
  • the camera 10 is employed to acquire a video image of a section of a roadway 14 .
  • the computation unit 12 is employed to analyze the acquired video images to detect and track vehicles.
  • a three dimensional geometric representation of the site is calculated from parameters entered by the user in order to configure the traffic monitoring station 8 for operation.
  • the position of a selected reference feature 16 relative to the camera 10 is measured and entered into memory by employing a graphic user interface.
  • a distance Y along the ground between the camera 10 and the reference feature 16 on a line that is parallel with the lane markings 17 and a distance X along a line that is perpendicular with the lane markings are measured and entered into memory.
  • the camera height H, lane widths of all lanes W 1 , W 2 , W 3 and position of each lane in the field of view of the camera are also entered into memory.
  • the tilt angle 15 and pan angle 13 of the camera are trigonometrically calculated from the user-entered information, as shown in Appendix A.
  • the tilt angle 15 is the angle between a line 2 directly out of the lens of the camera 10 and a line 6 that is parallel to the road.
  • the pan angle 13 is the angle between line 2 and a line 3 that is parallel to the lane lines and passes directly under the camera 10 .
  • a value used for scaling (“scaler”) is calculated for facilitating distance calculations.
  • the scaler is a fixed factor for the entire image that is used for conversion between real distances and pixel displacements. Hence, the distance and direction from the camera to any point in the field of view of the camera, and the distance and direction between any two points in the field of view of the camera can be determined.
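  • A minimal sketch of the configuration math described above is shown below. The actual derivation is given in Appendix A of the application and is not reproduced here, so the trigonometry and the scaler formulation are assumptions for illustration only (in particular, the sketch assumes the camera's optical axis passes through the reference feature and that the scaler is derived from one known real-world span and its pixel extent).

```python
import math

def camera_setup(x_ft, y_ft, height_ft, known_span_ft, known_span_px):
    """Illustrative recovery of pan angle, tilt angle, and the scaler.

    x_ft, y_ft : ground offsets from the camera to the reference feature,
                 perpendicular and parallel to the lane markings
    height_ft  : camera mounting height H
    known_span_ft / known_span_px : a real-world distance and the pixel
                 displacement it spans, used to derive the fixed scaler
    """
    pan_rad = math.atan2(x_ft, y_ft)                           # angle 13 in FIG. 1C
    tilt_rad = math.atan2(height_ft, math.hypot(x_ft, y_ft))   # angle 15 in FIG. 1B
    scaler_ft_per_px = known_span_ft / known_span_px           # real distance per pixel
    return math.degrees(pan_rad), math.degrees(tilt_rad), scaler_ft_per_px

# Example: reference feature 150 ft down the road and 12 ft to the side,
# camera mounted 30 ft high; a 12 ft lane width spans 40 pixels.
print(camera_setup(12.0, 150.0, 30.0, 12.0, 40.0))
```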
  • Corrections for roadway grade and bank may also be calculated during configuration.
  • “Grade” refers to the change in height of the roadway relative to the height of the camera within the field of view of the camera.
  • “Bank” refers to the difference in height of portions of the roadway along a line perpendicular with the lane markings. The user determines the grade and bank of the roadway and enters the determined values into memory by employing a graphic user interface. The grade and bank corrections are achieved by translating the reference plane to match the specified grade and bank.
  • a video frame 18 is acquired from the camera as depicted in step 20 . If an interlaced camera is employed, the acquired frame is de-interlaced. If a progressive scan camera is employed then de-interlacing is not necessary. Image stabilization techniques may also be employed to compensate for movement of the camera due to wind, vibration and other environmental factors, as will be described below.
  • the acquired frame 18 is organized into tiles 22 as depicted in step 24 .
  • Each tile 22 is a region of predetermined dimensions. In the illustrated embodiment, each frame contains 80 tiles per row and 60 tiles per column and the dimensions of each tile are 8 pixels by 8 pixels. Tile dimensions may be adjusted, may be non-square, and may overlap other tiles.
  • a list 26 of tiles in which motion is detected (“active tiles”) 38 is generated by employing either or both of reference frames 28 , 29 and a previously acquired frame 30 in separate comparisons with the acquired frame 18 .
  • the reference frame 28 represents the luminance of the image from the camera in the absence of moving vehicles.
  • the reference frame 29 represents edges detected in the image from the camera in the absence of moving vehicles. In the illustrated embodiment, both the reference frames 28 , 29 and the previous frame 30 are employed. If a color camera is employed, the chrominance (color) portion of each tile 22 in the acquired frame is separated from the luminance (black and white) portion prior to comparison.
  • an edge detect comparison may be employed to detect motion and activate tiles.
  • For each tile 22 of the new frame 18 (FIG. 3), the tile is organized into four “quartiles” 32 of equal size.
  • the pixel luminance values in each quartile 32 are summed to provide a representative luminance value for each quartile.
  • each pixel has a luminance represented by a value from 0 to 255, where greater values indicate greater luminance.
  • the quartile having the maximum representative luminance value is then identified and employed as a baseline for analyzing the other quartiles.
  • the maximum luminance quartile 34 is designated to be in a first state, illustrated as logic 1.
  • the other quartiles in the tile are designated to be in the first state if their representative luminance value exceeds a threshold defined by a predetermined percentage of the luminance value of the maximum luminance quartile 34 (lum ≥ gain × lum_max).
  • the predetermined percentage (“the gain”) can be fixed at a specific level or may be allowed to vary based upon the characteristics of the image.
  • Quartiles with a representative value that fails to exceed the threshold are designated to be in a second state, illustrated by logic 0.
  • Each quartile is then compared with the corresponding quartile from the corresponding tile 36 from the reference frame 29 (FIG. 4) and, separately, the previously acquired frame.
  • the tile 22 is designated as “active” if the comparison indicates a difference in the state of more than one quartile. If the comparison indicates a difference in the state of one or fewer quartiles and at least one quartile of the tile is in the second state, the tile is designated as “inactive.”
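  • A minimal sketch of the quartile-based edge-detect activation just described, assuming 8x8 luminance tiles and an illustrative gain of 0.8 (the patent does not specify a value):

```python
import numpy as np

def quartile_states(tile, gain=0.8):
    """Quartile states for one 8x8 tile: a quartile is in the first state
    (True) when its summed luminance reaches `gain` times the brightest
    quartile's sum. The gain value here is an assumption."""
    h, w = tile.shape
    sums = np.array([
        [tile[:h // 2, :w // 2].sum(), tile[:h // 2, w // 2:].sum()],
        [tile[h // 2:, :w // 2].sum(), tile[h // 2:, w // 2:].sum()],
    ], dtype=float)
    return sums >= gain * sums.max()

def edge_activation(new_tile, ref_tile, gain=0.8):
    """Compare quartile states of the new tile against the reference tile.

    Returns True (active) when more than one quartile changed state,
    False (inactive) when one or fewer changed and at least one quartile
    is in the second state, and None when all quartiles are in the first
    state, deferring to the luminance technique described next."""
    new_states = quartile_states(new_tile, gain)
    changed = int((new_states != quartile_states(ref_tile, gain)).sum())
    if changed > 1:
        return True
    if not new_states.all():
        return False
    return None
```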
  • if each quartile 32 in the corresponding tiles of the current frame and the reference frame is designated to be in the first state, a luminance activation technique is employed.
  • a luminance intensity value is determined by summing the luminance of all pixels in the tile and dividing the sum by the total number of pixels in the tile, i.e., computing the average luminance.
  • the average luminance of the tile is compared with the average luminance of the tile 36 in the corresponding location of the reference frame 28 and the previous frame to detect any difference therebetween.
  • the average luminance of the reference tile is subtracted from the average luminance of the new tile to produce a difference value and, if the magnitude of the difference value exceeds a predetermined threshold, motion is indicated and the tile is designated as “active.”
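  • A minimal sketch of the average-luminance activation described above; the threshold value is an assumption, since the patent states only that a predetermined threshold is used:

```python
import numpy as np

def luminance_activation(new_tile, ref_tile, threshold=12.0):
    """Average-luminance comparison used when the quartile test is
    inconclusive. The tile is active when the magnitude of the difference
    between the average luminances exceeds a predetermined threshold
    (the value 12 is an illustrative assumption)."""
    difference = float(np.mean(new_tile)) - float(np.mean(ref_tile))
    return abs(difference) > threshold
```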
  • the model using tiles, quartiles and pixels is isomorphic to a neural model of several layers.
  • the reference frames 28 , 29 may be either static or dynamic.
  • a static reference frame may be generated by storing a video frame from the roadway or portion(s) of the roadway when there are no moving objects in the field of view of the camera.
  • the reference frames 28 , 29 are dynamically updated in order to filter differences between frames that are attributable to gradually changing conditions such as shadows.
  • the reference frames are updated by combining each new frame 18 with the reference frames. The combining calculation may be weighted in favor of the reference frames to filter quickly occurring events, such as the passage of vehicles, while incorporating slowly occurring events such as shadows and changes in the ambient light level.
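  • A minimal sketch of the weighted reference-frame update just described, assuming a simple running combination; the 0.95 weight is an assumption, the patent specifies only that the calculation is weighted in favor of the reference:

```python
import numpy as np

def update_reference_frame(reference, new_frame, reference_weight=0.95):
    """Combine the new frame with the reference frame, weighted in favor
    of the reference so that fast events (passing vehicles) are filtered
    out while slow events (shadows, ambient light changes) are gradually
    absorbed into the reference."""
    return (reference_weight * reference.astype(float)
            + (1.0 - reference_weight) * new_frame.astype(float))
```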
  • active tiles 38 in the list 26 of active tiles are organized into sets of proximately grouped active tiles (“quanta”) 40 as depicted by step 42 .
  • the quanta 40 are employed to track moving objects such as vehicles on successive frames. The distance traveled by each quantum is calculated based upon the change in position of the quantum from frame to frame. Matching and identifying of quantum is facilitated by a “grab phase” and an “expansion phase” as depicted by step 44 .
  • Each quantum has a shape. In the “grab phase,” active tiles are sought in a predicted position that is calculated for the quantum in the new frame, within the shape defined by the quantum. The predicted position is determined by the previously observed velocity and direction of travel of the quantum.
  • the quantum is considered found if any active tiles are located within the quantum shape region in the predicted position of the quantum in the new frame. If no active tiles are located in the quantum shape region in the predicted position in the new frame, the quantum is considered lost.
  • active tiles that are adjacent to a found quantum and that have not been claimed by other quanta are incorporated into the found quantum, thereby allowing each quantum to change shape. Unclaimed active tiles are grouped together to form new quanta unless the number of active tiles is insufficient to form a quantum. If any of the quanta that have changed shape now exceed a predetermined maximum size, these “parent” quanta are reorganized into a plurality of “child” quanta. Each child quantum inherits the characteristics of its parent quantum, such as velocity, acceleration and direction.
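  • A minimal sketch of the grab and expansion phases described above, representing quanta as sets of tile coordinates. The 16-tile maximum is an assumption, and the splitting of oversized parents into child quanta is omitted:

```python
def track_quantum(quantum_tiles, predicted_shift, active_tiles, max_tiles=16):
    """One frame of quantum tracking.

    quantum_tiles   : set of (row, col) tiles forming the quantum's shape
    predicted_shift : (d_row, d_col) displacement from the quantum's
                      previously observed velocity and direction
    active_tiles    : set of (row, col) tiles active in the new frame
    Returns the updated tile set, or None if the quantum is lost."""
    dr, dc = predicted_shift
    # Grab phase: look for active tiles inside the quantum's shape,
    # translated to its predicted position.
    predicted = {(r + dr, c + dc) for r, c in quantum_tiles}
    grabbed = predicted & active_tiles
    if not grabbed:
        return None                                  # quantum is lost
    # Expansion phase: absorb unclaimed active tiles adjacent to the
    # grabbed tiles, allowing the quantum to change shape.
    unclaimed = set(active_tiles) - grabbed
    quantum = set(grabbed)
    for r, c in sorted(grabbed):
        for neighbor in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if neighbor in unclaimed and len(quantum) < max_tiles:
                quantum.add(neighbor)
                unclaimed.discard(neighbor)
    return quantum
```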
  • the identified quanta are organized into objects as depicted in step 46 .
  • the traffic sensor employs a plurality of segmenter algorithms to organize the identified quanta into objects.
  • Each segmenter algorithm performs a grouping, dividing or pattern matching function. For example, a “blob segmenter” groups quanta that are connected.
  • Some segmenter algorithms facilitate identification of types of vehicles, such as passenger automobiles and trucks.
  • Some segmenter algorithms facilitate identification of vehicle features such as headlights.
  • Some segmenter algorithms reorganize groups of quanta to facilitate identification of features.
  • the segmenter algorithms are employed in accordance with a dynamic weighting technique to facilitate operation under changing conditions.
  • Five segmenter algorithms designated by numbers 1 - 5 are employed in the illustrative example.
  • One segmenter algorithm is employed in each time slot.
  • the segmenter algorithm in the time slot designated by an advancing pointer 48 is employed.
  • if a segmenter algorithm successfully detects and tracks an object that is determined to be a vehicle by the neural network and is consistent across a plurality of frames, then that segmenter algorithm is granted an additional time slot. Consequently, the segmenter algorithms that are more successful under the prevailing conditions are weighted more heavily than the unsuccessful segmenter algorithms.
  • each segmenter algorithm is assigned at least one permanent time slot 50 in order to assure that each of the segmenter algorithms remains active without regard to performance.
  • operation dynamically adjusts to changing conditions to maintain optimum performance. It should be apparent that the number of segmenters, and number and position of the time slot allocations may be altered from the illustrative example.
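  • A minimal sketch of the time-slot weighting scheme described above. The cap on earned slots is an assumption to keep the example bounded; the patent states only that successful segmenters are granted additional slots while every segmenter keeps at least one permanent slot:

```python
from collections import deque

class SegmenterScheduler:
    """Time-slot weighting of segmenter algorithms."""

    def __init__(self, segmenter_ids, max_bonus_slots=3):
        self.schedule = deque(segmenter_ids)   # one permanent slot each
        self.bonus = {s: 0 for s in segmenter_ids}
        self.max_bonus = max_bonus_slots       # cap is an assumption

    def next_segmenter(self):
        """Advance the pointer over the time slots and return the
        segmenter to run on this frame."""
        segmenter = self.schedule[0]
        self.schedule.rotate(-1)
        return segmenter

    def reward(self, segmenter):
        """Grant an additional slot after a detection confirmed by the
        neural network and consistent across a plurality of frames."""
        if self.bonus[segmenter] < self.max_bonus:
            self.bonus[segmenter] += 1
            self.schedule.append(segmenter)

# Example with the five segmenters of the illustrative embodiment.
scheduler = SegmenterScheduler([1, 2, 3, 4, 5])
scheduler.reward(2)   # segmenter 2 tracked a confirmed vehicle
```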
  • a list of possible new objects represented by their component quanta is generated by the segmenters as depicted by step 54 .
  • the list of possible new objects is compared with a master list of objects, and any objects from the list of possible new objects that cannot be found in the master list are designated as new objects as depicted by step 56 .
  • the object master list is updated by adding the new objects to the object master list as depicted in step 57 .
  • the objects in the updated object master list are then classified and scored as depicted in step 58 .
  • each feature extractor produces a vector 51 of predetermined length that describes an aspect of the object, such as shape.
  • the illustrated feature extractor overlays the object with a 5×5 grid and generates a vector that describes the shape of the object. Because the number of cells 53 in the grid does not change, the representative vector 51 is relatively stable when the size of the object (in number of pixels) changes, such as when the object approaches or moves away from the camera.
  • the vector 51 is concatenated with vectors 55 provided from other feature extractors, if any, to produce a larger vector 57 that represents the object.
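  • A minimal sketch of the grid-based shape feature extractor described above. Using the fraction of active tiles per grid cell is an assumption; the patent states only that the fixed grid yields a scale-stable shape vector:

```python
import numpy as np

def shape_vector(object_mask, grid=5):
    """Overlay the object's bounding box with a grid x grid lattice and
    record the occupancy of each cell, yielding a fixed-length vector
    that stays stable as the object's pixel size changes with distance.

    object_mask : 2-D boolean array of the active tiles of one object
                  (assumed to contain at least one True value)."""
    rows = np.where(object_mask.any(axis=1))[0]
    cols = np.where(object_mask.any(axis=0))[0]
    box = object_mask[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1].astype(float)
    r_edges = np.linspace(0, box.shape[0], grid + 1).astype(int)
    c_edges = np.linspace(0, box.shape[1], grid + 1).astype(int)
    vec = np.empty(grid * grid)
    for i in range(grid):
        for j in range(grid):
            cell = box[r_edges[i]:r_edges[i + 1], c_edges[j]:c_edges[j + 1]]
            vec[i * grid + j] = cell.mean() if cell.size else 0.0
    return vec

# The result can be concatenated with vectors from other feature
# extractors, e.g. np.concatenate([shape_vec, other_vec]), before
# being passed to the classification engine.
```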
  • Other grid patterns and combinations of overlays may be used to achieve improved results based upon camera position relative to the vehicles and other environmental factors.
  • Masking using a vehicle template may be employed to remove background information prior to feature extraction.
  • the object is then compared with templates 136 that depict the shape of known types of vehicles such as cars, vans, trucks etc. When the best fit match is determined, the center of the object, where the center of the template is located in the match position, is marked and only portions of the object that are within the template are employed for generating the vectors.
  • the descriptive vectors generated by the feature extractors are provided to a neural network classification engine that assigns a score to each object.
  • the score indicates the probability of the object being a vehicle, including the type of vehicle, e.g., passenger automobile, van, truck.
  • Objects that produce a score that exceeds a predetermined threshold are determined to be vehicles of the type indicated. If there are regions of overlap between objects in the updated object master list, ownership of the quanta in those regions is resolved in a competition phase as depicted in step 60 . Of the objects in competition for each quantum in the overlap region, the object that was assigned the highest score by the neural network obtains ownership of the quanta.
  • Physical characteristics relating to object motion are calculated in step 62 .
  • the calculations are based on changes in position of a plurality of quanta from frame to frame.
  • vehicle velocity may be calculated as the average velocity of the quanta of the vehicle, by the change in location of a specific portion of the vehicle such as the center front, or by other techniques.
  • vehicle acceleration may be calculated as the change in vehicle velocity over time and vehicle direction may be calculated by extrapolating from direction of travel of quanta over a plurality of frames.
  • the velocity, acceleration and direction of travel of the quanta are calculated based on known length and width dimensions of each pixel and the known period of time between successive frames.
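  • A minimal sketch of the motion calculations described above, assuming quantum positions already converted to real-world feet via the scaler and the 20 frame-per-second sampling interval mentioned elsewhere in the application:

```python
import math

def motion_parameters(positions_ft, frame_period_s=1.0 / 20.0):
    """Estimate speed, acceleration and direction of travel from a
    quantum's last three real-world positions, given as (x, y) in feet."""
    (x0, y0), (x1, y1), (x2, y2) = positions_ft[-3:]
    v_prev = math.hypot(x1 - x0, y1 - y0) / frame_period_s     # ft/s
    v_now = math.hypot(x2 - x1, y2 - y1) / frame_period_s      # ft/s
    accel = (v_now - v_prev) / frame_period_s                  # ft/s^2
    heading_deg = math.degrees(math.atan2(x2 - x1, y2 - y1))   # direction of travel
    return v_now, accel, heading_deg

# Example: a quantum advancing about 4.4 ft per frame is moving at
# roughly 88 ft/s, i.e. about 60 mph.
print(motion_parameters([(0.0, 0.0), (0.0, 4.4), (0.0, 8.8)]))
```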
  • the computation unit 12 includes at least one video capture card 66 .
  • the video capture card 66 performs initial processing on video signals received from the camera 10 .
  • the computation unit 12 operates on the output of the video capture card 66 .
  • the functions described with regard to the embodiment illustrated in FIGS. 8 and 9 are implemented with a custom video capture card 66 . These functions may alternatively be implemented with a commercially available frame grabber and software.
  • the computation unit 12 is a commercially available IBM compatible computer that employs the Windows 95 operating system.
  • the IBM compatible computer includes a Peripheral Component Interconnect (“PCI”) bus interface.
  • the video capture card 66 is operative to process new video frames, establish and maintain the reference frame, and compare the new frames with the reference frame in order to ascertain luminance and/or edge differences therebetween that are indicative of motion.
  • a digitizer circuit 70 is employed to convert the analog video signals from the camera.
  • the camera may provide analog video signals in either National Television Standards Committee (“NTSC”) format or Phase Alternating Line (“PAL”) format.
  • the chrominance portion, if any, of the video signal is separated from the luminance portion of the video signal by the digitizer circuit 70 .
  • the resulting digital signals are provided to an image state machine 72 where the video signal is de-interlaced, if necessary. In particular, the video signal is de-interlaced unless a progressive scan camera is employed.
  • the output of the image state machine 72 is a succession of de-interlaced video frames, each frame being 640 pixels by 480 pixels in size.
  • the image state machine is coupled to a Random Access Memory (“RAM”) 74 that includes a ring of three buffers where frame data is collected prior to transmission of the frames over a digital bus 76 via pixel fetch circuitry 78 .
  • image stabilization is employed by the video control processor 99 to compensate for camera movement due to environmental factors such as wind.
  • Up to two anchor features 69 that are identified by the user during configuration of the traffic monitoring station are employed.
  • the location of each anchor 69 on the new frame 18 is determined, and the new frame is adjusted accordingly.
  • Each anchor 69 is located by matching a template 162 to the image in the new frame.
  • the template 162 is a copy of the rectangular region of the reference frame 28 that includes a representation of the anchor feature 69 .
  • a pixel by pixel comparison is made between the template 162 and a selected region of the new frame to determine whether a match has been found based on average luminance difference.
  • the first comparison may be made at the coordinates at which the anchor 69 is located in the reference frame, or at the coordinates at which the anchor was located in the previous frame. If a minimum-difference (“min Δ”) calculation that is less than or equal to the min Δ calculation in the previous frame is found, the location is determined to be a match, i.e., the anchor is found. If a min Δ calculation that is less than or equal to the min Δ calculation in the previous frame is not found, the location of the selected region is adjusted until the best match is located.
  • the location of the selected region is adjustable within an area of up to 8 pixels in any direction from the matching coordinates of the previous frame.
  • the selected region is shifted in turn both vertically and horizontally by a distance of four pixels to yield four min Δ calculation results. If the lowest of the four results is lower than the result at the start point, the selected region is moved to the place that yielded the lowest result. If none of the results is lower than the result at the start point, the selected region is shifted in turn both vertically and horizontally by half the original distance, i.e., by two pixels, to yield four new min Δ calculation results. If the lowest of the four results is lower than the result at the start point, the selected region is moved to the place that yielded the lowest result. The distance may be halved again to one pixel to yield four new min Δ calculation results.
  • when the best result is found, the anchor is considered found if the result achieves a predetermined threshold of accuracy. If the best result fails to achieve the predetermined threshold of accuracy, an edge comparison is undertaken. The edge comparison is made between the template and the region that defines the best min Δ calculation result. If at least one vertical edge, at least one horizontal edge, and at least 75% of all constituent edges are matched, the anchor is considered found. Otherwise, the anchor is considered not found.
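  • A minimal sketch of the coarse-to-fine anchor search described above, using the mean absolute luminance difference as the min Δ calculation; the edge-based fallback test is omitted:

```python
import numpy as np

def find_anchor(frame, template, start_rc, max_shift=8):
    """Search for an anchor feature near `start_rc`. The candidate region
    is shifted by 4, then 2, then 1 pixel in each axis direction, moving
    to whichever position minimizes the average luminance difference; the
    step is halved only when no shift improves the match, and the search
    is bounded by the 8-pixel limit given in the text."""
    th, tw = template.shape
    template = template.astype(float)

    def delta(r, c):
        region = frame[r:r + th, c:c + tw]
        if region.shape != template.shape:
            return float("inf")
        return float(np.abs(region.astype(float) - template).mean())

    best, best_delta = start_rc, delta(*start_rc)
    step = max_shift // 2
    while step >= 1:
        r, c = best
        candidates = [(r - step, c), (r + step, c), (r, c - step), (r, c + step)]
        deltas = [delta(*rc) for rc in candidates]
        if min(deltas) < best_delta:
            best_delta = min(deltas)
            best = candidates[int(np.argmin(deltas))]
        else:
            step //= 2
    return best, best_delta
```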
  • the new frame 18 is adjusted to produce a stabilized frame based upon how many anchors 69 were found, and where the anchors were found. In the event that both anchors are found and the anchors were found at the same coordinates as in the reference frame, the camera did not move and no correction is necessary. If both anchors moved by the same distance in the same direction, a two-dimensional X-Y offset vector is calculated. If both anchors moved in different directions, the camera may have zoomed and/or rotated. A zoom is indicated when the anchors have moved either towards or away from the center 164 of the image. For example, the anchors appear larger and further from the center of the image when the camera zooms in, and smaller and closer to the center of the image when the camera zooms out.
  • the image is “inflated” by periodically duplicating pixels so that the anchors appear in the expected dimensions.
  • the image is “deflated” by periodically discarding pixels so that the anchors appear in the size that is expected.
  • the video control processor 99 calculates a set of correction factors as described above and sends them to the pixel fetch circuitry 78 .
  • correction factors include instructions for shifting the frame horizontally, vertically, or both to correct for camera pan and tilt motion, and/or directions for inflating or deflating the frame to compensate for camera zoom motion. If no correction is needed, the video control processor calculates a set of correction factors which instructs the pixel fetch circuitry to do a simple copy operation.
  • the correction factors allow the pixel fetch circuitry to select pixels from RAM 74 for transmission on the bus 76 in stabilized order.
  • the pixels are collected into a stabilized frame 84 for use by the computation unit 12 (FIG. 8).
  • a differencing unit 82 employs the contents of the reference frame buffer 80 and the incoming pixels on the bus 76 to compare the reference frame with the stabilized frame, pixel by pixel, in order to determine the differences.
  • the difference values are stored in the difference frame buffer 86 .
  • the computation unit 12 may access the difference frames over the PCI bus 94 .
  • a tiling unit 88 is operative to organize the incoming pixels on bus 76 into tiles 22 (FIG. 3). The tiles are stored in a tile buffer 90 for use by the computation unit 12 (FIG. 8), which may access the tiles via the PCI bus 94 .
  • user-defined zones may be employed to facilitate operation where the view of the camera is partially obstructed and where sections of roadway converge.
  • An entry zone is employed to designate an area of the video image in which new objects may be formed. Objects are not allowed to form outside of the entry zone.
  • an overpass 100 partially obstructs the roadway being monitored. By placing an entry zone 102 in front of the overpass 100 , undesirable detection and tracking of vehicles travelling on the overpass is avoided.
  • a second entry zone 104 is defined for a second section of roadway within the view of the camera. Vehicles entering the roadway through either entry zone 102 or entry zone 104 are tracked.
  • An exit zone 106 is employed to designate an area where individual vehicles are “counted.” Because of the perspective of the field of view of the camera, more distant vehicles appear smaller and closer together. To reduce the likelihood of multiple vehicles being counted as a single vehicle, the number of vehicles included in the vehicle count is determined in the exit zone 106 , which is proximate to the camera.
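  • A minimal sketch of the entry-zone restriction described above, assuming rectangular zones in tile coordinates for simplicity:

```python
def may_form_object(object_tiles, entry_zones):
    """New objects are allowed to form only inside a user-defined entry
    zone; objects outside every zone are rejected.

    object_tiles : iterable of (row, col) tile coordinates
    entry_zones  : list of (row_min, col_min, row_max, col_max) rectangles"""
    tiles = list(object_tiles)
    return any(
        all(r0 <= r <= r1 and c0 <= c <= c1 for r, c in tiles)
        for r0, c0, r1, c1 in entry_zones
    )
```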
  • a plurality of traffic monitoring stations 8 may be employed to monitor and share data from multiple sections of roadway.
  • Information gathered from different sections of roadway may be shared via a computer network.
  • the gathered information may be displayed on a graphic user interface 108 located at a separate operations center 110 .
  • Video images 112 from the camera are provided to the graphic user interface 108 through flow manager software 114 .
  • the flow manager maintains a near real-time display of the video image by balancing video smoothness against delay, controlling the buffering of video data and adapting to available bandwidth.
  • Data resulting from statistical analysis of the video image is provided to the graphic user interface 108 from an analysis engine 116 that includes the tiling unit, segmenter algorithms and neural network described above.
  • the controller card may be employed to transmit the data through an interface 118 to the operations center 110 , as well as optional transmission to other traffic monitoring stations.
  • the interface 118 may be shared memory in the case of a standalone monitoring station/graphic user interface combination or sockets in the case of an independent monitoring station and graphic user interface.
  • the operations center 110 contains an integration tool set for post-processing the traffic data.
  • the tool set enables presentation of data in both graphical and spreadsheet formats.
  • the data may also be exported in different formats for further analysis.
  • the video may also be displayed with an overlay representing vehicle position and type.
  • Alarm parameters may be defined for the data generated by the analysis engine 116 .
  • an alarm may be set to trigger if the average velocity of the vehicles passing through the field of view of the camera drops below a predetermined limit.
  • Alarm calculations may be done by an alarm engine 122 in the traffic monitoring station or at the graphic user interface. Alarm conditions are defined via the graphic user interface.
  • Networked traffic monitoring stations may be employed to identify and track individual vehicles to determine transit time between stations.
  • the shape of the vehicle represented by active tiles is employed to distinguish individual vehicles.
  • a rectangular region (“snippet”) that contains the active tiles that represent a vehicle is obtained as depicted by step 132 .
  • Correction may be made to restore detail obscured by inter-field distortion as depicted by step 134 .
  • the snippet is then compared with templates 136 that depict the shape of known types of vehicles such as cars, vans, trucks etc, as depicted in step 138 .
  • the center of the snippet where the center of the template is located in the match position, is marked as depicted by step 140 .
  • the size of the snippet may be reduced to the size of the matching template.
  • First and second signatures that respectively represent image intensity and image edges are calculated from the snippet as depicted by step 142 .
  • the signatures, matching template type, vehicle speed and a vehicle lane indicator are then transmitted to a second traffic monitoring station as depicted by step 144 .
  • the second traffic monitoring station enters the information into a list that is employed for comparison purposes as depicted in step 146 .
  • information that represents vehicles passing the second traffic monitoring station is calculated by gathering snippets of vehicles and calculating signatures, a lane indicator, speed and vehicle type in the same manner as described with respect to the first traffic monitoring station.
  • the information is then compared with entries selected in step 149 from the list by employing comparator 150 .
  • entries that are so recent that an implausibly high speed would be required for the vehicle to already be passing the second traffic monitoring station are not employed. Further, older entries that would indicate an implausibly slow travel rate are discarded.
  • the signatures may be accorded greater weight in the comparison than the lane indicator and vehicle type.
  • Each comparison yields a score, and the highest score 152 is compared with a predetermined threshold score as depicted by step 154 . If the score does not exceed the threshold, the “match” is disregarded as depicted by step 156 . If the score exceeds the threshold, the match is saved as depicted by step 158 .
  • a ratio is calculated by dividing the difference between the best score and the second best score by the best score as depicted by step 160 . If the ratio is greater than or equal to a predetermined value, a vehicle match is indicated. The transit time and average speed of the vehicle between traffic monitoring stations is then reported to the graphic user interface.
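  • A minimal sketch of the station-to-station matching just described. The field weights and both thresholds are assumptions; the text states only that the signatures are weighted more heavily than the lane indicator and vehicle type and that a best/second-best ratio test is applied. Signature values are assumed normalized to the range 0 to 1:

```python
def match_vehicle(candidate, entries, score_threshold=0.7, ratio_threshold=0.2):
    """Compare a vehicle observed at the second station against the list
    received from the first station and return the winning score, or None.

    Each record is a dict with keys 'intensity_sig' and 'edge_sig'
    (equal-length sequences), 'lane' (int) and 'type' (str)."""
    def signature_similarity(a, b):
        return 1.0 - sum(abs(p - q) for p, q in zip(a, b)) / max(len(a), 1)

    def score(entry):
        s = 0.4 * signature_similarity(candidate["intensity_sig"], entry["intensity_sig"])
        s += 0.4 * signature_similarity(candidate["edge_sig"], entry["edge_sig"])
        s += 0.1 * (candidate["lane"] == entry["lane"])
        s += 0.1 * (candidate["type"] == entry["type"])
        return s

    scores = sorted((score(e) for e in entries), reverse=True)
    if not scores or scores[0] < score_threshold:
        return None                                  # no credible match
    runner_up = scores[1] if len(scores) > 1 else 0.0
    ratio = (scores[0] - runner_up) / scores[0]
    return scores[0] if ratio >= ratio_threshold else None
```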
  • Inter-field distortion is a by-product of standard video camera scanning technology.
  • An NTSC format video camera alternately scans even and odd scan lines every 1/60th of a second.
  • a fast moving vehicle will move enough during the scan to “blur,” seeming to partially appear in two different places at once.
  • for example, a car traveling at approximately 60 mph will move about 1.5 ft during the scan. Greater distortion is observed when the car travels at higher velocity. Greater distortion is also observed when the vehicle is nearer to the camera.
  • the distortion compensating algorithm is based on knowledge of the “camera parameters” and the speed of the vehicle.
  • the camera parameters enable mapping between motion in the real world and motion in the image plane of the camera.
  • the algorithm predicts, based on the camera parameters and the known speed of the vehicle, how much the vehicle has moved in the real world (in the direction of travel).
  • the movement of the vehicle on the image plane is then calculated.
  • the number of scan lines and distance to the left or right on the image is calculated. Correction is implemented by moving the odd scan lines ‘back’ to where the odd scan lines would have been if the car had stayed still (where the car was when the even scan lines were acquired). For example, to move 4 scan lines back, scan line n would be copied back to scan line n-4, where n is any odd scan line.
  • the right/left movement is simply where the scan line is positioned when copied back. An offset may be added or subtracted to move the pixels back into the corrected position.
  • scan line n may have an offset of 8 pixels when moved back to scan line n-4, so pixel 0 in scan line n is copied to pixel 7 in scan line n-4, etc. If the speed of a particular vehicle cannot be determined, the average speed for that lane may be employed to attempt the correction. Distortion correction is not necessary when a progressive scan camera is employed.
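  • A minimal sketch of the inter-field correction described above. The shift amounts would come from mapping the vehicle's predicted real-world motion onto the image plane; the whole frame is corrected here for simplicity, whereas in practice only the vehicle region would be:

```python
import numpy as np

def correct_interfield_distortion(frame, lines_back, pixel_offset):
    """Copy each odd scan line back to where it would have been had the
    vehicle not moved between the even and odd field scans.

    lines_back   : number of scan lines to shift the odd lines (e.g. 4)
    pixel_offset : left/right pixel shift applied while copying"""
    corrected = frame.copy()
    height = frame.shape[0]
    for n in range(1, height, 2):                 # odd scan lines
        target = n - lines_back
        if 0 <= target < height:
            corrected[target] = np.roll(frame[n], pixel_offset, axis=0)
    return corrected
```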
  • the traffic monitoring station may be employed to facilitate traffic control.
  • the traffic monitoring station is deployed such that an intersection is within the field of view of the camera.
  • Vehicle detection can be employed to control traffic light cycles independently for left and right turns, and non-turning traffic.
  • Such control, which would otherwise require multiple inductive loops, can be exerted for a plurality of lanes with a single camera.
  • Predetermined parameters that describe vehicle motion are employed to anticipate future vehicle motion, and proactive action may be taken to control traffic in response to the anticipated motion of the vehicle.
  • if the traffic monitoring station determines that a vehicle 124 will “run” a red light signal 125 by traversing an intersection 126 during a period of time when a traffic signal 128 will be indicating “green” for a vehicle 130 entering the intersection from another direction, the traffic monitoring station can provide a warning or control, such as an audible warning, flashing light and/or delayed green light for the other vehicle 130, in order to reduce the likelihood of a collision.
  • the traffic monitoring station may track the offending vehicle 124 through the intersection 126 and use the tracking information to control a separate camera to zoom in on the vehicle and/or the vehicle license plate to record a single frame, multiple frames or a full motion video movie of the event for vehicle identification and evidentiary purposes.
  • the cameras are coordinated via shared reference features in a field of view overlap area. Once the second camera acquires the target, the second camera zooms in to record the license plate of the offending vehicle.
  • the traffic monitoring station could also be used to detect other types of violations such as illegal lane changes, speed violations, and tailgating. Additionally, the traffic monitoring station can be employed to determine the optimal times to cycle a traffic light based upon detected gaps in traffic and lengths of queues of cars at the intersection.
  • the determination of whether the vehicle will run the red light may be based upon the speed of the vehicle and distance of the vehicle from the intersection. In particular, if the vehicle speed exceeds a predetermined speed within a predetermined distance from the intersection it may be inferred that the vehicle cannot or is unlikely to stop before entering the intersection.
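  • A minimal sketch of a stop/no-stop decision of the kind described above, based on speed and distance from the intersection. The assumed comfortable deceleration (11.2 ft/s², roughly 0.35 g) is an illustrative value, and the full decision model also uses acceleration and vehicle class, which are omitted here:

```python
def will_run_red(speed_ftps, distance_to_line_ft, time_to_red_s,
                 assumed_decel_ftps2=11.2):
    """Flag the vehicle as a likely violator when it cannot stop before
    the stop line under the assumed deceleration and will not reach the
    line before the signal turns red."""
    stopping_distance_ft = speed_ftps ** 2 / (2.0 * assumed_decel_ftps2)
    cannot_stop = stopping_distance_ft > distance_to_line_ft
    arrives_after_red = distance_to_line_ft / max(speed_ftps, 0.1) > time_to_red_s
    return cannot_stop and arrives_after_red

# A vehicle at 58 ft/s (about 40 mph), 120 ft from the stop line, with the
# signal turning red in 1.5 s, would be flagged.
print(will_run_red(58.0, 120.0, 1.5))
```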
  • the disclosed system employs video tracking to detect vehicles that will not stop for a traffic light that is changing to red.
  • the disclosed system outputs a signal in response to detection of such Non-Stopping Vehicles (NSV's), for example when they have completed passage through the intersection and cross traffic can safely proceed.
  • a traffic controller can be optionally programmed to delay the onset of the green light for cross traffic or pedestrians until any detected red-light violating vehicles have moved through the intersection.
  • the disclosed system may advantageously reduce the risk of collisions and/or injuries at intersections.
  • the features of the presently disclosed system involve the ability (1) to process a sequence of video images of oncoming traffic to detect, classify and provide continuous tracking of detected vehicles; (2) to calculate from image information real world vehicle displacements with sufficient accuracy to support measurement of vehicle speed and acceleration for all vehicles in the camera field of view; (3) to provide continuous, real time measurement of vehicle position, speed and acceleration; (4) to develop and implement a decision model, using among its inputs, measures of vehicle position, speed, acceleration and classification in order to determine likelihood of a vehicle stopping; and (5) to update the decision model in real time as a result of changing values of vehicle parameters (e.g., position, speed, acceleration) for each oncoming vehicle in the camera's field of view.
  • the disclosed system supports the use of cameras for intersection monitoring/control and surveillance.
  • the disclosed system may further be applied to other safety considerations: the detection of vehicles approaching toll booths, railway crossings or other controlled roadway structures (lane reducers, etc.) at excessive speeds, and/or the detection of vehicles out of control on steep grades or downgrades leading to a traffic light or rail crossing.
  • the disclosed system includes innovative video-monitoring capabilities to improve the safety of signalized intersections.
  • the sensor can generate a signal that specifically indicates a condition of a vehicle passing or about to pass through an intersection in violation of a red-light. This signal can then be used to delay the onset of the green light phase of a traffic light for any cross traffic until the NSV has moved through the intersection and it is safe for cross traffic and/or pedestrians to proceed.
  • such a sensor signal is not necessarily used to dynamically lengthen, or otherwise change, the timing cycle for the traffic light of the NSV.
  • the timing cycle of the traffic light for the NSV may be left unchanged, in which case the NSV does indeed violate the red light.
  • the capabilities of the disclosed system are based on monitoring and controlling traffic flow at an intersection through approach detection, stop line and turn detection functionality.
  • the disclosed system accurately provides speed and position information on vehicles within a field of view, and determines from the motion characteristics of oncoming vehicles and a knowledge of the intersection timing cycle, whether a vehicle is in the process of running a red-light or has a high probability of running the red-light. In one embodiment, this determination starts as soon as the light for traffic in the given direction of interest cycles to yellow.
  • the accurate, early detection of an NSV provided by the disclosed system is essential in order to produce a sensor signal in advance of the normal cycle for activating the green light of the cross traffic.
  • the disclosed system supports a number of options for determining an “all clear” signal for an intersection. For example, in intersections equipped with high slew rate PTZ cameras (>90 deg/sec.), the camera that images the NSV can be controlled by the disclosed system to track the NSV through the intersection, generating the all clear signal when the NSV has exited the intersection.
  • two alternative embodiments may be employed to determine the “all clear” signal. In a first such alternative embodiment, no single camera images the entire intersection area, but the intersection is completely imaged in the field of view of two or more cameras.
  • the multiple cameras may be used to cooperatively track the target NSV through the field of view of each camera until the NSV has traversed the entire intersection.
  • a determination is made, on the basis of the NSV's speed and acceleration as it exits the camera's field of view, and from predetermined knowledge of the spatial extent of the intersection that is not imaged either by a single camera or by multiple cameras, of the time that will elapse before the vehicle has moved completely through the intersection. Based on such a determination, the all clear signal can be generated once this predicted time has elapsed.
  • This second alternative embodiment requires the fewest resources in terms of camera installation requirements.
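  • A minimal sketch of the "all clear" timing determination used in this second alternative embodiment, applying d = v·t + ½·a·t² to the speed and acceleration measured as the NSV exits the field of view and the known unimaged extent of the intersection:

```python
import math

def seconds_until_clear(exit_speed_ftps, exit_accel_ftps2, remaining_ft):
    """Predict how long until an NSV that has left the imaged area has
    crossed the unimaged remainder of the intersection; the all-clear
    signal can be generated once this time has elapsed."""
    v, a, d = exit_speed_ftps, exit_accel_ftps2, remaining_ft
    if abs(a) < 1e-6:
        return d / max(v, 0.1)
    discriminant = v * v + 2.0 * a * d
    if discriminant < 0:
        return float("inf")      # decelerating to a stop before clearing
    return (-v + math.sqrt(discriminant)) / a

# Example: exiting at 44 ft/s while decelerating at 2 ft/s^2 with 60 ft of
# intersection remaining clears in roughly 1.4 seconds.
print(round(seconds_until_clear(44.0, -2.0, 60.0), 2))
```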
  • the present invention may, for example, provide (1) wide area detection of vehicles entering the approach zone of an intersection; (2) high-precision continuous speed and acceleration profiles using 20 frame-per-second video sampling; (3) single camera support for monitoring two approach directions; (4) continuous tracking of vehicles as they cross lane boundaries; and (5) cooperative vehicle tracking between multiple cameras to ensure vehicle clearance through the intersection area.
  • the disclosed system advantageously operates to anticipate and predict traffic light violations before they occur, for example using an approaching vehicle's speed. Accordingly, the disclosed system can be used to intelligently delay a green light until the offending vehicle is safely past the intersection.
  • one or more video cameras may be used by the present system because they meet key requirements of the intersection safety application: the need for sensor information on which accurate measurements of vehicle position, speed, acceleration and classification can be made; the need for such information on all vehicles approaching the intersection (not just for those that are nearest to the intersection in each lane); and the need for continuously updated information such that the individual vehicle measurements can be continuously updated in real time.
  • sensor technologies other than video, such as synthetic aperture radar (SAR), may alternatively be employed, since a radar-based image or images can be processed to detect, track and classify target vehicles on a continuous basis.
  • the present system leverages the investment that municipalities make in deploying video cameras at intersections for surveillance and/or traffic monitoring and flow control purposes to provide significant safety benefits as well. Because the disclosed system potentially supports such dual use of video cameras, significant cost savings result. Additionally, because of the compatibility of the disclosed system with surveillance uses of the video cameras, it can be used for active monitoring in a traffic control center, offering the potential for integration with larger traffic management systems.
  • the features of the disclosed system can provide intelligent green light delays based upon the detection of pedestrians that are in the process of crossing an intersection at a well defined crosswalk. In some cases, late-crossing pedestrians are not visible to motorists who are waiting for a green light. This may result from a motorist's view being obstructed by a truck or bus in an adjacent lane. An embodiment of the disclosed system could identify pedestrians or pedestrian groups, and communicate information on expected time to crossing completion that could be optionally used to delay activation of a green light until such pedestrians were safely clear of the intersection.
  • the disclosed system may further be embodied to monitor other potential safety hazards, such as: vehicles moving at excessive speed approaching toll booths or other controlled roadway structures (lane reducers, etc.); vehicles out of control on steep grades, hills or steep downgrades leading to a traffic light or rail crossing. Additionally, the disclosed system can use the tracking position information it generates to control and move a high performance PTZ camera mechanism and standard NTSC color camera to zoom in on and identify a potential red-light violator for enforcement purposes. In such a case, no additional pavement sensors or fixed cameras are required for red-light violation monitoring.
  • controller logic must be employed to interface sensor outputs to the intersection traffic lights.
  • some existing controllers can be adapted to implement a green-light delay based on outputs from the disclosed system.
  • Implementations and/or deployments of the disclosed system could differ in operation from intersection to intersection.
  • an implementation of the disclosed system could be configured to take into account the different demographics of an area as well as the nature of its traffic flow patterns. For example, at rush hour, vehicles often “creep” into an intersection on a yellow light, even when traffic is backed up, preventing their transit through the intersection before the onset of a red light in their direction.
  • the disclosed system may be pre-configured to delay the activation of the cross traffic green light only for a maximum period of time, in order to give cross traffic the opportunity to flow, allowing them to navigate around vehicles present in the intersection but which have not yet traversed the intersection region.

Abstract

A traffic sensor system for detecting and tracking vehicles is described. The disclosed system may be employed as a traffic light violation prediction system for a traffic signal, and as a collision avoidance system. A video camera is employed to obtain a video image of a section of a roadway. Motion is detected through changes in luminance and edges in frames of the video image. Predetermined sets of pixels (“tiles”) in the frames are designated to be in either an “active” state or an “inactive” state. A tile becomes active when the luminance or edge values of the pixels of the tile differ from the respective luminance or edge values of a corresponding tile in a reference frame in accordance with predetermined criteria. The tile becomes inactive when the luminance or edge values of the pixels of the tile do not differ from the corresponding reference frame tile in accordance with the predetermined criteria. Shape and motion of groups of active tiles (“quanta”) are analyzed with software and a neural network to detect and track vehicles.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a divisional of U.S. patent application Ser. No. 09/059,151, filed Apr. 13, 1998, entitled TRAFFIC SENSOR, which claims priority to U.S. Provisional Patent Application Ser. No. 60/043,690, entitled TRAFFIC SENSOR, filed Apr. 14, 1997. [0001]
  • STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
  • N/A [0002]
  • BACKGROUND OF THE INVENTION
  • The present invention is related to traffic monitoring systems, and more particularly to a traffic monitoring system for detecting, measuring and anticipating vehicle motion. [0003]
  • Systems for monitoring vehicular traffic are known. For example, it is known to detect vehicles by employing inductive loop sensors. At least one loop of wire or a similar conductive element may be disposed beneath the surface of a roadway at a predetermined location. Electromagnetic induction occurs when a vehicle occupies the roadway above the loop. The induction can be detected via a simple electronic circuit that is coupled with the loop. The inductive loop and associated detection circuitry can be coupled with an electronic counter circuit to count the number of vehicles that pass over the loop. However, inductive loops are subjected to harsh environmental conditions and consequently have a relatively short expected lifespan. [0004]
  • It is also known to employ optical sensors to monitor vehicular traffic. For example, traffic monitoring systems that employ “machine vision” technology such as video cameras are known. Machine vision traffic monitoring systems are generally mounted above the surface of the roadway and have the potential for much longer lifespan than inductive loop systems. Further, machine vision traffic monitoring systems have the potential to provide more information about traffic conditions than inductive loop traffic monitoring systems. However, known machine vision traffic monitoring systems have not achieved these potentials. [0005]
  • SUMMARY OF THE INVENTION
  • In accordance with the present invention, a traffic monitoring station employs at least one video camera and a computation unit to detect and track vehicles passing through the field of view of the video camera. The disclosed system may be used as a traffic light violation prediction system for a traffic signal, and/or as a collision avoidance system. [0006]
  • In an illustrative embodiment, the camera provides a video image of a section of roadway in the form of successive individual video frames. Motion is detected through edge analysis and changes in luminance relative to an edge reference frame and a luminance reference frame. The frames are organized into a plurality of sets of pixels. Each set of pixels (“tile”) is in either an “active” state or an “inactive” state. A tile becomes active when the luminance or edge values of the pixels of the tile differ from the luminance and edge values of the corresponding tiles in the corresponding reference frames in accordance with predetermined criteria. The tile becomes inactive when the luminance and edge values of the pixels of the tile do not differ from the corresponding reference frame tiles in accordance with the predetermined criteria. [0007]
  • The reference frames, which represent the view of the camera without moving vehicles, may be dynamically updated in response to conditions in the field of view of the camera. The reference frames are updated by combining each new frame with the respective reference frames. The combining calculation is weighted in favor of the reference frames to provide a gradual rate of change in the reference frames. A previous frame may also be employed in a “frame-to-frame” comparison with the new frame to detect motion. The frame-to-frame comparison may provide improved results relative to use of the reference frame in conditions of low light and darkness. [0008]
  • Each object is represented by at least one group of proximate active tiles (“quanta”). Individual quanta, each of which contains a predetermined maximum number of tiles, are tracked through successive video frames. The distance traveled by each quantum is readily calculable from the change in position of the quantum relative to stationary features in the field of view of the camera. The time taken to travel the distance is readily calculable since the period of time between successive frames is known. Physical parameters such as velocity, acceleration and direction of travel of the quantum are calculated based on change in quantum position over time. Physical parameters that describe vehicle motion are calculated by employing the physical parameters calculated for the quanta. For example, the velocities calculated for the quanta that comprise the vehicle may be combined and averaged to ascertain the velocity of the vehicle. [0009]
  • The motion and shape of quanta are employed to delineate vehicles from other objects. A plurality of segmenter algorithms is employed to perform grouping, dividing and pattern matching functions on the quanta. For example, some segmenter algorithms employ pattern matching to facilitate identification of types of vehicles, such as passenger automobiles and trucks. A physical mapping of vehicle models may be employed to facilitate the proper segmentation of vehicles. A list of possible new objects is generated from the output of the segmenter algorithms. The list of possible new objects is compared with a master list of objects, and objects from the list of possible new objects that cannot be found in the master list are designated as new objects. The object master list is then updated by adding the new objects to the object master list. [0010]
  • At least one feature extractor is employed to generate a descriptive vector for each object. The descriptive vector is provided to a neural network classification engine which classifies and scores each object. The resultant score indicates the probability of the object being a vehicle of a particular type. Objects that produce a score that exceeds a predetermined threshold are determined to be vehicles. [0011]
  • The traffic monitoring station may be employed to facilitate traffic control in real time. Predetermined parameters that describe vehicle motion may be employed to anticipate future vehicle motion. Proactive action may then be taken to control traffic in response to the anticipated motion of the vehicle. For example, if on the basis of station-determined values for vehicle distance from the intersection, speed, acceleration, and vehicle class (truck, car, etc.), the traffic monitoring station determines that the vehicle will “run a red light,” traversing an intersection during a period of time when the traffic signal will otherwise be indicating “green” for vehicles entering the intersection from another direction, the traffic monitoring station can delay the green light for the other vehicles or cause some other actions to be taken to reduce the likelihood of a collision. Such actions may also include displaying the green light for the other vehicles in an altered mode (e.g., flashing) or in some combination with another signal light (e.g., yellow or red), or initiating an audible alarm at the intersection until the danger has passed. Further, the traffic monitoring station may track the offending vehicle through the intersection and record a full motion video movie of the event for vehicle identification and evidentiary purposes. [0012]
  • BRIEF DESCRIPTION OF THE DRAWING
  • The foregoing features of this invention, as well as the invention itself, may be more fully understood from the following Detailed Description of the Invention, and Drawing, of which: [0013]
  • FIG. 1A is a perspective diagram of a traffic monitoring station that illustrates configuration; [0014]
  • FIG. 1B is a side view diagram of a traffic monitoring station that illustrates tilt angle; [0015]
  • FIG. 1C is a top view diagram of a traffic monitoring station that illustrates pan angle; [0016]
  • FIG. 2 is a flow diagram that illustrates the vehicle detection and tracking method of the traffic monitoring station; [0017]
  • FIG. 3 is a diagram of a new frame that illustrates use of tiles and quanta to identify and track objects; [0018]
  • FIG. 4 is a diagram of a reference frame; [0019]
  • FIG. 5 is a diagram that illustrates edge detect tile comparison to determine tile activation; [0020]
  • FIG. 6 is a diagram that illustrates adjustment of segmenter algorithm weighting; [0021]
  • FIG. 7 is a diagram that illustrates feature vector generation by a feature extractor; [0022]
  • FIG. 8 is a diagram of the traffic monitoring station of FIG. 1 that illustrates the processing module and network connections; [0023]
  • FIG. 9 is a block diagram of the video capture card of FIG. 8; [0024]
  • FIG. 10A is a diagram that illustrates use of the new frame for image stabilization; [0025]
  • FIG. 10B is a diagram that illustrates use of the reference frame for image stabilization; [0026]
  • FIG. 11 is diagram of the field of view of a camera that illustrates use of entry and exit zones; [0027]
  • FIG. 12 is a block diagram of traffic monitoring stations networked through a graphic user interface; [0028]
  • FIG. 13 is a flow diagram that illustrates station to station vehicle tracking; and [0029]
  • FIG. 14 is a diagram of an intersection that illustrates traffic control based on data gathered by the monitoring station.[0030]
  • DETAILED DESCRIPTION OF THE INVENTION
  • U.S. Provisional Patent Application Ser. No. 60/043,690, entitled TRAFFIC SENSOR, filed Apr. 14, 1997, is hereby incorporated herein by reference. [0031]
  • Referring to FIGS. 1A, 1B and [0032] 1C, a traffic monitoring station 8 includes at least one camera 10 and a computation unit 12. The camera 10 is employed to acquire a video image of a section of a roadway 14. The computation unit 12 is employed to analyze the acquired video images to detect and track vehicles.
  • A three dimensional geometric representation of the site is calculated from parameters entered by the user in order to configure the [0033] traffic monitoring station 8 for operation. The position of a selected reference feature 16 relative to the camera 10 is measured and entered into memory by employing a graphic user interface. In particular, a distance Y along the ground between the camera 10 and the reference feature 16 on a line that is parallel with the lane markings 17 and a distance X along a line that is perpendicular with the lane markings are measured and entered into memory. The camera height H, lane widths of all lanes W1, W2, W3 and position of each lane in the field of view of the camera are also entered into memory. The tilt angle 15 and pan angle 13 of the camera are trigonometrically calculated from the user-entered information, such as shown in Appendix A. The tilt angle 15 is the angle between a line 2 directly out of the lens of the camera 10 and a line 6 that is parallel to the road. The pan angle 13 is the angle between line 2 and a line 3 that is parallel to the lane lines and passes directly under the camera 10. A value used for scaling (“scaler”) is calculated for facilitating distance calculations. The scaler is a fixed factor for the entire image that is used for conversion between real distances and pixel displacements. Hence, the distance and direction from the camera to any point in the field of view of the camera, and the distance and direction between any two points in the field of view of the camera can be determined.
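  • The tilt and pan relationships used during configuration reduce to two arctangents over the user-entered site measurements, as in the following minimal sketch; the function and parameter names are assumptions for illustration, and the full solver, including grade, bank and scaler handling, is given in Appendix A.
    #include <math.h>

    /* Sketch of the configuration trigonometry: tilt from camera height
       and ground distance to the reference feature, pan from the lateral
       offset.  Distances are in consistent units; angles are radians. */
    void solve_tilt_and_pan(double H,      /* camera height                       */
                            double X,      /* offset perpendicular to lane lines  */
                            double Y,      /* ground distance along lane lines    */
                            double *tilt,  /* angle below the road-parallel line  */
                            double *pan)   /* angle off the lane-parallel line    */
    {
        double ground = sqrt(X * X + Y * Y);            /* ground-plane range  */
        double radial = sqrt(ground * ground + H * H);  /* line-of-sight range */
        *tilt = atan(H / ground);
        *pan  = atan(X / radial);
    }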
  • Corrections for roadway grade and bank may also be calculated during configuration. “Grade” refers to the change in height of the roadway relative to the height of the camera within the field of view of the camera. “Bank” refers to the difference in height of portions of the roadway along a line perpendicular with the lane markings. The user determines the grade and bank of the roadway and enters the determined values into memory by employing a graphic user interface. The grade and bank corrections are achieved by translating the reference plane to match the specified grade and bank. [0034]
  • Referring to FIGS. 2 and 3, operation of the traffic monitoring station will now be described. A [0035] video frame 18 is acquired from the camera as depicted in step 20. If an interlaced camera is employed, the acquired frame is de-interlaced. If a progressive scan camera is employed then de-interlacing is not necessary. Image stabilization techniques may also be employed to compensate for movement of the camera due to wind, vibration and other environmental factors, as will be described below. The acquired frame 18 is organized into tiles 22 as depicted in step 24. Each tile 22 is a region of predetermined dimensions. In the illustrated embodiment, each frame contains 80 tiles per row and 60 tiles per column and the dimensions of each tile are 8 pixels by 8 pixels. Tile dimensions may be adjusted, may be non-square, and may overlap other tiles.
  • Referring to FIGS. 2, 3 and [0036] 4, a list 26 of tiles in which motion is detected (“active tiles”) 38 is generated by employing either or both of reference frames 28, 29 and a previously acquired frame 30 in separate comparisons with the acquired frame 18. The reference frame 28 represents the luminance of the image from the camera in the absence of moving vehicles. The reference frame 29 represents edges detected in the image from the camera in the absence of moving vehicles. In the illustrated embodiment, both the reference frames 28, 29 and the previous frame 30 are employed. If a color camera is employed, the chrominance (color) portion of each tile 22 in the acquired frame is separated from the luminance (black and white) portion prior to comparison.
  • As illustrated in FIG. 5, an edge detect comparison may be employed to detect motion and activate tiles. For each [0037] tile 22 of the new frame 18 (FIG. 3), the tile is organized into four “quartiles” 32 of equal size. The pixel luminance values in each quartile 32 are summed to provide a representative luminance value for each quartile. In the illustrated embodiment, each pixel has a luminance represented by a value from 0 to 255, where greater values indicate greater luminance. The quartile having the maximum representative luminance value is then identified and employed as a baseline for analyzing the other quartiles. In particular, the maximum luminance quartile 34 is designated to be in a first state, illustrated as logic 1. The other quartiles in the tile are designated to be in the first state if their representative luminance value exceeds a threshold defined by a predetermined percentage of the luminance value of the maximum luminance quartile 34 (lum≧βlummax). β (“the gain”) can be fixed at a specific level or may be allowed to vary based upon the characteristics of the image. Quartiles with a representative value that fails to exceed the threshold are designated to be in a second state, illustrated by logic 0. Each quartile is then compared with the corresponding quartile from the corresponding tile 36 from the reference frame 29 (FIG. 4) and, separately, the previously acquired frame. The tile 22 is designated as “active” if the comparison indicates a difference in the state of more than one quartile. If the comparison indicates a difference in the state of one or fewer quartiles and at least one quartile of the tile is in the second state, the tile is designated as “inactive.”
  • In the case where each [0038] quartile 32 in the corresponding tiles of the current frame and the reference frame are designated to be in the first state a luminance activation technique is employed. A luminance intensity value is determined by summing the luminance of all pixels in the tile and dividing the sum by the total number of pixels in the tile, i.e., computing the average luminance. The average luminance of the tile is compared with the average luminance of the tile 36 in the corresponding location of the reference frame 28 and the previous frame to detect any difference therebetween. In particular, the average luminance of the reference tile is subtracted from the average luminance of the new tile to produce a difference value and, if the magnitude of the difference value exceeds a predetermined threshold, motion is indicated and the tile is designated as “active.” The model using tiles, quartiles and pixels is isomorphic to a neural model of several layers.
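  • As an illustration of the activation tests in the two preceding paragraphs, the sketch below derives quartile states for an 8×8 tile and falls back to the average-luminance comparison when every quartile of both tiles is in the first state; the gain and difference thresholds are assumed values, not ones specified in the disclosure.
    #include <stdlib.h>

    #define TILE 8
    #define GAIN 0.80              /* beta: fraction of the maximum quartile sum */
    #define LUM_DIFF_THRESHOLD 12  /* assumed average-luminance threshold        */

    /* Sum pixel luminance over one 4x4 quartile (q = 0..3). */
    static int quartile_sum(const unsigned char tile[TILE][TILE], int q)
    {
        int r0 = (q / 2) * 4, c0 = (q % 2) * 4, sum = 0;
        for (int r = r0; r < r0 + 4; r++)
            for (int c = c0; c < c0 + 4; c++)
                sum += tile[r][c];
        return sum;
    }

    /* Set states[q] to 1 (first state) when the quartile sum reaches
       GAIN times the maximum quartile sum, otherwise 0 (second state). */
    static void quartile_states(const unsigned char tile[TILE][TILE], int states[4])
    {
        int sums[4], max = 0;
        for (int q = 0; q < 4; q++) {
            sums[q] = quartile_sum(tile, q);
            if (sums[q] > max) max = sums[q];
        }
        for (int q = 0; q < 4; q++)
            states[q] = (sums[q] >= GAIN * max) ? 1 : 0;
    }

    static int average_luminance(const unsigned char tile[TILE][TILE])
    {
        int sum = 0;
        for (int r = 0; r < TILE; r++)
            for (int c = 0; c < TILE; c++)
                sum += tile[r][c];
        return sum / (TILE * TILE);
    }

    /* Return 1 (active) when more than one quartile state differs from the
       reference tile, or when all quartiles of both tiles are in the first
       state and the average luminance differs by more than the threshold. */
    int tile_is_active(const unsigned char newtile[TILE][TILE],
                       const unsigned char reftile[TILE][TILE])
    {
        int ns[4], rs[4], differing = 0, all_first = 1;
        quartile_states(newtile, ns);
        quartile_states(reftile, rs);
        for (int q = 0; q < 4; q++) {
            if (ns[q] != rs[q]) differing++;
            if (ns[q] == 0 || rs[q] == 0) all_first = 0;
        }
        if (differing > 1)
            return 1;
        if (all_first)
            return abs(average_luminance(newtile) - average_luminance(reftile))
                   > LUM_DIFF_THRESHOLD;
        return 0;
    }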
  • Referring again to FIGS. 3 and 4, the reference frames [0039] 28, 29 may be either static or dynamic. A static reference frame may be generated by storing a video frame from the roadway or portion(s) of the roadway when there are no moving objects in the field of view of the camera. In the illustrated embodiment the reference frames 28, 29 are dynamically updated in order to filter differences between frames that are attributable to gradually changing conditions such as shadows. The reference frames are updated by combining each new frame 18 with the reference frames. The combining calculation may be weighted in favor of the reference frames to filter quickly occurring events, such as the passage of vehicles, while incorporating slowly occurring events such as shadows and changes in the ambient light level.
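  • The weighted combining that updates the reference frames can be pictured as a per-pixel running average, as in the sketch below; the weighting value is an assumption chosen only to show the idea of favoring the reference frame over the new frame.
    /* Sketch of a dynamically updated reference frame: each pixel moves a
       small fraction of the way toward the new frame, so slowly changing
       conditions such as shadows are absorbed while passing vehicles are
       filtered out.  ALPHA is an assumed weighting. */
    #define FRAME_W 640
    #define FRAME_H 480
    #define ALPHA   0.02f   /* fraction of the new frame folded in per update */

    void update_reference(float ref[FRAME_H][FRAME_W],
                          const unsigned char newframe[FRAME_H][FRAME_W])
    {
        for (int y = 0; y < FRAME_H; y++)
            for (int x = 0; x < FRAME_W; x++)
                ref[y][x] = (1.0f - ALPHA) * ref[y][x] + ALPHA * newframe[y][x];
    }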
  • Referring to FIGS. 2 and 3, and Appendix B, [0040] active tiles 38 in the list 26 of active tiles are organized into sets of proximately grouped active tiles (“quanta”) 40 as depicted by step 42. The quanta 40 are employed to track moving objects such as vehicles on successive frames. The distance traveled by each quantum is calculated based upon the change in position of the quantum from frame to frame. Matching and identifying of quantum is facilitated by a “grab phase” and an “expansion phase” as depicted by step 44. Each quantum has a shape. In the “grab phase,” active tiles are sought in a predicted position that is calculated for the quantum in the new frame, within the shape defined by the quantum. The predicted position is determined by the previously observed velocity and direction of travel of the quantum. If any active tiles are located within the quantum shape region in the predicted position of the quantum in the new frame, the quantum is considered found. If no active tiles are located in the quantum shape region in the predicted position in the new frame, the quantum is considered lost. In the “expansion phase,” active tiles that are adjacent to a found quantum and that have not been claimed by other quanta are incorporated into the found quantum, thereby allowing each quantum to change shape. Unclaimed active tiles are grouped together to form new quanta unless the number of active tiles is insufficient to form a quantum. If any of the quanta that have changed shape now exceed a predetermined maximum size then these “parent” known quantum are reorganized into a plurality of “children” quantum. Each child quantum inherits the characteristics of its parent quantum, such as velocity, acceleration and direction.
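  • The grab-phase prediction amounts to advancing each quantum by its previously observed per-frame displacement, roughly as in the sketch below; the structure fields are assumed names, not the Appendix B data layout.
    /* Sketch of the grab-phase position prediction: a quantum's expected
       location in the new frame is its last location advanced by its
       observed per-frame displacement. */
    typedef struct {
        double x, y;    /* last known position (tile coordinates)  */
        double vx, vy;  /* observed displacement per frame          */
    } Quantum;

    void predict_position(const Quantum *q, double *pred_x, double *pred_y)
    {
        *pred_x = q->x + q->vx;
        *pred_y = q->y + q->vy;
    }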
  • The identified quanta are organized into objects as depicted in [0041] step 46. The traffic sensor employs a plurality of segmenter algorithms to organize the identified quanta into objects. Each segmenter algorithm performs a grouping, dividing or pattern matching function. For example, a “blob segmenter” groups quanta that are connected. Some segmenter algorithms facilitate identification of types of vehicles, such as passenger automobiles and trucks. Some segmenter algorithms facilitate identification of vehicle features such as headlights. Some segmenter algorithms reorganize groups of quanta to facilitate identification of features.
  • Referring to FIG. 6, the segmenter algorithms are employed in accordance with a dynamic weighting technique to facilitate operation under changing conditions. Five segmenter algorithms designated by numbers [0042] 1-5 are employed in the illustrative example. One segmenter algorithm is employed in each time slot. In particular, the segmenter algorithm in the time slot designated by an advancing pointer 48 is employed. When a segmenter algorithm successfully detects and tracks an object that is determined to be a vehicle by the neural network and is consistent across a plurality of frames then that segmenter algorithm is granted an additional time slot. Consequently, the segmenter algorithms that are more successful under the prevailing conditions are weighted more heavily than the unsuccessful segmenter algorithms. However, each segmenter algorithm is assigned at least one permanent time slot 50 in order to assure that each of the segmenter algorithms remains active without regard to performance. Hence, operation dynamically adjusts to changing conditions to maintain optimum performance. It should be apparent that the number of segmenters, and number and position of the time slot allocations may be altered from the illustrative example.
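  • One way to realize the time-slot weighting is a rotating schedule of segmenter identifiers in which a confirmed detection earns an extra slot, as sketched below; the slot counts and reward policy are assumptions for illustration.
    /* Sketch of dynamic segmenter weighting: a ring of time slots holds
       segmenter ids, an advancing pointer selects one per frame, and a
       segmenter that produced a confirmed vehicle earns an extra slot. */
    #define NUM_SEGMENTERS  5
    #define MAX_SLOTS      32

    typedef struct {
        int slots[MAX_SLOTS];  /* segmenter id per slot                */
        int count;             /* slots currently in use               */
        int pointer;           /* advancing pointer into the slot ring */
    } SegmenterSchedule;

    void schedule_init(SegmenterSchedule *s)
    {
        /* One permanent slot per segmenter keeps every algorithm active
           regardless of performance. */
        s->count = NUM_SEGMENTERS;
        s->pointer = 0;
        for (int i = 0; i < NUM_SEGMENTERS; i++)
            s->slots[i] = i;
    }

    /* Return the segmenter to run for the current frame. */
    int schedule_next(SegmenterSchedule *s)
    {
        int id = s->slots[s->pointer];
        s->pointer = (s->pointer + 1) % s->count;
        return id;
    }

    /* Reward a segmenter whose object was confirmed as a vehicle. */
    void schedule_reward(SegmenterSchedule *s, int id)
    {
        if (s->count < MAX_SLOTS)
            s->slots[s->count++] = id;
    }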
  • A list of possible new objects represented by their component quanta is generated by the segmenters as depicted by step 54. The list of possible new objects is compared with a master list of objects, and any objects from the list of possible new objects that cannot be found in the master list are designated as new objects as depicted by step 56. The object master list is updated by adding the new objects to the object master list as depicted in step 57. The objects in the updated object master list are then classified and scored as depicted in step 58. [0043]
  • Referring to FIGS. 2 and 7, the objects in the master list are examined by employing at least one [0044] feature extractor 49 as depicted by step 47. Each feature extractor produces a vector 51 of predetermined length that describes an aspect of the object, such as shape. The illustrated feature extractor overlays the object with a 5×5 grid and generates a vector that describes the shape of the object. Because the number of cells 53 in the grid does not change, the representative vector 51 is relatively stable when the size of the object (in number of pixels) changes, such as when the object approaches or moves away from the camera. The vector 51 is concatenated with vectors 55 provided from other feature extractors, if any, to produce a larger vector 57 that represents the object. Other grid patterns and combinations of overlays may be used to achieve improved results based upon camera position relative to the vehicles and other environmental factors.
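  • A shape vector of the kind produced by the illustrated feature extractor might be computed as the fraction of an object's tiles that fall in each cell of a 5×5 grid laid over its bounding box, as in the following sketch; the normalization and data layout are assumptions.
    /* Sketch of a 5x5 shape feature extractor: overlay the object's
       bounding box with a 5x5 grid and record, per cell, the fraction of
       the object's tiles that land there.  The 25-element vector length
       does not depend on the object's size in pixels.  The caller passes
       the object's own bounding box, so every tile maps into the grid. */
    #define GRID 5

    typedef struct {
        int x, y;   /* tile coordinates of one active tile of the object */
    } TilePos;

    void shape_vector(const TilePos *tiles, int ntiles,
                      int min_x, int min_y, int max_x, int max_y,
                      float vec[GRID * GRID])
    {
        int w = max_x - min_x + 1;
        int h = max_y - min_y + 1;
        for (int i = 0; i < GRID * GRID; i++)
            vec[i] = 0.0f;
        for (int i = 0; i < ntiles; i++) {
            int gx = (tiles[i].x - min_x) * GRID / w;   /* cell column */
            int gy = (tiles[i].y - min_y) * GRID / h;   /* cell row    */
            vec[gy * GRID + gx] += 1.0f;
        }
        if (ntiles > 0)
            for (int i = 0; i < GRID * GRID; i++)
                vec[i] /= (float)ntiles;                /* normalize    */
    }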
  • Masking using a vehicle template may be employed to remove background information prior to feature extraction. The object is then compared with [0045] templates 136 that depict the shape of known types of vehicles such as cars, vans, trucks etc. When the best fit match is determined, the center of the object, where the center of the template is located in the match position, is marked and only portions of the object that are within the template are employed for generating the vectors.
  • The descriptive vectors generated by the feature extractors are provided to a neural network classification engine that assigns a score to each object. The score indicates the probability of the object being a vehicle, including the type of vehicle, e.g., passenger automobile, van, truck. Objects that produce a score that exceeds a predetermined threshold are determined to be vehicles of the type indicated. If there are regions of overlap between objects in the updated object master list, ownership of the quanta in those regions is resolved in a competition phase as depicted in [0046] step 60. Of the objects in competition for each quantum in the overlap region, the object that was assigned the highest score by the neural network obtains ownership of the quanta.
  • Physical characteristics relating to object motion, such as velocity, acceleration, direction of travel and distance between objects, are calculated in [0047] step 62. The calculations are based on changes in position of a plurality of quanta from frame to frame. In particular, vehicle velocity may be calculated as the average velocity of the quanta of the vehicle, by the change in location of a specific portion of the vehicle such as the center front, or by other techniques. Similarly, vehicle acceleration may be calculated as the change in vehicle velocity over time and vehicle direction may be calculated by extrapolating from direction of travel of quanta over a plurality of frames. The velocity, acceleration and direction of travel of the quanta are calculated based on known length and width dimensions of each pixel and the known period of time between successive frames.
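  • The motion calculations reduce to scaling pixel displacements into real distance and dividing by the known frame interval, as sketched below; the units and parameter names are assumptions.
    #include <math.h>

    /* Sketch of per-frame motion estimates from quantum displacement.
       feet_per_pixel and the frame period are assumed inputs derived from
       the site configuration. */
    typedef struct {
        double velocity;       /* feet per second          */
        double acceleration;   /* feet per second squared  */
    } Motion;

    Motion estimate_motion(double dx_pixels, double dy_pixels,
                           double prev_velocity, double feet_per_pixel,
                           double frame_period /* seconds between frames */)
    {
        Motion m;
        double dist = sqrt(dx_pixels * dx_pixels + dy_pixels * dy_pixels)
                      * feet_per_pixel;
        m.velocity = dist / frame_period;
        m.acceleration = (m.velocity - prev_velocity) / frame_period;
        return m;
    }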
  • Referring to FIG. 8, the computation unit 12 includes at least one video capture card 66. The video capture card 66 performs initial processing on video signals received from the camera 10. The computation unit 12 operates on the output of the video capture card 66. The functions described with regard to the embodiment illustrated in FIGS. 8 and 9 are implemented with a custom video capture card 66. These functions may alternatively be implemented with a commercially available frame grabber and software. In the illustrative embodiment the computation unit 12 is a commercially available IBM compatible computer that employs the Windows 95 operating system. The IBM compatible computer includes a Peripheral Component Interconnect (“PCI”) bus interface. [0048]
  • Referring to FIG. 9, the [0049] video capture card 66 is operative to process new video frames, establish and maintain the reference frame, and compare the new frames with the reference frame in order to ascertain luminance and/or edge differences therebetween that are indicative of motion. A digitizer circuit 70 is employed to convert the analog video signals from the camera. The camera may provide analog video signals in either National Television Standards Committee (“NTSC”) format or Phase Alteration Line (“PAL”) format. The chrominance portion, if any, of the video signal is separated from the luminance portion of the video signal by the digitizer circuit 70. The resulting digital signals are provided to an image state machine 72 where the video signal is de-interlaced, if necessary. In particular, the video signal is de-interlaced unless a progressive scan camera is employed. The output of the image state machine 72 is a succession of de-interlaced video frames, each frame being 640 pixels by 480 pixels in size. The image state machine is coupled to a Random Access Memory (“RAM”) 74 that includes a ring of three buffers where frame data is collected prior to transmission of the frames over a digital bus 76 via pixel fetch circuitry 78.
  • Referring to FIGS. 9 and 10, image stabilization is employed by the [0050] video control processor 99 to compensate for camera movement due to environmental factors such as wind. Up to two anchor features 69 that are identified by the user during configuration of the traffic monitoring station are employed. The location of each anchor 69 on the new frame 18 is determined, and the new frame is adjusted accordingly. Each anchor 69 is located by matching a template 162 to the image in the new frame. The template 162 is a copy of the rectangular region of the reference frame 28 that includes a representation of the anchor feature 69. A pixel by pixel comparison is made between the template 162 and a selected region of the new frame to determine whether a match has been found based on average luminance difference. The selected region of the new frame is adjusted until the best match is located (best match=minΣ|Newx,y−Refx,y). The first comparison may be made at the coordinates at which the anchor 69 is located in the reference frame, or at the coordinates at which the anchor was located in the previous frame. If a minΣ calculation that is less than or equal to the minΣ calculation in the previous frame is found, the location is determined to be a match, i.e., the anchor is found. If a minΣ calculation that is less than or equal to the minΣ calculation in the previous frame is not found, the location of the selected region is adjusted until the best match is located. The location of the selected region is adjustable within an area of up to 8 pixels in any direction from the matching coordinates of the previous frame. The selected region is shifted in turn both vertically and horizontally by a distance of four pixels to yield four minΣ calculation results. If the lowest of the four results is lower than the result at the start point, the selected region is moved to the place that yielded the lowest result. If none of the results is lower than the result at the start point, the selected region is shifted in turn both vertically and horizontally by half the original distance, i.e., by two pixels, to yield four new minΣ calculation results. If the lowest of the four results is lower than the result at the start point, the selected region is moved to the place that yielded the lowest result. The distance may be halved again to one pixel to yield four new minΣ calculation results. When the best result is found, the anchor is considered found if the result achieves a predetermined threshold of accuracy. If the best result fails to achieve the predetermined threshold of accuracy, an edge comparison is undertaken. The edge comparison is made between the template and the region that defines the best minΣ calculation results. If at least one vertical edge, at least one horizontal edge, and at least 75% of all constituent edges are matched, the anchor is considered found. Otherwise, the anchor is considered not found.
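  • The anchor comparison minimizes the summed absolute luminance difference between the template and a candidate region of the new frame; the sketch below shows that score and a simplified variant of the coarse-to-fine (4-, 2-, then 1-pixel) search, with the frame layout assumed and bounds handling left to the caller.
    /* Sum of absolute luminance differences between a reference-frame
       template and a candidate region of the new frame; the smallest sum
       is the best match. */
    long sad_score(const unsigned char *frame, int frame_stride,
                   const unsigned char *tmpl, int tmpl_w, int tmpl_h,
                   int cand_x, int cand_y)
    {
        long sum = 0;
        for (int y = 0; y < tmpl_h; y++)
            for (int x = 0; x < tmpl_w; x++) {
                int d = frame[(cand_y + y) * frame_stride + (cand_x + x)]
                        - tmpl[y * tmpl_w + x];
                sum += (d < 0) ? -d : d;
            }
        return sum;
    }

    /* Simplified coarse-to-fine search around a start point: try shifts of
       4, then 2, then 1 pixels in each of the four directions, moving to
       whichever shifted position lowers the score.  The caller keeps the
       candidate positions inside the frame. */
    void find_anchor(const unsigned char *frame, int frame_stride,
                     const unsigned char *tmpl, int tmpl_w, int tmpl_h,
                     int *best_x, int *best_y)
    {
        long best = sad_score(frame, frame_stride, tmpl, tmpl_w, tmpl_h,
                              *best_x, *best_y);
        for (int step = 4; step >= 1; step /= 2) {
            const int dx[4] = { -step, step, 0, 0 };
            const int dy[4] = { 0, 0, -step, step };
            long lowest = best;
            int lx = *best_x, ly = *best_y;
            for (int i = 0; i < 4; i++) {
                long s = sad_score(frame, frame_stride, tmpl, tmpl_w, tmpl_h,
                                   *best_x + dx[i], *best_y + dy[i]);
                if (s < lowest) {
                    lowest = s;
                    lx = *best_x + dx[i];
                    ly = *best_y + dy[i];
                }
            }
            if (lowest < best) {
                best = lowest;
                *best_x = lx;
                *best_y = ly;
            }
        }
    }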
  • The [0051] new frame 18 is adjusted to produce a stabilized frame based upon how many anchors 69 were found, and where the anchors were found. In the event that both anchors are found and the anchors were found at the same coordinates as in the reference frame, the camera did not move and no correction is necessary. If both anchors moved by the same distance in the same direction, a two-dimensional X-Y offset vector is calculated. If both anchors moved in different directions, the camera may have zoomed and/or rotated. A zoom is indicated when the anchors have moved either towards or away from the center 164 of the image. For example, the anchors appear larger and further from the center of the image when the camera zooms in, and smaller and closer to the center of the image when the camera zooms out. In the case of a camera that is zoomed out, the image is “inflated” by periodically duplicating pixels so that the anchors appear in the expected dimensions. In the case of a camera that is zoomed in, the image is “deflated” by periodically discarding pixels so that the anchors appear in the size that is expected.
  • In the event that only one anchor was found, adjustment is based upon the location of that one anchor. If the anchor did not move, no correction is applied. If the previous frame was not zoomed, it is assumed that the new frame is not zoomed. If the previous frame was zoomed, it is assumed that the new frame is also zoomed by the same amount. In the event that neither anchor is found, the corrections that were calculated for the previous frame are employed. [0052]
  • From the number of anchors found and their positions, the [0053] video control processor 99 calculates a set of correction factors as described above and sends them to the pixel fetch circuitry 78. These correction factors include instructions for shifting the frame horizontally, vertically, or both to correct for camera pan and tilt motion, and/or directions for inflating or deflating the frame to compensate for camera zoom motion. If no correction is needed, the video control processor calculates a set of correction factors which instructs the pixel fetch circuitry to do a simple copy operation. The correction factors allow the pixel fetch circuitry to select pixels from RAM 74 for transmission on the bus 76 in stabilized order. The pixels are collected into a stabilized frame 84 for use by the computation unit 12 (FIG. 8).
  • Referring to FIG. 9, a [0054] differencing unit 82 employs the contents of the reference frame buffer 80 and the incoming pixels on the bus 76 to compare the reference frame with the stabilized frame, pixel by pixel, in order to determine the differences. The difference values are stored in the difference frame buffer 86. The computation unit 12 may access the difference frames over the PCI bus 94.
  • A [0055] tiling unit 88 is operative to organize the incoming pixels on bus 76 into tiles 22 (FIG. 3). The tiles are stored in a tile buffer 90 for use by the computation unit 12 (FIG. 8), which may access the tiles via the PCI bus 94.
  • Referring to FIG. 11, user-defined zones may be employed to facilitate operation where the view of the camera is partially obstructed and where sections of roadway converge. An entry zone is employed to designate an area of the video image in which new objects may be formed. Objects are not allowed to form outside of the entry zone. In the illustrated example, an [0056] overpass 100 partially obstructs the roadway being monitored. By placing an entry zone 102 in front of the overpass 100, undesirable detection and tracking of vehicles travelling on the overpass is avoided. A second entry zone 104 is defined for a second section of roadway within the view of the camera. Vehicles entering the roadway through either entry zone 102 or entry zone 104 are tracked. An exit zone 106 is employed to designate an area where individual vehicles are “counted.” Because of the perspective of the field of view of the camera, more distant vehicles appear smaller and closer together. To reduce the likelihood of multiple vehicles being counted as a single vehicle, the number of vehicles included in the vehicle count is determined in the exit zone 106, which is proximate to the camera.
  • Referring now to FIG. 12, a plurality of [0057] traffic monitoring stations 8 may be employed to monitor and share data from multiple sections of roadway. Information gathered from different sections of roadway may be shared via a computer network. The gathered information may be displayed on a graphic user interface 108 located at a separate operations center 110. Video images 112 from the camera are provided to the graphic user interface 108 through flow manager software 114. The flow manager maintains near actual time display of the video image through balance of video smoothness and delay by controlling buffering of video data and adapting to available bandwidth. Data resulting from statistical analysis of the video image is provided to the graphic user interface 108 from an analysis engine 116 that includes the tiling unit, segmenter algorithms and neural network described above. The controller card may be employed to transmit the data through an interface 118 to the operations center 110, as well as optional transmission to other traffic monitoring stations. The interface 118 may be shared memory in the case of a standalone monitoring station/graphic user interface combination or sockets in the case of an independent monitoring station and graphic user interface. The operations center 110 contains an integration tool set for post-processing the traffic data. The tool set enables presentation of data in both graphical and spreadsheet formats. The data may also be exported in different formats for further analysis. The video may also be displayed with an overlay representing vehicle position and type.
  • Alarm parameters may be defined for the data generated by the [0058] analysis engine 116. For example, an alarm may be set to trigger if the average velocity of the vehicles passing through the field of view of the camera drops below a predetermined limit. Alarm calculations may be done by an alarm engine 122 in the traffic monitoring station or at the graphic user interface. Alarm conditions are defined via the graphic user interface.
  • Networked traffic monitoring stations may be employed to identify and track individual vehicles to determine transit time between stations. The shape of the vehicle represented by active tiles is employed to distinguish individual vehicles. At a first traffic control station, a rectangular region (“snippet”) that contains the active tiles that represent a vehicle is obtained as depicted by [0059] step 132. Correction may be made to restore detail obscured by inter-field distortion as depicted by step 134. The snippet is then compared with templates 136 that depict the shape of known types of vehicles such as cars, vans, trucks etc, as depicted in step 138. When the best fit match is determined, the center of the snippet, where the center of the template is located in the match position, is marked as depicted by step 140. Further, the size of the snippet may be reduced to the size of the matching template. First and second signatures that respectively represent image intensity and image edges are calculated from the snippet as depicted by step 142. The signatures, matching template type, vehicle speed and a vehicle lane indicator are then transmitted to a second traffic monitoring station as depicted by step 144. The second traffic monitoring station enters the information into a list that is employed for comparison purposes as depicted in step 146. As depicted by step 148, information that represents vehicles passing the second traffic monitoring station is calculated by gathering snippets of vehicles and calculating signatures, a lane indicator, speed and vehicle type in the same manner as described with respect to the first traffic monitoring station. The information is then compared with entries selected in step 149 from the list by employing comparitor 150. In particular, entries that are so recent that incredibly high speed would be required for the vehicle to be passing the second traffic monitoring station are not employed. Further, older entries that would indicate an incredibly slow travel rate are discarded. The signatures may be accorded greater weight in the comparison than the lane indicator and vehicle type. Each comparison yields a score, and the highest score 152 is compared with a predetermined threshold score as depicted by step 154. If the score does not exceed the threshold, the “match” is disregarded as depicted by step 156. If the score exceeds the threshold, the match is saved as depicted by step 158. At the end of a predetermined interval of time, a ratio is calculated by dividing the difference between the best score and the second best score by the best score as depicted by step 160. If the ratio is greater than or equal to a predetermined value, a vehicle match is indicated. The transit time and average speed of the vehicle between traffic monitoring stations is then reported to the graphic user interface.
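  • The final acceptance test for a station-to-station match reduces to a ratio between the best and second-best comparison scores, as sketched below; the ratio threshold is an assumed value.
    /* Sketch of the station-to-station match decision: accept a vehicle
       match only when the best score exceeds the score threshold and
       stands out sufficiently from the second-best score. */
    #define RATIO_THRESHOLD 0.25   /* assumed value */

    int is_vehicle_match(double best_score, double second_best_score,
                         double score_threshold)
    {
        if (best_score <= score_threshold)
            return 0;   /* no candidate was close enough */
        return (best_score - second_best_score) / best_score >= RATIO_THRESHOLD;
    }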
  • Inter-field distortion is a by-product of standard video camera scanning technology. An NTSC format video camera will alternately scan even or odd scan lines every 60th of a second. A fast moving vehicle will move enough during the scan to “blur,” seeming to partially appear in two different places at once. Typically, the car will move about 1.5 ft during the scan (approx. 60 mph). Greater distortion is observed when the car travels at higher velocity. Greater distortion is also observed when the vehicle is nearer to the camera. The distortion compensating algorithm is based on knowledge of the “camera parameters” and the speed of the vehicle. The camera parameters enable mapping between motion in the real world and motion in the image plane of the camera. The algorithm predicts, based on the camera parameters and the known speed of the vehicle, how much the vehicle has moved in the real world (in the direction of travel). The movement of the vehicle on the image plane is then calculated. In particular, the number of scan lines and distance to the left or right on the image is calculated. Correction is implemented by moving the odd scan lines ‘back’ to where the odd scan lines would have been if the car had stayed still (where the car was when the even scan lines were acquired). For example, to move 4 scan lines back, scan line n would be copied back to scan line n−4, where n is any odd scan line. The right/left movement is simply where the scan line is positioned when copied back. An offset may be added or subtracted to move the pixels back into the corrected position. For example, scan line n may have an offset of [0060] 8 pixels when moved back to scan line n−4, so pixel 0 in scan line n is copied to pixel 7 in scan line n−4, etc. If the speed of a particular vehicle cannot be determined, the average speed for that lane may be employed to attempt the correction. Distortion correction is not necessary when a progressive scan camera is employed.
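  • The de-blurring correction can be pictured as copying each odd scan line back by a computed number of rows and columns, as in the sketch below; it assumes the row and column shifts have already been derived from the vehicle speed and the camera parameters, and that the row shift is positive.
    /* Sketch of inter-field distortion correction within a snippet: each
       odd scan line is copied back by line_shift rows, with a left/right
       offset of pixel_shift columns, i.e. to where it would have been had
       the vehicle not moved between the even and odd field scans.
       Snippet dimensions and shift values are assumed inputs. */
    void correct_interfield(unsigned char *snippet, int width, int height,
                            int line_shift /* > 0 */, int pixel_shift)
    {
        for (int n = 1; n < height; n += 2) {      /* odd scan lines */
            int dest = n - line_shift;
            if (dest < 0)
                continue;                          /* shifted off the snippet */
            for (int x = 0; x < width; x++) {
                int dest_x = x + pixel_shift;      /* left/right offset */
                if (dest_x >= 0 && dest_x < width)
                    snippet[dest * width + dest_x] = snippet[n * width + x];
            }
        }
    }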
  • Referring to FIG. 14, the traffic monitoring station may be employed to facilitate traffic control. In the illustrated embodiment, the traffic monitoring station is deployed such that an intersection is within the field of view of the camera. Vehicle detection can be employed to control traffic light cycles independently for left and right turns, and non-turning traffic. Such control, which would require multiple inductive loops, can be exerted for a plurality of lanes with a single camera. Predetermined parameters that describe vehicle motion are employed to anticipate future vehicle motion, and proactive action may be taken to control traffic in response to the anticipated motion of the vehicle. For example, if the traffic monitoring station determines that a [0061] vehicle 124 will “run” a red light signal 125 by traversing an intersection 126 during a period of time when a traffic signal 128 will be indicating “green” for a vehicle 130 entering the intersection from another direction, the traffic monitoring station can provide a warning or control such as an audible warning, flashing light and/or delayed green light for the other vehicle 130 in order to reduce the likelihood of a collision. Further, the traffic monitoring station may track the offending vehicle 124 through the intersection 126 and use the tracking information to control a separate camera to zoom in on the vehicle and/or the vehicle license plate to record a single frame, multiple frames or a full motion video movie of the event for vehicle identification and evidentiary purposes. The cameras are coordinated via shared reference features in a field of view overlap area. Once the second camera acquires the target, the second camera zooms in to record the license plate of the offending vehicle. The traffic monitoring station could also be used to detect other types of violations such as illegal lane changes, speed violations, and tailgating. Additionally, the traffic monitoring station can be employed to determine the optimal times to cycle a traffic light based upon detected gaps in traffic and lengths of queues of cars at the intersection.
  • The determination of whether the vehicle will run the red light may be based upon the speed of the vehicle and distance of the vehicle from the intersection. In particular, if the vehicle speed exceeds a predetermined speed within a predetermined distance from the intersection it may be inferred that the vehicle cannot or is unlikely to stop before entering the intersection. [0062]
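  • One simple form of this determination, sketched below, compares the deceleration needed to stop before the stop line with an assumed comfortable braking limit and checks whether the vehicle can clear the line before the red phase begins; the constant-deceleration model and all thresholds are assumptions, not the specific decision model of the disclosure.
    /* Sketch of a red-light violation predictor.  A vehicle is flagged when
       the deceleration required to stop in the remaining distance exceeds
       an assumed braking limit and it will not reach the stop line before
       the light turns red. */
    #define MAX_COMFORTABLE_DECEL 11.2   /* ft/s^2, assumed braking limit */

    int predicts_red_light_run(double speed_ftps,    /* current speed        */
                               double distance_ft,   /* to the stop line     */
                               double time_to_red_s) /* remaining yellow     */
    {
        if (speed_ftps <= 0.0 || distance_ft <= 0.0)
            return 0;   /* stopped, or already past the stop line */

        /* Deceleration required to stop in the available distance:
           a = v^2 / (2 d). */
        double required_decel = (speed_ftps * speed_ftps) / (2.0 * distance_ft);

        /* Time to reach the stop line at the current speed. */
        double time_to_line = distance_ft / speed_ftps;

        return required_decel > MAX_COMFORTABLE_DECEL
               && time_to_line > time_to_red_s;
    }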
  • As described herein, the disclosed system employs video tracking to detect vehicles that will not stop for a traffic light that is changing to red. In an illustrative embodiment, the disclosed system outputs a signal in response to detection of such Non-Stopping Vehicles (NSV's), for example when they have completed passage through the intersection and cross traffic can safely proceed. With this information, a traffic controller can be optionally programmed to delay the onset of the green light for cross traffic or pedestrians until any detected red-light violating vehicles have moved through the intersection. Through its ability to anticipate red-light vehicle violation before the violation occurs, the disclosed system may advantageously reduce the risk of collisions and/or injuries at intersections. [0063]
  • The features of the presently disclosed system involve the ability (1) to process a sequence of video images of oncoming traffic to detect, classify and provide continuous tracking of detected vehicles; (2) to calculate from image information real world vehicle displacements with sufficient accuracy to support measurement of vehicle speed and acceleration for all vehicles in the camera field of view; (3) to provide continuous, real time measurement of vehicle position, speed and acceleration; (4) to develop and implement a decision model, using among its inputs, measures of vehicle position, speed, acceleration and classification in order to determine likelihood of a vehicle stopping; and (5) to update the decision model in real time as a result of changing values of vehicle parameters (e.g., position, speed, acceleration) for each oncoming vehicle in the camera's field of view. The disclosed system supports the use of cameras for intersection monitoring/control and surveillance. The disclosed system may further be applied to other safety considerations: the detection of vehicles approaching toll booths, railway crossings or other controlled roadway structures (lane reducers, etc.) at excessive speeds, and/or the detection of vehicles exhibiting other aspects of hazardous driving behavior. [0064]
  • The disclosed system includes innovative video-monitoring capabilities to improve the safety of signalized intersections. A significant risk exists for motorists who are given a green light to proceed through an intersection when a vehicle oncoming from another direction of travel elects to run the red-light for that direction. Using the disclosed system, Non-Stopping Vehicles (NSV's) can be detected by a sensor in advance of the onset of the green light for the cross traffic, and the sensor can generate a signal that specifically indicates a condition of a vehicle passing or about to pass through an intersection in violation of a red-light. This signal can then be used to delay the onset of the green light phase of a traffic light for any cross traffic until the NSV has moved through the intersection and it is safe for cross traffic and/or pedestrians to proceed. In an illustrative embodiment, such a sensor signal is not necessarily used to dynamically lengthen, or otherwise change, the timing cycle for the traffic light of the NSV. The timing cycle in the traffic light for the NSV may be left unchanged and the NSV does indeed violate a red-light. [0065]
  • The capabilities of the disclosed system are based on monitoring and controlling traffic flow at an intersection through approach detection, stop line and turn detection functionality. The disclosed system accurately provides speed and position information on vehicles within a field of view, and determines from the motion characteristics of oncoming vehicles and a knowledge of the intersection timing cycle, whether a vehicle is in the process of running a red-light or has a high probability of running the red-light. In one embodiment, this determination starts as soon as the light for traffic in the given direction of interest cycles to yellow. The accurate, early detection of an NSV provided by the disclosed system is essential in order to produce a sensor signal in advance of the normal cycle for activating the green light of the cross traffic. [0066]
  • The disclosed system supports a number of options for determining an “all clear” signal for an intersection. For example, in intersections equipped with high slew rate PTZ cameras (>90 deg/sec.), the camera that images the NSV can be controlled by the disclosed system to track the NSV through the intersection, generating the all clear signal when the NSV has exited the intersection. Alternatively, for intersections not equipped with PTZ cameras, two alternative embodiments may be employed to determine the “all clear” signal. In a first such alternative embodiment, no single camera images the entire intersection area, but the intersection is completely imaged in the field of view of two or more cameras. In this first alternative embodiment, the multiple cameras may be used to cooperatively track the target NSV through the field of view of each camera until the NSV has traversed the entire intersection. In a second alternative embodiment, in which the intersection is not imaged either by a single camera or by combined multiple cameras, a determination is made, on the basis of the NSV's speed and acceleration as it exits the camera's field of view, and from predetermined knowledge of the spatial extent of the intersection that is not imaged either by a single camera or by multiple cameras, of the time that will elapse before the vehicle has moved completely through the intersection. Based on such a determination, the all clear signal can be generated once this predicted time has elapsed. This second alternative embodiment requires the fewest resources in terms of camera installation requirements. [0067]
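  • For the second alternative embodiment, the elapsed-time prediction can be obtained from constant-acceleration kinematics applied to the vehicle's exit speed and acceleration and the unimaged remainder of the intersection, as sketched below; the interface and the fallback behavior are assumptions.
    #include <math.h>

    /* Sketch of the all-clear delay for an intersection region not covered
       by any camera: solve d = v t + (1/2) a t^2 for the time the NSV needs
       to traverse the remaining distance d, given its speed v and
       acceleration a measured as it leaves the field of view.  Returns -1.0
       when the model predicts the vehicle will not clear the region. */
    double seconds_until_clear(double remaining_ft,  /* unimaged extent d */
                               double exit_speed,    /* ft/s, v           */
                               double exit_accel)    /* ft/s^2, a         */
    {
        if (remaining_ft <= 0.0)
            return 0.0;
        if (exit_speed <= 0.0 && exit_accel <= 0.0)
            return -1.0;                            /* no forward motion  */
        if (fabs(exit_accel) < 1e-6)
            return remaining_ft / exit_speed;       /* constant speed     */

        /* Positive root of (1/2) a t^2 + v t - d = 0. */
        double disc = exit_speed * exit_speed + 2.0 * exit_accel * remaining_ft;
        if (disc < 0.0)
            return -1.0;                            /* predicted to stop short */
        return (-exit_speed + sqrt(disc)) / exit_accel;
    }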
  • In an exemplary embodiment, the present invention may, for example, provide (1) wide area detection of vehicles entering the approach zone of an intersection, (2) high-precision continuous speed and acceleration profiles using 20 frame-per-second video sampling, (3) single camera support for monitoring two approach directions, (4) continuous tracking of vehicles as they cross lane boundaries, and (5) cooperative vehicle tracking between multiple cameras to ensure vehicle clearance through the intersection area. [0068]
  • The disclosed system advantageously operates to anticipate and predict traffic light violations before they occur, for example using an approaching vehicle's speed. Accordingly, the disclosed system can be used to intelligently delay a green light until the offending vehicle is safely past the intersection. Along these same lines, one or more video cameras may be used by the present system because they meet key requirements of the intersection safety application: the need for sensor information on which accurate measurements of vehicle position, speed, acceleration and classification can be made; the need for such information on all vehicles approaching the intersection (not just for those that are nearest to the intersection in each lane); and the need for continuously updated information such that the individual vehicle measurements can be continuously updated in real time. [0069]
  • Alternatively, other sensor technologies may be used as the basis for sensor devices in the disclosed system, provided that they can potentially satisfy the sensor requirements for the intersection safety application. For example, the more advanced synthetic aperture radar (SAR) systems may be used to provide a radar-based image of the roadway. In similar fashion to video processing, a radar-based image or images can be processed to detect, track and classify target vehicles on a continuous basis. [0070]
  • As a further advantageous feature, the present system leverages the investment that municipalities make in deploying video cameras at intersections for surveillance and/or traffic monitoring and flow control purposes to provide significant safety benefits as well. Because the disclosed system potentially supports such dual use of video cameras, significant cost savings result. Additionally, because of the compatibility of the disclosed system with surveillance uses of the video cameras, it can be used for active monitoring in a traffic control center, offering the potential for integration with larger traffic management systems. [0071]
  • Moreover, the features of the disclosed system can provide intelligent green light delays based upon the detection of pedestrians that are in the process of crossing an intersection at a well defined crosswalk. In some cases, late-crossing pedestrians are not visible to motorists who are waiting for a green light. This may result from a motorist's view being obstructed by a truck or bus in an adjacent lane. An embodiment of the disclosed system could identify pedestrians or pedestrian groups, and communicate information on expected time to crossing completion that could be optionally used to delay activation of a green light until such pedestrians were safely clear of the intersection. The disclosed system may further be embodied to monitor other potential safety hazards, such as: vehicles moving at excessive speed approaching toll booths or other controlled roadway structures (lane reducers, etc.); vehicles out of control on steep grades, hills or steep downgrades leading to a traffic light or rail crossing. Additionally, the disclosed system can use the tracking position information it generates to control and move a high performance PTZ camera mechanism and standard NTSC color camera to zoom in on and identify a potential red-light violator for enforcement purposes. In such a case, no additional pavement sensors or fixed cameras are required for red-light violation monitoring. [0072]
  • The principles of the disclosed system may be applied to any intersection monitoring system, whether video-based or not, that meets required minimum operational specifications for the creation of accurate vehicle count and approach speed measurements on a real-time basis. Irrespective of the specific sensor technology employed, controller logic must be employed to interface sensor outputs to the intersection traffic lights. For example, some existing controllers can be adapted to implement a green-light delay based on outputs from the disclosed system. [0073]
  • Implementations and/or deployments of the disclosed system could differ in operation from intersection to intersection. In this regard, an implementation of the disclosed system could be configured to take into account the different demographics of an area as well as the nature of its traffic flow patterns. For example, at rush hour, vehicles often “creep” into an intersection on a yellow light, even when traffic is backed up, preventing their transit through the intersection before the onset of a red light in their direction. In such cases, the disclosed system may be pre-configured to delay the activation of the cross traffic green light only for a maximum period of time, in order to give cross traffic the opportunity to flow, allowing them to navigate around vehicles present in the intersection but which have not yet traversed the intersection region. [0074]
  • Having described the embodiments consistent with the present invention, other embodiments and variations consistent with the present invention will be apparent to those skilled in the art. Therefore, the invention should not be viewed as limited to the disclosed embodiments but rather should be viewed as limited only by the spirit and scope of the appended claims. [0075]
    /* Appendix A: solves camera tilt, pan and lambda from the user-entered
       site geometry (see the configuration description above).  The
       CameraParams structure and GetRealXY() are assumed to be declared
       elsewhere in the application. */
    #include <math.h>
    #include <float.h>   /* _isnan */

    short PTLPivotSolve ( double XOffset, double YOffset, double LaneWidth,
                          double Lanes, double LSlope, double RSlope,
                          double DiffX, double CameraHeight, double Grade,
                          double Bank, double xPoint, double yPoint,
                          double RealX, double RealY, double *LambdaSols,
                          double *PanSols, double *TiltSols, double *Error,
                          short ArraySize )
    {
        short SolCount;
        long x, y;
        double Xo, Yo, LinearDistance, yP;
        double Horizon, NewHorizon, dim;
        double BaseTilt, BaseLambda, BasePan, PivotX, PivotY, Tilt, Pan, Lambda;
        CameraParams cp;
        double Scaler = 240.0;
        double PI = 3.1415926535;
        double RadialDistance;

        SolCount = 0;

        /* Image geometry and user-entered offsets. */
        cp.Height = CameraHeight;
        cp.XSize = 640;
        cp.YSize = 480;
        cp.XCenter = 320;
        cp.YCenter = 240;
        cp.XOffset = (long) XOffset;
        cp.YOffset = (long) YOffset;

        /* Convert grade and bank (percent) to angles and reduce the
           effective camera height accordingly. */
        Grade = atan ( Grade / 100.0 );
        Bank = atan ( Bank / 100.0 );
        cp.Grade = Grade;
        cp.sinG = sin ( Grade );
        Yo = CameraHeight * cp.sinG;   /* offset due to grade (not used below) */
        cp.cosG = cos ( Grade );
        CameraHeight = CameraHeight * cp.cosG;
        cp.Bank = Bank;
        cp.sinB = sin ( Bank );
        Xo = CameraHeight * cp.sinB;   /* offset due to bank (not used below) */
        cp.cosB = cos ( Bank );
        CameraHeight = CameraHeight * cp.cosB;

        /* Locate the horizon from the lane-line slopes. */
        DiffX /= Scaler;
        dim = 1.0 / RSlope;
        dim -= 1.0 / LSlope;
        dim /= Lanes;
        Horizon = -DiffX / ( Lanes * dim );

        SolCount = 0;

        /* Base tilt from camera height and ground distance to the
           reference feature. */
        LinearDistance = sqrt ( ( RealX * RealX ) + ( RealY * RealY ) );
        BaseTilt = atan ( CameraHeight / LinearDistance );
        if ( _isnan ( BaseTilt ) ) {   // Bogus solution
            return ( 0 );              // No solutions
        }
        cp.st = sin ( BaseTilt );
        cp.ct = cos ( BaseTilt );

        yP = ( 240 - yPoint - YOffset ) / Scaler;
        NewHorizon = Horizon - yP;
        BaseLambda = NewHorizon / tan ( BaseTilt );
        if ( _isnan ( BaseLambda ) ) {
            return ( 0 );
        }
        cp.Lambda = BaseLambda;

        /* Base pan from the lateral offset of the reference feature. */
        RadialDistance = sqrt ( ( LinearDistance * LinearDistance ) +
                                ( CameraHeight * CameraHeight ) );
        BasePan = atan ( RealX / RadialDistance );
        if ( _isnan ( BasePan ) ) {
            return ( 0 );
        }
        cp.sp = sin ( BasePan );
        cp.cp = cos ( BasePan );

        /* Map the selected image point back to roadway coordinates. */
        x = 640 - (long) xPoint;
        y = 480 - (long) yPoint;
        GetRealXY ( &cp, x, y, &PivotX, &PivotY );

        /* Now get the real, relocated camera parameters. */
        LinearDistance = sqrt ( ( PivotX * PivotX ) + ( PivotY * PivotY ) );
        RadialDistance = sqrt ( ( LinearDistance * LinearDistance ) +
                                ( CameraHeight * CameraHeight ) );
        Tilt = atan ( CameraHeight / LinearDistance );
        if ( _isnan ( Tilt ) ) {       // Bogus solution
            return ( 0 );              // No solutions
        }
        Lambda = ( Horizon / CameraHeight ) * RadialDistance;
        if ( _isnan ( Lambda ) ) {     // Bogus solution
            return ( 0 );              // No solutions
        }
        Pan = asin ( PivotX / RadialDistance );
        if ( _isnan ( Pan ) ) {        // Bogus solution
            return ( 0 );              // No solutions
        }

        *LambdaSols = Lambda;
        *PanSols = Pan;
        *TiltSols = Tilt;
        *Error = 0.0;
        SolCount = 1;
        return ( SolCount );
    }
  • [0076] Figures US20020054210A1-20020509-P00001 through P00012 (image-only appendix pages in the original publication; not transcribed here).

Claims (53)

What is claimed is:
1. A traffic light violation prediction system for a traffic signal having at least a red phase and a green phase, comprising:
at least one image capturing device, said image capturing device operative to provide image data of at least one vehicle approaching said traffic signal; and
a computation unit, operative in response to said image capturing device and an indication of a current traffic light phase, to determine whether said at least one vehicle approaching said traffic signal will violate a red light phase of said traffic signal.
2. The system of claim 1, wherein said image capturing device comprises at least one video camera.
3. The system of claim 1, wherein said traffic signal has a yellow light phase, and said computation unit is further responsive to a time remaining in said yellow light phase.
4. The system of claim 1, wherein said computation unit is further responsive to a current speed of said at least one vehicle approaching said traffic intersection.
5. The system of claim 1, wherein said computation unit is further responsive to a current acceleration of said at least one vehicle approaching said traffic intersection.
6. The system of claim 1, wherein said computation unit is further responsive to a current position of said at least one vehicle approaching said traffic intersection.
7. The system of claim 1, wherein said computation unit is further operable to compute a time remaining before said at least one vehicle approaching said traffic intersection enters said traffic intersection, responsive to a determination of a current acceleration of said vehicle.
8. The system of claim 7, wherein said computation unit is further operable to calculate a rate of deceleration required for said at least one vehicle to stop within said time remaining before said vehicle enters said traffic intersection.
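
By way of illustration only, a short sketch of the kinematics claims 7 and 8 recite; the function names, the SI units, and the constant-acceleration model are assumptions rather than the patent's code:

    #include <math.h>

    /* Time (s) for a vehicle d meters from the stop line, traveling at v m/s
     * with constant acceleration a m/s^2, to reach the line; returns -1.0 if
     * it never gets there (it decelerates to a halt first). */
    double TimeToIntersection(double d, double v, double a)
    {
        if (fabs(a) < 1e-6)
            return (v > 0.0) ? d / v : -1.0;
        double disc = v * v + 2.0 * a * d;        /* from d = v*t + 0.5*a*t*t      */
        if (disc < 0.0)
            return -1.0;
        return (-v + sqrt(disc)) / a;             /* first (smallest positive) root */
    }

    /* Deceleration (m/s^2) needed to reach zero speed within t seconds. */
    double RequiredDeceleration(double v, double t)
    {
        return (t > 0.0) ? v / t : INFINITY;
    }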
9. A method for predicting a traffic light violation of a traffic signal having at least a red phase and a green phase, comprising:
providing image data showing at least one vehicle approaching said traffic signal; and
determining, responsive to said image data and an indication of a current traffic light phase, whether said at least one vehicle approaching said traffic signal will violate a red light phase of said traffic signal.
10. The method of claim 9, wherein said image data is generated by at least one video camera.
11. The method of claim 9, wherein said determining is performed by a computation unit comprising software executing on a processor.
12. The method of claim 9, wherein said traffic light further includes a yellow phase, and wherein said determining is further responsive to a time remaining in said yellow phase.
13. The method of claim 9, wherein said determining further includes the step of determining a current speed for said at least one vehicle approaching said traffic intersection.
14. The method of claim 9, wherein said determining further includes the step of determining a current acceleration for said vehicle approaching said traffic intersection.
15. The method of claim 14, wherein said determining further includes computing a time remaining before said vehicle approaching said traffic intersection enters said traffic intersection, responsive to said determination of said current acceleration of said vehicle.
16. The method of claim 15, further comprising calculating, by said computation unit, a deceleration required for said vehicle to stop within said time remaining before said vehicle enters said traffic intersection.
17. A collision avoidance system for a first traffic signal having a current light phase equal to one of a red light phase and a green light phase, and a second traffic signal having a current light phase equal to one of a red light phase and a green light phase, comprising:
at least one image capturing device, for capturing a plurality of images;
said plurality of images showing at least one vehicle approaching said first traffic signal;
a computation unit, responsive to said plurality of images and an indication of said current first traffic signal light phase, for determining whether said at least one vehicle approaching said first traffic signal will violate said red light phase of said first traffic signal, and for delaying an upcoming green traffic light phase of said second traffic signal responsive to a determination that said at least one vehicle approaching said first traffic signal will violate a red phase of said first traffic signal.
18. The system of claim 17, wherein said image capturing device comprises at least one video camera.
19. The system of claim 17, wherein said computation unit comprises software executing on a processor.
20. The system of claim 17, wherein said computation unit is further responsive to a time remaining in a yellow light phase.
21. The system of claim 17, wherein said computation unit is further operable to determine a current speed for said at least one vehicle.
22. The system of claim 17, wherein said computation unit is further operable to determine a current acceleration for said at least one vehicle.
23. The system of claim 17, wherein said computation unit is further operable to compute a time remaining before one of said at least one vehicle enters said traffic intersection, responsive to determination of a current acceleration of said vehicle.
24. The system of claim 23, wherein said computation unit is further operable to calculate a deceleration required for said at least one vehicle to stop within said time remaining before said vehicle enters said traffic intersection.
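
As an illustration of the interlock recited in claim 17 (holding the cross street's green when a violation is predicted), a minimal sketch; the structure, field names, and the single programmed hold period are assumptions:

    typedef struct {
        int    violation_predicted;   /* output of the prediction step            */
        double programmed_delay_s;    /* agency-configured hold, e.g. 1.5 seconds */
    } GreenDelayInput;

    /* How long to postpone the second signal's green onset (an all-red extension). */
    double GreenOnsetDelay(const GreenDelayInput *in)
    {
        return in->violation_predicted ? in->programmed_delay_s : 0.0;
    }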
25. A method of collision avoidance for a first traffic signal having a current light phase equal to one of the set including at least red and green and a second traffic signal having a current light phase equal to one of the set including at least red and green, comprising:
capturing a plurality of images, said images showing at least one vehicle approaching said first traffic signal, said images derived from an output of a violation prediction image capturing device;
determining, responsive to said plurality of images and an indication of said current first traffic signal light phase, whether said at least one vehicle approaching said first traffic signal will violate a red light phase of said first traffic signal; and
delaying an upcoming green light phase of said second traffic signal for a programmed time period responsive to a determination that said at least one vehicle approaching said first traffic signal will violate said red light phase of said first traffic signal.
26. The method of claim 25, wherein said violation prediction image capturing device comprises at least one video camera.
27. The method of claim 25, wherein said collision avoidance unit comprises software executing on a processor.
28. The method of claim 25, further comprising:
determining at least one vehicle location associated with said at least one vehicle; and
wherein said determining whether said at least one vehicle will violate said red light phase of said first traffic signal is responsive to said at least one vehicle location.
29. The method of claim 25, further comprising:
determining a time remaining in a current yellow light phase; and
wherein said determining whether said at least one vehicle will violate said red light phase of said first traffic signal is responsive to said time remaining in said current yellow light phase.
30. The method of claim 25, further comprising:
determining a current speed for said at least one vehicle; and
wherein said determining whether said at least one vehicle will violate said red light phase of said first traffic signal is responsive to said current speed of said at least one vehicle.
31. The method of claim 25, wherein said determining whether said at least one vehicle will violate said red light phase of said first traffic signal further comprises determining a current acceleration for said at least one vehicle.
32. The method of claim 25, wherein said determining whether said at least one vehicle will violate said red light phase of said first traffic signal further comprises computing a time remaining before said at least one vehicle enters said traffic intersection.
33. The method of claim 32, wherein said determining whether said at least one vehicle will violate said red light phase of said first traffic signal further comprises calculating a rate of deceleration required for said at least one vehicle to stop within said time remaining before said vehicle enters said traffic intersection.
34. The method of claim 33, wherein said determining whether said at least one vehicle will violate said red light phase of said first traffic signal further comprises determining whether said required deceleration is larger than a specified deceleration limit value.
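
A compact sketch of the decision sequence claims 29 through 34 lay out; the constant-speed approximation, the threshold name, and the SI units are illustrative assumptions, not the patent's values:

    #include <math.h>

    /* Returns nonzero when a red-light violation is predicted: the vehicle
     * would cross the stop line after the yellow expires, yet the deceleration
     * needed to stop before the line exceeds the configured limit. */
    int PredictViolation(double dist_m, double speed_mps,
                         double yellow_remaining_s, double decel_limit_mps2)
    {
        if (speed_mps <= 0.0 || dist_m <= 0.0)
            return 0;                              /* stopped or already past the line   */
        double t_enter = dist_m / speed_mps;       /* constant-speed time to the line     */
        if (t_enter <= yellow_remaining_s)
            return 0;                              /* clears the line while still yellow  */
        double a_req = (speed_mps * speed_mps) / (2.0 * dist_m);  /* to stop at the line  */
        return a_req > decel_limit_mps2;
    }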
35. An apparatus for facilitating operation of a traffic light at an intersection of first and second roadways, comprising:
at least one camera that provides first and second video frames that are representative of a field of view of said camera at different points in time;
a video processing circuit that detects vehicles; and
a processor circuit that determines vehicle position in the first and second video frames and vehicle velocity from the difference in vehicle position, said processor circuit being operative to delay the operation of said traffic light in response to predetermined conditions associated with said first and second video frames.
36. An apparatus for facilitating operation of a traffic light at an intersection of first and second roadways, comprising:
at least one camera that provides first and second video frames that are representative of a field of view of said camera at different points in time;
a video processing circuit that detects vehicles; and
a processor circuit that determines vehicle position in the first and second video frames and vehicle velocity from the difference in vehicle position, said processor circuit being operative to generate a signal indicating that a vehicle is about to pass through said intersection in violation of a red light.
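
For claims 35 and 36, which derive vehicle velocity from the change in position between two video frames, a minimal sketch; the TrackPoint structure and metric road-plane coordinates are assumptions:

    typedef struct { double x_m, y_m, t_s; } TrackPoint;   /* road-plane position and timestamp */

    /* Velocity components (m/s) from two tracked positions of the same vehicle. */
    void EstimateVelocity(const TrackPoint *p1, const TrackPoint *p2,
                          double *vx_mps, double *vy_mps)
    {
        double dt = p2->t_s - p1->t_s;          /* e.g. 1/30 s for consecutive video frames */
        if (dt <= 0.0) {
            *vx_mps = 0.0;
            *vy_mps = 0.0;
            return;
        }
        *vx_mps = (p2->x_m - p1->x_m) / dt;
        *vy_mps = (p2->y_m - p1->y_m) / dt;
    }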
37. A method for controlling vehicular traffic at an intersection having a first traffic light for controlling traffic approaching said intersection from a first direction and a second traffic light for controlling traffic approaching said intersection from a second direction, wherein each of said traffic lights includes at least a red phase and a green phase, said method comprising:
obtaining a plurality of images of a vehicle approaching said intersection from said first direction;
analyzing said plurality of images to predict whether said vehicle will violate said red light phase of said first traffic light;
responsive to a determination that said vehicle will violate said red light phase of said first traffic light, generating a delay signal indicative of said predicted violation.
38. The method of claim 37 further including delaying a change of said second traffic signal from said red light phase to said green light phase responsive to said delay signal.
39. The method of claim 37, wherein said obtaining said plurality of images is performed by an image capturing device comprising at least one video camera.
40. The method of claim 37, wherein analyzing said plurality of images is performed by a collision avoidance unit comprising software executing on a processor.
41. The method of claim 37, further comprising:
determining at least one vehicle location associated with said at least one vehicle; and
wherein said determining whether said at least one vehicle will violate said red light phase of said first traffic signal is responsive to said at least one vehicle location.
42. The method of claim 37, further comprising:
determining a time remaining in a current yellow light phase; and
wherein said determining whether said at least one vehicle will violate said red light phase of said first traffic signal is responsive to said time remaining in said current yellow light phase.
43. The method of claim 37, further comprising:
determining a current speed for said at least one vehicle; and
wherein said determining whether said at least one vehicle will violate said red light phase of said first traffic signal is responsive to said current speed of said at least one vehicle.
44. The method of claim 37, wherein said determining whether said at least one vehicle will violate said red light phase of said first traffic signal further comprises determining a current acceleration for said at least one vehicle.
45. The method of claim 37, wherein said determining whether said at least one vehicle will violate said red light phase of said first traffic signal further comprises computing a time remaining before said at least one vehicle enters said traffic intersection.
46. The method of claim 45, wherein said determining whether said at least one vehicle will violate said red light phase of said first traffic signal further comprises calculating a rate of deceleration required for said at least one vehicle to stop within said time remaining before said vehicle enters said traffic intersection.
47. The method of claim 46, wherein said determining whether said at least one vehicle will violate said red light phase of said first traffic signal further comprises determining whether said required deceleration is larger than a specified deceleration limit value.
48. An accident avoidance system for an intersection having a traffic signal with a current light phase equal to one of a red light phase and a green light phase, and a pedestrian crosswalk passing through a path of traffic controlled by said traffic signal, comprising:
at least one image capturing device, for capturing a plurality of images of said pedestrian crosswalk, said plurality of images showing at least one pedestrian within said pedestrian crosswalk; and
a computation unit, responsive to said plurality of images and an indication of said current traffic signal light phase, for determining whether said at least one pedestrian in said crosswalk will exit said crosswalk prior to said current traffic signal light phase transitioning from red to green, and for delaying an upcoming green traffic light phase of said traffic signal responsive to a determination that said at least one pedestrian will not exit said crosswalk prior to said current traffic signal light phase transitioning from red to green.
49. The system of claim 48, wherein said image capturing device comprises at least one video camera.
50. The system of claim 48, wherein said computation unit is further responsive to a current speed of said at least one pedestrian within said pedestrian crosswalk.
51. The system of claim 48, wherein said computation unit is further responsive to a current acceleration of said at least one pedestrian within said pedestrian crosswalk.
52. The system of claim 48, wherein said computation unit is further responsive to a current position of said at least one pedestrian within said pedestrian crosswalk.
53. The system of claim 48, wherein said computation unit is further operable to compute a time remaining before said current traffic signal light phase transitions from red to green.
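
Finally, for claims 48 through 53, a small sketch of the crosswalk-clearance test; the constant walking-speed model and the parameter names are assumptions:

    /* Returns nonzero when the pedestrian is expected to reach the far curb
     * before the signal's red-to-green transition; otherwise the upcoming
     * green would be delayed as claim 48 describes. */
    int PedestrianWillClear(double remaining_crosswalk_m,   /* distance left to the far curb     */
                            double walking_speed_mps,       /* from tracked pedestrian positions */
                            double time_to_green_s)         /* from the signal controller        */
    {
        if (walking_speed_mps <= 0.0)
            return 0;                      /* a stationary pedestrian will not clear in time */
        return (remaining_crosswalk_m / walking_speed_mps) <= time_to_green_s;
    }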
US09/852,487 1997-04-14 2001-05-10 Method and apparatus for traffic light violation prediction and control Abandoned US20020054210A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/852,487 US20020054210A1 (en) 1997-04-14 2001-05-10 Method and apparatus for traffic light violation prediction and control

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US4369097P 1997-04-14 1997-04-14
US09/059,151 US6760061B1 (en) 1997-04-14 1998-04-13 Traffic sensor
US09/852,487 US20020054210A1 (en) 1997-04-14 2001-05-10 Method and apparatus for traffic light violation prediction and control

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/059,151 Division US6760061B1 (en) 1997-04-14 1998-04-13 Traffic sensor

Publications (1)

Publication Number Publication Date
US20020054210A1 true US20020054210A1 (en) 2002-05-09

Family

ID=26720719

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/059,151 Expired - Fee Related US6760061B1 (en) 1997-04-14 1998-04-13 Traffic sensor
US09/852,487 Abandoned US20020054210A1 (en) 1997-04-14 2001-05-10 Method and apparatus for traffic light violation prediction and control

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/059,151 Expired - Fee Related US6760061B1 (en) 1997-04-14 1998-04-13 Traffic sensor

Country Status (1)

Country Link
US (2) US6760061B1 (en)

Cited By (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020008758A1 (en) * 2000-03-10 2002-01-24 Broemmelsiek Raymond M. Method and apparatus for video surveillance with defined zones
US20020033832A1 (en) * 2000-09-18 2002-03-21 Rafail Glatman Method for computer modeling of visual images and wave propagation
EP1429302A1 (en) * 2002-12-13 2004-06-16 LG CNS Co., Ltd. Method for detecting accident
US6809760B1 (en) * 1998-06-12 2004-10-26 Canon Kabushiki Kaisha Camera control apparatus for controlling a plurality of cameras for tracking an object
US20040222904A1 (en) * 2003-05-05 2004-11-11 Transol Pty Ltd Traffic violation detection, recording and evidence processing system
US20040239527A1 (en) * 2003-05-27 2004-12-02 Young-Heum Kim System for apprehending traffic signal violators
US20040252193A1 (en) * 2003-06-12 2004-12-16 Higgins Bruce E. Automated traffic violation monitoring and reporting system with combined video and still-image data
US20050036659A1 (en) * 2002-07-05 2005-02-17 Gad Talmon Method and system for effectively performing event detection in a large number of concurrent image sequences
US20050046597A1 (en) * 2003-08-18 2005-03-03 Hutchison Michael C. Traffic light signal system using radar-based target detection and tracking
US20050156734A1 (en) * 2001-09-28 2005-07-21 Zerwekh William D. Integrated detection and monitoring system
US20050169500A1 (en) * 2004-01-30 2005-08-04 Fujitsu Limited Method of and apparatus for setting image-capturing conditions, and computer program
FR2866182A1 (en) * 2004-02-05 2005-08-12 Capsys Image acquisition system programming method for e.g. premises monitoring field, involves defining detection zone in field of view of camera from coordinates of initialization signal in image, according to predetermined program
US20050201622A1 (en) * 2004-03-12 2005-09-15 Shinichi Takarada Image recognition method and image recognition apparatus
US20050242306A1 (en) * 2004-04-29 2005-11-03 Sirota J M System and method for traffic monitoring, speed determination, and traffic light violation detection and recording
US20050285738A1 (en) * 2004-06-28 2005-12-29 Antonios Seas Compact single lens laser system for object/vehicle presence and speed determination
US20070008176A1 (en) * 2005-06-13 2007-01-11 Sirota J M Traffic light status remote sensor system
US20070126869A1 (en) * 2005-12-06 2007-06-07 March Networks Corporation System and method for automatic camera health monitoring
WO2007128111A1 (en) * 2006-05-05 2007-11-15 Dan Manor Traffic sensor incorporating a video camera and method of operating same
WO2008027221A2 (en) * 2006-08-30 2008-03-06 Marton Keith J Method and system to detect tailgating and automatically issue a citation
SG140462A1 (en) * 2004-06-25 2008-03-28 Singapore Polytechnic A monitoring and warning system
US20080166023A1 (en) * 2007-01-05 2008-07-10 Jigang Wang Video speed detection system
US20080266140A1 (en) * 2005-03-03 2008-10-30 Rudiger Heinz Gebert System and Method For Speed Measurement Verification
US7489334B1 (en) 2007-12-12 2009-02-10 International Business Machines Corporation Method and system for reducing the cost of sampling a moving image
US20100027009A1 (en) * 2008-07-31 2010-02-04 General Electric Company Method and system for detecting signal color from a moving video platform
US20100027841A1 (en) * 2008-07-31 2010-02-04 General Electric Company Method and system for detecting a signal structure from a moving video platform
US20100117865A1 (en) * 2003-10-14 2010-05-13 Siemens Industry, Inc. Method and System for Collecting Traffice Data, Monitoring Traffic, and Automated Enforcement at a Centralized Station
US20100302371A1 (en) * 2009-05-27 2010-12-02 Mark Abrams Vehicle tailgating detection system
US20110029904A1 (en) * 2009-07-30 2011-02-03 Adam Miles Smith Behavior and Appearance of Touch-Optimized User Interface Elements for Controlling Computer Function
US20110128376A1 (en) * 2007-03-30 2011-06-02 Persio Walter Bortolotto System and Method For Monitoring and Capturing Potential Traffic Infractions
US8177460B2 (en) 2007-04-01 2012-05-15 Iscar, Ltd. Cutting insert
US20120154200A1 (en) * 2010-12-17 2012-06-21 Fujitsu Limited Control apparatus, radar detection system, and radar detection method
US20120307064A1 (en) * 2011-06-03 2012-12-06 United Parcel Service Of America, Inc. Detection of traffic violations
US20130101167A1 (en) * 2011-10-19 2013-04-25 Lee F. Holeva Identifying, matching and tracking multiple objects in a sequence of images
US20130155288A1 (en) * 2011-12-16 2013-06-20 Samsung Electronics Co., Ltd. Imaging apparatus and imaging method
US20130329094A1 (en) * 2011-12-19 2013-12-12 Ziva Corporation Computational imaging using variable optical transfer function
EP1427212B1 (en) * 2002-11-27 2014-07-16 Bosch Security Systems, Inc. Video tracking system and method
US20140334684A1 (en) * 2012-08-20 2014-11-13 Jonathan Strimling System and method for neighborhood-scale vehicle monitoring
US20140362222A1 (en) * 2013-06-07 2014-12-11 Iteris, Inc. Dynamic zone stabilization and motion compensation in a traffic management apparatus and system
ITBZ20130054A1 (en) * 2013-11-04 2015-05-05 Tarasconi Traffic Tecnologies Srl ROAD TRAFFIC VIDEO SURVEILLANCE SYSTEM WITH REPORTING DANGER SITUATIONS
US9171228B2 (en) * 2011-03-02 2015-10-27 Universite D' Aix-Marseille Method and system for estimating a similarity between two binary images
US9172960B1 (en) * 2010-09-23 2015-10-27 Qualcomm Technologies, Inc. Quantization based on statistics and threshold of luminanceand chrominance
JP5809980B2 (en) * 2010-07-30 2015-11-11 国立大学法人九州工業大学 Vehicle behavior analysis apparatus and vehicle behavior analysis program
WO2016018936A1 (en) * 2014-07-28 2016-02-04 Econolite Group, Inc. Self-configuring traffic signal controller
US20160061172A1 (en) * 2013-03-29 2016-03-03 Hitachi Automotive Systems, Ltd. Running control apparatus and running control system
US20170024899A1 (en) * 2014-06-19 2017-01-26 Bae Systems Information & Electronic Systems Integration Inc. Multi-source multi-modal activity recognition in aerial video surveillance
US9813313B1 (en) * 2013-12-06 2017-11-07 Concurrent Ventures, LLC System, method and article of manufacture for automatic detection and storage/archival of network video to offload the load of a video management system (VMS)
CN107527507A (en) * 2017-09-30 2017-12-29 北京蓝绿相间科技有限公司 Underground garage Intelligent traffic management systems
US9921396B2 (en) 2011-07-17 2018-03-20 Ziva Corp. Optical imaging and communications
WO2018051200A1 (en) * 2016-09-15 2018-03-22 Vivacity Labs Limited A method and system for analyzing the movement of bodies in a traffic system
US9990535B2 (en) 2016-04-27 2018-06-05 Crown Equipment Corporation Pallet detection using units of physical length
CN108198427A (en) * 2017-11-30 2018-06-22 中原智慧城市设计研究院有限公司 Green light of rushing based on characteristics of image frame is broken rules and regulations determination method
US10057346B1 (en) * 2013-12-06 2018-08-21 Concurrent Ventures, LLC System, method and article of manufacture for automatic detection and storage/archival of network video
CN108898834A (en) * 2018-07-12 2018-11-27 安徽电信工程有限责任公司 A kind of intellectual traffic control method monitoring traffic accident at intersection
US20180341812A1 (en) * 2015-04-02 2018-11-29 Sportsmedia Technology Corporation Automatic determination and monitoring of vehicles on a racetrack with corresponding imagery data for broadcast
EP1584079B1 (en) * 2002-07-22 2019-04-03 Citilog Device for detecting an incident or the like on a traffic lane portion
CN111145580A (en) * 2018-11-06 2020-05-12 松下知识产权经营株式会社 Mobile body, management device and system, control method, and computer-readable medium
CN111524390A (en) * 2020-04-22 2020-08-11 上海海事大学 Active early warning system and method for secondary accidents on expressway based on video detection
US11024165B2 (en) * 2016-01-11 2021-06-01 NetraDyne, Inc. Driver behavior monitoring
CN113628457A (en) * 2021-09-07 2021-11-09 重庆交通大学 Intelligent control method and system for traffic signal lamp
CN114333330A (en) * 2022-01-27 2022-04-12 浙江嘉兴数字城市实验室有限公司 Intersection event detection system and method based on roadside edge holographic sensing
US11314209B2 (en) 2017-10-12 2022-04-26 NetraDyne, Inc. Detection of driving actions that mitigate risk
US11322018B2 (en) 2016-07-31 2022-05-03 NetraDyne, Inc. Determining causation of traffic events and encouraging good driving behavior
US11405580B2 (en) * 2020-09-09 2022-08-02 Fotonation Limited Event camera hardware
EP4113459A1 (en) * 2021-07-02 2023-01-04 Fujitsu Technology Solutions GmbH Ai based monitoring of race tracks
WO2023274955A1 (en) * 2021-07-02 2023-01-05 Fujitsu Technology Solutions Gmbh Ai based monitoring of race tracks
US11557154B2 (en) * 2017-06-23 2023-01-17 Kapsch Trafficcom Ag System and method for verification and/or reconciliation of tolling or other electronic transactions, such as purchase transactions
KR102569283B1 (en) * 2022-03-30 2023-08-23 포티투닷 주식회사 Method and apparatus for controlling vehicle
US11840239B2 (en) 2017-09-29 2023-12-12 NetraDyne, Inc. Multiple exposure event determination

Families Citing this family (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100533482C (en) * 1999-11-03 2009-08-26 特许科技有限公司 Image processing techniques for a video based traffic monitoring system and methods therefor
JP4953498B2 (en) * 2000-07-12 2012-06-13 富士重工業株式会社 Outside vehicle monitoring device with fail-safe function
KR100459476B1 (en) * 2002-04-04 2004-12-03 엘지산전 주식회사 Apparatus and method for queue length of vehicle to measure
NL1020387C2 (en) * 2002-04-15 2003-10-17 Gatsometer Bv Method for remotely synchronizing a traffic monitoring system and a traffic monitoring system equipped for this purpose.
US7382277B2 (en) 2003-02-12 2008-06-03 Edward D. Ioli Trust System for tracking suspicious vehicular activity
US7860639B2 (en) * 2003-02-27 2010-12-28 Shaoping Yang Road traffic control method and traffic facilities
US7747041B2 (en) * 2003-09-24 2010-06-29 Brigham Young University Automated estimation of average stopped delay at signalized intersections
US7983835B2 (en) 2004-11-03 2011-07-19 Lagassey Paul J Modular intelligent transportation system
FR2867864B1 (en) * 2004-03-17 2007-03-02 Automatic Systems METHOD AND INSTALLATION FOR PASSING DETECTION ASSOCIATED WITH A ACCESS DOOR
FR2875091B1 (en) * 2004-09-08 2006-11-24 Citilog Sa METHOD AND DEVICE FOR STABILIZING IMAGES GIVEN BY A VIDEO CAMERA
US7348895B2 (en) * 2004-11-03 2008-03-25 Lagassey Paul J Advanced automobile accident detection, data recordation and reporting system
US7561721B2 (en) * 2005-02-02 2009-07-14 Visteon Global Technologies, Inc. System and method for range measurement of a preceding vehicle
US7920959B1 (en) 2005-05-01 2011-04-05 Christopher Reed Williams Method and apparatus for estimating the velocity vector of multiple vehicles on non-level and curved roads using a single camera
US20070031008A1 (en) * 2005-08-02 2007-02-08 Visteon Global Technologies, Inc. System and method for range measurement of a preceding vehicle
US7782228B2 (en) * 2005-11-07 2010-08-24 Maxwell David C Vehicle spacing detector and notification system
US7623681B2 (en) * 2005-12-07 2009-11-24 Visteon Global Technologies, Inc. System and method for range measurement of a preceding vehicle
US20080101703A1 (en) * 2006-10-30 2008-05-01 Lockheed Martin Corporation Systems and methods for recognizing shapes in an image
US7501976B2 (en) * 2006-11-07 2009-03-10 Dan Manor Monopulse traffic sensor and method
JP4311457B2 (en) * 2007-02-15 2009-08-12 ソニー株式会社 Motion detection device, motion detection method, imaging device, and monitoring system
JP5132164B2 (en) * 2007-02-22 2013-01-30 富士通株式会社 Background image creation device
US20090005948A1 (en) * 2007-06-28 2009-01-01 Faroog Abdel-Kareem Ibrahim Low speed follow operation and control strategy
US8103436B1 (en) 2007-11-26 2012-01-24 Rhythm Engineering, LLC External adaptive control systems and methods
KR100950465B1 (en) * 2007-12-21 2010-03-31 손승남 Camera control method for vehicle enrance control system
CN101472366A (en) * 2007-12-26 2009-07-01 奥城同立科技开发(北京)有限公司 Traffic light control system suitable for intersection control
JP4770858B2 (en) * 2008-03-28 2011-09-14 アイシン・エィ・ダブリュ株式会社 Signalized intersection information acquisition apparatus, signalized intersection information acquisition method, and signalized intersection information acquisition program
US8654197B2 (en) * 2009-03-04 2014-02-18 Raytheon Company System and method for occupancy detection
DE102010031878A1 (en) 2009-07-22 2011-02-10 Logitech Europe S.A. System and method for remote on-screen virtual input
US9092129B2 (en) 2010-03-17 2015-07-28 Logitech Europe S.A. System and method for capturing hand annotations
US8386156B2 (en) * 2010-08-02 2013-02-26 Siemens Industry, Inc. System and method for lane-specific vehicle detection and control
US8364865B2 (en) 2010-09-28 2013-01-29 Microsoft Corporation Data simulation using host data storage chain
TWI452540B (en) * 2010-12-09 2014-09-11 Ind Tech Res Inst Image based detecting system and method for traffic parameters and computer program product thereof
US8909462B2 (en) * 2011-07-07 2014-12-09 International Business Machines Corporation Context-based traffic flow control
US8825350B1 (en) * 2011-11-22 2014-09-02 Kurt B. Robinson Systems and methods involving features of adaptive and/or autonomous traffic control
KR101305959B1 (en) * 2011-12-27 2013-09-12 전자부품연구원 A method for perception of situation based on image using template and an apparatus thereof
US9423498B1 (en) * 2012-09-25 2016-08-23 Google Inc. Use of motion data in the processing of automotive radar image processing
US9305460B1 (en) 2013-10-28 2016-04-05 John Justine Aza Roadway warning light detector and method of warning a motorist
CN104751627B (en) * 2013-12-31 2017-12-08 西门子公司 A kind of traffic determination method for parameter and device
CN111199218A (en) 2014-01-30 2020-05-26 移动眼视力科技有限公司 Control system for vehicle, and image analysis system
US9747505B2 (en) * 2014-07-07 2017-08-29 Here Global B.V. Lane level traffic
CN104346930A (en) * 2014-10-30 2015-02-11 无锡悟莘科技有限公司 Monitoring photographing method for red-light running vehicle on multi-lane
CN104346929A (en) * 2014-10-30 2015-02-11 无锡悟莘科技有限公司 Monitoring photographing method for red-light running vehicle on single lane
CN104778846B (en) * 2015-03-26 2016-09-28 南京邮电大学 A kind of method for controlling traffic signal lights based on computer vision
US20170365236A1 (en) * 2016-06-21 2017-12-21 Qualcomm Innovation Center, Inc. Display-layer update deferral
CN106650641B (en) * 2016-12-05 2019-05-14 北京文安智能技术股份有限公司 A kind of traffic lights positioning identifying method, apparatus and system
CN106935041A (en) * 2017-05-08 2017-07-07 唐山润驰科技有限公司 Video intelligent traffic guidance system
KR102054926B1 (en) * 2018-02-27 2019-12-12 주식회사 만도 System and method for detecting close cut-in vehicle based on free space signal
US11030893B1 (en) * 2020-06-05 2021-06-08 Samuel Messinger System for reducing speed of a vehicle and method thereof
US11715305B1 (en) 2022-11-30 2023-08-01 Amitha Nandini Mandava Traffic detection system using machine vision

Family Cites Families (89)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3196386A (en) 1960-07-23 1965-07-20 Rossi Bruno Automatic traffic regulating system for street intersections
US3149306A (en) 1962-05-18 1964-09-15 Rad O Lite Inc Automatic phase control for traffic lights
US3302168A (en) 1964-01-28 1967-01-31 Rca Corp Traffic control system
US3613073A (en) 1969-05-14 1971-10-12 Eugene Emerson Clift Traffic control system
JPS512800B1 (en) 1969-07-17 1976-01-28
US3689878A (en) 1970-06-23 1972-09-05 Ltv Aerospace Corp Traffic monitoring system
US3693144A (en) 1970-10-21 1972-09-19 Fischer & Porter Co Pull-in and drop-out delay unit for vehicle detector in traffic-control system
US3810084A (en) 1971-03-23 1974-05-07 Meyer Labs Inc Electronic traffic signal control system
US3731271A (en) 1971-11-26 1973-05-01 Omron Tateisi Electronics Co Traffic signal control system
US3885227A (en) 1972-04-20 1975-05-20 Siemens Ag Street traffic signalling system
FR2185824B1 (en) 1972-05-26 1980-03-14 Thomson Csf
DE2257818C3 (en) 1972-11-25 1975-08-28 Robot, Foto Und Electronic Gmbh & Co Kg, 4000 Duesseldorf Traffic monitoring device
DE2234446B1 (en) 1972-07-13 1973-12-06 Robot, Foto und Electronic GmbH & Co. KG, 4000 Düsseldorf-Benrath TRAFFIC MONITORING DEVICE
US3858223A (en) 1973-02-14 1974-12-31 Robot Foto Electr Kg Device for photographic monitoring of road intersections controlled by a traffic light
FR2279178A1 (en) 1973-12-07 1976-02-13 Thomson Csf DANGER INDICATOR SYSTEM FOR VEHICLES
US3920967A (en) 1974-02-22 1975-11-18 Trw Inc Computerized traffic control apparatus
US4007438A (en) 1975-08-15 1977-02-08 Protonantis Peter N Speed monitoring and ticketing system for a motor vehicle
US4200860A (en) 1976-04-29 1980-04-29 Fritzinger George H Method and apparatus for signalling motorists and pedestrians when the direction of traffic will change
US4122523A (en) 1976-12-17 1978-10-24 General Signal Corporation Route conflict analysis system for control of railroads
US4371863A (en) 1978-05-12 1983-02-01 Fritzinger George H Traffic-actuated control systems providing an advance signal to indicate when the direction of traffic will change
US4228419A (en) 1978-08-09 1980-10-14 Electronic Implementation Systems, Inc. Emergency vehicle traffic control system
US4361202A (en) 1979-06-15 1982-11-30 Michael Minovitch Automated road transportation system
US4401969A (en) 1979-11-13 1983-08-30 Green Gordon J Traffic control system
DE3532527A1 (en) 1985-09-12 1987-03-19 Robot Foto Electr Kg DEVICE FOR PHOTOGRAPHIC MONITORING OF CROSSINGS
JPH0766446B2 (en) * 1985-11-27 1995-07-19 株式会社日立製作所 Method of extracting moving object image
US5122796A (en) * 1986-02-19 1992-06-16 Auto-Sense, Limited Object detection method and apparatus emplying electro-optics
US4774571A (en) 1987-05-20 1988-09-27 Fariborz Mehdipour Computerized ticket dispenser system
US4814765A (en) 1987-06-12 1989-03-21 Econolite Control Products, Inc. Method and apparatus for displaying the status of a system of traffic signals
DE3727503A1 (en) 1987-08-18 1989-03-02 Robot Foto Electr Kg STATIONARY DEVICE FOR MONITORING TRAFFIC
DE3727562C2 (en) 1987-08-19 1993-12-09 Robot Foto Electr Kg Traffic monitoring device
JP2644844B2 (en) * 1988-09-20 1997-08-25 株式会社日立製作所 Distributed image recognition system
US5026153A (en) 1989-03-01 1991-06-25 Mitsubishi Denki K.K. Vehicle tracking control for continuously detecting the distance and direction to a preceding vehicle irrespective of background dark/light distribution
US5063603A (en) * 1989-11-06 1991-11-05 David Sarnoff Research Center, Inc. Dynamic method for recognizing objects and image processing system therefor
US5438517A (en) 1990-02-05 1995-08-01 Caterpillar Inc. Vehicle position determination system and method
US5099322A (en) * 1990-02-27 1992-03-24 Texas Instruments Incorporated Scene change detection system and method
JP2712844B2 (en) 1990-04-27 1998-02-16 株式会社日立製作所 Traffic flow measurement device and traffic flow measurement control device
US5313201A (en) 1990-08-31 1994-05-17 Logistics Development Corporation Vehicular display system
JP2601003B2 (en) 1990-09-25 1997-04-16 日産自動車株式会社 Vehicle running condition recognition device
DE69130147T2 (en) 1990-10-03 1999-04-01 Aisin Seiki Automatic control system for lateral guidance
US5390118A (en) 1990-10-03 1995-02-14 Aisin Seiki Kabushiki Kaisha Automatic lateral guidance control system
US5161107A (en) 1990-10-25 1992-11-03 Mestech Creation Corporation Traffic surveillance system
US5291563A (en) 1990-12-17 1994-03-01 Nippon Telegraph And Telephone Corporation Method and apparatus for detection of target object with improved robustness
US5301239A (en) * 1991-02-18 1994-04-05 Matsushita Electric Industrial Co., Ltd. Apparatus for measuring the dynamic state of traffic
US5296852A (en) * 1991-02-27 1994-03-22 Rathi Rajendra P Method and apparatus for monitoring traffic flow
US5164998A (en) 1991-03-04 1992-11-17 Reinsch Roger A Apparatus and method for image pattern analysis
US5408330A (en) 1991-03-25 1995-04-18 Crimtec Corporation Video incident capture system
US5278554A (en) 1991-04-05 1994-01-11 Marton Louis L Road traffic control system with alternating nonstop traffic flow
US5590217A (en) * 1991-04-08 1996-12-31 Matsushita Electric Industrial Co., Ltd. Vehicle activity measuring apparatus
GB9107476D0 (en) 1991-04-09 1991-05-22 Peek Traffic Ltd Improvements in vehicle detection systems
US5611038A (en) 1991-04-17 1997-03-11 Shaw; Venson M. Audio/video transceiver provided with a device for reconfiguration of incompatibly received or transmitted video and audio information
US5257194A (en) 1991-04-30 1993-10-26 Mitsubishi Corporation Highway traffic signal local controller
US5509082A (en) 1991-05-30 1996-04-16 Matsushita Electric Industrial Co., Ltd. Vehicle movement measuring apparatus
US5281947A (en) 1991-09-20 1994-01-25 C.A.R.E., Inc. Vehicular safety sensor and warning system
US5535314A (en) 1991-11-04 1996-07-09 Hughes Aircraft Company Video image processor and method for detecting vehicles
JP2847682B2 (en) 1991-11-22 1999-01-20 松下電器産業株式会社 Traffic signal ignorance cracker
US5809161A (en) * 1992-03-20 1998-09-15 Commonwealth Scientific And Industrial Research Organisation Vehicle monitoring system
JP2917661B2 (en) * 1992-04-28 1999-07-12 住友電気工業株式会社 Traffic flow measurement processing method and device
US5387908A (en) 1992-05-06 1995-02-07 Henry; Edgeton Traffic control system
US5375250A (en) 1992-07-13 1994-12-20 Van Den Heuvel; Raymond C. Method of intelligent computing and neural-like processing of time and space functions
US5448484A (en) * 1992-11-03 1995-09-05 Bullock; Darcy M. Neural network-based vehicle detection system and method
JP2816919B2 (en) 1992-11-05 1998-10-27 松下電器産業株式会社 Spatial average speed and traffic volume estimation method, point traffic signal control method, traffic volume estimation / traffic signal controller control device
US5345232A (en) 1992-11-19 1994-09-06 Robertson Michael T Traffic light control means for emergency-type vehicles
US5332180A (en) 1992-12-28 1994-07-26 Union Switch & Signal Inc. Traffic control system utilizing on-board vehicle information measurement apparatus
DE4310580A1 (en) 1993-03-31 1994-10-06 Siemens Ag Automatic fee entry system
EP0619570A1 (en) 1993-04-06 1994-10-12 McKenna, Lou Emergency vehicle alarm system for vehicles
PT700559E (en) 1993-05-24 2000-07-31 Locktronic Syst Pty Ltd IMAGE STORAGE SYSTEM FOR VEHICLE IDENTIFICATION
DE4317831C1 (en) 1993-05-28 1994-07-07 Daimler Benz Ag Display to show the danger of the current driving situation of a motor vehicle
US5474266A (en) 1993-06-15 1995-12-12 Koglin; Terry L. Railroad highway crossing
JP3414843B2 (en) 1993-06-22 2003-06-09 三菱電機株式会社 Transportation control device
US5416711A (en) 1993-10-18 1995-05-16 Grumman Aerospace Corporation Infra-red sensor system for intelligent vehicle highway systems
US5381155A (en) 1993-12-08 1995-01-10 Gerber; Eliot S. Vehicle speeding detection and identification
US5434927A (en) 1993-12-08 1995-07-18 Minnesota Mining And Manufacturing Company Method and apparatus for machine vision classification and tracking
US5465118A (en) * 1993-12-17 1995-11-07 International Business Machines Corporation Luminance transition coding method for software motion video compression/decompression
JP3156817B2 (en) * 1994-03-14 2001-04-16 矢崎総業株式会社 Vehicle periphery monitoring device
JP3408617B2 (en) 1994-03-16 2003-05-19 富士通株式会社 Synchronous word multiplexing method for image encoded data
US5404306A (en) 1994-04-20 1995-04-04 Rockwell International Corporation Vehicular traffic monitoring system
US5774569A (en) 1994-07-25 1998-06-30 Waldenmaier; H. Eugene W. Surveillance system
US5617086A (en) * 1994-10-31 1997-04-01 International Road Dynamics Traffic monitoring system
CA2236714C (en) 1995-11-01 2005-09-27 Carl Kupersmit Vehicle speed monitoring system
US5821878A (en) 1995-11-16 1998-10-13 Raswant; Subhash C. Coordinated two-dimensional progression traffic signal system
US6111523A (en) 1995-11-20 2000-08-29 American Traffic Systems, Inc. Method and apparatus for photographing traffic in an intersection
US6067075A (en) 1995-12-21 2000-05-23 Eastman Kodak Company Controller for medical image review station
US5829285A (en) 1996-02-13 1998-11-03 Wilson; Thomas Edward Tire lock
US5708469A (en) 1996-05-03 1998-01-13 International Business Machines Corporation Multiple view telepresence camera system using a wire cage which surroundss a plurality of movable cameras and identifies fields of view
JP3435623B2 (en) * 1996-05-15 2003-08-11 株式会社日立製作所 Traffic flow monitoring device
US5777564A (en) 1996-06-06 1998-07-07 Jones; Edward L. Traffic signal system and method
US6075466A (en) 1996-07-19 2000-06-13 Tracon Systems Ltd. Passive road sensor for automatic monitoring and method thereof
US5948038A (en) 1996-07-31 1999-09-07 American Traffic Systems, Inc. Traffic violation processing system
US5687717A (en) 1996-08-06 1997-11-18 Tremont Medical, Inc. Patient monitoring system with chassis mounted or remotely operable modules and portable computer

Cited By (127)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6809760B1 (en) * 1998-06-12 2004-10-26 Canon Kabushiki Kaisha Camera control apparatus for controlling a plurality of cameras for tracking an object
US20020008758A1 (en) * 2000-03-10 2002-01-24 Broemmelsiek Raymond M. Method and apparatus for video surveillance with defined zones
US20020033832A1 (en) * 2000-09-18 2002-03-21 Rafail Glatman Method for computer modeling of visual images and wave propagation
US20050156734A1 (en) * 2001-09-28 2005-07-21 Zerwekh William D. Integrated detection and monitoring system
US8502699B2 (en) * 2001-09-28 2013-08-06 Mct Technology, Llc Integrated detection and monitoring system
US8004563B2 (en) 2002-07-05 2011-08-23 Agent Vi Method and system for effectively performing event detection using feature streams of image sequences
US20050036659A1 (en) * 2002-07-05 2005-02-17 Gad Talmon Method and system for effectively performing event detection in a large number of concurrent image sequences
EP1584079B1 (en) * 2002-07-22 2019-04-03 Citilog Device for detecting an incident or the like on a traffic lane portion
US9876993B2 (en) 2002-11-27 2018-01-23 Bosch Security Systems, Inc. Video tracking system and method
EP1427212B1 (en) * 2002-11-27 2014-07-16 Bosch Security Systems, Inc. Video tracking system and method
EP1429302A1 (en) * 2002-12-13 2004-06-16 LG CNS Co., Ltd. Method for detecting accident
US20040222904A1 (en) * 2003-05-05 2004-11-11 Transol Pty Ltd Traffic violation detection, recording and evidence processing system
US20060269104A1 (en) * 2003-05-05 2006-11-30 Transol Pty, Ltd. Traffic violation detection, recording and evidence processing system
US6970102B2 (en) 2003-05-05 2005-11-29 Transol Pty Ltd Traffic violation detection, recording and evidence processing system
US20040239527A1 (en) * 2003-05-27 2004-12-02 Young-Heum Kim System for apprehending traffic signal violators
US7986339B2 (en) * 2003-06-12 2011-07-26 Redflex Traffic Systems Pty Ltd Automated traffic violation monitoring and reporting system with combined video and still-image data
US20040252193A1 (en) * 2003-06-12 2004-12-16 Higgins Bruce E. Automated traffic violation monitoring and reporting system with combined video and still-image data
US20050046597A1 (en) * 2003-08-18 2005-03-03 Hutchison Michael C. Traffic light signal system using radar-based target detection and tracking
US7821422B2 (en) * 2003-08-18 2010-10-26 Light Vision Systems, Inc. Traffic light signal system using radar-based target detection and tracking
US20100117865A1 (en) * 2003-10-14 2010-05-13 Siemens Industry, Inc. Method and System for Collecting Traffice Data, Monitoring Traffic, and Automated Enforcement at a Centralized Station
US8344909B2 (en) 2003-10-14 2013-01-01 Siemens Industry, Inc. Method and system for collecting traffic data, monitoring traffic, and automated enforcement at a centralized station
US7893846B2 (en) * 2003-10-14 2011-02-22 Siemens Industry, Inc. Method and system for collecting traffic data, monitoring traffic, and automated enforcement at a centralized station
US20110109479A1 (en) * 2003-10-14 2011-05-12 Siemens Industry, Inc. Method and System for Collecting Traffice Data, Monitoring Traffic, and Automated Enforcement at a Centralized Station
JP2005216070A (en) * 2004-01-30 2005-08-11 Fujitsu Ltd Photographic condition setting program, photographic condition setting method, and photographic condition setting device
US20050169500A1 (en) * 2004-01-30 2005-08-04 Fujitsu Limited Method of and apparatus for setting image-capturing conditions, and computer program
US7630515B2 (en) 2004-01-30 2009-12-08 Fujitsu Limited Method of and apparatus for setting image-capturing conditions, and computer program
US20080316308A1 (en) * 2004-02-05 2008-12-25 Christophe Roger Spinello Method and Device for Programming an Image Acquisition System
FR2866182A1 (en) * 2004-02-05 2005-08-12 Capsys Image acquisition system programming method for e.g. premises monitoring field, involves defining detection zone in field of view of camera from coordinates of initialization signal in image, according to predetermined program
WO2005084026A1 (en) * 2004-02-05 2005-09-09 Capsys Method and device for programming an image acquisition system
US20050201622A1 (en) * 2004-03-12 2005-09-15 Shinichi Takarada Image recognition method and image recognition apparatus
US7751610B2 (en) * 2004-03-12 2010-07-06 Panasonic Corporation Image recognition method and image recognition apparatus
US20050242306A1 (en) * 2004-04-29 2005-11-03 Sirota J M System and method for traffic monitoring, speed determination, and traffic light violation detection and recording
US7616293B2 (en) 2004-04-29 2009-11-10 Sigma Space Corporation System and method for traffic monitoring, speed determination, and traffic light violation detection and recording
SG140462A1 (en) * 2004-06-25 2008-03-28 Singapore Polytechnic A monitoring and warning system
US20050285738A1 (en) * 2004-06-28 2005-12-29 Antonios Seas Compact single lens laser system for object/vehicle presence and speed determination
US7323987B2 (en) 2004-06-28 2008-01-29 Sigma Space Corporation Compact single lens laser system for object/vehicle presence and speed determination
WO2006078691A3 (en) * 2005-01-19 2007-01-25 Mct Ind Inc Integrated detection and monitoring system
WO2006078691A2 (en) * 2005-01-19 2006-07-27 Mct Industries, Inc. Integrated detection and monitoring system
US7680545B2 (en) * 2005-03-03 2010-03-16 Rudiger Heinz Gebert System and method for speed measurement verification
US20080266140A1 (en) * 2005-03-03 2008-10-30 Rudiger Heinz Gebert System and Method For Speed Measurement Verification
US20070008176A1 (en) * 2005-06-13 2007-01-11 Sirota J M Traffic light status remote sensor system
US7495579B2 (en) 2005-06-13 2009-02-24 Sirota J Marcos Traffic light status remote sensor system
US7683934B2 (en) * 2005-12-06 2010-03-23 March Networks Corporation System and method for automatic camera health monitoring
US20070126869A1 (en) * 2005-12-06 2007-06-07 March Networks Corporation System and method for automatic camera health monitoring
WO2007128111A1 (en) * 2006-05-05 2007-11-15 Dan Manor Traffic sensor incorporating a video camera and method of operating same
WO2008027221A3 (en) * 2006-08-30 2008-05-22 Keith J Marton Method and system to detect tailgating and automatically issue a citation
WO2008027221A2 (en) * 2006-08-30 2008-03-06 Marton Keith J Method and system to detect tailgating and automatically issue a citation
US20080062009A1 (en) * 2006-08-30 2008-03-13 Marton Keith J Method and system to improve traffic flow
US8600116B2 (en) 2007-01-05 2013-12-03 American Traffic Solutions, Inc. Video speed detection system
US9002068B2 (en) 2007-01-05 2015-04-07 American Traffic Solutions, Inc. Video speed detection system
US8213685B2 (en) 2007-01-05 2012-07-03 American Traffic Solutions, Inc. Video speed detection system
US8184863B2 (en) 2007-01-05 2012-05-22 American Traffic Solutions, Inc. Video speed detection system
US20080166023A1 (en) * 2007-01-05 2008-07-10 Jigang Wang Video speed detection system
US9342984B2 (en) * 2007-03-30 2016-05-17 Persio Walter Bortolotto System and method for monitoring and capturing potential traffic infractions
US20110128376A1 (en) * 2007-03-30 2011-06-02 Persio Walter Bortolotto System and Method For Monitoring and Capturing Potential Traffic Infractions
US8177460B2 (en) 2007-04-01 2012-05-15 Iscar, Ltd. Cutting insert
US7489334B1 (en) 2007-12-12 2009-02-10 International Business Machines Corporation Method and system for reducing the cost of sampling a moving image
US8229170B2 (en) 2008-07-31 2012-07-24 General Electric Company Method and system for detecting a signal structure from a moving video platform
US8233662B2 (en) 2008-07-31 2012-07-31 General Electric Company Method and system for detecting signal color from a moving video platform
US20100027841A1 (en) * 2008-07-31 2010-02-04 General Electric Company Method and system for detecting a signal structure from a moving video platform
US20100027009A1 (en) * 2008-07-31 2010-02-04 General Electric Company Method and system for detecting signal color from a moving video platform
US20100302371A1 (en) * 2009-05-27 2010-12-02 Mark Abrams Vehicle tailgating detection system
US20110029904A1 (en) * 2009-07-30 2011-02-03 Adam Miles Smith Behavior and Appearance of Touch-Optimized User Interface Elements for Controlling Computer Function
JP5809980B2 (en) * 2010-07-30 2015-11-11 国立大学法人九州工業大学 Vehicle behavior analysis apparatus and vehicle behavior analysis program
US9172960B1 (en) * 2010-09-23 2015-10-27 Qualcomm Technologies, Inc. Quantization based on statistics and threshold of luminanceand chrominance
US8593336B2 (en) * 2010-12-17 2013-11-26 Fujitsu Limited Control apparatus, radar detection system, and radar detection method
US20120154200A1 (en) * 2010-12-17 2012-06-21 Fujitsu Limited Control apparatus, radar detection system, and radar detection method
US9171228B2 (en) * 2011-03-02 2015-10-27 Universite D' Aix-Marseille Method and system for estimating a similarity between two binary images
US20120307064A1 (en) * 2011-06-03 2012-12-06 United Parcel Service Of America, Inc. Detection of traffic violations
US9754484B2 (en) 2011-06-03 2017-09-05 United Parcel Service Of America, Inc. Detection of traffic violations
US9019380B2 (en) * 2011-06-03 2015-04-28 United Parcel Service Of America, Inc. Detection of traffic violations
US9921396B2 (en) 2011-07-17 2018-03-20 Ziva Corp. Optical imaging and communications
US8977032B2 (en) 2011-10-19 2015-03-10 Crown Equipment Corporation Identifying and evaluating multiple rectangles that may correspond to a pallet in an image scene
US8885948B2 (en) 2011-10-19 2014-11-11 Crown Equipment Corporation Identifying and evaluating potential center stringers of a pallet in an image scene
US8995743B2 (en) 2011-10-19 2015-03-31 Crown Equipment Corporation Identifying and locating possible lines corresponding to pallet structure in an image
US9025827B2 (en) 2011-10-19 2015-05-05 Crown Equipment Corporation Controlling truck forks based on identifying and tracking multiple objects in an image scene
US9025886B2 (en) 2011-10-19 2015-05-05 Crown Equipment Corporation Identifying and selecting objects that may correspond to pallets in an image scene
US9082195B2 (en) 2011-10-19 2015-07-14 Crown Equipment Corporation Generating a composite score for a possible pallet in an image scene
US9087384B2 (en) * 2011-10-19 2015-07-21 Crown Equipment Corporation Identifying, matching and tracking multiple objects in a sequence of images
US8938126B2 (en) 2011-10-19 2015-01-20 Crown Equipment Corporation Selecting objects within a vertical range of one another corresponding to pallets in an image scene
US8934672B2 (en) 2011-10-19 2015-01-13 Crown Equipment Corporation Evaluating features in an image possibly corresponding to an intersection of a pallet stringer and a pallet board
US20130101167A1 (en) * 2011-10-19 2013-04-25 Lee F. Holeva Identifying, matching and tracking multiple objects in a sequence of images
US20130155288A1 (en) * 2011-12-16 2013-06-20 Samsung Electronics Co., Ltd. Imaging apparatus and imaging method
US9002138B2 (en) * 2011-12-19 2015-04-07 Ziva Corporation Computational imaging using variable optical transfer function
US20130329094A1 (en) * 2011-12-19 2013-12-12 Ziva Corporation Computational imaging using variable optical transfer function
US20140334684A1 (en) * 2012-08-20 2014-11-13 Jonathan Strimling System and method for neighborhood-scale vehicle monitoring
US10655586B2 (en) * 2013-03-29 2020-05-19 Hitachi Automotive Systems, Ltd. Running control apparatus and running control system
US20160061172A1 (en) * 2013-03-29 2016-03-03 Hitachi Automotive Systems, Ltd. Running control apparatus and running control system
US9152865B2 (en) * 2013-06-07 2015-10-06 Iteris, Inc. Dynamic zone stabilization and motion compensation in a traffic management apparatus and system
US20140362222A1 (en) * 2013-06-07 2014-12-11 Iteris, Inc. Dynamic zone stabilization and motion compensation in a traffic management apparatus and system
ITBZ20130054A1 (en) * 2013-11-04 2015-05-05 Tarasconi Traffic Tecnologies Srl ROAD TRAFFIC VIDEO SURVEILLANCE SYSTEM WITH REPORTING DANGER SITUATIONS
US10057346B1 (en) * 2013-12-06 2018-08-21 Concurrent Ventures, LLC System, method and article of manufacture for automatic detection and storage/archival of network video
US9813313B1 (en) * 2013-12-06 2017-11-07 Concurrent Ventures, LLC System, method and article of manufacture for automatic detection and storage/archival of network video to offload the load of a video management system (VMS)
US20170024899A1 (en) * 2014-06-19 2017-01-26 Bae Systems Information & Electronic Systems Integration Inc. Multi-source multi-modal activity recognition in aerial video surveillance
US9934453B2 (en) * 2014-06-19 2018-04-03 Bae Systems Information And Electronic Systems Integration Inc. Multi-source multi-modal activity recognition in aerial video surveillance
US9978270B2 (en) 2014-07-28 2018-05-22 Econolite Group, Inc. Self-configuring traffic signal controller
WO2016018936A1 (en) * 2014-07-28 2016-02-04 Econolite Group, Inc. Self-configuring traffic signal controller
US20160267790A1 (en) * 2014-07-28 2016-09-15 Econolite Group, Inc. Self-configuring traffic signal controller
US10991243B2 (en) * 2014-07-28 2021-04-27 Econolite Group, Inc. Self-configuring traffic signal controller
US10198943B2 (en) * 2014-07-28 2019-02-05 Econolite Group, Inc. Self-configuring traffic signal controller
US9349288B2 (en) 2014-07-28 2016-05-24 Econolite Group, Inc. Self-configuring traffic signal controller
US11605225B2 (en) * 2015-04-02 2023-03-14 Sportsmedia Technology Corporation Automatic determination and monitoring of vehicles on a racetrack with corresponding imagery data for broadcast
US20220230436A1 (en) * 2015-04-02 2022-07-21 Sportsmedia Technology Corporation Automatic determination and monitoring of vehicles on a racetrack with corresponding imagery data for broadcast
US11301685B2 (en) * 2015-04-02 2022-04-12 Sportsmedia Technology Corporation Automatic determination and monitoring of vehicles on a racetrack with corresponding imagery data for broadcast
US20180341812A1 (en) * 2015-04-02 2018-11-29 Sportsmedia Technology Corporation Automatic determination and monitoring of vehicles on a racetrack with corresponding imagery data for broadcast
US11074813B2 (en) 2016-01-11 2021-07-27 NetraDyne, Inc. Driver behavior monitoring
US11024165B2 (en) * 2016-01-11 2021-06-01 NetraDyne, Inc. Driver behavior monitoring
US11113961B2 (en) 2016-01-11 2021-09-07 NetraDyne, Inc. Driver behavior monitoring
US9990535B2 (en) 2016-04-27 2018-06-05 Crown Equipment Corporation Pallet detection using units of physical length
US11322018B2 (en) 2016-07-31 2022-05-03 NetraDyne, Inc. Determining causation of traffic events and encouraging good driving behavior
WO2018051200A1 (en) * 2016-09-15 2018-03-22 Vivacity Labs Limited A method and system for analyzing the movement of bodies in a traffic system
US11256926B2 (en) * 2016-09-15 2022-02-22 Vivacity Labs Limited Method and system for analyzing the movement of bodies in a traffic system
US11557154B2 (en) * 2017-06-23 2023-01-17 Kapsch Trafficcom Ag System and method for verification and/or reconciliation of tolling or other electronic transactions, such as purchase transactions
US11840239B2 (en) 2017-09-29 2023-12-12 NetraDyne, Inc. Multiple exposure event determination
CN107527507A (en) * 2017-09-30 2017-12-29 北京蓝绿相间科技有限公司 Intelligent traffic management system for underground garages
US11314209B2 (en) 2017-10-12 2022-04-26 NetraDyne, Inc. Detection of driving actions that mitigate risk
CN108198427A (en) * 2017-11-30 2018-06-22 中原智慧城市设计研究院有限公司 Method for determining green-light-rushing violations based on image frame features
CN108898834A (en) * 2018-07-12 2018-11-27 安徽电信工程有限责任公司 Intelligent traffic control method for monitoring traffic accidents at intersections
CN111145580A (en) * 2018-11-06 2020-05-12 松下知识产权经营株式会社 Mobile body, management device and system, control method, and computer-readable medium
CN111524390A (en) * 2020-04-22 2020-08-11 上海海事大学 Active early warning system and method for secondary accidents on expressway based on video detection
US11405580B2 (en) * 2020-09-09 2022-08-02 Fotonation Limited Event camera hardware
US11818495B2 (en) 2020-09-09 2023-11-14 Fotonation Limited Event camera hardware
EP4113459A1 (en) * 2021-07-02 2023-01-04 Fujitsu Technology Solutions GmbH AI-based monitoring of race tracks
WO2023274955A1 (en) * 2021-07-02 2023-01-05 Fujitsu Technology Solutions GmbH AI-based monitoring of race tracks
CN113628457A (en) * 2021-09-07 2021-11-09 重庆交通大学 Intelligent control method and system for traffic signal lamp
CN114333330A (en) * 2022-01-27 2022-04-12 浙江嘉兴数字城市实验室有限公司 Intersection event detection system and method based on roadside edge holographic sensing
KR102569283B1 (en) * 2022-03-30 2023-08-23 포티투닷 주식회사 Method and apparatus for controlling vehicle

Also Published As

Publication number Publication date
US6760061B1 (en) 2004-07-06

Similar Documents

Publication Publication Date Title
US20020054210A1 (en) Method and apparatus for traffic light violation prediction and control
EP1030188B1 (en) Situation awareness system
US11080995B2 (en) Roadway sensing systems
KR102105162B1 (en) A smart speed-enforcement apparatus that uses radar to analyze vehicle speed, vehicle location and traffic volume, detects vehicles that violate the rules, and stores information on them as videos and images; a smart traffic-signal-violation enforcement apparatus; and a smart city solution apparatus
CN111223302B (en) Real-time three-dimensional road-condition assistance device and system using external coordinates for mobile carriers
US8175331B2 (en) Vehicle surroundings monitoring apparatus, method, and program
EP2993654B1 (en) Method and system for forward collision warning
JP4650079B2 (en) Object detection apparatus and method
JP4654163B2 (en) Vehicle surrounding environment recognition device and system
US20210004607A1 (en) Identification and classification of traffic conflicts
KR101824973B1 (en) Object collision avoidance system at intersection using single camera
WO2002050568A1 (en) Method for detecting stationary object on road
JP7028066B2 (en) Detection device and detection system
KR102330614B1 (en) Traffic signal control system and method thereof
JPH07210795A (en) Method and instrument for image type traffic flow measurement
CN101349562A (en) Method and apparatus for warning when a vehicle deviates from its direction of travel
JPH11203589A (en) Traffic image pickup device and traffic monitoring device
CN106327880A (en) Vehicle speed identification method and system based on monitored video
JP2019207654A (en) Detection device and detection system
JP2019207655A (en) Detection device and detection system
JP3470172B2 (en) Traffic flow monitoring device
JP3412013B2 (en) Obstacle collision prevention support system
KR102228395B1 (en) Apparatus, system and method for analyzing images using divided images
Kolcheck et al. Visual counting of traffic flow from a car via vehicle detection and motion analysis
JP7384181B2 (en) Image collection device, image collection method, and computer program for image collection

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION