USRE36041E - Face recognition system - Google Patents


Info

Publication number
USRE36041E
USRE36041E (application US 08/340,615; US 34061594 A)
Authority
US
United States
Prior art keywords
image
subspace
person
reference set
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US08/340,615
Inventor
Matthew Turk
Alex P. Pentland
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Massachusetts Institute of Technology
Original Assignee
Massachusetts Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Massachusetts Institute of Technology filed Critical Massachusetts Institute of Technology
Priority to US08/340,615 priority Critical patent/USRE36041E/en
Application granted granted Critical
Publication of USRE36041E publication Critical patent/USRE36041E/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/117 Identification of persons
    • A61B 5/1171 Identification of persons based on the shapes or appearances of their bodies or parts thereof
    • A61B 5/1176 Recognition of faces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/164 Detection; Localisation; Normalisation using holistic features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/169 Holistic features and representations, i.e. based on the facial image taken as a whole
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 Individual registration on entry or exit
    • G07C 9/30 Individual registration on entry or exit not involving the use of a pass
    • G07C 9/32 Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C 9/37 Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H 60/00 Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H 60/35 Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users
    • H04H 60/45 Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying users
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H 60/00 Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H 60/56 Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N 19/94 Vector quantisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N 21/42201 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] biosensors, e.g. heat sensor for presence detection, EEG sensors or any limb activity sensors worn by the user
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H 60/00 Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H 60/56 Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54
    • H04H 60/59 Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H60/29-H04H60/54 of video
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/20 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding

Definitions

  • The invention relates to a system for identifying members of a viewing audience.
  • For a commercial television network, the cost of advertising time depends critically on the popularity of its programs among the television viewing audience.
  • Popularity, in this case, is typically measured in terms of the program's share of the total audience viewing television at the time the program airs.
  • Advertisers prefer to place their advertisements where they will reach the greatest number of people.
  • Such time slots can also demand a higher price.
  • One preferred approach involves monitoring the actual viewing habits of a group of volunteer families which represent a cross-section of all people who watch television.
  • The participants in such a study allow monitoring equipment to be placed in their homes.
  • The monitoring equipment records the time, the identity of the program and the identity of the members of the viewing audience.
  • Many of these systems require active participation by the television viewer to obtain the monitoring information. That is, the viewer must in some way interact with the equipment to record his presence in the viewing audience. If the viewer forgets to record his presence, the monitoring statistics will be incomplete. In general, the less manual intervention required by the television viewer, the more likely it is that the gathered statistics on viewing habits will be complete and error free.
  • In one aspect, the invention is a recognition system for identifying members of an audience.
  • The invention includes an imaging system which generates an image of the audience; a selector module for selecting a portion of the generated image; a detection means which analyzes the selected image portion to determine whether an image of a person is present; and a recognition module for determining whether a detected image of a person resembles one of a reference set of images of individuals.
  • The recognition module also determines which one, if any, of the individuals in the reference set the detected image resembles.
  • The selection means includes a motion detector for identifying the selected portion of the image by detecting motion, and it includes a locator module for locating the portion of the image corresponding to the face of the detected person.
  • The detection means and the recognition module employ first and second pattern recognition techniques, respectively, to determine whether an image of a person is present in the selected portion of the image, and both pattern recognition techniques employ a set of eigenvectors in a multi-dimensional image space to characterize the reference set.
  • The second pattern recognition technique also represents each member of the reference set as a point in a subspace defined by the set of eigenvectors.
  • The image of a person is an image of a person's face, and the reference set includes images of the faces of the individuals.
  • The recognition system includes means for representing the reference set as a set of eigenvectors in a multi-dimensional image space, and the detection means includes means for representing the selected image portion as an input vector in the multi-dimensional image space and means for computing the distance between a point identified by the input vector and a subspace defined by the set of eigenvectors.
  • The detection means also includes a thresholding means for determining whether an image of a person is present by comparing the computed distance to a preselected threshold.
  • The recognition module includes means for representing each member of the reference set as a corresponding point in the subspace. To determine the location of each point in the subspace associated with a corresponding member of the reference set, a vector associated with that member is projected onto the subspace.
  • The recognition module also includes means for projecting the input vector onto the subspace, means for selecting a particular member of the reference set, and means for computing a distance within the subspace between a point identified by the projection of the input vector onto the subspace and the point in the subspace associated with the selected member.
  • In another aspect, the invention is a method for identifying members of an audience.
  • The method includes the steps of generating an image of the audience; selecting a portion of the generated image; analyzing the selected image portion to determine whether an image of a person is present; and, if an image of a person is determined to be present, determining whether the image resembles one of a reference set of images of individuals.
  • One advantage of the invention is that it is fast, relatively simple, and works well in a constrained environment, i.e., an environment whose associated image remains relatively constant except for the coming and going of people.
  • The invention determines whether a selected portion of an image actually contains an image of a face. If it is determined that the selected image portion contains an image of a face, the invention then determines which one of a reference set of known faces the detected face image most resembles. If the detected face image is not present among the reference set, the invention reports the presence of an unknown person in the audience.
  • The invention thus has the ability to discriminate face images from images of other objects.
  • FIG. 1 is a block diagram of a face recognition system.
  • FIG. 2 is a flow diagram of an initialization procedure for the face recognition module.
  • FIG. 3 is a flow diagram of the operation of the face recognition module.
  • FIG. 4 is a block diagram of a motion detection system for locating faces within a sequence of images.
  • A video camera 4, which is trained on an area where members of a viewing audience generally sit to watch the TV, sends a sequence of video image frames to a motion detection module 6.
  • Video camera 4, which may, for example, be installed in the home of a family that has volunteered to participate in a study of public viewing habits, generates images of the TV viewing audience.
  • Motion detection module 6 processes the sequence of image frames to identify regions of the recorded scene that contain motion and thus may be evidence of the presence of a person watching TV. In general, motion detection module 6 accomplishes this by comparing successive frames of the image sequence so as to find those locations containing image data that changes over time. Since the image background (i.e., images of the furniture and other objects in the room) will usually remain unchanged from frame to frame, the areas of movement will generally be evidence of the presence of a person in the viewing audience.
  • A head locator module 8 selects a block of the image frame containing the movement and sends it to a face recognition module 10, where it is analyzed for the presence of recognizable faces.
  • Face recognition module 10 performs two functions. First, it determines whether the image data within the selected block resembles a face. Then, if it does resemble a face, module 10 determines whether the face is one of a reference set of faces.
  • The reference set may include, for example, the images of the faces of all members of the family in whose house the audience monitoring system has been installed.
  • Face recognizer 10 employs a multi-dimensional representation in which face images are characterized by a set of eigenvectors or "eigenfaces".
  • Each image is represented as a vector (or a point) in a very high-dimensional image space in which each pixel of the image is represented by a corresponding dimension or axis.
  • The dimension of this image space thus depends upon the size of the image being represented and can become very large for any reasonably sized image. For example, if the block of image data is N pixels by N pixels, then the multi-dimensional image space has dimension N².
  • The image vector which represents the N×N block of image data in this multi-dimensional image space is constructed by simply concatenating the rows of the image data to generate a vector of length N².
  • Face images, like all other possible images, are represented by points within this multi-dimensional image space.
  • The distribution of faces, however, tends to be grouped within a region of the image space.
  • The distribution of faces of the reference set can be characterized by using principal component analysis.
  • The resulting principal components of the distribution of faces, i.e., the eigenvectors of the covariance matrix of the set of face images, define the variation among the set of face images.
  • These eigenvectors are typically ordered, each one accounting for a different amount of variation among the face images. They can be thought of as a set of features which together characterize the variation among face images within the reference set.
  • Each face image location within the multi-dimensional image space contributes more or less to each eigenvector, so that each eigenvector represents a sort of ghostly face, referred to herein as an eigenface.
  • Each individual face from the reference set can be represented exactly as a linear combination of the M non-zero eigenfaces.
  • Each face can also be approximated using only the M' "best" eigenfaces, i.e., those that have the largest eigenvalues and therefore account for the most variance within the set of face images.
  • The best M' eigenfaces span an M'-dimensional subspace (referred to hereinafter as "face space") of the space of all possible images.
  • This approach to face recognition involves the initialization operations shown in FIG. 2 to "train" recognition module 10.
  • First, a reference set of face images is obtained, and each of the faces of that set is represented as a corresponding vector or point in the multi-dimensional image space (step 100).
  • Next, the distribution of points for the reference set of faces is characterized in terms of a set of eigenvectors (or eigenfaces) (step 102). If a full characterization of the distribution of points is performed, it will yield N² eigenfaces, of which M are non-zero. Of these, only the M' eigenfaces corresponding to the highest eigenvalues are chosen, where M' < M < N².
  • Finally, each member of the reference set is represented by a corresponding point within face space (step 104). For a given face, this is accomplished by projecting its point in the higher-dimensional image space onto face space.
  • After face recognition module 10 is initialized, it implements the steps shown in FIG. 3 to recognize face images supplied by head locator module 8.
  • First, face recognition module 10 projects the input image (i.e., the image presumed to contain a face) onto face space by projecting it onto each of the M' eigenfaces (step 200).
  • Then module 10 determines whether the input image is a face at all (whether known or unknown) by checking to see if the image is sufficiently close to "face space" (step 202). That is, module 10 computes how far the input image in the multi-dimensional image space is from the face space and compares this to a preselected threshold. If the computed distance is greater than the preselected threshold, module 10 indicates that the input image does not represent a face image, and motion detection module 6 locates the next block of the overall image which may contain a face image.
  • Otherwise, recognition module 10 treats it as a face image and proceeds with determining whose face it is (step 206). This involves computing distances between the projection of the input image onto face space and each of the reference face images in face space. If the projected input image is sufficiently close to any one of the reference faces (i.e., the computed distance in face space is less than a predetermined distance), recognition module 10 identifies the input image as belonging to the individual associated with that reference face. If the projected input image is not sufficiently close to any one of the reference faces, recognition module 10 reports that a person has been located but the identity of the person is unknown.
  • Let a face image I(x,y) be a two-dimensional N by N array of (8-bit) intensity values.
  • The face image is represented in the multi-dimensional image space as a vector of dimension N².
  • A typical image of size 256 by 256 becomes a vector of dimension 65,536 or, equivalently, a point in 65,536-dimensional image space.
  • If the training set consists of M face images Γ_1, Γ_2, ..., Γ_M, the average face of the set is defined by Ψ = (1/M) Σ_n Γ_n, and each face differs from the average by the vector Φ_n = Γ_n − Ψ.
  • The covariance matrix of the training set is C = (1/M) Σ_n Φ_n Φ_n^T. The matrix C is N² by N², and determining the N² eigenvectors and eigenvalues can become an intractable task for typical image sizes.
  • If the number of data points in the face space is less than the dimension of the overall image space (namely, if M < N²), there will be only M−1, rather than N², meaningful eigenvectors. (The remaining eigenvectors will have associated eigenvalues of zero.)
  • One can solve for the N²-dimensional eigenvectors in this case by first solving for the eigenvectors of an M by M matrix (e.g., solving a 16 by 16 matrix rather than a 16,384 by 16,384 matrix) and then taking appropriate linear combinations of the face images Φ_i.
  • With this analysis the calculations are greatly reduced, from the order of the number of pixels in the images (N²) to the order of the number of images in the training set (M).
  • In practice, the training set of face images will be relatively small (M << N²), and the calculations become quite manageable.
  • The associated eigenvalues provide a basis for ranking the eigenvectors according to their usefulness in characterizing the variation among the images.
  • In practice, a smaller M' is sufficient for identification, since accurate reconstruction of the image is not a requirement. In this framework, identification becomes a pattern recognition task.
  • The eigenfaces span an M'-dimensional subspace of the original N²-dimensional image space.
  • A new face image (Γ) is transformed into its eigenface components (i.e., projected into "face space") by the simple operation ω_k = u_k^T (Γ − Ψ) for k = 1, ..., M', where u_k is the kth eigenface. The weights form a pattern vector Ω = [ω_1, ω_2, ..., ω_M'].
  • The vector may then be used in a standard pattern recognition algorithm to find which of a number of pre-defined face classes, if any, best describes the face.
  • The simplest method for determining which face class provides the best description of an input face image is to find the face class k that minimizes the Euclidean distance ε_k = ||Ω − Ω_k||,
  • where Ω_k is a vector describing the kth face class.
  • The face classes Ω_i are calculated by averaging the results of the eigenface representation over a small number of face images (as few as one) of each individual.
  • A face is classified as belonging to class k when the minimum ε_k is below some chosen threshold θ_ε. Otherwise the face is classified as "unknown" and, optionally, used to create a new face class.
  • Four possibilities arise for an input image: it is near face space and near a face class; near face space but not near a known face class; distant from face space and near a face class; or distant from face space and not near a known face class. In the first case, an individual is recognized and identified. In the second case, an unknown individual is present. The last two cases indicate that the image is not a face image. The third case typically shows up as a false positive in most other recognition systems; in the described embodiment, however, the false recognition may be detected because of the significant distance between the image and the subspace of expected face images.
  • For each new face image to be identified, calculate its pattern vector Ω, the distances ε_i to each known class, and the distance ε to face space. If the distance ε exceeds a threshold θ_t, classify the input image as not a face. If the minimum distance ε_k < θ_ε and ε < θ_t, classify the input face as the individual associated with class vector Ω_k. If the minimum distance ε_k > θ_ε and ε < θ_t, the image may be classified as "unknown" and, optionally, used to begin a new face class.
  • If the image is classified as a known individual, it may be added to the original set of familiar face images, and the eigenfaces may be recalculated (steps 1-4). This gives the opportunity to modify the face space as the system encounters more instances of known faces.
  • Calculation of the eigenfaces is done offline as part of the training.
  • The recognition currently takes about 400 msec running rather inefficiently in Lisp on a Sun 4, using face images of size 128×128.
  • The current version could run at close to frame rate (33 msec).
  • Designing a practical system for face recognition within this framework requires assessing the tradeoffs between generality, required accuracy, and speed. If the face recognition task is restricted to a small set of people (such as the members of a family or a small company), a small set of eigenfaces is adequate to span the faces of interest. If the system is to learn new faces or represent many people, a larger basis set of eigenfaces will likely be required.
  • Motion detection module 6 and head locator module 8 locate and track the position of the head of any person within the scene viewed by video camera 4 by implementing the tracking algorithm depicted in FIG. 4.
  • A sequence of image frames 30 from video camera 4 first passes through a spatio-temporal filtering module 32, which accentuates image locations which change with time.
  • Spatio-temporal filtering module 32 identifies the locations of motion by performing a differencing operation on successive frames of the sequence of image frames. In the output of the spatio-temporal filter module 32, a moving person "lights up", whereas the other areas of the image containing no motion appear as black.
  • The spatio-temporally filtered image passes to a thresholding module 34, which produces a binary motion image identifying the locations of the image for which the motion exceeds a preselected threshold. That is, it locates the areas of the image containing the most motion. In all such areas, the presence of a person is postulated.
  • A motion analyzer module 36 analyzes the binary motion image to watch how "motion blobs" change over time to decide if the motion is caused by a person moving and to determine head position.
  • A few simple rules are applied, such as "the head is the small upper blob above a larger blob (i.e., the body)" and "head motion must be reasonably slow and contiguous" (i.e., heads are not expected to jump around the image erratically).
  • The motion image also allows for an estimate of scale.
  • The size of the blob that is assumed to be the moving head determines the size of the subimage to send to face recognition module 10 (see FIG. 1). This subimage is rescaled to fit the dimensions of the eigenfaces.
  • Face space may also be used to locate faces in single images, either as an alternative to locating faces from motion (e.g. if there is too little motion or many moving objects) or as a method of achieving more precision than is possible by use of motion tracking alone.
  • ⁇ (x,y) and ⁇ i (x,y) are scalar functions of image location
  • ⁇ (x,y) is a vector function of image location
  • Eq. 13 The second term of Eq. 13 is calculated in practice by a correlation with the L eigenfaces: ##EQU5## where x the correlation operator.
  • the first term of Eq. 13 becomes ##EQU6## Since the average face ⁇ and the eigenfaces u i are fixed, the terms ⁇ T ⁇ and ⁇ xu i may be computed ahead of time.
  • the computation of the face map involves only L+1 correlations over the input image and the computation of the first term ⁇ T (x,y) ⁇ (x,y). This is computed by squaring the input image I(x,y) and, at each image location, summing the squared values of the local subimage.
  • A further refinement employs multiscale eigenfaces, in which an input face image is compared with eigenfaces at a number of scales. In this case the image will appear to be near the face space of only the closest-scale eigenfaces.
  • Throughout, the input image is the portion of the overall image selected for analysis.
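The training procedure described above (flatten each N×N face into an N²-vector, subtract the average face Ψ, and obtain the eigenfaces from the small M×M matrix rather than the N²×N² covariance matrix) can be sketched in Python with NumPy. This is an illustrative sketch, not code from the patent; the function names and the use of `numpy.linalg.eigh` are assumptions.

```python
import numpy as np

def train_eigenfaces(faces, n_components):
    """Compute eigenfaces from a training set.

    faces: array of shape (M, N, N) -- M grayscale face images.
    Uses the M x M inner-product matrix instead of the N^2 x N^2
    covariance matrix, so only M eigenvectors are ever computed.
    """
    M = faces.shape[0]
    # Flatten each N x N image into a length-N^2 vector (row concatenation).
    gamma = faces.reshape(M, -1).astype(float)
    psi = gamma.mean(axis=0)          # the "average face" Psi
    phi = gamma - psi                 # mean-subtracted images Phi_n
    # Eigenvectors of the small M x M matrix phi @ phi.T ...
    small = phi @ phi.T
    vals, vecs = np.linalg.eigh(small)            # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components] # keep the M' largest
    # ... are mapped back to image space by linear combinations of phi.
    u = phi.T @ vecs[:, order]        # shape (N^2, n_components)
    u /= np.linalg.norm(u, axis=0)    # normalize each eigenface
    return psi, u

def project(image, psi, u):
    """Pattern vector Omega: eigenface coefficients of one image."""
    return u.T @ (image.reshape(-1).astype(float) - psi)
```

The `eigh` call works on an M×M matrix, mirroring the patent's reduction from N² to M; the eigenface columns of `u` come out orthonormal, so `project` is a plain orthogonal projection onto face space.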
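The frame-differencing step performed by motion detection module 6 and head locator module 8 might look as follows. The threshold value and the bounding-box return format are illustrative assumptions, and the patent's blob heuristics (head as a small blob above the body, slow contiguous motion) are omitted for brevity.

```python
import numpy as np

def find_motion_blob(prev_frame, frame, threshold=25):
    """Frame-differencing motion detector (sketch).

    Returns the bounding box (top, bottom, left, right) of pixels whose
    absolute intensity change exceeds `threshold`, or None if the scene
    is static. The thresholded `moving` array plays the role of the
    binary motion image produced by thresholding module 34.
    """
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    moving = diff > threshold          # binary motion image
    ys, xs = np.nonzero(moving)
    if ys.size == 0:
        return None                    # background unchanged: no person
    return ys.min(), ys.max(), xs.min(), xs.max()
```

The size of the returned box would then set the scale of the subimage handed to the recognizer, which is rescaled to the eigenface dimensions.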
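The face-map idea (distance from face space evaluated at every image location) can be illustrated with an explicit loop over subimages; the patent instead computes the same quantity with L+1 correlations for speed. Function and variable names here are illustrative, not from the patent.

```python
import numpy as np

def face_map(image, psi, u, size):
    """Squared distance from face space at every image location (sketch).

    For each size x size subimage, project its mean-subtracted vector
    phi onto the eigenfaces u and measure the residual
    ||phi||^2 - ||w||^2; low values indicate face-like regions.
    Assumes the columns of u are orthonormal eigenfaces.
    """
    H, W = image.shape
    eps = np.full((H - size + 1, W - size + 1), np.inf)
    for y in range(H - size + 1):
        for x in range(W - size + 1):
            phi = image[y:y+size, x:x+size].reshape(-1) - psi
            w = u.T @ phi                   # eigenface coefficients
            eps[y, x] = phi @ phi - w @ w   # squared residual distance
    return eps
```

Minima of this map are candidate face locations; comparing against eigenfaces at several scales gives the multiscale variant mentioned above.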

Abstract

A recognition system for identifying members of an audience, the system including an imaging system which generates an image of the audience; a selector module for selecting a portion of the generated image; a detection means which analyzes the selected image portion to determine whether an image of a person is present; and a recognition module responsive to the detection means for determining whether a detected image of a person identified by the detection means resembles one of a reference set of images of individuals.

Description

BACKGROUND OF THE INVENTION
The invention relates to a system for identifying members of a viewing audience.
For a commercial television network, the cost of its advertising time depends critically on the popularity of its programs among the television viewing audience. Popularity, in this case, is typically measured in terms of the program's share of the total audience viewing television at the time the program airs. As a general rule of thumb, advertisers prefer to place their advertisements where they will reach the greatest number of people. Thus, there is a higher demand among commercial advertisers for advertising time slots alongside more popular programs. Such time slots can also demand a higher price.
Because the economics of television advertising depends so critically on the tastes and preferences of the television audience, the television industry invests a substantial amount of time, effort and money in measuring those tastes and preferences. One preferred approach involves monitoring the actual viewing habits of a group of volunteer families which represent a cross-section of all people who watch television. Typically, the participants in such a study allow monitoring equipment to be placed in their homes. Whenever a participant watches a television program, the monitoring equipment records the time, the identity of the program and the identity of the members of the viewing audience. Many of these systems require active participation by the television viewer to obtain the monitoring information. That is, the viewer must in some way interact with the equipment to record his presence in the viewing audience. If the viewer forgets to record his presence the monitoring statistics will be incomplete. In general, the less manual intervention required by the television viewer, the more likely it is that the gathered statistics on viewing habits will be complete and error free.
Systems have been developed which automatically identify members of the viewing audience without requiring the viewer to enter any information. For example, U.S. Pat. No. 4,858,000 to Daozheng Lu, issued Aug. 15, 1989, describes such a system. In that system, a scanner using infrared detectors locates a member of the viewing audience, captures an image of the located member, extracts a pattern signature for the captured image and then compares the extracted pattern signature to a set of stored pattern image signatures to identify the audience member.
SUMMARY OF THE INVENTION
In general, in one aspect, the invention is a recognition system for identifying members of an audience. The invention includes an imaging system which generates an image of the audience; a selector module for selecting a portion of the generated image; a detection means which analyzes the selected image portion to determine whether an image of a person is present; and a recognition module for determining whether a detected image of a person resembles one of a reference set of images of individuals.
Preferred embodiments include the following features. The recognition module also determines which one, if any, of the individuals in the reference set the detected image resembles. The selection means includes a motion detector for identifying the selected portion of the image by detecting motion and it includes a locator module for locating the portion of the image corresponding to the face of the person detected. In the recognition system, the detection means and the recognition module employ first and second pattern recognition techniques, respectively, to determine whether an image of a person is present in the selected portion of the image, and both pattern recognition techniques employ a set of eigenvectors in a multi-dimensional image space to characterize the reference set. In addition, the second pattern recognition technique also represents each member of the reference set as a point in a subspace defined by the set of eigenvectors. Also, the image of a person is an image of a person's face and the reference set includes images of faces of the individuals.
Also in preferred embodiments, the recognition system includes means for representing the reference set as a set of eigenvectors in a multi-dimensional image space and the detection means includes means for representing the selected image portion as an input vector in the multi-dimensional image space and means for computing the distance between a point identified by the input vector and a subspace defined by the set of eigenvectors. The detection means also includes a thresholding means for determining whether an image of a person is present by comparing the computed distance to a preselected threshold. The recognition module includes means for representing each member of the reference set as a corresponding point in the subspace. To determine the location of each point in subspace associated with a corresponding member of the reference set, a vector associated with that member is projected onto the subspace.
The recognition module also includes means for projecting the input vector onto the subspace, means for selecting a particular member of the reference set, and means for computing a distance within the subspace between a point identified by the projection of the input vector onto the subspace and the point in the subspace associated with the selected member.
In general, in another aspect, the invention is a method for identifying members of an audience. The invention includes the steps of generating an image of the audience; selecting a portion of the generated image; analyzing the selected image portion to determine whether an image of a person is present; and if an image of a person is determined to be present, determining whether the image of a person resembles one of a reference set of images of individuals.
One advantage of the invention is that it is fast, relatively simple and works well in a constrained environment, i.e., an environment for which the associated image remains relatively constant except for the coming and going of people. In addition, the invention determines whether a selected portion of an image actually contains an image of a face. If it is determined that the selected image portion contains an image of a face, the invention then determines which one of a reference set of known faces the detected face image most resembles. If the detected face image is not present among the reference set, the invention reports the presence of an unknown person in the audience. The invention has the ability to discriminate face images from images of other objects.
Other advantages and features will become apparent from the following description of the preferred embodiment and from the claims.
DESCRIPTION OF THE PREFERRED EMBODIMENT
FIG. 1 is a block diagram of a face recognition system;
FIG. 2 is a flow diagram of an initialization procedure for the face recognition module;
FIG. 3 is a flow diagram of the operation of the face recognition module; and
FIG. 4 is a block diagram of a motion detection system for locating faces within a sequence of images.
STRUCTURE AND OPERATION
Referring to FIG. 1, in an audience monitoring system 2, a video camera 4, which is trained on an area where members of a viewing audience generally sit to watch the TV, sends a sequence of video image frames to a motion detection module 6. Video camera 4, which may, for example, be installed in the home of a family that has volunteered to participate in a study of public viewing habits, generates images of the TV viewing audience. Motion detection module 6 processes the sequence of image frames to identify regions of the recorded scene that contain motion, and thus may be evidence of the presence of a person watching TV. In general, motion detection module 6 accomplishes this by comparing successive frames of the image sequence so as to find those locations containing image data that changes over time. Since the image background (i.e., images of the furniture and other objects in the room) will usually remain unchanged from frame to frame, the areas of movement will generally be evidence of the presence of a person in the viewing audience.
When movement is identified, a head locator module 8 selects a block of the image frame containing the movement and sends it to a face recognition module 10 where it is analyzed for the presence of recognizable faces. Face recognition module 10 performs two functions. First, it determines whether the image data within the selected block resembles a face. Then, if it does resemble a face, module 10 determines whether the face is one of a reference set of faces. The reference set may include, for example, the images of faces of all members of the family in whose house the audience monitoring system has been installed.
To perform its recognition functions, face recognizer 10 employs a multi-dimensional representation in which face images are characterized by a set of eigenvectors or "eigenfaces". In general, according to this technique, each image is represented as a vector (or a point) in a very high-dimensional image space in which each pixel of the image is represented by a corresponding dimension or axis. The dimension of this image space thus depends upon the size of the image being represented and can become very large for any reasonably sized image. For example, if the block of image data is N pixels by N pixels, then the multi-dimensional image space has dimension N². The image vector which represents the N×N block of image data in this multi-dimensional image space is constructed by simply concatenating the rows of the image data to generate a vector of length N².
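As an illustrative sketch (not part of the patent's implementation), the row-concatenation step can be written in a few lines of NumPy; the 4×4 `image` below is a hypothetical stand-in for a real image block:

```python
import numpy as np

# Hypothetical 4x4 "image" of intensity values (N = 4).
N = 4
image = np.arange(N * N, dtype=float).reshape(N, N)

# Concatenating the rows yields a vector of length N^2 -- a single
# point in the N^2-dimensional image space described above.
vector = image.flatten()  # row-major (row-by-row) concatenation

# A 256x256 image would likewise become a 65,536-dimensional vector.
```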
Face images, like all other possible images, are represented by points within this multi-dimensional image space. The distribution of faces, however, tends to be grouped within a region of the image space. Thus, the distribution of faces of the reference set can be characterized by using principal component analysis. The resulting principal components of the distribution of faces, or the eigenvectors of the covariance matrix of the set of face images, define the variation among the set of face images. These eigenvectors are typically ordered, each one accounting for a different amount of variation among the face images. They can be thought of as a set of features which together characterize the variation between face images within the reference set. Each face image location within the multi-dimensional image space contributes more or less to each eigenvector, so that each eigenvector represents a sort of ghostly face which is referred to herein as an eigenface.
Each individual face from the reference set can be represented exactly in terms of a linear combination of M non-zero eigenfaces. Each face can also be approximated using only the M' "best" eigenfaces, i.e., those that have the largest eigenvalues and which therefore account for the most variance within the set of face images. The best M' eigenfaces span an M'-dimensional subspace (referred to hereinafter as "face space") of all possible images.
This approach to face recognition involves the initialization operations shown in FIG. 2 to "train" recognition module 10. First, a reference set of face images is obtained and each of the faces of that set is represented as a corresponding vector or point in the multi-dimensional image space (step 100). Then, using principal component analysis, the distribution of points for the reference set of faces is characterized in terms of a set of eigenvectors (or eigenfaces) (step 102). If a full characterization of the distribution of points is performed, it will yield N² eigenfaces of which M are non-zero. Of these, only the M' eigenfaces corresponding to the highest eigenvalues are chosen, where M' < M << N². This subset of eigenfaces is used to define a subspace (or face space) within the multi-dimensional image space. Finally, each member of the reference set is represented by a corresponding point within face space (step 104). For a given face, this is accomplished by projecting its point in the higher dimensional image space onto face space.
If additional faces are added to the reference set at a later time, these operations are repeated to update the set of eigenfaces characterizing the reference set.
After face recognition module 10 is initialized, it implements the steps shown in FIG. 3 to recognize face images supplied by face locator module 8. First, face recognition module 10 projects the input image (i.e., the image presumed to contain a face) onto face space by projecting it onto each of the M' eigenfaces (step 200). Then, module 10 determines whether the input image is a face at all (whether known or unknown) by checking to see if the image is sufficiently close to "face space" (step 202). That is, module 10 computes how far the input image in the multi-dimensional image space is from the face space and compares this to a preselected threshold. If the computed distance is greater than the preselected threshold, module 10 indicates that the selected block does not represent a face image, and motion detection module 6 locates the next block of the overall image which may contain a face image.
If the computed distance is sufficiently close to face space (i.e., less than the preselected threshold), recognition module 10 treats it as a face image and proceeds with determining whose face it is (step 206). This involves computing distances between the projection of the input image onto face space and each of the reference face images in face space. If the projected input image is sufficiently close to any one of the reference faces (i.e., the computed distance in face space is less than a predetermined distance), recognition module 10 identifies the input image as belonging to the individual associated with that reference face. If the projected input image is not sufficiently close to any one of the reference faces, recognition module 10 reports that a person has been located but the identity of the person is unknown.
The mathematics underlying each of these steps will now be described in greater detail.
Calculating Eigenfaces
Let a face image I(x,y) be a two-dimensional N by N array of (8-bit) intensity values. The face image is represented in the multi-dimensional image space as a vector of dimension N². Thus, a typical image of size 256 by 256 becomes a vector of dimension 65,536, or, equivalently, a point in 65,536-dimensional image space. An ensemble of images, then, maps to a collection of points in this huge space.
Images of faces, being similar in overall configuration, are not randomly distributed in this huge image space and thus can be described by a relatively low dimensional subspace. Using principal component analysis, one identifies the vectors which best account for the distribution of face images within the entire image space. These vectors, namely, the "eigenfaces", define the "face space". Each vector is of length N², describes an N by N image, and is a linear combination of the original face images of the reference set.
Let the training set of face images be Γ_1, Γ_2, Γ_3, . . . , Γ_M. The average face of the set is defined by
Ψ = (1/M) Σ_n Γ_n,    (1)

where the summation runs from n = 1 to M. Each face differs from the average by the vector Φ_i = Γ_i - Ψ. This set of very large vectors is then subjected to principal component analysis, which seeks a set of M orthonormal vectors, u_n, which best describe the distribution of the data. The kth vector, u_k, is chosen such that:
λ_k = (1/M) Σ_n (u_k^T Φ_n)²    (2)

is a maximum, subject to the orthonormality constraint:

u_l^T u_k = δ_lk = { 1 if l = k; 0 otherwise }.    (3)
The vectors u_k and scalars λ_k are the eigenvectors and eigenvalues, respectively, of the covariance matrix

C = (1/M) Σ_n Φ_n Φ_n^T = A A^T,    (4)

where the matrix A = [Φ_1 Φ_2 . . . Φ_M]. The matrix C, however, is N² by N², and determining the N² eigenvectors and eigenvalues can become an intractable task for typical image sizes.
If the number of data points in the face space is less than the dimension of the overall image space (namely, if M < N²), there will be only M-1, rather than N², meaningful eigenvectors. (The remaining eigenvectors will have associated eigenvalues of zero.) One can solve for the N²-dimensional eigenvectors in this case by first solving for the eigenvectors of an M by M matrix--e.g., solving a 16×16 matrix rather than a 16,384 by 16,384 matrix--and then taking appropriate linear combinations of the face images Φ_i. Consider the eigenvectors v_i of A^T A such that:
A^T A v_i = μ_i v_i.    (5)
Premultiplying both sides by A yields:
A A^T A v_i = μ_i A v_i,    (6)
from which it is apparent that A v_i are the eigenvectors of C = A A^T.
Following this analysis, it is possible to construct the M by M matrix L = A^T A, where L_mn = Φ_m^T Φ_n, and find the M eigenvectors, v_l, of L. These vectors determine linear combinations of the M training set face images to form the eigenfaces u_l:

u_l = Σ_k v_lk Φ_k,   l = 1, . . . , M.    (7)
With this analysis the calculations are greatly reduced, from the order of the number of pixels in the images (N²) to the order of the number of images in the training set (M). In practice, the training set of face images will be relatively small (M << N²), and the calculations become quite manageable. The associated eigenvalues provide a basis for ranking the eigenvectors according to their usefulness in characterizing the variation among the images.
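The reduced computation can be sketched in a few lines of NumPy. This is an illustration of the technique, not the patent's implementation; the `faces` matrix below holds random vectors standing in for real training images, and all variable names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, M_prime = 16, 8, 4          # image side, training faces, kept eigenfaces
faces = rng.random((M, N * N))    # M face images as N^2-element row vectors

psi = faces.mean(axis=0)          # average face Psi (Eq. 1)
A = (faces - psi).T               # columns are the difference vectors Phi_i

# Solve the small M x M eigenproblem L = A^T A (Eqs. 5-7) instead of
# diagonalizing the N^2 x N^2 covariance matrix C = A A^T.
L = A.T @ A
eigvals, V = np.linalg.eigh(L)    # ascending eigenvalues
order = np.argsort(eigvals)[::-1] # sort largest first
V = V[:, order]

# Eigenfaces are linear combinations of the training images (Eq. 7),
# normalized so they form an orthonormal basis for face space.
U = A @ V[:, :M_prime]
U /= np.linalg.norm(U, axis=0)
```

The columns of `U` are mutually orthogonal because they are eigenvectors of A A^T with distinct eigenvalues; normalizing them makes the basis orthonormal.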
In practice, a smaller M' is sufficient for identification, since accurate reconstruction of the image is not a requirement. In this framework, identification becomes a pattern recognition task. The eigenfaces span an M'-dimensional subspace of the original N²-dimensional image space. The M' significant eigenvectors of the L matrix are chosen as those with the largest associated eigenvalues. In test cases based upon M=16 face images, M'=7 eigenfaces were found to yield acceptable results, i.e., a level of accuracy sufficient for monitoring a TV audience for purposes of studying viewing habits and tastes.
A new face image (Γ) is transformed into its eigenface components (i.e., projected into "face space") by a simple operation,
ω_k = u_k^T (Γ - Ψ),    (8)
for k=1, . . . , M'. This describes a set of point-by-point image multiplications and summations, operations which may be performed at approximately frame rate on current image processing hardware.
The weights form a vector Ω^T = [ω_1 ω_2 . . . ω_M'] that describes the contribution of each eigenface in representing the input face image, treating the eigenfaces as a basis set for face images. The vector may then be used in a standard pattern recognition algorithm to find which of a number of pre-defined face classes, if any, best describes the face. The simplest method for determining which face class provides the best description of an input face image is to find the face class k that minimizes the Euclidean distance
ε_k = ∥Ω - Ω_k∥²,    (9)
where Ω_k is a vector describing the kth face class. The face classes Ω_i are calculated by averaging the results of the eigenface representation over a small number of face images (as few as one) of each individual. A face is classified as belonging to class k when the minimum ε_k is below some chosen threshold θ_ε. Otherwise the face is classified as "unknown", and optionally used to create a new face class.
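The projection of Eq. 8 and the nearest-class test of Eq. 9 can be sketched as follows. This is an illustrative sketch assuming an eigenface matrix `U` with orthonormal columns and a mean face `psi`; the function names are hypothetical:

```python
import numpy as np

def project(gamma, U, psi):
    """Eq. 8: weights w_k = u_k^T (Gamma - Psi) for each eigenface."""
    return U.T @ (gamma - psi)

def classify(gamma, U, psi, classes, theta_eps):
    """Eq. 9: return the index of the nearest face class, or None
    ("unknown") if the minimum squared distance exceeds theta_eps."""
    omega = project(gamma, U, psi)
    dists = [float(np.sum((omega - omega_k) ** 2)) for omega_k in classes]
    k = int(np.argmin(dists))
    return k if dists[k] < theta_eps else None
```

For example, with a toy 4-dimensional "image space", two eigenfaces and two face classes, an input near the first class vector is assigned to class 0, while a far-away input comes back as None ("unknown").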
Because creating the vector of weights is equivalent to projecting the original face image onto the low-dimensional face space, many images (most of them looking nothing like a face) will project onto a given pattern vector. This is not a problem for the system, however, since the distance ε between the image and the face space is simply the squared distance between the mean-adjusted input image Φ = Γ - Ψ and Φ_f = Σ_k ω_k u_k, its projection onto face space (where the summation is over k from 1 to M'):
ε² = ∥Φ - Φ_f∥².    (10)
Thus, there are four possibilities for an input image and its pattern vector: (1) near face space and near a face class; (2) near face space but not near a known face class; (3) distant from face space and near a face class; and (4) distant from face space and not near a known face class.
In the first case, an individual is recognized and identified. In the second case, an unknown individual is present. The last two cases indicate that the image is not a face image. Case three typically shows up as a false positive in most other recognition systems. In the described embodiment, however, the false recognition may be detected because of the significant distance between the image and the subspace of expected face images.
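A minimal sketch of the distance measure of Eq. 10 and the four-way decision above, assuming an orthonormal eigenface matrix `U`; the function names and the string labels are illustrative, not the patent's:

```python
import numpy as np

def face_space_distance(gamma, U, psi):
    """Eq. 10: squared distance between the mean-adjusted image and its
    projection onto face space (U has orthonormal eigenface columns)."""
    phi = gamma - psi
    phi_f = U @ (U.T @ phi)       # reconstruction from the eigenfaces
    return float(np.sum((phi - phi_f) ** 2))

def outcome(eps, min_class_dist, theta_t, theta_eps):
    """Map the two distances onto the four cases enumerated above."""
    if eps <= theta_t:                          # near face space
        return "known face" if min_class_dist <= theta_eps else "unknown face"
    return "not a face"                         # cases 3 and 4
```

Note that case three (distant from face space yet near a face class) is rejected here by the `eps <= theta_t` test, which is exactly how the described embodiment avoids that false positive.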
Summary of Eigenface Recognition Procedure
To summarize, the eigenfaces approach to face recognition involves the following steps:
1. Collect a set of characteristic face images of the known individuals. This set may include a number of images for each person, with some variation in expression and in lighting. (Say four images of ten people, so M=40.)
2. Calculate the (40×40) matrix L, find its eigenvectors and eigenvalues, and choose the M' eigenvectors with the highest associated eigenvalues. (Let M'=10 in this example.)
3. Combine the normalized training set of images according to Eq. 7 to produce the (M'=10) eigenfaces u_k.
4. For each known individual, calculate the class vector Ω_k by averaging the eigenface pattern vectors Ω (from Eq. 8) calculated from the original (four) images of the individual. Choose a threshold θ_ε which defines the maximum allowable distance from any face class, and a threshold θ_t which defines the maximum allowable distance from face space (according to Eq. 10).
5. For each new face image to be identified, calculate its pattern vector Ω, the distances ε_k to each known class, and the distance ε to face space. If the distance ε > θ_t, classify the input image as not a face. If the minimum distance ε_k ≤ θ_ε and the distance ε ≤ θ_t, classify the input face as the individual associated with class vector Ω_k. If the minimum distance ε_k > θ_ε and ε ≤ θ_t, then the image may be classified as "unknown", and optionally used to begin a new face class.
6. If the new image is classified as a known individual, this image may be added to the original set of familiar face images, and the eigenfaces may be recalculated (steps 1-4). This gives the opportunity to modify the face space as the system encounters more instances of known faces.
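Step 5 above can be stitched together as a single function. This is a hedged sketch under the assumptions already used in this summary (orthonormal eigenfaces `U`, mean face `psi`, precomputed class vectors); the name `identify` and the return labels are hypothetical:

```python
import numpy as np

def identify(gamma, U, psi, classes, theta_t, theta_eps):
    """Step 5: return a class index, "unknown", or "not a face".
    U holds orthonormal eigenfaces as columns; classes holds the
    class vectors Omega_k."""
    phi = gamma - psi
    omega = U.T @ phi                          # pattern vector (Eq. 8)
    eps2 = float(phi @ phi - omega @ omega)    # distance to face space (Eq. 10)
    if eps2 > theta_t:
        return "not a face"
    dists = [float(np.sum((omega - ok) ** 2)) for ok in classes]
    k = int(np.argmin(dists))
    return k if dists[k] <= theta_eps else "unknown"
```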
In the described embodiment, calculation of the eigenfaces is done offline as part of the training. The recognition currently takes about 400 msec running rather inefficiently in Lisp on a Sun 4, using face images of size 128×128. With some special-purpose hardware, the current version could run at close to frame rate (33 msec).
Designing a practical system for face recognition within this framework requires assessing the tradeoffs between generality, required accuracy, and speed. If the face recognition task is restricted to a small set of people (such as the members of a family or a small company), a small set of eigenfaces is adequate to span the faces of interest. If the system is to learn new faces or represent many people, a larger basis set of eigenfaces will likely be required.
Motion Detection And Head Tracking
In the described embodiment, motion detection module 6 and head locator module 8 locate and track the position of the head of any person within the scene viewed by video camera 4 by implementing the tracking algorithm depicted in FIG. 4. A sequence of image frames 30 from video camera 4 first passes through a spatio-temporal filtering module 32 which accentuates image locations which change with time. Spatio-temporal filtering module 32 identifies the locations of motion by performing a differencing operation on successive frames of the sequence of image frames. In the output of spatio-temporal filtering module 32, a moving person "lights up", whereas the other areas of the image containing no motion appear black.
The spatio-temporal filtered image passes to a thresholding module 34 which produces a binary motion image identifying the locations of the image for which the motion exceeds a preselected threshold. That is, it locates the areas of the image containing the most motion. In all such areas, the presence of a person is postulated.
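The differencing and thresholding stages (modules 32 and 34) can be sketched as follows; a minimal illustration, assuming frames arrive as NumPy intensity arrays:

```python
import numpy as np

def binary_motion_image(prev_frame, frame, threshold):
    """Difference successive frames (spatio-temporal filtering, module 32)
    and threshold the result into a binary motion image (module 34):
    True marks pixels whose change exceeds the preselected threshold."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return diff > threshold
```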
A motion analyzer module 36 analyzes the binary motion image to watch how "motion blobs" change over time to decide if the motion is caused by a person moving and to determine head position. A few simple rules are applied, such as "the head is the small upper blob above a larger blob (i.e., the body)", and "head motion must be reasonably slow and contiguous" (i.e., heads are not expected to jump around the image erratically).
The motion image also allows for an estimate of scale. The size of the blob that is assumed to be the moving head determines the size of the subimage to send to face recognition module 10 (see FIG. 1). This subimage is rescaled to fit the dimensions of the eigenfaces.
Using "Face Space" To Locate The Face
Face space may also be used to locate faces in single images, either as an alternative to locating faces from motion (e.g. if there is too little motion or many moving objects) or as a method of achieving more precision than is possible by use of motion tracking alone.
Typically, images of faces do not change radically when projected into face space, whereas the projections of non-face images appear quite different. This basic idea may be used to detect the presence of faces in a scene. To implement this approach, the distance ε between the local subimage and face space is calculated at every location in the image. This calculated distance from face space is then used as a measure of "faceness". The result of calculating the distance from face space at every point in the image is a "face map" ε(x,y) in which low values (i.e., the dark areas) indicate the presence of a face.
Direct application of Eq. 10, however, is rather expensive computationally. A simpler, more efficient method of calculating the face map ε(x,y) is as follows.
To calculate the face map at every pixel of an image I(x,y), the subimage centered at that pixel is projected onto face space and the projection is then subtracted from the original subimage. To project a subimage Γ onto face space, one first subtracts the mean image (i.e., Ψ), resulting in Φ = Γ - Ψ. With Φ_f being the projection of Φ onto face space, the distance measure at a given image location is then:

ε² = ∥Φ - Φ_f∥² = Φ^T Φ - Φ_f^T Φ_f,    (11)

since Φ_f ⊥ (Φ - Φ_f). Because Φ_f is a linear combination of the eigenfaces (Φ_f = Σ_i ω_i u_i) and the eigenfaces are orthonormal vectors,
Φ_f^T Φ_f = Σ_i ω_i²    (12)

and

ε²(x,y) = Φ^T(x,y) Φ(x,y) - Σ_i ω_i²(x,y),    (13)

where ε(x,y) and ω_i(x,y) are scalar functions of image location, and Φ(x,y) is a vector function of image location.
The second term of Eq. 13 is calculated in practice by a correlation with the L eigenfaces:

ω_i(x,y) = u_i^T Φ(x,y) = (u_i ⊗ Γ)(x,y) - u_i^T Ψ,    (14)

where ⊗ denotes the correlation operator. The first term of Eq. 13 becomes:

Φ^T(x,y) Φ(x,y) = Γ^T(x,y) Γ(x,y) - 2(Ψ ⊗ Γ)(x,y) + Ψ^T Ψ.    (15)

Since the average face Ψ and the eigenfaces u_i are fixed, the terms Ψ^T Ψ and Ψ^T u_i may be computed ahead of time.
Thus, the computation of the face map involves only L+1 correlations over the input image and the computation of the first term Γ^T(x,y) Γ(x,y). This is computed by squaring the input image I(x,y) and, at each image location, summing the squared values of the local subimage.
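A direct, unoptimized sketch of the face map follows: it evaluates Eq. 13 for the n×n subimage centered at each interior pixel. This is an illustration only; the correlation formulation of Eqs. 14-15 computes the same quantities far more efficiently:

```python
import numpy as np

def face_map(image, U, psi, n):
    """Evaluate eps^2(x,y) of Eq. 13 for the n x n subimage centered at
    each interior pixel. U holds orthonormal eigenfaces (as n*n-element
    columns) and psi is the mean face; low values indicate "faceness"."""
    H, W = image.shape
    r = n // 2
    eps = np.full((H, W), np.inf)   # borders left "infinitely far"
    for y in range(r, H - r):
        for x in range(r, W - r):
            gamma = image[y - r:y - r + n, x - r:x - r + n].ravel()
            phi = gamma - psi                       # mean-adjusted subimage
            omega = U.T @ phi                       # weights, as in Eq. 14
            eps[y, x] = phi @ phi - omega @ omega   # Eq. 13
    return eps
```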
Scale Invariance
Experiments reveal that recognition performance decreases quickly as the head size, or scale, is misjudged. It is therefore desirable that the head size in the input image be close to that of the eigenfaces. The motion analysis can give an estimate of head size, from which the face image is rescaled to the eigenface size.
Another approach to the scale problem, which may be separate from or in addition to the motion estimate, is to use multiscale eigenfaces, in which an input face image is compared with eigenfaces at a number of scales. In this case the image will appear to be near the face space of only the closest-scale eigenfaces. Equivalently, the input image (i.e., the portion of the overall image selected for analysis) can be scaled to multiple sizes, and the scale which results in the smallest distance measure to face space is used.
Other embodiments are within the following claims. For example, although the eigenfaces approach to face recognition has been presented as an information processing model, it may also be implemented using simple parallel computing elements, as in a connectionist system or artificial neural network.

Claims (25)

What is claimed is:
1. A recognition system for identifying members of an audience, the system comprising:
an imaging system which generates an image of the audience;
a selector module for selecting a portion of said generated image;
means for representing a reference set of images of individuals as a set of eigenvectors in a multi-dimensional image space;
a detection means which determines whether the selected image portion contains an image that can be classified as an image of a person, said detection means including means for representing said selected image portion as an input vector in said multi-dimensional image space and means for computing the distance between a point identified by said input vector and a multi-dimensional subspace defined by said set of eigenvectors, wherein said detection means uses the computed distance to determine whether the selected image portion contains an image that can be classified as an image of a person; and
a recognition module responsive to said detection means for determining whether a detected image of a person identified by said detection means resembles one of the reference set of images of individuals.
2. The recognition system of claim 1 wherein said detection means further comprises a thresholding means for determining whether an image of a person is present by comparing said computed distance to a preselected threshold.
3. The recognition system of claim 1 wherein said selector module comprises a motion detector for identifying the selected portion of said image by detecting motion.
4. The recognition system of claim 3 wherein said selector module further comprises a locator module for locating the portion of said image corresponding to a face of the person based on motion detected by said motion detector.
5. The recognition system of claim 1 wherein said image of a person is an image of a person's face and wherein said reference set comprises images of faces of said individuals.
6. The recognition system of claim 1 wherein said recognition module comprises means for representing each member of said reference set as a corresponding point in said subspace.
7. The recognition system of claim 6 wherein the location of each point in subspace associated with a corresponding member of said reference set is determined by projecting a vector associated with that member onto said subspace.
8. The recognition system of claim 7 wherein said recognition module further comprises means for projecting said input vector onto said subspace.
9. The recognition system of claim 8 wherein said recognition module further comprises means for selecting a particular member of said reference set and means for computing a distance within said subspace between a point identified by the projection of said input vector onto said subspace and the point in said subspace associated with said selected member.
10. The recognition system of claim 8 wherein said recognition module further comprises means for determining for each member of said reference set a distance in subspace between the location associated with that member in subspace and the point identified by the projection of said input vector onto said subspace.
11. The recognition system of claim 10 wherein said image of a person is an image of a person's face and wherein said reference set comprises images of faces of said individuals.
12. A method for identifying members of an audience, the method comprising:
generating an image of the audience;
selecting a portion of said generated image;
representing a reference set of images of individuals as a set of eigenvectors in a multi-dimensional image space;
representing said selected image portion as an input vector in said multi-dimensional image space;
computing the distance between a point identified by said input vector and a multi-dimensional subspace defined by said set of eigenvectors;
using the computed distance to determine whether the selected image portion contains an image that can be classified as an image of a person; and
if it is determined that the selected image contains an image that can be classified as an image of a person, determining whether said image of a person resembles one of a reference set of images of individuals.
13. The method of claim 12 further comprising the step of determining which one, if any, of the members of said reference set said image of a person resembles.
14. The method of claim 12 wherein the image of the audience is a sequence of image frames and wherein the method further comprises detecting motion within the sequence of image frames and wherein the selected image portion is determined on the basis of the detected motion.
15. The method of claim 12 wherein the step of determining whether the selected image portion contains an image that can be classified as an image of a person further comprises comparing said computed distance to a preselected threshold.
16. The method of claim 15 wherein the step of determining whether said image of a person resembles a member of said reference set comprises representing each member of said reference set as a corresponding point in said subspace.
17. The method of claim 16 wherein the step of determining whether said image of a person resembles a member of said reference set further comprises determining the location of each point in subspace associated with a corresponding member of said reference set by projecting a vector associated with that member onto said subspace.
18. The method of claim 17 wherein the step of determining whether said image of a person resembles a member of said reference set further comprises projecting said input vector onto said subspace.
19. The method of claim 18 wherein the step of determining whether said image of a person resembles a member of said reference set further comprises selecting a member of said reference set and computing a distance within said subspace between a point identified by the projection of said input vector onto said subspace and the point in said subspace associated with said selected member.
20. The method of claim 18 wherein the step of determining whether said image of a person resembles a member of said reference set further comprises determining for each member of said reference set a distance in subspace between the location for that member in subspace and the point identified by the projection of said input vector onto said subspace.
21. The method of claim 20 wherein said image of a person is an image of a person's face and wherein said reference set comprises images of faces of said individuals. .Iadd.
22. A recognition system comprising:
an imaging system which generates an image;
a selector module for selecting a portion of said generated image;
means for representing a reference set of images of individuals as a set of eigenvectors in a multi-dimensional image space;
a detection means which determines whether the selected image portion contains an image that can be classified as an image of a person, said detection means including means for representing said selected image portion as an input vector in said multi-dimensional image space and means for computing the distance between a point identified by said input vector and a multi-dimensional subspace defined by said set of eigenvectors, wherein said detection means uses the computed distance to determine whether the selected image portion contains an image that can be classified as an image of a person; and
a recognition module responsive to said detection means for determining whether a detected image of a person identified by said detection means resembles one of the reference set of images of individuals. .Iaddend.
.Iadd.23. The recognition system of claim 22 wherein said detection means further comprises a thresholding means for determining whether an image of a person is present by comparing said computed distance to a preselected threshold. .Iaddend.
.Iadd.24. The recognition system of claim 22 wherein said image of a person is an image of a person's face and wherein said reference set comprises images of faces of said individuals. .Iaddend.
.Iadd.25. The recognition system of claim 22 wherein said recognition module comprises means for representing each member of said reference set as a corresponding point in said subspace.
.Iaddend..Iadd.26. The recognition system of claim 25 wherein the location of each point in subspace associated with a corresponding member of said reference set is determined by projecting a vector associated with that member onto said subspace. .Iaddend.
.Iadd.27. The recognition system of claim 26 wherein said recognition module further comprises means for projecting said input vector onto said subspace. .Iaddend.
.Iadd.28. The recognition system of claim 27 wherein said recognition module further comprises means for selecting a particular member of said reference set and means for computing a distance within said subspace between a point identified by the projection of said input vector onto said subspace and the point in said subspace associated with said selected member. .Iaddend.
.Iadd.29. The recognition system of claim 27 wherein said recognition module further comprises means for determining for each member of said reference set a distance in subspace between the location associated with that member in subspace and the point identified by the projection of said input vector onto said subspace. .Iaddend.
.Iadd.30. The recognition system of claim 24 wherein said means for representing said reference set includes means for adding a member to said reference set by projecting into said subspace an input vector having a computed distance indicative of an image of a face. .Iaddend.
.Iadd.31. A method comprising:
generating an image;
selecting a portion of said generated image;
representing a reference set of images of faces of individuals as a set of eigenvectors in a multi-dimensional image space;
representing said selected image portion as an input vector in said multi-dimensional image space;
computing the distance between a point identified by said input vector and a multi-dimensional subspace defined by said set of eigenvectors;
using the computed distance to determine whether the selected image portion contains an image that can be classified as an image of a person's face; and
if it is determined that the selected image contains an image that can be classified as an image of a person's face, determining whether said image of a person's face resembles one of a reference set of images of faces of individuals. .Iaddend.
.Iadd.32. The method of claim 31 further comprising the step of determining which one, if any, of the members of said reference set said image of a person's face resembles. .Iaddend.
.Iadd.33. The method of claim 31 wherein the step of determining whether the selected image portion contains an image that can be classified as an image of a person's face further comprises comparing said computed distance to a preselected threshold. .Iaddend.
.Iadd.34. The method of claim 33 wherein the step of determining whether said image of a person's face resembles a member of said reference set comprises representing each member of said reference set as a corresponding point in said subspace. .Iaddend.
.Iadd.35. The method of claim 34 wherein the step of determining whether said image of a person's face resembles a member of said reference set further comprises determining the location of each point in subspace associated with a corresponding member of said reference set by projecting a vector associated with that member onto said subspace.
.Iaddend..Iadd.36. The method of claim 35 wherein the step of determining whether said image of a person's face resembles a member of said reference set further comprises projecting said input vector onto said subspace. .Iaddend.
.Iadd.37. The method of claim 36 wherein the step of determining whether said image of a person's face resembles a member of said reference set further comprises determining for each member of said reference set a distance in subspace between the location for that member in subspace and the point identified by the projection of said input vector onto said subspace. .Iaddend.
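The claims above recite, in means-plus-function language, the two distance computations at the heart of the eigenface approach: distance from the eigenvector subspace (detection, claims 22-23) and distance within the subspace to each reference member's projected point (recognition, claims 28-29). As a rough illustration only, not the patented implementation, here is a minimal NumPy sketch; the toy data, dimensions, threshold, and all function names are assumptions for exposition:

```python
import numpy as np

rng = np.random.default_rng(0)
D, M = 64, 5                     # pixels per image vector, reference-set size (toy values)
reference = rng.random((M, D))   # rows: reference images as vectors in image space

# Represent the reference set as a set of eigenvectors ("eigenfaces").
mean_face = reference.mean(axis=0)
A = reference - mean_face
# SVD of the centered data: rows of Vt are orthonormal eigenvectors of the covariance.
_, _, Vt = np.linalg.svd(A, full_matrices=False)
K = 3                            # dimension of retained multi-dimensional subspace
eigenfaces = Vt[:K]              # K x D orthonormal basis

def project(x):
    """Coordinates of image vector x in the eigenface subspace."""
    return eigenfaces @ (x - mean_face)

def distance_from_subspace(x):
    """Detection-style measure: distance from x to its subspace reconstruction."""
    reconstruction = mean_face + eigenfaces.T @ project(x)
    return float(np.linalg.norm(x - reconstruction))

def nearest_reference(x):
    """Recognition-style measure: closest reference member within the subspace."""
    ref_points = (eigenfaces @ A.T).T            # each member as a point in subspace
    d = np.linalg.norm(ref_points - project(x), axis=1)
    return int(np.argmin(d)), float(d.min())

threshold = 1.0                  # illustrative; in practice chosen empirically
probe = reference[2] + 0.01 * rng.random(D)      # noisy copy of one reference member
is_face = distance_from_subspace(probe) < threshold   # thresholded detection decision
member, dist = nearest_reference(probe)               # per-member subspace distances
```

A vector already lying in the subspace reconstructs exactly, so its distance from the subspace is (numerically) zero, while the probe's nearest subspace point is the member it was derived from; both properties follow directly from the orthonormality of the basis rows.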
US08/340,615 1990-11-01 1994-11-16 Face recognition system Expired - Lifetime USRE36041E (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US08/340,615 USRE36041E (en) 1990-11-01 1994-11-16 Face recognition system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/608,000 US5164992A (en) 1990-11-01 1990-11-01 Face recognition system
US08/340,615 USRE36041E (en) 1990-11-01 1994-11-16 Face recognition system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US07/608,000 Reissue US5164992A (en) 1990-11-01 1990-11-01 Face recognition system

Publications (1)

Publication Number Publication Date
USRE36041E true USRE36041E (en) 1999-01-12

Family

ID=24434619

Family Applications (2)

Application Number Title Priority Date Filing Date
US07/608,000 Ceased US5164992A (en) 1990-11-01 1990-11-01 Face recognition system
US08/340,615 Expired - Lifetime USRE36041E (en) 1990-11-01 1994-11-16 Face recognition system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US07/608,000 Ceased US5164992A (en) 1990-11-01 1990-11-01 Face recognition system

Country Status (7)

Country Link
US (2) US5164992A (en)
EP (1) EP0555380B1 (en)
AT (1) ATE174441T1 (en)
AU (1) AU9037591A (en)
DE (1) DE69130616T2 (en)
SG (1) SG48965A1 (en)
WO (1) WO1992008202A1 (en)

Cited By (92)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020006226A1 (en) * 2000-07-12 2002-01-17 Minolta Co., Ltd. Shade component removing apparatus and shade component removing method for removing shade in image
US20020018596A1 (en) * 2000-06-06 2002-02-14 Kenji Nagao Pattern recognition method, pattern check method and pattern recognition apparatus as well as pattern check apparatus using the same methods
US20020055957A1 (en) * 2000-11-28 2002-05-09 Hiroyuki Ohsawa Access system
US20020067856A1 (en) * 2000-12-01 2002-06-06 Iwao Fujii Image recognition apparatus, image recognition method, and recording medium
US6445810B2 (en) * 1997-08-01 2002-09-03 Interval Research Corporation Method and apparatus for personnel detection and tracking
US20020122583A1 (en) * 2000-09-11 2002-09-05 Thompson Robert Lee System and method for obtaining and utilizing maintenance information
US20020126897A1 (en) * 2000-12-01 2002-09-12 Yugo Ueda Motion information recognition system
US6456320B2 (en) * 1997-05-27 2002-09-24 Sanyo Electric Co., Ltd. Monitoring system and imaging system
US6501857B1 (en) * 1999-07-20 2002-12-31 Craig Gotsman Method and system for detecting and classifying objects in an image
US6535620B2 (en) * 2000-03-10 2003-03-18 Sarnoff Corporation Method and apparatus for qualitative spatiotemporal data processing
US6597801B1 (en) * 1999-09-16 2003-07-22 Hewlett-Packard Development Company L.P. Method for object registration via selection of models with dynamically ordered features
US6618490B1 (en) * 1999-09-16 2003-09-09 Hewlett-Packard Development Company, L.P. Method for efficiently registering object models in images via dynamic ordering of features
US6628811B1 (en) * 1998-03-19 2003-09-30 Matsushita Electric Industrial Co. Ltd. Method and apparatus for recognizing image pattern, method and apparatus for judging identity of image patterns, recording medium for recording the pattern recognizing method and recording medium for recording the pattern identity judging method
US20040017932A1 (en) * 2001-12-03 2004-01-29 Ming-Hsuan Yang Face recognition using kernel fisherfaces
US6690414B2 (en) * 2000-12-12 2004-02-10 Koninklijke Philips Electronics N.V. Method and apparatus to reduce false alarms in exit/entrance situations for residential security monitoring
US20040034611A1 (en) * 2002-08-13 2004-02-19 Samsung Electronics Co., Ltd. Face recognition method using artificial neural network and apparatus thereof
US6724920B1 (en) 2000-07-21 2004-04-20 Trw Inc. Application of human facial features recognition to automobile safety
US6795567B1 (en) 1999-09-16 2004-09-21 Hewlett-Packard Development Company, L.P. Method for efficiently tracking object models in video sequences via dynamic ordering of features
US20040193789A1 (en) * 2002-08-29 2004-09-30 Paul Rudolf Associative memory device and method based on wave propagation
US20040208361A1 (en) * 2001-03-29 2004-10-21 Vasile Buzuloiu Automated detection of pornographic images
US6810135B1 (en) 2000-06-29 2004-10-26 Trw Inc. Optimized human presence detection through elimination of background interference
US6816085B1 (en) 2000-01-14 2004-11-09 Michael N. Haynes Method for managing a parking lot
US20050060738A1 (en) * 2003-09-15 2005-03-17 Mitsubishi Digital Electronics America, Inc. Passive enforcement method for media ratings
US6873743B2 (en) 2001-03-29 2005-03-29 Fotonation Holdings, Llc Method and apparatus for the automatic real-time detection and correction of red-eye defects in batches of digital images or in handheld appliances
US20050089206A1 (en) * 2003-10-23 2005-04-28 Rice Robert R. Robust and low cost optical system for sensing stress, emotion and deception in human subjects
US20050105803A1 (en) * 2003-11-19 2005-05-19 Ray Lawrence A. Method for selecting an emphasis image from an image collection based upon content recognition
US6904347B1 (en) 2000-06-29 2005-06-07 Trw Inc. Human presence detection, identification and tracking using a facial feature image sensing system for airbag deployment
US20050192760A1 (en) * 2003-12-16 2005-09-01 Dunlap Susan C. System and method for plant identification
US7050084B1 (en) 2004-09-24 2006-05-23 Avaya Technology Corp. Camera frame display
US7085774B2 (en) 2001-08-30 2006-08-01 Infonox On The Web Active profiling system for tracking and quantifying customer conversion efficiency
US20060204056A1 (en) * 2003-06-26 2006-09-14 Eran Steinberg Perfecting the effect of flash within an image acquisition devices using face detection
US20060203107A1 (en) * 2003-06-26 2006-09-14 Eran Steinberg Perfecting of digital image capture parameters within acquisition devices using face detection
US20060203108A1 (en) * 2003-06-26 2006-09-14 Eran Steinberg Perfecting the optics within a digital image acquisition device using face detection
US20060204110A1 (en) * 2003-06-26 2006-09-14 Eran Steinberg Detecting orientation of digital images using face detection information
US20060204057A1 (en) * 2003-06-26 2006-09-14 Eran Steinberg Digital image adjustable compression and resolution using face detection information
US20060204055A1 (en) * 2003-06-26 2006-09-14 Eran Steinberg Digital image processing using face detection information
US20060204054A1 (en) * 2003-06-26 2006-09-14 Eran Steinberg Digital image processing composition using face detection information
US7110570B1 (en) 2000-07-21 2006-09-19 Trw Inc. Application of human facial features recognition to automobile security and convenience
US20060215924A1 (en) * 2003-06-26 2006-09-28 Eran Steinberg Perfecting of digital image rendering parameters within rendering devices using face detection
US20070064208A1 (en) * 2005-09-07 2007-03-22 Ablaze Development Corporation Aerial support structure and method for image capture
US20070110305A1 (en) * 2003-06-26 2007-05-17 Fotonation Vision Limited Digital Image Processing Using Face Detection and Skin Tone Information
US7227567B1 (en) 2004-09-14 2007-06-05 Avaya Technology Corp. Customizable background for video communications
US20070160307A1 (en) * 2003-06-26 2007-07-12 Fotonation Vision Limited Modification of Viewing Parameters for Digital Images Using Face Detection Information
US20070172047A1 (en) * 2006-01-25 2007-07-26 Avaya Technology Llc Display hierarchy of participants during phone call
US20080013798A1 (en) * 2006-06-12 2008-01-17 Fotonation Vision Limited Advances in extending the aam techniques from grayscale to color images
US7331671B2 (en) 2004-03-29 2008-02-19 Delphi Technologies, Inc. Eye tracking method based on correlation and detected eye movement
US20080069403A1 (en) * 1995-06-07 2008-03-20 Automotive Technologies International, Inc. Face Monitoring System and Method for Vehicular Occupants
US20080089561A1 (en) * 2006-10-11 2008-04-17 Tong Zhang Face-based image clustering
US7362885B2 (en) 2004-04-20 2008-04-22 Delphi Technologies, Inc. Object tracking and eye state identification method
US7379602B2 (en) 2002-07-29 2008-05-27 Honda Giken Kogyo Kabushiki Kaisha Extended Isomap using Fisher Linear Discriminant and Kernel Fisher Linear Discriminant
US20080175481A1 (en) * 2007-01-18 2008-07-24 Stefan Petrescu Color Segmentation
US20080205712A1 (en) * 2007-02-28 2008-08-28 Fotonation Vision Limited Separating Directional Lighting Variability in Statistical Face Modelling Based on Texture Space Decomposition
US7440593B1 (en) 2003-06-26 2008-10-21 Fotonation Vision Limited Method of improving orientation and color balance of digital images using face detection information
US20080267461A1 (en) * 2006-08-11 2008-10-30 Fotonation Ireland Limited Real-time face tracking in a digital image acquisition device
US20080292193A1 (en) * 2007-05-24 2008-11-27 Fotonation Vision Limited Image Processing Method and Apparatus
US7460150B1 (en) 2005-03-14 2008-12-02 Avaya Inc. Using gaze detection to determine an area of interest within a scene
US20080316328A1 (en) * 2005-12-27 2008-12-25 Fotonation Ireland Limited Foreground/background separation using reference images
US20080317379A1 (en) * 2007-06-21 2008-12-25 Fotonation Ireland Limited Digital image enhancement with reference images
US20080317357A1 (en) * 2003-08-05 2008-12-25 Fotonation Ireland Limited Method of gathering visual meta data using a reference image
US20080317378A1 (en) * 2006-02-14 2008-12-25 Fotonation Ireland Limited Digital image enhancement with reference images
US20090003708A1 (en) * 2003-06-26 2009-01-01 Fotonation Ireland Limited Modification of post-viewing parameters for digital images using image region or feature information
US20090080713A1 (en) * 2007-09-26 2009-03-26 Fotonation Vision Limited Face tracking in a camera processor
US20090103909A1 (en) * 2007-10-17 2009-04-23 Live Event Media, Inc. Aerial camera support structure
US20090141947A1 (en) * 2007-11-29 2009-06-04 Volodymyr Kyyko Method and system of person identification by facial image
US7564476B1 (en) 2005-05-13 2009-07-21 Avaya Inc. Prevent video calls based on appearance
US20090208056A1 (en) * 2006-08-11 2009-08-20 Fotonation Vision Limited Real-time face tracking in a digital image acquisition device
US20090244296A1 (en) * 2008-03-26 2009-10-01 Fotonation Ireland Limited Method of making a digital camera image of a scene including the camera user
US7620218B2 (en) 2006-08-11 2009-11-17 Fotonation Ireland Limited Real-time face tracking with reference images
US7620216B2 (en) 2006-06-14 2009-11-17 Delphi Technologies, Inc. Method of tracking a human eye in a video image
US7650034B2 (en) 2005-12-14 2010-01-19 Delphi Technologies, Inc. Method of locating a human eye in a video image
US7652593B1 (en) 2000-01-14 2010-01-26 Haynes Michael N Method for managing a parking lot
US20100026831A1 (en) * 2008-07-30 2010-02-04 Fotonation Ireland Limited Automatic face and skin beautification using face detection
US20100054549A1 (en) * 2003-06-26 2010-03-04 Fotonation Vision Limited Digital Image Processing Using Face Detection Information
US20100054533A1 (en) * 2003-06-26 2010-03-04 Fotonation Vision Limited Digital Image Processing Using Face Detection Information
US7706576B1 (en) 2004-12-28 2010-04-27 Avaya Inc. Dynamic video equalization of images using face-tracking
US20100272363A1 (en) * 2007-03-05 2010-10-28 Fotonation Vision Limited Face searching and detection in a digital image acquisition device
US20110026780A1 (en) * 2006-08-11 2011-02-03 Tessera Technologies Ireland Limited Face tracking for controlling imaging parameters
US20110060836A1 (en) * 2005-06-17 2011-03-10 Tessera Technologies Ireland Limited Method for Establishing a Paired Connection Between Media Devices
US20110081052A1 (en) * 2009-10-02 2011-04-07 Fotonation Ireland Limited Face recognition performance using additional image features
US20110091196A1 (en) * 2009-10-16 2011-04-21 Wavecam Media, Inc. Aerial support structure for capturing an image of a target
US7953251B1 (en) 2004-10-28 2011-05-31 Tessera Technologies Ireland Limited Method and apparatus for detection and correction of flash-induced eye defects within digital images using preview or other reference images
US7974714B2 (en) 1999-10-05 2011-07-05 Steven Mark Hoffberg Intelligent electronic appliance system and method
US8046313B2 (en) 1991-12-23 2011-10-25 Hoffberg Steven M Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US8165282B1 (en) 2006-05-25 2012-04-24 Avaya Inc. Exploiting facial characteristics for improved agent selection
US8433050B1 (en) 2006-02-06 2013-04-30 Avaya Inc. Optimizing conference quality with diverse codecs
US20130142399A1 (en) * 2011-12-04 2013-06-06 King Saud University Face recognition using multilayered discriminant analysis
US8494286B2 (en) 2008-02-05 2013-07-23 DigitalOptics Corporation Europe Limited Face detection in mid-shot digital images
US8503800B2 (en) 2007-03-05 2013-08-06 DigitalOptics Corporation Europe Limited Illumination detection using classifier chains
USRE46310E1 (en) 1991-12-23 2017-02-14 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US9692964B2 (en) 2003-06-26 2017-06-27 Fotonation Limited Modification of post-viewing parameters for digital images using image region or feature information
USRE47908E1 (en) 1991-12-23 2020-03-17 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
USRE48056E1 (en) 1991-12-23 2020-06-16 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system

Families Citing this family (252)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9201006D0 (en) * 1992-01-17 1992-03-11 Philip Electronic And Associat Classifying faces
JP2973676B2 (en) * 1992-01-23 1999-11-08 松下電器産業株式会社 Face image feature point extraction device
GB9201856D0 (en) * 1992-01-29 1992-03-18 British Telecomm Method of forming a template
KR930018124A (en) * 1992-02-25 1993-09-21 미다무라 유끼히로 Safe Deposit Box System
JP3252381B2 (en) * 1992-09-08 2002-02-04 ソニー株式会社 Pattern recognition device
US5432864A (en) * 1992-10-05 1995-07-11 Daozheng Lu Identification card verification system
US5954583A (en) * 1992-11-05 1999-09-21 Com21 Limited Secure access control system
US5550928A (en) * 1992-12-15 1996-08-27 A.C. Nielsen Company Audience measurement system and method
US6181805B1 (en) * 1993-08-11 2001-01-30 Nippon Telegraph & Telephone Corporation Object image detecting method and system
US7251637B1 (en) * 1993-09-20 2007-07-31 Fair Isaac Corporation Context vector generation and retrieval
US5781650A (en) * 1994-02-18 1998-07-14 University Of Central Florida Automatic feature detection and age classification of human faces in digital images
US5835616A (en) * 1994-02-18 1998-11-10 University Of Central Florida Face detection using templates
DE4413788C1 (en) * 1994-03-15 1995-10-12 Fraunhofer Ges Forschung Personal identification with movement information
EP0758471B1 (en) * 1994-03-15 1999-07-28 Fraunhofer-Gesellschaft Zur Förderung Der Angewandten Forschung E.V. Person identification based on movement information
JP2725599B2 (en) * 1994-06-21 1998-03-11 日本電気株式会社 Ridge direction extraction device
IT1277993B1 (en) * 1994-09-30 1997-11-12 Ist Trentino Di Cultura PROCEDURE FOR STORING AND RETRIEVING IMAGES OF PEOPLE, FOR EXAMPLE IN PHOTOGRAPHIC ARCHIVES AND FOR THE CONSTRUCTION OF IDENTIKIT AND
US5497430A (en) * 1994-11-07 1996-03-05 Physical Optics Corporation Method and apparatus for image recognition using invariant feature signals
WO1996024991A1 (en) * 1995-02-08 1996-08-15 Actual Radio Measurement Remote listenership monitoring system
US5872865A (en) * 1995-02-08 1999-02-16 Apple Computer, Inc. Method and system for automatic classification of video images
CA2215942A1 (en) 1995-03-20 1996-09-26 Lee G. Slocum Systems and methods for identifying images
JPH08263664A (en) * 1995-03-22 1996-10-11 Honda Motor Co Ltd Artificial visual system and image recognizing method
US5710833A (en) * 1995-04-20 1998-01-20 Massachusetts Institute Of Technology Detection, recognition and coding of complex objects using probabilistic eigenspace analysis
US5642431A (en) * 1995-06-07 1997-06-24 Massachusetts Institute Of Technology Network-based system and method for detection of faces and the like
US7664263B2 (en) 1998-03-24 2010-02-16 Moskowitz Scott A Method for combining transfer functions with predetermined key creation
US5963670A (en) 1996-02-12 1999-10-05 Massachusetts Institute Of Technology Method and apparatus for classifying and identifying images
DE19610066C1 (en) * 1996-03-14 1997-09-18 Siemens Nixdorf Advanced Techn Process for the collection of face-related personal data and their use for the identification or verification of persons
US5983237A (en) * 1996-03-29 1999-11-09 Virage, Inc. Visual dictionary
NL1002853C2 (en) * 1996-04-12 1997-10-15 Eyelight Research Nv Method of recognizing a page or object perceived by a person, recording the length of time and locations of the page or object he / she observes or viewing, and means for use in the method.
US6188776B1 (en) 1996-05-21 2001-02-13 Interval Research Corporation Principle component analysis of images for the automatic location of control points
US6430307B1 (en) * 1996-06-18 2002-08-06 Matsushita Electric Industrial Co., Ltd. Feature extraction system and face image recognition system
US5901244A (en) * 1996-06-18 1999-05-04 Matsushita Electric Industrial Co., Ltd. Feature extraction system and face image recognition system
US6145247A (en) 1996-06-27 2000-11-14 Weyerhaeuser Company Fluid switch
US7159116B2 (en) 1999-12-07 2007-01-02 Blue Spike, Inc. Systems, methods and devices for trusted transactions
US7177429B2 (en) 2000-12-07 2007-02-13 Blue Spike, Inc. System and methods for permitting open access to data objects and for securing data within the data objects
US7650015B2 (en) 1997-07-22 2010-01-19 Image Processing Technologies. LLC Image processing method
US6819783B2 (en) 1996-09-04 2004-11-16 Centerframe, Llc Obtaining person-specific images in a public venue
US6526158B1 (en) 1996-09-04 2003-02-25 David A. Goldberg Method and system for obtaining person-specific images in a public venue
US5828769A (en) * 1996-10-23 1998-10-27 Autodesk, Inc. Method and apparatus for recognition of objects via position and orientation consensus of local image encoding
US6184926B1 (en) * 1996-11-26 2001-02-06 Ncr Corporation System and method for detecting a human face in uncontrolled environments
US6345109B1 (en) 1996-12-05 2002-02-05 Matsushita Electric Industrial Co., Ltd. Face recognition-matching system effective to images obtained in different imaging conditions
US6185337B1 (en) 1996-12-17 2001-02-06 Honda Giken Kogyo Kabushiki Kaisha System and method for image recognition
US6111517A (en) * 1996-12-30 2000-08-29 Visionics Corporation Continuous video monitoring using face recognition for access control
US6526156B1 (en) 1997-01-10 2003-02-25 Xerox Corporation Apparatus and method for identifying and tracking objects with view-based representations
US6256401B1 (en) 1997-03-03 2001-07-03 Keith W Whited System and method for storage, retrieval and display of information relating to marine specimens in public aquariums
US6336109B2 (en) * 1997-04-15 2002-01-01 Cerebrus Solutions Limited Method and apparatus for inducing rules from data classifiers
US6256046B1 (en) 1997-04-18 2001-07-03 Compaq Computer Corporation Method and apparatus for visual sensing of humans for active public interfaces
US6151403A (en) * 1997-08-29 2000-11-21 Eastman Kodak Company Method for automatic detection of human eyes in digital images
US6026188A (en) * 1997-10-10 2000-02-15 Unisys Corporation System and method for recognizing a 3-D object by generating a rotated 2-D image of the object from a set of 2-D enrollment images
GB2330679B (en) 1997-10-21 2002-04-24 911 Emergency Products Inc Warning signal light
US6035055A (en) * 1997-11-03 2000-03-07 Hewlett-Packard Company Digital image management system in a distributed data access network system
US6185316B1 (en) 1997-11-12 2001-02-06 Unisys Corporation Self-authentication apparatus and method
US6108437A (en) * 1997-11-14 2000-08-22 Seiko Epson Corporation Face recognition apparatus, method, system and computer readable medium thereof
US6128397A (en) * 1997-11-21 2000-10-03 Justsystem Pittsburgh Research Center Method for finding all frontal faces in arbitrarily complex visual scenes
US5940118A (en) * 1997-12-22 1999-08-17 Nortel Networks Corporation System and method for steering directional microphones
US6148092A (en) * 1998-01-08 2000-11-14 Sharp Laboratories Of America, Inc System for detecting skin-tone regions within an image
US6278491B1 (en) 1998-01-29 2001-08-21 Hewlett-Packard Company Apparatus and a method for automatically detecting and reducing red-eye in a digital image
US6038333A (en) * 1998-03-16 2000-03-14 Hewlett-Packard Company Person identifier and management system
US6675189B2 (en) 1998-05-28 2004-01-06 Hewlett-Packard Development Company, L.P. System for learning and applying integrated task and data parallel strategies in dynamic applications
US6064976A (en) * 1998-06-17 2000-05-16 Intel Corporation Scheduling system
US6404900B1 (en) 1998-06-22 2002-06-11 Sharp Laboratories Of America, Inc. Method for robust human face tracking in presence of multiple persons
US6292575B1 (en) 1998-07-20 2001-09-18 Lau Technologies Real-time facial recognition and verification system
US6999604B1 (en) * 1998-08-07 2006-02-14 Korea Institute Of Science And Technology Apparatus and method for detecting a moving object in a sequence of color frame images
US7134130B1 (en) 1998-12-15 2006-11-07 Gateway Inc. Apparatus and method for user-based control of television content
KR100361497B1 (en) * 1999-01-08 2002-11-18 엘지전자 주식회사 Method of extraction of face from video image
EP1221127B1 (en) 1999-01-13 2009-05-06 Computer Associates Think, Inc. Signature recognition system and method
US7062073B1 (en) 1999-01-19 2006-06-13 Tumey David M Animated toy utilizing artificial intelligence and facial image recognition
US6026747A (en) * 1999-01-19 2000-02-22 Presstek, Inc. Automatic plate-loading cylinder for multiple printing members
KR100671098B1 (en) * 1999-02-01 2007-01-17 주식회사 팬택앤큐리텔 Multimedia data retrieval method and appratus using shape information
US7664264B2 (en) 1999-03-24 2010-02-16 Blue Spike, Inc. Utilizing data reduction in steganographic and cryptographic systems
US7039221B1 (en) 1999-04-09 2006-05-02 Tumey David M Facial image verification utilizing smart-card with integrated video camera
US6636619B1 (en) 1999-07-07 2003-10-21 Zhongfei Zhang Computer based method and apparatus for object recognition
WO2001018628A2 (en) 1999-08-04 2001-03-15 Blue Spike, Inc. A secure personal content server
US7468677B2 (en) * 1999-08-04 2008-12-23 911Ep, Inc. End cap warning signal assembly
US6944319B1 (en) * 1999-09-13 2005-09-13 Microsoft Corporation Pose-invariant face recognition system and process
US6993245B1 (en) 1999-11-18 2006-01-31 Vulcan Patents Llc Iterative, maximally probable, batch-mode commercial detection for audiovisual content
US6968565B1 (en) * 2000-02-25 2005-11-22 Vulcan Patents Llc Detection of content display observers with prevention of unauthorized access to identification signal
US20020062481A1 (en) 2000-02-25 2002-05-23 Malcolm Slaney Method and system for selecting advertisements
US7661116B2 (en) 2000-02-25 2010-02-09 Vulcan Patents Llc Auction for targeted content
US8910199B2 (en) 2000-02-25 2014-12-09 Interval Licensing Llc Targeted television content display
US6940545B1 (en) 2000-02-28 2005-09-06 Eastman Kodak Company Face detecting camera and method
US7127087B2 (en) * 2000-03-27 2006-10-24 Microsoft Corporation Pose-invariant face recognition system and process
US7010788B1 (en) 2000-05-19 2006-03-07 Hewlett-Packard Development Company, L.P. System for computing the optimal static schedule using the stored task execution costs with recent schedule execution costs
GB0013016D0 (en) * 2000-05-26 2000-07-19 Univ Surrey Personal identity authentication process and system
US20030195807A1 (en) * 2000-10-12 2003-10-16 Frank S. Maggio Method and system for verifying exposure to message content via a printed response
JP4374759B2 (en) * 2000-10-13 2009-12-02 オムロン株式会社 Image comparison system and image comparison apparatus
US8188878B2 (en) 2000-11-15 2012-05-29 Federal Law Enforcement Development Services, Inc. LED light communication system
US7439847B2 (en) 2002-08-23 2008-10-21 John C. Pederson Intelligent observation and identification database system
JP4590717B2 (en) * 2000-11-17 2010-12-01 ソニー株式会社 Face identification device and face identification method
US7020305B2 (en) * 2000-12-06 2006-03-28 Microsoft Corporation System and method providing improved head motion estimations for animation
EP1217574A3 (en) * 2000-12-19 2004-05-19 Matsushita Electric Industrial Co., Ltd. A method for lighting- and view-angle-invariant face description with first- and second-order eigenfeatures
US20020081003A1 (en) * 2000-12-27 2002-06-27 Sobol Robert E. System and method for automatically enhancing graphical images
US7034848B2 (en) * 2001-01-05 2006-04-25 Hewlett-Packard Development Company, L.P. System and method for automatically cropping graphical images
US20020136433A1 (en) * 2001-03-26 2002-09-26 Koninklijke Philips Electronics N.V. Adaptive facial recognition system and method
US6813372B2 (en) * 2001-03-30 2004-11-02 Logitech, Inc. Motion and audio detection based webcamming and bandwidth control
DE60213032T2 (en) * 2001-05-22 2006-12-28 Matsushita Electric Industrial Co. Ltd. Facial detection device, face paw detection device, partial image extraction device, and method for these devices
US7131132B1 (en) * 2001-06-11 2006-10-31 Lucent Technologies Inc. Automatic access denial
US20020194586A1 (en) * 2001-06-15 2002-12-19 Srinivas Gutta Method and system and article of manufacture for multi-user profile generation
US20030039379A1 (en) * 2001-08-23 2003-02-27 Koninklijke Philips Electronics N.V. Method and apparatus for automatically assessing interest in a displayed product
EP1293933A1 (en) * 2001-09-03 2003-03-19 Agfa-Gevaert AG Method for automatically detecting red-eye defects in photographic image data
US20050008198A1 (en) * 2001-09-14 2005-01-13 Guo Chun Biao Apparatus and method for selecting key frames of clear faces through a sequence of images
EP1436742A1 (en) * 2001-09-18 2004-07-14 Pro-Corp Holdings International Limited Image recognition inventory management system
GB2382289B (en) * 2001-09-28 2005-07-06 Canon Kk Method and apparatus for generating models of individuals
US6720880B2 (en) * 2001-11-13 2004-04-13 Koninklijke Philips Electronics N.V. Vision-based method and apparatus for automatically activating a child safety feature
US20040201738A1 (en) * 2001-11-13 2004-10-14 Tabula Rasa, Inc. Method and apparatus for providing automatic access to images captured at diverse recreational venues
AUPR899401A0 (en) * 2001-11-21 2001-12-13 Cea Technologies Pty Limited Method and apparatus for non-motion detection
AU2002342393B2 (en) * 2001-11-21 2007-01-25 Iomniscient Pty Ltd Non-motion detection
US20030113002A1 (en) * 2001-12-18 2003-06-19 Koninklijke Philips Electronics N.V. Identification of people using video and audio eigen features
US6879709B2 (en) * 2002-01-17 2005-04-12 International Business Machines Corporation System and method for automatically detecting neutral expressionless faces in digital images
US7369685B2 (en) * 2002-04-05 2008-05-06 Identix Corporation Vision-based operating method and system
US20040052418A1 (en) * 2002-04-05 2004-03-18 Bruno Delean Method and apparatus for probabilistic image analysis
US7287275B2 (en) 2002-04-17 2007-10-23 Moskowitz Scott A Methods, systems and devices for packet watermarking and efficient provisioning of bandwidth
US20030221119A1 (en) * 2002-05-21 2003-11-27 Geiger Richard Gustav Methods and apparatus for communicating with a security access control system
CN1459761B (en) * 2002-05-24 2010-04-21 清华大学 Character identification technique based on Gabor filter set
US8064650B2 (en) 2002-07-10 2011-11-22 Hewlett-Packard Development Company, L.P. File management of digital images using the names of people identified in the images
US7843495B2 (en) * 2002-07-10 2010-11-30 Hewlett-Packard Development Company, L.P. Face recognition in a digital imaging system accessing a database of people
US20040012576A1 (en) * 2002-07-16 2004-01-22 Robert Cazier Digital image display method and system
JP4036051B2 (en) * 2002-07-30 2008-01-23 オムロン株式会社 Face matching device and face matching method
US6947579B2 (en) * 2002-10-07 2005-09-20 Technion Research & Development Foundation Ltd. Three-dimensional face recognition
US7421098B2 (en) * 2002-10-07 2008-09-02 Technion Research & Development Foundation Ltd. Facial recognition and the open mouth problem
WO2004034755A2 (en) * 2002-10-11 2004-04-22 Maggio Frank S Remote control system and method for interacting with broadcast content
JP4080843B2 (en) 2002-10-30 2008-04-23 株式会社東芝 Nonvolatile semiconductor memory device
US7499574B1 (en) * 2002-11-07 2009-03-03 Honda Motor Co., Ltd. Video-based face recognition using probabilistic appearance manifolds
KR100455294B1 (en) 2002-12-06 2004-11-06 삼성전자주식회사 Method for detecting user and detecting motion, and apparatus for detecting user within security system
EP1576815A1 (en) * 2002-12-11 2005-09-21 Nielsen Media Research, Inc. Detecting a composition of an audience
US7203338B2 (en) * 2002-12-11 2007-04-10 Nielsen Media Research, Inc. Methods and apparatus to count people appearing in an image
US7184595B2 (en) * 2002-12-26 2007-02-27 Carmel-Haifa University Economic Corporation Ltd. Pattern matching using projection kernels
US7283649B1 (en) * 2003-02-27 2007-10-16 Viisage Technology, Inc. System and method for image recognition using stream data
TWI226589B (en) * 2003-04-28 2005-01-11 Ind Tech Res Inst Statistical facial feature extraction method
US20050063569A1 (en) * 2003-06-13 2005-03-24 Charles Colbert Method and apparatus for face recognition
US8553949B2 (en) 2004-01-22 2013-10-08 DigitalOptics Corporation Europe Limited Classification and organization of consumer digital images using workflow, and face detection and recognition
US8363951B2 (en) 2007-03-05 2013-01-29 DigitalOptics Corporation Europe Limited Face recognition training method and apparatus
US7792335B2 (en) 2006-02-24 2010-09-07 Fotonation Vision Limited Method and apparatus for selective disqualification of digital images
US7587068B1 (en) 2004-01-22 2009-09-08 Fotonation Vision Limited Classification database for consumer digital images
FR2857481A1 (en) * 2003-07-08 2005-01-14 Thomson Licensing Sa Method and device for detecting faces in a color image
US7643684B2 (en) * 2003-07-15 2010-01-05 Samsung Electronics Co., Ltd. Apparatus for and method of constructing multi-view face database, and apparatus for and method of generating multi-view face descriptor
US7512255B2 (en) * 2003-08-22 2009-03-31 Board Of Regents, University Of Houston Multi-modal face recognition
EP1669890A4 (en) * 2003-09-26 2007-04-04 Nikon Corp Electronic image accumulation method, electronic image accumulation device, and electronic image accumulation system
JP3998628B2 (en) * 2003-11-05 2007-10-31 株式会社東芝 Pattern recognition apparatus and method
KR100601933B1 (en) * 2003-11-18 2006-07-14 삼성전자주식회사 Method and apparatus of human detection and privacy protection method and system employing the same
US7551755B1 (en) 2004-01-22 2009-06-23 Fotonation Vision Limited Classification and organization of consumer digital images using workflow, and face detection and recognition
US7555148B1 (en) 2004-01-22 2009-06-30 Fotonation Vision Limited Classification system for consumer digital images using workflow, face detection, normalization, and face recognition
US7558408B1 (en) 2004-01-22 2009-07-07 Fotonation Vision Limited Classification system for consumer digital images using workflow and user interface modules, and face detection and recognition
US7564994B1 (en) 2004-01-22 2009-07-21 Fotonation Vision Limited Classification system for consumer digital images using automatic workflow and face detection and recognition
WO2005071614A1 (en) * 2004-01-27 2005-08-04 Seiko Epson Corporation Human face detection position shift correction method, correction system, and correction program
EP1743277A4 (en) 2004-04-15 2011-07-06 Gesturetek Inc Tracking bimanual movements
JP4217664B2 (en) * 2004-06-28 2009-02-04 キヤノン株式会社 Image processing method and image processing apparatus
US7440930B1 (en) 2004-07-22 2008-10-21 Adobe Systems Incorporated Training an attentional cascade
KR20070038544A (en) * 2004-08-04 2007-04-10 쿨레노프 다울렛 Method for automatically recognising a face on an electronic digitised image
US20060056667A1 (en) * 2004-09-16 2006-03-16 Waters Richard C Identifying faces from multiple images acquired from widely separated viewpoints
US7738680B1 (en) * 2004-11-24 2010-06-15 Adobe Systems Incorporated Detecting an object within an image by incrementally evaluating subwindows of the image in parallel
US7440587B1 (en) * 2004-11-24 2008-10-21 Adobe Systems Incorporated Method and apparatus for calibrating sampling operations for an object detection process
JP4810088B2 (en) * 2004-12-17 2011-11-09 キヤノン株式会社 Image processing apparatus, image processing method, and program thereof
JP2006180117A (en) * 2004-12-21 2006-07-06 Funai Electric Co Ltd Broadcast signal receiving system
US8488023B2 (en) 2009-05-20 2013-07-16 DigitalOptics Corporation Europe Limited Identifying facial expressions in acquired digital images
US7715597B2 (en) 2004-12-29 2010-05-11 Fotonation Ireland Limited Method and component for image recognition
ES2791718T3 (en) * 2005-01-07 2020-11-05 Qualcomm Inc Detection and tracking of objects in images
US8144118B2 (en) * 2005-01-21 2012-03-27 Qualcomm Incorporated Motion-based tracking
US7634142B1 (en) 2005-01-24 2009-12-15 Adobe Systems Incorporated Detecting objects in images using a soft cascade
US8235725B1 (en) * 2005-02-20 2012-08-07 Sensory Logic, Inc. Computerized method of assessing consumer reaction to a business stimulus employing facial coding
US8406481B2 (en) * 2005-02-25 2013-03-26 Hysterical Sunset Limited Automated indexing for distributing event photography
US7587101B1 (en) 2005-02-28 2009-09-08 Adobe Systems Incorporated Facilitating computer-assisted tagging of object instances in digital images
KR100639988B1 (en) * 2005-04-21 2006-10-31 한국전자통신연구원 Method and apparatus for extraction of face feature
GB2426136B (en) * 2005-05-11 2008-10-01 Idan Zuta Messaging system and method
JP4653606B2 (en) * 2005-05-23 2011-03-16 株式会社東芝 Image recognition apparatus, method and program
EP1907980B1 (en) 2005-07-18 2013-01-02 Hysterical Sunset Limited Manually-assisted automated indexing of images using facial recognition
US8600174B2 (en) * 2005-09-28 2013-12-03 Facedouble, Inc. Method and system for attaching a metatag to a digital image
US8369570B2 (en) * 2005-09-28 2013-02-05 Facedouble, Inc. Method and system for tagging an image of an individual in a plurality of photos
US8311294B2 (en) 2009-09-08 2012-11-13 Facedouble, Inc. Image classification and information retrieval over wireless digital networks and the internet
US7587070B2 (en) * 2005-09-28 2009-09-08 Facedouble, Inc. Image classification and information retrieval over wireless digital networks and the internet
US7450740B2 (en) * 2005-09-28 2008-11-11 Facedouble, Inc. Image classification and information retrieval over wireless digital networks and the internet
US7599527B2 (en) * 2005-09-28 2009-10-06 Facedouble, Inc. Digital image search system and method
US7961937B2 (en) * 2005-10-26 2011-06-14 Hewlett-Packard Development Company, L.P. Pre-normalization data classification
US8218080B2 (en) * 2005-12-05 2012-07-10 Samsung Electronics Co., Ltd. Personal settings, parental control, and energy saving control of television with digital video camera
US8848057B2 (en) * 2005-12-05 2014-09-30 Samsung Electronics Co., Ltd. Home security applications for television with digital video cameras
US7804983B2 (en) * 2006-02-24 2010-09-28 Fotonation Vision Limited Digital image acquisition control and correction method and apparatus
JP2007233873A (en) * 2006-03-02 2007-09-13 Toshiba Corp Pattern recognition device and method therefor
US20070291104A1 (en) * 2006-06-07 2007-12-20 Wavetronex, Inc. Systems and methods of capturing high-resolution images of objects
FR2903331B1 (en) * 2006-07-07 2008-10-10 Oreal Generator for exciting a piezoelectric transducer
EP2050043A2 (en) 2006-08-02 2009-04-22 Fotonation Vision Limited Face recognition with combined pca-based datasets
JP2008059197A (en) * 2006-08-30 2008-03-13 Canon Inc Apparatus, method, computer program and storage medium for collating image
US8171237B2 (en) 2006-10-31 2012-05-01 Yahoo! Inc. Automatic association of reference data with primary process data based on time and shared identifier
CN101636745A (en) * 2006-12-29 2010-01-27 格斯图尔泰克股份有限公司 Manipulation of virtual objects using enhanced interactive system
JP5015270B2 (en) * 2007-02-15 2012-08-29 クアルコム,インコーポレイテッド Input using flashing electromagnetic radiation
EP2123008A4 (en) * 2007-03-05 2011-03-16 Tessera Tech Ireland Ltd Face categorization and annotation of a mobile phone contact list
WO2008137708A1 (en) * 2007-05-04 2008-11-13 Gesturetek, Inc. Camera-based user input for compact devices
US9414458B2 (en) 2007-05-24 2016-08-09 Federal Law Enforcement Development Services, Inc. LED light control assembly and system
US9100124B2 (en) 2007-05-24 2015-08-04 Federal Law Enforcement Development Services, Inc. LED Light Fixture
US9294198B2 (en) 2007-05-24 2016-03-22 Federal Law Enforcement Development Services, Inc. Pulsed light communication key
US9258864B2 (en) 2007-05-24 2016-02-09 Federal Law Enforcement Development Services, Inc. LED light control and management system
WO2008148022A2 (en) 2007-05-24 2008-12-04 Federal Law Enforcement Development Services, Inc. Building illumination apparatus with integrated communications, security and energy management
US9455783B2 (en) 2013-05-06 2016-09-27 Federal Law Enforcement Development Services, Inc. Network security and variable pulse wave form with continuous communication
US11265082B2 (en) 2007-05-24 2022-03-01 Federal Law Enforcement Development Services, Inc. LED light control assembly and system
US8713001B2 (en) * 2007-07-10 2014-04-29 Asim Roy Systems and related methods of user-guided searching
US8064697B2 (en) * 2007-10-12 2011-11-22 Microsoft Corporation Laplacian principal components analysis (LPCA)
US8160309B1 (en) * 2007-12-21 2012-04-17 Csr Technology Inc. Method, apparatus, and system for object recognition and classification
US8705810B2 (en) * 2007-12-28 2014-04-22 Intel Corporation Detecting and indexing characters of videos by NCuts and page ranking
US20090179919A1 (en) * 2008-01-16 2009-07-16 Lidestri James M Methods and Systems for Masking Visual Content
US8180112B2 (en) * 2008-01-21 2012-05-15 Eastman Kodak Company Enabling persistent recognition of individuals in images
US8750578B2 (en) 2008-01-29 2014-06-10 DigitalOptics Corporation Europe Limited Detecting facial expressions in digital images
US9143573B2 (en) 2008-03-20 2015-09-22 Facebook, Inc. Tag suggestions for images on online social networks
WO2009116049A2 (en) 2008-03-20 2009-09-24 Vizi Labs Relationship mapping employing multi-dimensional context including facial recognition
CN102007516A (en) * 2008-04-14 2011-04-06 汤姆森特许公司 Technique for automatically tracking an object
US8406531B2 (en) 2008-05-15 2013-03-26 Yahoo! Inc. Data access based on content of image recorded by a mobile device
US9753948B2 (en) * 2008-05-27 2017-09-05 Match.Com, L.L.C. Face search in personals
US8098894B2 (en) 2008-06-20 2012-01-17 Yahoo! Inc. Mobile imaging device as navigator
EP2291795A1 (en) * 2008-07-02 2011-03-09 C-True Ltd. Face recognition system and method
US9837013B2 (en) * 2008-07-09 2017-12-05 Sharp Laboratories Of America, Inc. Methods and systems for display correction
TW201006230A (en) * 2008-07-24 2010-02-01 Novatek Microelectronics Corp Static image presentation method
US8411963B2 (en) 2008-08-08 2013-04-02 The Nielsen Company (U.S.), Llc Methods and apparatus to count persons in a monitored environment
WO2010063463A2 (en) 2008-12-05 2010-06-10 Fotonation Ireland Limited Face recognition using face tracker classifier data
JP2010186216A (en) * 2009-02-10 2010-08-26 Seiko Epson Corp Specifying position of characteristic portion of face image
US8890773B1 (en) 2009-04-01 2014-11-18 Federal Law Enforcement Development Services, Inc. Visible light transceiver glasses
US8600100B2 (en) * 2009-04-16 2013-12-03 Sensory Logic, Inc. Method of assessing people's self-presentation and actions to evaluate personality type, behavioral tendencies, credibility, motivations and other insights through facial muscle activity and expressions
US8538093B2 (en) * 2009-04-20 2013-09-17 Mark Kodesh Method and apparatus for encouraging social networking through employment of facial feature comparison and matching
JP5709410B2 (en) * 2009-06-16 2015-04-30 キヤノン株式会社 Pattern processing apparatus and method, and program
US8326002B2 (en) * 2009-08-13 2012-12-04 Sensory Logic, Inc. Methods of facial coding scoring for optimally identifying consumers' responses to arrive at effective, incisive, actionable conclusions
US8332281B2 (en) 2009-09-02 2012-12-11 Image Holdings Method of displaying, managing and selling images in an event photography environment
WO2011033386A1 (en) 2009-09-16 2011-03-24 Image Holdings Method and system of displaying, managing and selling images in an event photography environment
CN102741862A (en) 2010-01-29 2012-10-17 诺基亚公司 Methods and apparatuses for facilitating object recognition
JP5524692B2 (en) * 2010-04-20 2014-06-18 富士フイルム株式会社 Information processing apparatus and method, and program
EP2453386B1 (en) * 2010-11-11 2019-03-06 LG Electronics Inc. Multimedia device, multiple image sensors having different types and method for controlling the same
US8819019B2 (en) * 2010-11-18 2014-08-26 Qualcomm Incorporated Systems and methods for robust pattern classification
US8543505B2 (en) 2011-01-14 2013-09-24 Federal Law Enforcement Development Services, Inc. Method of providing lumens and tracking of lumen consumption
US8836777B2 (en) 2011-02-25 2014-09-16 DigitalOptics Corporation Europe Limited Automatic detection of vertical gaze using an embedded imaging device
US20140093142A1 (en) * 2011-05-24 2014-04-03 Nec Corporation Information processing apparatus, information processing method, and information processing program
US8620088B2 (en) 2011-08-31 2013-12-31 The Nielsen Company (Us), Llc Methods and apparatus to count people in images
EP2756473A4 (en) 2011-09-12 2015-01-28 Intel Corp Facilitating television based interaction with social networking tools
AU2011253977B2 (en) 2011-12-12 2015-04-09 Canon Kabushiki Kaisha Method, system and apparatus for selecting an image captured on an image capture device
US8855369B2 (en) 2012-06-22 2014-10-07 Microsoft Corporation Self learning face recognition using depth based tracking for database generation and update
US20140009609A1 (en) * 2012-07-06 2014-01-09 Conexant Systems, Inc. Video door monitor using smarttv with voice wakeup
US8559684B1 (en) * 2012-08-15 2013-10-15 Google Inc. Facial recognition similarity threshold adjustment
US9265112B2 (en) 2013-03-13 2016-02-16 Federal Law Enforcement Development Services, Inc. LED light control and management system
US8873838B2 (en) * 2013-03-14 2014-10-28 Google Inc. Method and apparatus for characterizing an image
EP2973382B1 (en) 2013-03-15 2019-07-17 Socure Inc. Risk assessment using social networking data
CN103207990A (en) * 2013-03-26 2013-07-17 苏州福丰科技有限公司 Person recognition system for police use based on a mobile terminal
RU2536677C2 (en) * 2013-04-09 2014-12-27 ООО "НеоБИТ" Method of pattern recognition in digital image
EP2843589B1 (en) 2013-08-29 2019-03-13 Alcatel Lucent A method and platform for sending a message to a communication device associated with a moving object
US9589179B2 (en) * 2013-12-19 2017-03-07 Microsoft Technology Licensing, Llc Object detection techniques
CN103679161B (en) * 2014-01-03 2017-01-04 苏州大学 Face recognition method and device
CN103679162B (en) * 2014-01-03 2017-07-14 苏州大学 Face recognition method and system
US20150198941A1 (en) 2014-01-15 2015-07-16 John C. Pederson Cyber Life Electronic Networking and Commerce Operating Exchange
US9147117B1 (en) * 2014-06-11 2015-09-29 Socure Inc. Analyzing facial recognition data and social network data for user authentication
US20160149547A1 (en) * 2014-11-20 2016-05-26 Intel Corporation Automated audio adjustment
US20170048953A1 (en) 2015-08-11 2017-02-16 Federal Law Enforcement Development Services, Inc. Programmable switch and system
US9977950B2 (en) * 2016-01-27 2018-05-22 Intel Corporation Decoy-based matching system for facial recognition
US10817722B1 (en) 2017-03-20 2020-10-27 Cross Match Technologies, Inc. System for presentation attack detection in an iris or face scanner
US11531756B1 (en) 2017-03-20 2022-12-20 Hid Global Corporation Apparatus for directing presentation attack detection in biometric scanners
US11711638B2 (en) 2020-06-29 2023-07-25 The Nielsen Company (Us), Llc Audience monitoring systems and related methods
CN112200944B (en) * 2020-09-30 2023-01-13 广州市果豆科技有限责任公司 Barrier gate control method and system combining face recognition
CN112183394A (en) * 2020-09-30 2021-01-05 江苏智库智能科技有限公司 Face recognition method and device and intelligent security management system
US11860704B2 (en) 2021-08-16 2024-01-02 The Nielsen Company (Us), Llc Methods and apparatus to determine user presence
US11758223B2 (en) 2021-12-23 2023-09-12 The Nielsen Company (Us), Llc Apparatus, systems, and methods for user presence detection for audience monitoring

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4636862A (en) * 1984-02-13 1987-01-13 Kokusai Denshin Denwa Kabushiki Kaisha System for detecting vector of motion of moving objects on picture
US4651289A (en) * 1982-01-29 1987-03-17 Tokyo Shibaura Denki Kabushiki Kaisha Pattern recognition apparatus and method for making same
US4752957A (en) * 1983-09-07 1988-06-21 Kabushiki Kaisha Toshiba Apparatus and method for recognizing unknown patterns
US4838644A (en) * 1987-09-15 1989-06-13 The United States Of America As Represented By The United States Department Of Energy Position, rotation, and intensity invariant recognizing method
US4858000A (en) * 1988-09-14 1989-08-15 A. C. Nielsen Company Image recognition audience measurement system and method
US4926491A (en) * 1984-09-17 1990-05-15 Kabushiki Kaisha Toshiba Pattern recognition device
US4930011A (en) * 1988-08-02 1990-05-29 A. C. Nielsen Company Method and apparatus for identifying individual members of a marketing and viewing audience
US4998286A (en) * 1987-02-13 1991-03-05 Olympus Optical Co., Ltd. Correlation operational apparatus for multi-dimensional images
US5031228A (en) * 1988-09-14 1991-07-09 A. C. Nielsen Company Image recognition system and method

Non-Patent Citations (1)

Title
L. Sirovich et al., 1987 Optical Society of America, "Low-dimensional procedure for the characterization of human faces", pp. 519-524. *

Cited By (204)

Publication number Priority date Publication date Assignee Title
USRE49387E1 (en) 1991-12-23 2023-01-24 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
USRE48056E1 (en) 1991-12-23 2020-06-16 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
USRE47908E1 (en) 1991-12-23 2020-03-17 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
USRE46310E1 (en) 1991-12-23 2017-02-14 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US8046313B2 (en) 1991-12-23 2011-10-25 Hoffberg Steven M Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US20080069403A1 (en) * 1995-06-07 2008-03-20 Automotive Technologies International, Inc. Face Monitoring System and Method for Vehicular Occupants
US7570785B2 (en) 1995-06-07 2009-08-04 Automotive Technologies International, Inc. Face monitoring system and method for vehicular occupants
US6456320B2 (en) * 1997-05-27 2002-09-24 Sanyo Electric Co., Ltd. Monitoring system and imaging system
US6445810B2 (en) * 1997-08-01 2002-09-03 Interval Research Corporation Method and apparatus for personnel detection and tracking
US6628811B1 (en) * 1998-03-19 2003-09-30 Matsushita Electric Industrial Co. Ltd. Method and apparatus for recognizing image pattern, method and apparatus for judging identity of image patterns, recording medium for recording the pattern recognizing method and recording medium for recording the pattern identity judging method
US6501857B1 (en) * 1999-07-20 2002-12-31 Craig Gotsman Method and system for detecting and classifying objects in an image
US6628834B2 (en) * 1999-07-20 2003-09-30 Hewlett-Packard Development Company, L.P. Template matching system for images
US6597801B1 (en) * 1999-09-16 2003-07-22 Hewlett-Packard Development Company L.P. Method for object registration via selection of models with dynamically ordered features
US6618490B1 (en) * 1999-09-16 2003-09-09 Hewlett-Packard Development Company, L.P. Method for efficiently registering object models in images via dynamic ordering of features
US6795567B1 (en) 1999-09-16 2004-09-21 Hewlett-Packard Development Company, L.P. Method for efficiently tracking object models in video sequences via dynamic ordering of features
US7974714B2 (en) 1999-10-05 2011-07-05 Steven Mark Hoffberg Intelligent electronic appliance system and method
US7688225B1 (en) 2000-01-14 2010-03-30 Haynes Michael N Method for managing a parking lot
US6816085B1 (en) 2000-01-14 2004-11-09 Michael N. Haynes Method for managing a parking lot
US7652593B1 (en) 2000-01-14 2010-01-26 Haynes Michael N Method for managing a parking lot
US6535620B2 (en) * 2000-03-10 2003-03-18 Sarnoff Corporation Method and apparatus for qualitative spatiotemporal data processing
US6865296B2 (en) * 2000-06-06 2005-03-08 Matsushita Electric Industrial Co., Ltd. Pattern recognition method, pattern check method and pattern recognition apparatus as well as pattern check apparatus using the same methods
US20020018596A1 (en) * 2000-06-06 2002-02-14 Kenji Nagao Pattern recognition method, pattern check method and pattern recognition apparatus as well as pattern check apparatus using the same methods
US6810135B1 (en) 2000-06-29 2004-10-26 Trw Inc. Optimized human presence detection through elimination of background interference
US6904347B1 (en) 2000-06-29 2005-06-07 Trw Inc. Human presence detection, identification and tracking using a facial feature image sensing system for airbag deployment
US20020006226A1 (en) * 2000-07-12 2002-01-17 Minolta Co., Ltd. Shade component removing apparatus and shade component removing method for removing shade in image
US6975763B2 (en) * 2000-07-12 2005-12-13 Minolta Co., Ltd. Shade component removing apparatus and shade component removing method for removing shade in image
US6724920B1 (en) 2000-07-21 2004-04-20 Trw Inc. Application of human facial features recognition to automobile safety
US7110570B1 (en) 2000-07-21 2006-09-19 Trw Inc. Application of human facial features recognition to automobile security and convenience
US7068301B2 (en) 2000-09-11 2006-06-27 Pinotage L.L.C. System and method for obtaining and utilizing maintenance information
US20020122583A1 (en) * 2000-09-11 2002-09-05 Thompson Robert Lee System and method for obtaining and utilizing maintenance information
US6529620B2 (en) 2000-09-11 2003-03-04 Pinotage, L.L.C. System and method for obtaining and utilizing maintenance information
US20020055957A1 (en) * 2000-11-28 2002-05-09 Hiroyuki Ohsawa Access system
US7188307B2 (en) * 2000-11-28 2007-03-06 Canon Kabushiki Kaisha Access system
US20020126897A1 (en) * 2000-12-01 2002-09-12 Yugo Ueda Motion information recognition system
US6965694B2 (en) * 2000-12-01 2005-11-15 Honda Giken Kogyo Kabushiki Kaisha Motion information recognition system
US20020067856A1 (en) * 2000-12-01 2002-06-06 Iwao Fujii Image recognition apparatus, image recognition method, and recording medium
US6690414B2 (en) * 2000-12-12 2004-02-10 Koninklijke Philips Electronics N.V. Method and apparatus to reduce false alarms in exit/entrance situations for residential security monitoring
US20040208361A1 (en) * 2001-03-29 2004-10-21 Vasile Buzuloiu Automated detection of pornographic images
US7103215B2 (en) 2001-03-29 2006-09-05 Potomedia Technologies Llc Automated detection of pornographic images
US6873743B2 (en) 2001-03-29 2005-03-29 Fotonation Holdings, Llc Method and apparatus for the automatic real-time detection and correction of red-eye defects in batches of digital images or in handheld appliances
US6904168B1 (en) 2001-03-29 2005-06-07 Fotonation Holdings, Llc Workflow system for detection and classification of images suspected as pornographic
US7085774B2 (en) 2001-08-30 2006-08-01 Infonox On The Web Active profiling system for tracking and quantifying customer conversion efficiency
US7054468B2 (en) 2001-12-03 2006-05-30 Honda Motor Co., Ltd. Face recognition using kernel fisherfaces
US20040017932A1 (en) * 2001-12-03 2004-01-29 Ming-Hsuan Yang Face recognition using kernel fisherfaces
US7379602B2 (en) 2002-07-29 2008-05-27 Honda Giken Kogyo Kabushiki Kaisha Extended Isomap using Fisher Linear Discriminant and Kernel Fisher Linear Discriminant
US20040034611A1 (en) * 2002-08-13 2004-02-19 Samsung Electronics Co., Ltd. Face recognition method using artificial neural network and apparatus thereof
US7295687B2 (en) * 2002-08-13 2007-11-13 Samsung Electronics Co., Ltd. Face recognition method using artificial neural network and apparatus thereof
US7512571B2 (en) 2002-08-29 2009-03-31 Paul Rudolf Associative memory device and method based on wave propagation
US20040193789A1 (en) * 2002-08-29 2004-09-30 Paul Rudolf Associative memory device and method based on wave propagation
US7362368B2 (en) 2003-06-26 2008-04-22 Fotonation Vision Limited Perfecting the optics within a digital image acquisition device using face detection
US20060204056A1 (en) * 2003-06-26 2006-09-14 Eran Steinberg Perfecting the effect of flash within an image acquisition devices using face detection
US7912245B2 (en) 2003-06-26 2011-03-22 Tessera Technologies Ireland Limited Method of improving orientation and color balance of digital images using face detection information
US20070160307A1 (en) * 2003-06-26 2007-07-12 Fotonation Vision Limited Modification of Viewing Parameters for Digital Images Using Face Detection Information
US20110025886A1 (en) * 2003-06-26 2011-02-03 Tessera Technologies Ireland Limited Perfecting the Effect of Flash within an Image Acquisition Devices Using Face Detection
US7269292B2 (en) 2003-06-26 2007-09-11 Fotonation Vision Limited Digital image adjustable compression and resolution using face detection information
US20060215924A1 (en) * 2003-06-26 2006-09-28 Eran Steinberg Perfecting of digital image rendering parameters within rendering devices using face detection
US7315630B2 (en) 2003-06-26 2008-01-01 Fotonation Vision Limited Perfecting of digital image rendering parameters within rendering devices using face detection
US7317815B2 (en) 2003-06-26 2008-01-08 Fotonation Vision Limited Digital image processing composition using face detection information
US20060204054A1 (en) * 2003-06-26 2006-09-14 Eran Steinberg Digital image processing composition using face detection information
US20080019565A1 (en) * 2003-06-26 2008-01-24 Fotonation Vision Limited Digital Image Adjustable Compression and Resolution Using Face Detection Information
US20110013044A1 (en) * 2003-06-26 2011-01-20 Tessera Technologies Ireland Limited Perfecting the effect of flash within an image acquisition devices using face detection
US20080043122A1 (en) * 2003-06-26 2008-02-21 Fotonation Vision Limited Perfecting the Effect of Flash within an Image Acquisition Devices Using Face Detection
US20060204055A1 (en) * 2003-06-26 2006-09-14 Eran Steinberg Digital image processing using face detection information
US20060204057A1 (en) * 2003-06-26 2006-09-14 Eran Steinberg Digital image adjustable compression and resolution using face detection information
US8005265B2 (en) 2003-06-26 2011-08-23 Tessera Technologies Ireland Limited Digital image processing using face detection information
US20110075894A1 (en) * 2003-06-26 2011-03-31 Tessera Technologies Ireland Limited Digital Image Processing Using Face Detection Information
US20060204110A1 (en) * 2003-06-26 2006-09-14 Eran Steinberg Detecting orientation of digital images using face detection information
US7860274B2 (en) 2003-06-26 2010-12-28 Fotonation Vision Limited Digital image processing using face detection information
US7853043B2 (en) 2003-06-26 2010-12-14 Tessera Technologies Ireland Limited Digital image processing using face detection information
US20080143854A1 (en) * 2003-06-26 2008-06-19 Fotonation Vision Limited Perfecting the optics within a digital image acquisition device using face detection
US9692964B2 (en) 2003-06-26 2017-06-27 Fotonation Limited Modification of post-viewing parameters for digital images using image region or feature information
US20060203108A1 (en) * 2003-06-26 2006-09-14 Eran Steinberg Perfecting the optics within a digital image acquisition device using face detection
US7440593B1 (en) 2003-06-26 2008-10-21 Fotonation Vision Limited Method of improving orientation and color balance of digital images using face detection information
US9129381B2 (en) 2003-06-26 2015-09-08 Fotonation Limited Modification of post-viewing parameters for digital images using image region or feature information
US9053545B2 (en) 2003-06-26 2015-06-09 Fotonation Limited Modification of viewing parameters for digital images using face detection information
US7848549B2 (en) 2003-06-26 2010-12-07 Fotonation Vision Limited Digital image processing using face detection information
US7466866B2 (en) 2003-06-26 2008-12-16 Fotonation Vision Limited Digital image adjustable compression and resolution using face detection information
US8989453B2 (en) 2003-06-26 2015-03-24 Fotonation Limited Digital image processing using face detection information
US8948468B2 (en) 2003-06-26 2015-02-03 Fotonation Limited Modification of viewing parameters for digital images using face detection information
US7844135B2 (en) 2003-06-26 2010-11-30 Tessera Technologies Ireland Limited Detecting orientation of digital images using face detection information
US8675991B2 (en) 2003-06-26 2014-03-18 DigitalOptics Corporation Europe Limited Modification of post-viewing parameters for digital images using region or feature information
US7471846B2 (en) 2003-06-26 2008-12-30 Fotonation Vision Limited Perfecting the effect of flash within an image acquisition devices using face detection
US20090003708A1 (en) * 2003-06-26 2009-01-01 Fotonation Ireland Limited Modification of post-viewing parameters for digital images using image region or feature information
US20090052750A1 (en) * 2003-06-26 2009-02-26 Fotonation Vision Limited Digital Image Processing Using Face Detection Information
US20090052749A1 (en) * 2003-06-26 2009-02-26 Fotonation Vision Limited Digital Image Processing Using Face Detection Information
US7844076B2 (en) 2003-06-26 2010-11-30 Fotonation Vision Limited Digital image processing using face detection and skin tone information
US20060203107A1 (en) * 2003-06-26 2006-09-14 Eran Steinberg Perfecting of digital image capture parameters within acquisition devices using face detection
US8498452B2 (en) 2003-06-26 2013-07-30 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US20090141144A1 (en) * 2003-06-26 2009-06-04 Fotonation Vision Limited Digital Image Adjustable Compression and Resolution Using Face Detection Information
US20100271499A1 (en) * 2003-06-26 2010-10-28 Fotonation Ireland Limited Perfecting of Digital Image Capture Parameters Within Acquisition Devices Using Face Detection
US7809162B2 (en) 2003-06-26 2010-10-05 Fotonation Vision Limited Digital image processing using face detection information
US7565030B2 (en) 2003-06-26 2009-07-21 Fotonation Vision Limited Detecting orientation of digital images using face detection information
US20070110305A1 (en) * 2003-06-26 2007-05-17 Fotonation Vision Limited Digital Image Processing Using Face Detection and Skin Tone Information
US7574016B2 (en) 2003-06-26 2009-08-11 Fotonation Vision Limited Digital image processing using face detection information
US8326066B2 (en) 2003-06-26 2012-12-04 DigitalOptics Corporation Europe Limited Digital image adjustable compression and resolution using face detection information
US20100165140A1 (en) * 2003-06-26 2010-07-01 Fotonation Vision Limited Digital image adjustable compression and resolution using face detection information
US7616233B2 (en) 2003-06-26 2009-11-10 Fotonation Vision Limited Perfecting of digital image capture parameters within acquisition devices using face detection
US8224108B2 (en) 2003-06-26 2012-07-17 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US8160312B2 (en) 2003-06-26 2012-04-17 DigitalOptics Corporation Europe Limited Perfecting the effect of flash within an image acquisition devices using face detection
US7630527B2 (en) 2003-06-26 2009-12-08 Fotonation Ireland Limited Method of improving orientation and color balance of digital images using face detection information
US7634109B2 (en) 2003-06-26 2009-12-15 Fotonation Ireland Limited Digital image processing using face detection information
US7702136B2 (en) 2003-06-26 2010-04-20 Fotonation Vision Limited Perfecting the effect of flash within an image acquisition devices using face detection
US20100092039A1 (en) * 2003-06-26 2010-04-15 Eran Steinberg Digital Image Processing Using Face Detection Information
US8155401B2 (en) 2003-06-26 2012-04-10 DigitalOptics Corporation Europe Limited Perfecting the effect of flash within an image acquisition devices using face detection
US7693311B2 (en) 2003-06-26 2010-04-06 Fotonation Vision Limited Perfecting the effect of flash within an image acquisition devices using face detection
US7684630B2 (en) 2003-06-26 2010-03-23 Fotonation Vision Limited Digital image adjustable compression and resolution using face detection information
US20100039525A1 (en) * 2003-06-26 2010-02-18 Fotonation Ireland Limited Perfecting of Digital Image Capture Parameters Within Acquisition Devices Using Face Detection
US8131016B2 (en) 2003-06-26 2012-03-06 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US20100054549A1 (en) * 2003-06-26 2010-03-04 Fotonation Vision Limited Digital Image Processing Using Face Detection Information
US20100054533A1 (en) * 2003-06-26 2010-03-04 Fotonation Vision Limited Digital Image Processing Using Face Detection Information
US8055090B2 (en) 2003-06-26 2011-11-08 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US20080317357A1 (en) * 2003-08-05 2008-12-25 Fotonation Ireland Limited Method of gathering visual meta data using a reference image
US8330831B2 (en) 2003-08-05 2012-12-11 DigitalOptics Corporation Europe Limited Method of gathering visual meta data using a reference image
US20050060738A1 (en) * 2003-09-15 2005-03-17 Mitsubishi Digital Electronics America, Inc. Passive enforcement method for media ratings
US7388971B2 (en) 2003-10-23 2008-06-17 Northrop Grumman Corporation Robust and low cost optical system for sensing stress, emotion and deception in human subjects
US20050089206A1 (en) * 2003-10-23 2005-04-28 Rice Robert R. Robust and low cost optical system for sensing stress, emotion and deception in human subjects
US7660445B2 (en) * 2003-11-19 2010-02-09 Eastman Kodak Company Method for selecting an emphasis image from an image collection based upon content recognition
US7382903B2 (en) * 2003-11-19 2008-06-03 Eastman Kodak Company Method for selecting an emphasis image from an image collection based upon content recognition
US20050105803A1 (en) * 2003-11-19 2005-05-19 Ray Lawrence A. Method for selecting an emphasis image from an image collection based upon content recognition
US20050192760A1 (en) * 2003-12-16 2005-09-01 Dunlap Susan C. System and method for plant identification
US8577616B2 (en) 2003-12-16 2013-11-05 Aerulean Plant Identification Systems, Inc. System and method for plant identification
US7331671B2 (en) 2004-03-29 2008-02-19 Delphi Technologies, Inc. Eye tracking method based on correlation and detected eye movement
US7362885B2 (en) 2004-04-20 2008-04-22 Delphi Technologies, Inc. Object tracking and eye state identification method
US7227567B1 (en) 2004-09-14 2007-06-05 Avaya Technology Corp. Customizable background for video communications
US7050084B1 (en) 2004-09-24 2006-05-23 Avaya Technology Corp. Camera frame display
US8320641B2 (en) 2004-10-28 2012-11-27 DigitalOptics Corporation Europe Limited Method and apparatus for red-eye detection using preview or other reference images
US20110221936A1 (en) * 2004-10-28 2011-09-15 Tessera Technologies Ireland Limited Method and Apparatus for Detection and Correction of Multiple Image Defects Within Digital Images Using Preview or Other Reference Images
US8135184B2 (en) 2004-10-28 2012-03-13 DigitalOptics Corporation Europe Limited Method and apparatus for detection and correction of multiple image defects within digital images using preview or other reference images
US7953251B1 (en) 2004-10-28 2011-05-31 Tessera Technologies Ireland Limited Method and apparatus for detection and correction of flash-induced eye defects within digital images using preview or other reference images
US7706576B1 (en) 2004-12-28 2010-04-27 Avaya Inc. Dynamic video equalization of images using face-tracking
US7460150B1 (en) 2005-03-14 2008-12-02 Avaya Inc. Using gaze detection to determine an area of interest within a scene
US7564476B1 (en) 2005-05-13 2009-07-21 Avaya Inc. Prevent video calls based on appearance
US7962629B2 (en) 2005-06-17 2011-06-14 Tessera Technologies Ireland Limited Method for establishing a paired connection between media devices
US20110060836A1 (en) * 2005-06-17 2011-03-10 Tessera Technologies Ireland Limited Method for Establishing a Paired Connection Between Media Devices
US20070064208A1 (en) * 2005-09-07 2007-03-22 Ablaze Development Corporation Aerial support structure and method for image capture
US7650034B2 (en) 2005-12-14 2010-01-19 Delphi Technologies, Inc. Method of locating a human eye in a video image
US20080316328A1 (en) * 2005-12-27 2008-12-25 Fotonation Ireland Limited Foreground/background separation using reference images
US8593542B2 (en) 2005-12-27 2013-11-26 DigitalOptics Corporation Europe Limited Foreground/background separation using reference images
US7668304B2 (en) 2006-01-25 2010-02-23 Avaya Inc. Display hierarchy of participants during phone call
US20070172047A1 (en) * 2006-01-25 2007-07-26 Avaya Technology Llc Display hierarchy of participants during phone call
US8433050B1 (en) 2006-02-06 2013-04-30 Avaya Inc. Optimizing conference quality with diverse codecs
US8682097B2 (en) 2006-02-14 2014-03-25 DigitalOptics Corporation Europe Limited Digital image enhancement with reference images
US20080317378A1 (en) * 2006-02-14 2008-12-25 Fotonation Ireland Limited Digital image enhancement with reference images
US8165282B1 (en) 2006-05-25 2012-04-24 Avaya Inc. Exploiting facial characteristics for improved agent selection
US20080013798A1 (en) * 2006-06-12 2008-01-17 Fotonation Vision Limited Advances in extending the aam techniques from grayscale to color images
US7965875B2 (en) 2006-06-12 2011-06-21 Tessera Technologies Ireland Limited Advances in extending the AAM techniques from grayscale to color images
US7620216B2 (en) 2006-06-14 2009-11-17 Delphi Technologies, Inc. Method of tracking a human eye in a video image
US8055029B2 (en) 2006-08-11 2011-11-08 DigitalOptics Corporation Europe Limited Real-time face tracking in a digital image acquisition device
US20110129121A1 (en) * 2006-08-11 2011-06-02 Tessera Technologies Ireland Limited Real-time face tracking in a digital image acquisition device
US7864990B2 (en) 2006-08-11 2011-01-04 Tessera Technologies Ireland Limited Real-time face tracking in a digital image acquisition device
US8050465B2 (en) 2006-08-11 2011-11-01 DigitalOptics Corporation Europe Limited Real-time face tracking in a digital image acquisition device
US8509496B2 (en) 2006-08-11 2013-08-13 DigitalOptics Corporation Europe Limited Real-time face tracking with reference images
US20100060727A1 (en) * 2006-08-11 2010-03-11 Eran Steinberg Real-time face tracking with reference images
US7916897B2 (en) 2006-08-11 2011-03-29 Tessera Technologies Ireland Limited Face tracking for controlling imaging parameters
US8385610B2 (en) 2006-08-11 2013-02-26 DigitalOptics Corporation Europe Limited Face tracking for controlling imaging parameters
US20090208056A1 (en) * 2006-08-11 2009-08-20 Fotonation Vision Limited Real-time face tracking in a digital image acquisition device
US8270674B2 (en) 2006-08-11 2012-09-18 DigitalOptics Corporation Europe Limited Real-time face tracking in a digital image acquisition device
US20110026780A1 (en) * 2006-08-11 2011-02-03 Tessera Technologies Ireland Limited Face tracking for controlling imaging parameters
US20080267461A1 (en) * 2006-08-11 2008-10-30 Fotonation Ireland Limited Real-time face tracking in a digital image acquisition device
US7620218B2 (en) 2006-08-11 2009-11-17 Fotonation Ireland Limited Real-time face tracking with reference images
US20080089561A1 (en) * 2006-10-11 2008-04-17 Tong Zhang Face-based image clustering
US8031914B2 (en) 2006-10-11 2011-10-04 Hewlett-Packard Development Company, L.P. Face-based image clustering
US20080175481A1 (en) * 2007-01-18 2008-07-24 Stefan Petrescu Color Segmentation
US8055067B2 (en) 2007-01-18 2011-11-08 DigitalOptics Corporation Europe Limited Color segmentation
US8224039B2 (en) 2007-02-28 2012-07-17 DigitalOptics Corporation Europe Limited Separating a directional lighting variability in statistical face modelling based on texture space decomposition
US20080205712A1 (en) * 2007-02-28 2008-08-28 Fotonation Vision Limited Separating Directional Lighting Variability in Statistical Face Modelling Based on Texture Space Decomposition
US8509561B2 (en) 2007-02-28 2013-08-13 DigitalOptics Corporation Europe Limited Separating directional lighting variability in statistical face modelling based on texture space decomposition
US8923564B2 (en) 2007-03-05 2014-12-30 DigitalOptics Corporation Europe Limited Face searching and detection in a digital image acquisition device
US8649604B2 (en) 2007-03-05 2014-02-11 DigitalOptics Corporation Europe Limited Face searching and detection in a digital image acquisition device
US9224034B2 (en) 2007-03-05 2015-12-29 Fotonation Limited Face searching and detection in a digital image acquisition device
US8503800B2 (en) 2007-03-05 2013-08-06 DigitalOptics Corporation Europe Limited Illumination detection using classifier chains
US20100272363A1 (en) * 2007-03-05 2010-10-28 Fotonation Vision Limited Face searching and detection in a digital image acquisition device
US7916971B2 (en) 2007-05-24 2011-03-29 Tessera Technologies Ireland Limited Image processing method and apparatus
US8515138B2 (en) 2007-05-24 2013-08-20 DigitalOptics Corporation Europe Limited Image processing method and apparatus
US20110234847A1 (en) * 2007-05-24 2011-09-29 Tessera Technologies Ireland Limited Image Processing Method and Apparatus
US8494232B2 (en) 2007-05-24 2013-07-23 DigitalOptics Corporation Europe Limited Image processing method and apparatus
US20080292193A1 (en) * 2007-05-24 2008-11-27 Fotonation Vision Limited Image Processing Method and Apparatus
US20110235912A1 (en) * 2007-05-24 2011-09-29 Tessera Technologies Ireland Limited Image Processing Method and Apparatus
US9767539B2 (en) 2007-06-21 2017-09-19 Fotonation Limited Image capture device with contemporaneous image correction mechanism
US20080317379A1 (en) * 2007-06-21 2008-12-25 Fotonation Ireland Limited Digital image enhancement with reference images
US8896725B2 (en) 2007-06-21 2014-11-25 Fotonation Limited Image capture device with contemporaneous reference image capture mechanism
US8213737B2 (en) 2007-06-21 2012-07-03 DigitalOptics Corporation Europe Limited Digital image enhancement with reference images
US8155397B2 (en) 2007-09-26 2012-04-10 DigitalOptics Corporation Europe Limited Face tracking in a camera processor
US20090080713A1 (en) * 2007-09-26 2009-03-26 Fotonation Vision Limited Face tracking in a camera processor
US20090103909A1 (en) * 2007-10-17 2009-04-23 Live Event Media, Inc. Aerial camera support structure
US20090141947A1 (en) * 2007-11-29 2009-06-04 Volodymyr Kyyko Method and system of person identification by facial image
US8064653B2 (en) 2007-11-29 2011-11-22 Viewdle, Inc. Method and system of person identification by facial image
US8494286B2 (en) 2008-02-05 2013-07-23 DigitalOptics Corporation Europe Limited Face detection in mid-shot digital images
US7855737B2 (en) 2008-03-26 2010-12-21 Fotonation Ireland Limited Method of making a digital camera image of a scene including the camera user
US20110053654A1 (en) * 2008-03-26 2011-03-03 Tessera Technologies Ireland Limited Method of Making a Digital Camera Image of a Scene Including the Camera User
US8243182B2 (en) 2008-03-26 2012-08-14 DigitalOptics Corporation Europe Limited Method of making a digital camera image of a scene including the camera user
US20090244296A1 (en) * 2008-03-26 2009-10-01 Fotonation Ireland Limited Method of making a digital camera image of a scene including the camera user
US20100026832A1 (en) * 2008-07-30 2010-02-04 Mihai Ciuc Automatic face and skin beautification using face detection
US8345114B2 (en) 2008-07-30 2013-01-01 DigitalOptics Corporation Europe Limited Automatic face and skin beautification using face detection
US9007480B2 (en) 2008-07-30 2015-04-14 Fotonation Limited Automatic face and skin beautification using face detection
US8384793B2 (en) 2008-07-30 2013-02-26 DigitalOptics Corporation Europe Limited Automatic face and skin beautification using face detection
US20100026831A1 (en) * 2008-07-30 2010-02-04 Fotonation Ireland Limited Automatic face and skin beautification using face detection
US10032068B2 (en) 2009-10-02 2018-07-24 Fotonation Limited Method of making a digital camera image of a first scene with a superimposed second scene
US8379917B2 (en) 2009-10-02 2013-02-19 DigitalOptics Corporation Europe Limited Face recognition performance using additional image features
US20110081052A1 (en) * 2009-10-02 2011-04-07 Fotonation Ireland Limited Face recognition performance using additional image features
US8251597B2 (en) 2009-10-16 2012-08-28 Wavecam Media, Inc. Aerial support structure for capturing an image of a target
US20110091196A1 (en) * 2009-10-16 2011-04-21 Wavecam Media, Inc. Aerial support structure for capturing an image of a target
US20130142399A1 (en) * 2011-12-04 2013-06-06 King Saud University Face recognition using multilayered discriminant analysis
US9355303B2 (en) * 2011-12-04 2016-05-31 King Saud University Face recognition using multilayered discriminant analysis

Also Published As

Publication number Publication date
EP0555380A4 (en) 1994-07-20
US5164992A (en) 1992-11-17
WO1992008202A1 (en) 1992-05-14
EP0555380B1 (en) 1998-12-09
DE69130616T2 (en) 1999-05-06
EP0555380A1 (en) 1993-08-18
SG48965A1 (en) 1998-05-18
AU9037591A (en) 1992-05-26
ATE174441T1 (en) 1998-12-15
DE69130616D1 (en) 1999-01-21

Similar Documents

Publication Publication Date Title
USRE36041E (en) Face recognition system
Föckler et al. PhoneGuide: museum guidance supported by on-device object recognition on mobile phones
US6681032B2 (en) Real-time facial recognition and verification system
Turk et al. Eigenfaces for recognition
US6807286B1 (en) Object recognition using binary image quantization and hough kernels
US7596247B2 (en) Method and apparatus for object recognition using probability models
Steffens et al. PersonSpotter: fast and robust system for human detection, tracking and recognition
US11443454B2 (en) Method for estimating the pose of a camera in the frame of reference of a three-dimensional scene, device, augmented reality system and computer program therefor
US9367730B2 (en) Method and system for automated face detection and recognition
US7167576B2 (en) Method and apparatus for measuring dwell time of objects in an environment
US8977010B2 (en) Method for discriminating between a real face and a two-dimensional image of the face in a biometric detection process
US20030059124A1 (en) Real-time facial recognition and verification system
CN110110601A Video pedestrian re-identification method and device based on multi-space attention model
Wei et al. Face detection for image annotation
Fukui et al. Facial feature point extraction method based on combination of shape extraction and pattern matching
US8094971B2 (en) Method and system for automatically determining the orientation of a digital image
CN110399835A Method, apparatus and system for analyzing personnel residence time
Foresti et al. Face detection for visual surveillance
Hashemi A survey of visual attention models
Rao Implementation of Low Cost IoT Based Intruder Detection System by Face Recognition using Machine Learning
Razalli et al. Real-time face tracking application with embedded facial age range estimation algorithm
CN113255549A (en) Intelligent recognition method and system for pennisseum hunting behavior state
Sainthillier et al. Skin capillary network recognition and analysis by means of neural algorithms
Imanuddin et al. A moving human detection and tracking using combination of HOG and color histogram
Monwar et al. A real-time face recognition approach from video sequence using skin color model and eigenface method

Legal Events

Date Code Title Description
FPAY Fee payment: Year of fee payment: 8

SULP Surcharge for late payment

FEPP Fee payment procedure: PETITION RELATED TO MAINTENANCE FEES FILED (ORIGINAL EVENT CODE: PMFP); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FEPP Fee payment procedure: PETITION RELATED TO MAINTENANCE FEES GRANTED (ORIGINAL EVENT CODE: PMFG); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment: Year of fee payment: 12

SULP Surcharge for late payment

PRDP Patent reinstated due to the acceptance of a late maintenance fee: Effective date: 19990112

FEPP Fee payment procedure: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY