WO2016077286A1 - System and method for device position classification - Google Patents

System and method for device position classification

Info

Publication number
WO2016077286A1
WO2016077286A1 (PCT/US2015/059854)
Authority
WO
WIPO (PCT)
Prior art keywords
module
rotation matrix
signal
characteristic
operable
Prior art date
Application number
PCT/US2015/059854
Other languages
French (fr)
Inventor
Hemabh Shekhar
Original Assignee
Invensense, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US14/537,568 (published as US20160131484A1)
Application filed by Invensense, Inc. filed Critical Invensense, Inc.
Publication of WO2016077286A1 publication Critical patent/WO2016077286A1/en

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/10 Navigation by using measurements of speed or acceleration
    • G01C21/12 Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Inertial navigation combined with non-inertial navigation instruments
    • G01C21/1654 Inertial navigation combined with non-inertial navigation instruments with electromagnetic compass
    • G01C22/00 Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • G01C22/006 Pedometers

Definitions

  • Consumer devices do not classify device position, for example relative to a user, in an efficient and reliable manner.
  • hand-held and/or wearable consumer devices do not efficiently and reliably determine whether and/or how a device is being held or otherwise carried by a user.
  • Figure 1 shows a block diagram of an example electronic device comprising position classification capability, in accordance with various aspects of the present disclosure.
  • Figures 2A-2B show example signal plots, in accordance with various aspects of the present disclosure.
  • Figure 3 shows a chart of two signal processing features, in accordance with various aspects of the present disclosure.
  • Figure 4 shows an example processing system for determining a position of a device, in accordance with various aspects of the present disclosure.
  • Figure 5 shows an example processing system for determining a position of a device, for example incorporating signal selection, in accordance with various aspects of the present disclosure.
  • Figure 6 shows an example processing system for determining a position of a device, for example incorporating non-inertial sensor data, in accordance with various aspects of the present disclosure.
  • Figure 7 shows an example processing system for determining a position of a device, for example incorporating orientation, in accordance with various aspects of the present disclosure.
  • Figure 8 shows an example processing system for determining a position of a device, for example analyzing multiple characteristics of a signal, in accordance with various aspects of the present disclosure.
  • Figure 9 shows an example processing system for determining a position of a device, for example analyzing one or more characteristics of a plurality of signals, in accordance with various aspects of the present disclosure.
  • Figure 10 shows an example processing system for determining a position of a device, for example analyzing a first characteristic of a signal and a second characteristic of a plurality of signals, in accordance with various aspects of the present disclosure.
  • Figure 11 shows an example processing system for determining a position of a device, for example analyzing a first characteristic of a plurality of signals and a second characteristic of a plurality of signals, in accordance with various aspects of the present disclosure.
  • Figure 12 shows an example processing system for determining a position of a device, for example analyzing a first characteristic of a selectable first set of signals and a second characteristic of a selectable second set of signals, in accordance with various aspects of the present disclosure.
  • Figure 13 shows an example processing system for determining a position of a device, for example analyzing a first characteristic of a selectable first set of signals and a second characteristic of a selectable second set of signals and non-inertial sensor data, in accordance with various aspects of the present disclosure.
  • aspects of this disclosure comprise processing one or more respective signal characteristics of one or more signals indicative of device orientation to determine a position of a device relative to a user that is in motion.
  • various aspects of this disclosure comprise analyzing, over time, one or more characteristics of a first signal indicative of the alignment of a first device axis with a reference direction and one or more characteristics of a second signal indicative of the alignment of a second device axis with the reference direction, and determining the position of the device based at least in part on such analysis.
  • One or more analyzed signals may, for example, correspond to and/or be derived from MEMS sensor signals.
  • phrase "A and/or B” Such phrase should be understood to mean just A, or just B, or both A and B.
  • phrase "A, B, and/or C” should be understood to mean just A, just B, just C, A and B, A and C, B and C, or all of A and B and C.
  • any one or more of the modules discussed herein may be implemented by shared hardware, including for example a shared processor. Also for example, any one or more of the modules discussed herein may share software portions, including for example subroutines. Additionally for example, any one or more of the modules discussed herein may be implemented with independent dedicated hardware and/or software. Accordingly, the scope of various aspects of this disclosure should not be limited by arbitrary boundaries between modules unless explicitly claimed.
  • a chip is defined to include at least one substrate typically formed from a semiconductor material.
  • a single chip may for example be formed from multiple substrates, where the substrates are mechanically bonded and electrically connected to preserve the functionality.
  • A multi-chip includes at least 2 substrates, wherein the 2 substrates are electrically connected but do not require mechanical bonding.
  • a package provides electrical connection between the bond pads on the chip (or for example a multi-chip module) to a metal lead that can be soldered to a printed circuit board (or PCB).
  • a package typically comprises a substrate and a cover.
  • An Integrated Circuit (IC) substrate may refer to a silicon substrate with electrical circuits, typically CMOS circuits.
  • a MEMS substrate provides mechanical support for the MEMS structure(s). The MEMS structural layer is attached to the MEMS substrate.
  • the MEMS substrate is also referred to as handle substrate or handle wafer. In some embodiments, the handle substrate serves as a cap to the MEMS structure.
  • an electronic device incorporating a sensor may, for example, employ a motion tracking module also referred to as Motion Processing Unit (MPU) that includes at least one sensor in addition to electronic circuits.
  • the at least one sensor may comprise any of a variety of sensors, such as for example a gyroscope, a compass, a magnetometer, an accelerometer, a microphone, a pressure sensor, a proximity sensor, a moisture sensor, a temperature sensor, a biometric sensor, or an ambient light sensor, among others known in the art.
  • Some embodiments may, for example, comprise an accelerometer, gyroscope, and magnetometer or other compass technology, which each provide a measurement along three axes that are orthogonal relative to each other, and may be referred to as a 9-axis device.
  • Other embodiments may, for example, comprise an accelerometer, gyroscope, compass, and pressure sensor, and may be referred to as a 10-axis device.
  • Other embodiments may not include all the sensors or may provide measurements along one or more axes.
  • the sensors may, for example, be formed on a first substrate.
  • Various embodiments may, for example, include solid-state sensors and/or any other type of sensors.
  • the electronic circuits in the MPU may, for example, receive measurement outputs from the one or more sensors.
  • the electronic circuits process the sensor data.
  • the electronic circuits may, for example, be implemented on a second silicon substrate.
  • the first substrate may be vertically stacked, attached and electrically connected to the second substrate in a single semiconductor chip, while in other embodiments, the first substrate may be disposed laterally and electrically connected to the second substrate in a single semiconductor package (e.g., both attached to a common packaging substrate or other material).
  • the sensors may, for example, be formed on different respective substrates (e.g. , all attached to a common packaging substrate or other material).
  • the first substrate is attached to the second substrate through wafer bonding, as described in commonly owned U.S. Patent No. 7,104,129, which is hereby incorporated herein by reference in its entirety, to simultaneously provide electrical connections and hermetically seal the MEMS devices.
  • This fabrication technique advantageously enables technology that allows for the design and manufacture of high performance, multi-axis, inertial sensors in a very small and economical package. Integration at the wafer-level minimizes parasitic capacitances, allowing for improved signal-to-noise relative to a discrete solution. Such integration at the wafer-level also enables the incorporation of a rich feature set which minimizes the need for external amplification.
  • raw data refers to measurement outputs from the sensors which are not yet processed.
  • Motion data refers to processed raw data. Processing may, for example, comprise applying a sensor fusion algorithm or applying any other algorithm. In the case of a sensor fusion algorithm, data from one or more sensors may be combined and/or processed to provide an orientation of the device.
  • an MPU may include processors, memory, control logic and sensors among structures.
  • it may be beneficial for the operating system of the telephone to know how the user is utilizing the phone while walking.
  • for example, the operating system of the telephone may know whether the phone is presently in a user's pocket while the user is walking, whether the phone is being held in a user's hand in front of the user while the user is walking, whether the phone is being held in a hand at the user's side while the user is walking, etc.
  • for example, when a phone is in a user's pocket while the user is walking, the operating system may safely turn off various phone functionality (e.g., visual display functionality, positioning functionality, etc.) and may also, for example, turn up a volume of audio notifications. Also for example, when a phone is being carried by a user out in front of the user, the operating system may turn on or keep on the visual display and/or other functionality, and may also turn down a volume of audio notifications.
  • various aspects of this disclosure comprise processing one or more respective signal characteristics of one or more signals indicative of device orientation to determine a position of a device relative to a user (e.g. , a user that is moving).
  • various aspects of this disclosure comprise analyzing, over time, one or more characteristics of a first signal indicative of the alignment of a first device axis with a first reference direction and one or more characteristics of a second signal indicative of the alignment of a second device axis with a second reference direction (e.g., where the first and second reference directions may be the same or different), and determining the position of the device based at least in part on such analysis.
  • the position of the device may, for example, be determined in relation to the user of the device (e.g., in the user's pocket, in the user's hand at the user's side, in the user's hand held in front of the user, etc.).
  • the discussion will now turn to various aspects in view of the attached figures.
  • FIG. 1 shows a block diagram of an example electronic device 100 comprising position classification capability, in accordance with various aspects of the present disclosure.
  • the device 100 may be implemented as a device or apparatus, such as a handheld and/or wearable device that can be moved in space by a user, and its motion and/or orientation in space therefore sensed.
  • a handheld and/or wearable device may comprise a mobile phone (e.g., a cellular phone, a phone running on a local network, or any other telephone handset), wired telephone (e.g., a phone attached by a wire and/or optical tether), personal digital assistant (PDA), pedometer, personal activity and/or health monitoring device, video game player, video game controller, navigation device, mobile internet device (MID), personal navigation device (PND), digital still camera, digital video camera, binoculars, telephoto lens, portable music, video, or media player, remote control, or other handheld device, headset, eyeglasses, or a combination of one or more of these devices.
  • the device 100 may be a self-contained device that comprises its own display and/or other user output devices in addition to the user input devices as described below.
  • the device 100 may function in conjunction with another portable device or a non-portable device such as a desktop computer, electronic tabletop device, server computer, smart phone, etc., which can communicate with the device 100, e.g. , via network connections.
  • the device 100 may, for example, be capable of communicating via a wired connection using any type of wire-based communication protocol (e.g., serial transmissions, parallel transmissions, packet-based data communications), wireless connection (e.g. , electromagnetic radiation, infrared radiation or other wireless technology), or a combination of one or more wired connections and one or more wireless connections.
  • the example device 100 comprises an MPU 120, application (or host) processor 112, application (or host) memory 114, and may comprise one or more sensors, such as external sensor(s) 116.
  • the application processor 112 may, for example, be configured to perform the various computations and operations involved with the general function of the device 100 (e.g. , running applications, performing operating system functions, performing power management functionality, controlling user interface functionality for the device 100, etc.).
  • the application processor 112 may, for example, be coupled to MPU 120 through a communication interface 118, which may be any suitable bus or interface, such as a peripheral component interconnect express (PCIe) bus, a universal serial bus (USB), a universal asynchronous receiver/transmitter (UART) serial bus, a suitable advanced microcontroller bus architecture (AMBA) interface, an Inter-Integrated Circuit (I2C) bus, a serial digital input output (SDIO) bus, or other equivalent.
  • the application memory 114 may, for example, comprise programs, drivers or other data that utilize information provided by the MPU 120. Details regarding example suitable configurations of the application (or host) processor 112 and MPU 120 may be found in co-pending, commonly owned U.S. Patent Application Serial No. 12/106,921, incorporated by reference herein.
  • the MPU 120 is shown to comprise a sensor processor 130, internal memory 140 and one or more internal sensors 150.
  • the internal sensors 150 comprise a gyroscope 151, an accelerometer 152, a compass 153 (for example a magnetometer), a pressure sensor 154, a microphone 155, and a proximity sensor 156.
  • the internal sensors 150 may comprise any of a variety of sensors, for example, a temperature sensor, light sensor, moisture sensor, biometric sensor, etc.
  • All or some of the internal sensors 150 may, for example, be implemented as MEMS-based motion sensors, including inertial sensors such as a gyroscope or accelerometer, or an electromagnetic sensor such as a Hall effect or Lorentz field magnetometer. As desired, one or more of the internal sensors 150 may be configured to provide raw data output measured along three orthogonal axes or any equivalent structure.
  • the internal memory 140 may store algorithms, routines or other instructions for processing data output by one or more of the internal sensors 150, including the position classification module 142 and sensor fusion module 144, as described in more detail herein.
  • external sensor(s) 116 may comprise one or more sensors, such as accelerometers, gyroscopes, magnetometers, pressure sensors, microphones, proximity sensors, ambient light sensors, biometric sensors, temperature sensors, and moisture sensors, among other sensors.
  • an internal sensor generally refers to a sensor implemented, for example using MEMS techniques, for integration with the MPU 120 into a single chip.
  • an external sensor as used herein generally refers to a sensor carried on-board the device 100 that is not integrated into the MPU 120.
  • position classification module 142 may be implemented using instructions stored in any available memory resource, such as for example the application memory 114, and may be executed using any available processor, such as for example the application processor 112. Still further, the functionality performed by the position classification module 142 may be implemented using any combination of hardware, firmware and software.
  • the application (or host) processor 112 and/or sensor processor 130 may be one or more microprocessors, central processing units (CPUs), microcontrollers or other processors, which run software programs for the device 100 and/or for other applications related to the functionality of the device 100.
  • different software application programs such as menu navigation software, games, camera function control, navigation software, and telephone, or a wide variety of other software and functional interfaces, can be provided.
  • multiple different applications can be provided on a single device 100, and in some of those embodiments, multiple applications can run simultaneously on the device 100.
  • Multiple layers of software can, for example, be provided on a computer readable medium such as electronic memory or other storage medium such as hard disk, optical disk, flash drive, etc., for use with application processor 112 and sensor processor 130.
  • an operating system layer can be provided for the device 100 to control and manage system resources in real time, enable functions of application software and other layers, and interface application programs with other software and functions of the device 100.
  • one or more motion algorithm layers may provide motion algorithms for lower-level processing of raw sensor data provided from internal or external sensors.
  • a sensor device driver layer may provide a software interface to the hardware sensors of the device 100.
  • Some or all of these layers can be provided in the application memory 114 for access by the application processor 112, in internal memory 140 for access by the sensor processor 130, or in any other suitable architecture (e.g. , including distributed architectures).
  • the example architecture depicted in Figure 1 may provide for position classification to be performed using the MPU 120 and might not require involvement of the application (or host) processor 112 and/or application memory 114.
  • Such example embodiments may, for example, be implemented with one or more internal sensors 150 on a single chip and/or a multi-chip.
  • the position classification techniques may be implemented using computationally efficient algorithms to reduce processing overhead and power consumption.
  • a position classification module may be implemented by a processor (e.g., the sensor processor 130) operating in accordance with software instructions (e.g., the position classification module 142 stored in the internal memory 140), or by a pure hardware solution.
  • Figures 2-13 will provide further example details of at least the operation of the sensor fusion software module 144 and/or the position classification module 142 (e.g., when executed by a processor such as the sensor processor 130).
  • any or all of the functional modules discussed herein may be implemented in a pure hardware implementation and/or by a processor operating in accordance with software instructions.
  • any or all software instructions may be stored in a non-transitory computer-readable medium
  • Various aspects of this disclosure comprise determining or classifying a position of a device by, at least in part, analyzing transformation coefficients that are generally utilized to transform a position, vector, velocity, acceleration, etc., from a first coordinate system to a second coordinate system. Such a transformation may be generally performed by multiplying an input vector expressed in the first coordinate system by a transformation matrix.
  • General transformation matrices comprise translation coefficients and rotation coefficients. For illustrative clarity, the discussion herein will focus on rotation coefficients. Note, however, that the scope of various aspects of this disclosure is not limited to rotation coefficients.
  • a rotation matrix (e.g., a direction cosine matrix or DCM) may be utilized to rotationally transform coordinates (e.g., of a vector, position, velocity, acceleration, force, etc.) expressed in a first coordinate system to coordinates expressed in a second coordinate system.
  • a direction cosine matrix R may look like:

        R = | R11  R12  R13 |
            | R21  R22  R23 |
            | R31  R32  R33 |

    such that a vector A_b expressed in the body coordinate system transforms to the world coordinate system as A_w = R · A_b.
  • the third row of the rotation matrix R may, for example, be generally concerned with determining the z-axis component of the world coordinate system, A_w,z, as a function of the matrix coefficients R31, R32, and R33 multiplied by the respective A_b,x, A_b,y, and A_b,z values of the body coordinate vector A_b, which are then summed: A_w,z = R31·A_b,x + R32·A_b,y + R33·A_b,z.
  • R33, which may also be referred to herein as g_z, is the extent to which the z axis of the body coordinate system is aligned with the z axis of the world coordinate system.
  • the z axis in the body coordinate system may, for example, be defined as extending orthogonally from the face of the telephone.
  • the z axis of the world coordinate system may, for example, be aligned with gravity and point upward from the ground.
  • R32, which may also be referred to herein as g_y, is the extent to which the y axis of the body coordinate system is aligned with the z axis of the world coordinate system.
  • the y axis may, for example, be defined as extending out the top of the phone along the longitudinal axis of the phone.
  • R31, which may also be referred to herein as g_x, is the extent to which the x axis of the body coordinate system is aligned with the z axis of the world coordinate system.
  • the x axis may for example be defined as extending out the right side of the phone when looking at the face of the phone along the lateral axis of the phone.
  • in an example orientation, the x axis of the body coordinate system may be orthogonal to the z axis of the world coordinate system, while the y axis of the body coordinate system is at 45° relative to the z axis of the world coordinate system.
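  • For illustration, the sketch below (a minimal numpy example; the function and variable names are illustrative assumptions, not from the disclosure) shows how the third row of R yields the world-frame z component, and how the g_x, g_y, g_z values discussed above are simply the R31, R32, R33 entries:

        import numpy as np

        def world_z_component(R, a_body):
            """Compute A_w,z = R31*A_b,x + R32*A_b,y + R33*A_b,z.

            R      : 3x3 body-to-world rotation (direction cosine) matrix.
            a_body : length-3 vector expressed in the body frame.
            """
            g_x, g_y, g_z = R[2, 0], R[2, 1], R[2, 2]  # R31, R32, R33
            return g_x * a_body[0] + g_y * a_body[1] + g_z * a_body[2]

        # Example: phone lying flat and face-up, so the body z axis is
        # aligned with the world z axis (R is the identity matrix).
        R = np.eye(3)
        print(world_z_component(R, np.array([0.0, 0.0, 9.81])))  # ~9.81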
  • the coefficients of the rotation matrix R express an instantaneous rotational relationship, but as a device moves, the coefficients change over time.
  • the matrix R coefficients may, for example, be updated on a periodic basis at an update rate that is implementation dependent (e.g., at a sensor update rate, at a user step rate, at a 100 Hz, 10 Hz, 1 Hz, 51 Hz, 200 Hz, 500 Hz, or 1000 Hz rate, etc.) and/or situation dependent.
  • Each of the matrix R coefficients may thus be viewed and/or processed individually and/or in aggregate as a discrete time signal.
  • a rotation matrix R may, for example, be output from one or more system modules that integrate information from various sensors (e.g., acceleration sensors, gyroscopes, compasses, pressure sensors, etc.) to ascertain the present orientation of a device.
  • Such a rotation matrix R may, for example in various implementations, be derived from quaternion processing.
  • a Direction Cosine Matrix (DCM) module may receive orientation information as input, for example quaternion information and/or Euler angle information from a sensor fusion module, and process such input orientation information to determine the rotation matrix R.
  • the DCM module may receive quaternion information that is updated at a sensor rate (or sensor sample rate), for example 51 Hz or a different rate less than or greater than 51 Hz.
  • the DCM module may, however, determine the rotation matrix R at a rate that is equal to a user step rate, a multiple of the user step rate, a fraction of the user step rate, some other function of the user step rate, etc.
  • the DCM module may determine the rotation matrix R at a rate that is less than the update rate of the information (e.g. , orientation information) input to the DCM module.
  • the DCM module may determine the rotation matrix R only when a step has been detected and/or suspected. Thus, when no stepping is detected, no updating of the rotation matrix R occurs.
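  • As a sketch of how such a DCM module might operate (one common quaternion convention is assumed, since the disclosure does not fix one, and the step-gating interface shown is hypothetical):

        import numpy as np

        def quaternion_to_dcm(q):
            """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
            q = np.asarray(q, dtype=float)
            w, x, y, z = q / np.linalg.norm(q)  # guard against drift from unit length
            return np.array([
                [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
            ])

        def step_gated_dcm(quaternions, step_flags):
            """Recompute R only when a step is detected, rather than at the
            full (e.g., 51 Hz) quaternion update rate."""
            R = None
            for q, stepped in zip(quaternions, step_flags):
                if stepped:
                    R = quaternion_to_dcm(q)
                yield R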
  • the R coefficients of the rotation matrix R may generally be determined by a DCM module.
  • the DCM module may, for example, be a component of the Attitude Determination Modules discussed herein.
  • Analyzing the values of the rotation matrix R coefficients provides insight into how a user device is positioned, for example providing insight into how a user is utilizing the device. Such analysis may, for example, result in a determined device position (e.g. , in relation to the user thereof).
  • references to the body coordinate system include a device coordinate system, a component or package coordinate system, a chip coordinate system, a sensor coordinate system, etc.
  • the world coordinate system may also be referred to herein as an inertial coordinate system.
  • Empirical evidence has shown a correlation between various signal characteristics (e.g. , rotational matrix coefficients over time) and device position.
  • the rotation matrix coefficient R32, which as explained above is indicative of the degree of alignment between the y-axis of the body coordinate system and the z-axis of the world coordinate system, includes information that is highly indicative of device position, for example as a user moves with the device.
  • the coefficient R33 has been found to include useful information, along with R31.
  • the following discussion focuses on analysis of the R32 and R33 coefficients, but the scope of this disclosure is not limited to the analysis of such coefficients.
  • Figure 2A shows an example signal plot, in accordance with various aspects of this disclosure.
  • the scope of the present disclosure should not be limited by characteristics of analog and/or digital implementations.
  • the fundamental frequency of y1 may, for example, be generally aligned with the cadence of the user, with each period or complete cycle corresponding to two user steps.
  • y1 is generally smooth, for example including a relatively small amount of significant higher frequency components (e.g., signal energy at frequencies higher than the fundamental frequency). Contextually, this may for example correspond to a relatively smooth arm swing by a user that is walking.
  • Figure 2B shows an example signal plot, in accordance with various aspects of this disclosure.
  • the fundamental frequency of y2 may, for example, be generally aligned with the cadence of the user, with each period or complete cycle corresponding to two user steps.
  • y2 has similar energy at the fundamental frequency to y1 of Figure 2A.
  • y2 has more fluctuation than y1 of Figure 2A. This may, for example, be caused by relatively higher amounts of jostling for a device positioned in a pocket compared to a device held in a swinging hand. Signal y2 may, for example, be viewed as having a substantial amount of energy at frequencies higher than the fundamental frequency.
  • Figure 2C shows an example signal plot, in accordance with various aspects of this disclosure.
  • the fundamental frequency of y3 is generally aligned with the cadence of the user, with each period or complete cycle corresponding to two user steps, but may also in various use scenarios have a substantial frequency component at the single-step cadence (e.g., at twice the fundamental frequency).
  • y3 has substantially less energy at the fundamental frequency than y2 shown in Figure 2B or y1 shown in Figure 2A.
  • the amount of device motion in the held-in-front scenario may be less than the respective amounts of motion in the hand-swinging and/or in-pocket scenarios.
  • y3 has more fluctuation than y1, and for example a similar amount of fluctuation to y2. This may, for example, be caused by relatively higher amounts of jostling for a device being held in front of the user than for a device held in a swinging hand.
  • Signal y3 may, for example, be viewed as having a substantial amount of energy at frequencies higher than the fundamental frequency.
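  • To make the three scenarios concrete, the snippet below generates purely synthetic stand-ins for y1, y2, and y3 (the sample rate, frequencies, and amplitudes are illustrative assumptions, not the empirical data behind Figures 2A-2C):

        import numpy as np

        fs = 51.0                      # assumed sensor sample rate (Hz)
        t = np.arange(0.0, 10.0, 1.0 / fs)
        f0 = 0.9                       # fundamental: one arm-swing cycle = two steps
        rng = np.random.default_rng(0)

        # y1: smooth hand swing -- strong fundamental, little high-frequency energy.
        y1 = 0.5 * np.sin(2 * np.pi * f0 * t)

        # y2: in pocket -- similar fundamental energy plus jostling fluctuation.
        y2 = 0.5 * np.sin(2 * np.pi * f0 * t) + 0.15 * rng.standard_normal(t.size)

        # y3: held in front -- weak fundamental, a single-step-cadence component,
        # and jostling fluctuation.
        y3 = (0.1 * np.sin(2 * np.pi * f0 * t)
              + 0.05 * np.sin(2 * np.pi * 2 * f0 * t)
              + 0.08 * rng.standard_normal(t.size))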
  • FIG 3 shows a chart 300 of two signal processing results, which may also be referred to herein as features or functions, in accordance with various aspects of the present disclosure.
  • the chart 300 generally shows empirical results, for example regions of scatter plot results obtained through experimentation.
  • the F1 feature shown on the horizontal axis of the chart 300 is a reflection of the presence of substantial higher frequency content (e.g., substantial frequency content at frequencies higher than user stepping frequencies, for example higher than the primary frequency of the plots shown in Figures 2A-2C) in a rotation matrix signal (e.g., the R32 signal).
  • a mobile phone (or other device) positioned in a user's pocket while the user is walking (e.g., shown at region 310) or held by the user in front of the user while the user is walking (e.g., shown at region 320) experiences more substantial higher frequency movement (e.g., jostling) than a phone that is held in the user's hand during typical walking with the user's hand swinging naturally (e.g., shown at region 330).
  • a threshold T1 can be set, below which a phone can be determined to be held in a swinging hand, and above which the phone may be determined to be either in the pocket of the user or held by the user in front of the user.
  • the F2 feature shown on the vertical axis of the chart 300 is a reflection of signal amplitude in one or more rotation matrix signals (for example, the R32 signal, the R33 signal, the combined amplitude of the R32 and R33 signals, etc.).
  • a mobile phone or other device positioned in a user's pocket while the user is walking (e.g., shown at region 310) or held in the user's hand during typical walking with the user's hand swinging naturally (e.g., shown at region 330) experiences higher amounts of movement than a phone that the user is holding in front of the user while walking (e.g., shown at region 320).
  • a threshold T2 can be set, below which a phone may be determined to be held by the user in front of the user while the user is walking, and above which the phone may be determined to be either in the user's pocket or held in the swinging hand of the user.
  • Although Figure 3 only shows three position classifications, more may be added without departing from the scope of this disclosure.
  • one or more additional regions of the chart 300 may empirically be found to correspond to a device being positioned in a fanny pack, a device carried in an arm or leg band, in a stocking, in a pair of glasses, in a backpack, etc.
  • an F1/F2 result that falls within a particular distance of either T1 and/or T2 may be associated with less certainty than a result that is at least a particular distance away from such thresholds.
  • certainty thresholds may be offset from the T1 and/or T2 values by an absolute value (e.g., T +/- C0), by a relative value (e.g., T +/- C1), etc.
  • such thresholds may be viewed as horizontal lines above and below the T2 line and vertical lines to the left and right of the T1 line of Figure 3.
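  • A minimal sketch of the threshold logic described above (the labels, threshold values, and margin handling are illustrative assumptions):

        def classify_position(F1, F2, T1, T2, margin=0.0):
            """Map an (F1, F2) feature pair onto the Figure 3 regions.

            Returns (label, confident); confident is False when either
            feature lies within `margin` of its threshold.
            """
            confident = abs(F1 - T1) > margin and abs(F2 - T2) > margin
            if F1 < T1:
                return "hand_swinging", confident   # region 330
            if F2 < T2:
                return "held_in_front", confident   # region 320
            return "in_pocket", confident           # region 310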
  • FIG 4 shows an example processing system 400 for determining (or classifying) a position of a device, in accordance with various aspects of the present disclosure. Any or all aspects of the example processing system 400 may, for example, share any or all characteristics with the example device 100 illustrated in Figure 1 and discussed herein (for example, implemented by a position classification module).
  • the F1 feature is a reflection of higher order frequency content in a signal.
  • the Critical Point and/or Inflexion Point Estimation (CPIPE) module 410 processes the input signal (e.g., the R32 rotation matrix coefficient as may, for example, be received from a DCM module) over time and outputs the signal F1, which is an indication of the amount of higher order frequency content in the input signal.
  • the CPIPE module 410 may perform such processing in any of a variety of manners, non-limiting examples of which may comprise determining a number of maxima and/or minima of a signal over a period of time (for example, critical points where the first derivative is zero), determining a number of inflexion points over a period of time (for example, points where the curvature changes between convex and concave or at which the second derivative of the signal changes sign), etc. In such a manner, the number of critical and/or inflexion points during a time window may be indicative of an amount of higher order frequency content. Note that the CPIPE module 410 may, for example, comprise a low pass filter to filter out noise.
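  • One plausible realization of such critical/inflexion point counting (an assumed sketch; the disclosure does not specify an implementation):

        import numpy as np

        def cpipe_feature(x):
            """Rough stand-in for the CPIPE module 410: count critical points
            (sign changes of the first difference) and inflexion points (sign
            changes of the second difference) in a window of samples. Higher
            counts suggest more higher-frequency content."""
            x = np.convolve(np.asarray(x, dtype=float),
                            np.ones(5) / 5.0, mode="valid")  # simple noise low-pass
            critical = np.count_nonzero(np.diff(np.sign(np.diff(x))) != 0)
            inflexion = np.count_nonzero(np.diff(np.sign(np.diff(x, n=2))) != 0)
            return critical + inflexion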
  • the F2 feature is a reflection of signal amplitude.
  • a first High-Pass Filter (HPF) module 420 with a low cut-off frequency (e.g., 0.1 Hz) is used to remove the DC bias of the R32 signal (e.g., that reflective of steady-state orientation).
  • After being processed by the first HPF module 420, the signal is provided to a first Window module 422 that windows the signal.
  • the window may, for example, comprise static sequential blocks of time, rolling blocks of time, etc.
  • the window may be two seconds in duration, but may also be more or less than two seconds.
  • the duration of the window may also be adjustable during system operation. Note that there are many ways to window a signal. The scope of this disclosure is not limited by characteristics of any particular manner of windowing a signal.
  • After being processed by the first Window module 422, the signal is provided to a first ABS module 424.
  • the first ABS module 424 may, for example, determine and output a signal indicative of the amplitude of the signal (e.g. , exactly equal to the amplitude, indicative of the amplitude of the signal scaled or squared, etc.). Note that there are many ways to determine an amplitude of a signal. The scope of this disclosure is not limited by characteristics of any particular manner of determining an amplitude of a signal.
  • After being processed by the first ABS module 424, the signal is provided to a first MAX module 426.
  • the first MAX module 426 may, for example, identify a maximum magnitude of the signal (e.g. , over the window). Note that there are many ways to determine a maximum amplitude of a signal. The scope of this disclosure is not limited by characteristics of any particular manner of determining a maximum amplitude of a signal.
  • the signal R33 is processed by a second HPF module 430, a second Window module 432, a second ABS module 434, and a second MAX module 436 to identify its maximum amplitude, for example during a time window.
  • Such "second" modules (e.g., 430, 432, 434, and 436) may share any or all characteristics with the "first" modules (e.g., 420, 422, 424, and 426) discussed herein.
  • the "first" and "second" module functions may be performed by the same respective modules.
  • an HPF module may process both R32 and R33 (e.g., in a time-multiplexed manner).
  • the "first" and "second" module functions may also, for example, be performed by separate distinct modules, for example providing enhanced parallelism for processing.
  • the system 400 illustrated in Figure 4 adds the maximum amplitudes of the R32 and R33 signals with a Summer module 440.
  • the sum is then low-pass filtered with an LPF module 445 (e.g., with a cutoff frequency of 0.8 Hz or another value greater or less than 0.8 Hz), for example to reduce the instantaneous effects of anomalies, to generate the F2 feature.
  • the F2 feature may, for example, comprise an indication of movement magnitude.
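  • A minimal sketch of this F2 path (the filter constants, window size, and one-pole filter forms are illustrative assumptions; the disclosure does not specify implementations):

        import numpy as np

        def dc_block(x, alpha=0.99):
            """First-order high-pass to strip the DC (steady-orientation) bias."""
            y = np.zeros(len(x))
            prev_x = x[0]
            for i in range(1, len(x)):
                y[i] = alpha * (y[i - 1] + x[i] - prev_x)
                prev_x = x[i]
            return y

        def f2_feature(r32, r33, window=102, alpha_lp=0.05):
            """HPF -> window -> ABS -> MAX per signal, summed across R32 and
            R33, then smoothed with a one-pole low-pass. window=102 samples
            is ~2 s at an assumed 51 Hz rate."""
            maxima = []
            for sig in (np.asarray(r32, float), np.asarray(r33, float)):
                hp = dc_block(sig)
                starts = range(0, len(hp) - window + 1, window)
                maxima.append(np.array([np.max(np.abs(hp[i:i + window])) for i in starts]))
            summed = maxima[0] + maxima[1]
            f2, acc = np.zeros(len(summed)), 0.0
            for i, s in enumerate(summed):
                acc += alpha_lp * (s - acc)   # damp instantaneous anomalies
                f2[i] = acc
            return f2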
  • the Position Determination module 450 analyzes the F1 and F2 features (or signals representative thereof), for example comparing such signals with the thresholds T1 and T2 discussed with regard to the chart of Figure 3, to determine (or classify) the position of the device. For example, the Position Determination module 450 may output a signal indicating whether a user is walking (or otherwise moving) with the user device (e.g., a mobile phone) in the user's swinging hand, in the user's pocket, being held in front of the user, etc. The Position Determination module 450 may, for example, communicate the output signal to an operating system, host processor, etc., of a device incorporating the system 400.
  • the Position Determination module 450 may also determine a confidence level associated with the position classification (e.g., by comparing the features with respective confidence thresholds on either side of the thresholds T1 and T2).
  • the empirical analysis discussed above included particular device (or phone) use scenarios.
  • a user may also, for example, hold or carry a device in a non-typical manner (e.g., a non-typical orientation in the hand, sideways versus upright, sideways or angled in a pocket instead of upright, etc.).
  • particular signals may have relatively more information, or more reliable information, than other signals.
  • for example, when a device is carried sideways rather than upright, the R31 signal may have more useful characteristics (e.g., more position-relevant signal energy) than the R32 signal.
  • FIG. 5 shows an example system 500 like that of Figure 4, but with an R_ab Selection module 560 to select the rotation matrix coefficients to be analyzed by the system 500.
  • FIG. 5 shows an example processing system 500 for determining a position of a device, for example incorporating signal selection, in accordance with various aspects of the present disclosure.
  • the system 500 may, for example, share any or all characteristics with the systems 100 and 400 discussed with regard to Figures 1 and 4, and/or with any systems discussed herein.
  • the HPF modules 520 and 530, Window modules 522 and 532, ABS modules 524 and 534, MAX modules 526 and 536, Summer module 541, LPF module 545, CPIPE module 510, and/or Position Determination module 550 may share any or all characteristics with other similarly-named modules discussed herein. Note that also, in various implementations, such modules may have different respective operating parameters.
  • the R_ab Selection module 560 may, for example, receive a plurality (e.g., some or all) of rotation matrix coefficients as input.
  • An example source of such coefficients is shown as an Attitude Determination module 562.
  • the Attitude Determination module 562 may comprise a DCM module that forms a rotation matrix based, for example, on various sensor signals.
  • signal selection may be based at least in part on characteristics of the selected signals themselves (e.g., amplitude or energy levels, frequency content, noise content, etc.) and/or on external sources of information (e.g., information from the operating system regarding how the device is currently being utilized, information from non-inertial sensors like light sensors, microphones, thermometers, etc.).
  • the R_ab Selection module 560 may select for analysis the R_a1b1 signal as the signal of R32 or R31 with the highest energy, or may select both signals. Focusing the signal analysis on dominant signals may, for example, reduce instances of an incorrect position determination.
  • the R_ab Selection module 560 may select for analysis the R_a2b2 signal as the signal of R33 or another signal (e.g., regarding rotation matrix coefficients and/or other parameters) with the highest energy. Again, focusing the signal analysis on dominant signals may, for example, reduce instances of an incorrect position determination. Though only two signals are shown analyzed by the system 500, note that any number of signals may be analyzed, for example if found to be significant by the R_ab Selection module 560.
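  • A toy illustration of such energy-based selection (the AC-energy measure and interface are assumptions, not the disclosed design):

        import numpy as np

        def select_dominant(signals):
            """Pick the name of the coefficient signal with the highest AC
            energy; a rough stand-in for the R_ab Selection module 560.
            `signals` maps names (e.g., 'R31', 'R32') to sample arrays."""
            def ac_energy(x):
                x = np.asarray(x, dtype=float)
                return float(np.sum((x - x.mean()) ** 2))
            return max(signals, key=lambda name: ac_energy(signals[name]))

        # e.g., select_dominant({"R31": r31_samples, "R32": r32_samples})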
  • the system 500 may classify the device position by processing rotation matrix coefficients.
  • Information from any of a variety of sensors and/or the operating system may be analyzed instead of or in addition to the rotation matrix coefficients.
  • Figure 6 shows an example system 600 like that of Figure 5, but with a Non-inertial Sensor Data module 670 added as a source of information gathered from non-inertial sensors.
  • FIG. 6 shows an example processing system 600 for determining a position of a device, for example incorporating non-inertial sensor data, in accordance with various aspects of the present disclosure.
  • the system 600 may, for example, share any or all characteristics with the systems 100, 400 and 500 discussed with regard to Figures 1, 4 and 5, and/or with any systems discussed herein.
  • HPF modules 620 and 630 may share any or all characteristics with other similarly-named modules discussed herein. Note that also, in various implementations, such modules may have different respective operating parameters.
  • the Non-inertial Sensor Data module 670 may, for example, receive and/or condition signals from one or more of a variety of non-inertial sensors.
  • Example non-inertial sensors may, for example, comprise light sensors, microphones, pressure sensors, biometric sensors, temperature sensors, moisture sensors, clocks, compasses, magnetometers, etc.
  • the Position Determination module 650 may use this additional information to classify the device position. For example, a light sensor may detect relatively low levels of light when in a user's pocket and/or different frequency content based on whether it is swinging or being held mostly stationary. Also for example, a sound sensor may hear different sounds and/or sound characteristics when stored in a user's pocket, when held in the user's hand, when held with two hands, etc.
  • a pocket location will detect fabric noise and/or muffled ambient noise, while a hand-held position will hear less fabric noise and brighter ambient noise.
  • a biometric sensor may have little or no signals in a pocket, a medium-quality signal when held in a single hand, a strong signal when held with both hands, etc.
  • a temperature sensor may detect elevated temperatures when being held in a hand and/or when being exposed to sunlight, as opposed to being carried in a pocket.
  • the Position Determination module 650 may utilize information from such sensors (or from the device O/S, or other source) to augment and/or replace the analysis performed based on rotational matrix coefficients. Such augmentation may be particularly beneficial when a level of certainty in a classification based only on rotation matrix coefficients is relatively low. For example, when relatively uncertain whether a phone is in a pocket or being held by a user, a temperature increase due to the phone being held in the hand and/or exposed to sunlight would support a "hand-held" classification decision. In an example scenario in which the analysis of various sensor signals result in a solution in which the system 600 (e.g. , the Position Determination module 650) is confident, other sensors may be shut down, placed into a power-save mode, etc.
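  • A hypothetical sketch of such augmentation (the sensor inputs, thresholds, and labels are all assumptions; actual fusion logic is implementation specific):

        def refine_with_sensors(label, confident, lux=None, temp_rising=None):
            """Tie-break a low-confidence rotation-matrix classification using
            non-inertial hints, leaving confident classifications untouched."""
            if confident:
                return label
            if lux is not None and lux < 5.0:    # assumed "very dark" level: pocket-like
                return "in_pocket"
            if temp_rising:                      # hand warmth and/or sunlight exposure
                return "hand_held"
            return label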
  • high-pass filters may be utilized to filter out steady state (or DC) bias from the signals being analyzed.
  • The bias information, which may be indicative of steady-state device orientation, may be beneficial in determining device position. For example, in a scenario in which a phone held in front of a user is generally held at an average angle of 45°, information of such average orientation may assist the Position Determination module 650 in determining that a phone is being held in front of the user. Similarly, in a scenario in which a phone held in the user's pocket is vertical on average, information of such average vertical orientation may assist the Position Determination module 650 in determining that the phone is presently located in the user's pocket.
  • FIG. 7 shows a system 700 that is generally analogous to the Figure 6 system, but with a second Low-Pass Filter (LPF) module 775 coupling one or more coefficients of the rotation matrix R to the Position Determination block 750.
  • Figure 7 shows an example processing system for determining a position of a device, for example incorporating orientation, in accordance with various aspects of the present disclosure.
  • the system 700 may, for example, share any or all characteristics with the systems 100, 400, 500, and 600 discussed with regard to Figures 1, 4, 5, and 6, and/or with any systems discussed herein.
  • HPF modules 720 and 730 may share any or all characteristics with other similarly-named modules discussed herein. Note that also, in various implementations, such modules may have different respective operating parameters.
  • the second LPF module 775 low-pass filters one or more coefficients of the rotation matrix R to, for example, provide an indication of steady-state orientation to the Position Determination module 750.
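  • A minimal sketch of such steady-state extraction (the one-pole form and smoothing constant are assumptions):

        import numpy as np

        def steady_state_orientation(r32, alpha=0.01):
            """One-pole low-pass over the R32 samples, a rough stand-in for
            LPF module 775: the running average alignment of the body y axis
            with the world z axis."""
            avg, acc = np.zeros(len(r32)), float(r32[0])
            for i, v in enumerate(r32):
                acc += alpha * (float(v) - acc)
                avg[i] = acc
            return avg

        # An average near cos(45 deg) ~ 0.71 would be consistent with a phone
        # held in front of the user at roughly 45 degrees.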
  • Figures 8-13 present various high-level system diagrams illustrating various general aspects of this disclosure, for example both structural and functional aspects.
  • FIG. 8 shows a system 800 in which different characteristics of a same signal are analyzed to determine (or classify) device position.
  • input signal S1 may be a discrete time signal (or series) of rotation matrix coefficient values (e.g., for coefficient R32), but need not be.
  • the system 800 may, for example at the First Characteristic Analysis module 811, analyze a first characteristic of the input signal S1 (e.g., the presence or absence of substantial higher-frequency components) and output the results of such analysis at F1; and the system 800 may also, for example at the Second Characteristic Analysis module 821, analyze a second characteristic of the input signal S1 (e.g., maximum amplitude value in a time window) and output the results of such analysis at F2.
  • the Position Determination module 850 may, as discussed previously, then analyze the F1 and F2 signals to classify device position, for example in a manner generally analogous to the manners discussed elsewhere herein.
  • FIG. 9 shows a system 900 in which one or more characteristics of a plurality of signals are analyzed to determine (or classify) device position.
  • input signal S1 may be a discrete time signal (or series) of rotation matrix coefficient values (e.g., for coefficient R32), but need not be.
  • input signal S2 may be a discrete time signal (or series) of rotation matrix coefficient values (e.g., for coefficient R33), but need not be.
  • both input signals S1 and S2 may be related to inertial sensors.
  • one of such input signals may be related to inertial sensors and the other related to a non-inertial sensor.
  • Input signal S1 may, for example, be analyzed by the First Signal Analysis module 912 for one or more of a variety of signal characteristics, various non-limiting examples of which are presented herein (e.g., higher frequency content, amplitude, bias, etc.).
  • Input signal S2 may, for example, be analyzed by the Second Signal Analysis module 922 for one or more of a variety of signal characteristics, various non-limiting examples of which are presented herein (e.g., higher frequency content, amplitude, bias, etc.).
  • the Position Determination module 950 may then process the results of such signal analysis to determine or classify the device position.
  • FIG. 10 shows a system 1000 in which a first characteristic of a first signal is analyzed, a second characteristic of the first signal and of a second signal is analyzed, and the results of such analyses are processed to classify device position.
  • input signal S1 may be a discrete time signal (or series) of rotation matrix coefficient values (e.g., for coefficient R32), but need not be.
  • the system 1000 may, for example at the First Signal Analysis Module 1013, analyze a first characteristic of the input signal S1 (e.g., the presence or absence of substantial higher-frequency components) and output the results of such analysis at F1.
  • input signal S2 may be a discrete time signal (or series) of rotation matrix coefficient values (e.g., for coefficient R33), but need not be.
  • the system 1000 may also, for example at the Second Signal Analysis Module 1023, analyze a second characteristic of the input signals S1 and S2 (e.g., maximum combined amplitude value in a time window) and output the results of such analysis at F2.
  • the Position Determination module 1050 may, as discussed previously, then analyze the F1 and F2 signals to classify device position, for example in a manner generally analogous to the manners discussed elsewhere herein.
  • FIG. 11 shows a system 1100 similar to the system 1000 of Figure 10, but with the first characteristic (e.g., a frequency content characteristic) of the second signal S2 also being analyzed.
  • input signal S1 may be a discrete time signal (or series) of rotation matrix coefficient values (e.g., for coefficient R32), but need not be.
  • the system 1100 may, for example at the First Signal Analysis Module 1114, analyze a first characteristic of the input signals S1 and S2 (e.g., the presence or absence of substantial higher-frequency components) and output the results of such analysis at F1.
  • input signal S2 may be a discrete time signal (or series) of rotation matrix coefficient values (e.g., for a coefficient R33), but need not be.
  • the system 1100 may also, for example at the Second Signal Analysis Module 1124, analyze a second characteristic of the input signals S1 and S2 (e.g., maximum combined amplitude value in a time window) and output the results of such analysis at F2.
  • the Position Determination module 1150 may, as discussed previously, then analyze the F1 and F2 signals to classify device position, for example in a manner generally analogous to the manners discussed elsewhere herein.
  • Referring next to FIG. 12, the signals to be analyzed may, for example at the Signal Selection module 1201, be selected (e.g., at startup, in real time as the device is used, etc.) for processing.
  • a first set of n (e.g., one or more) signals may be analyzed, for example at the First Signal Analysis Module 1215, for a first signal characteristic (e.g., a frequency or spectral content characteristic), and a second set of m (e.g., one or more) signals may be analyzed, for example at the Second Signal Analysis Module 1225, for a second signal characteristic (e.g., an amplitude characteristic).
  • a second use scenario may, for example, result in one or more different sets of signals being analyzed.
  • a general device orientation may change, causing the Signal Selection module 1201 to select different signals associated with different rotation directions, different inertial or non-inertial sensors, etc.
  • the Position Determination module 1250 may, as discussed previously, then analyze the F1 and F2 signals to classify device position, for example in a manner generally analogous to the manners discussed elsewhere herein.
  • FIG. 13 shows a system 1300 that is generally similar to the system 1200 shown in Figure 12, but with the addition of a Non-inertial Sensor Data module 1370 providing information directly to the Position Determination module 1350.
  • such a system configuration is not meant to exclude non-inertial sensor information from being selected and processed to obtain features F1 and F2.
  • the signals to be analyzed may, for example at the Signal Selection module 1301 be selected (e.g. , at startup, during real-time as the device is used, etc.) for processing.
  • n e.g. , one or more
  • signals may be analyzed, for example at the First Signal Analysis Module 1316, for a first signal characteristic (e.g.
  • a frequency or spectral content characteristics and a second set of m (e.g. , one or more) signals may be analyzed, for example at the Second Signal Analysis Module 1326 for a second signal characteristic (e.g. , an amplitude characteristics).
  • a second use scenario may, for example, result in one or more different sets of signals being analyzed. For example, a general device orientation may change, causing the Signal Selection module 1301 to select different signals associated with different rotation directions, different inertial or non-inertial sensors, etc.
  • the Position Determination module 1350 may, as discussed previously, then analyze the F1 and F2 signals, along with the data from the Non-inertial Sensor Data module 1370, to classify device position, for example in a manner generally analogous to the manners discussed elsewhere herein.
  • any one or more of the modules and/or functions discussed herein may be implemented by a pure hardware solution or by a processor (e.g., an application or host processor, a sensor processor, etc.) executing software instructions.
  • other embodiments may comprise or provide a non-transitory computer readable medium and/or storage medium, and/or a non-transitory machine readable medium and/or storage medium, having stored thereon, a machine code and/or a computer program having at least one code section executable by a machine and/or a computer (or processor), thereby causing the machine and/or computer to perform the methods as described herein.

Abstract

A system and method for classifying a position of a device may include processing one or more respective signal characteristics of one or more signals indicative of device orientation to determine a position of a device relative to a user. In a non-limiting example, various aspects of this disclosure include analyzing, over time, one or more characteristics of a first signal indicative of the alignment of a first device axis with a reference direction and one or more characteristics of a second signal indicative of the alignment of a second device axis with the reference direction, and determining the position of the device based at least in part on such analysis. One or more analyzed signals may, for example, correspond to and/or be derived from MEMS sensor signals.

Description

SYSTEM AND METHOD FOR
DEVICE POSITION CLASSIFICATION
CROSS-REFERENCE TO RELATED APPLICATIONS/INCORPORATION BY
REFERENCE
[0001] U.S. Patent No. 7,104,129 is hereby incorporated herein by reference in its entirety. U.S. Patent Application Serial No. 12/106,921, filed April 21, 2008, is hereby incorporated herein by reference in its entirety.
FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
[0002] [Not Applicable]
SEQUENCE LISTING
[0003] [Not Applicable]
MICROFICHE/COPYRIGHT REFERENCE
[0004] [Not Applicable]
BACKGROUND
[0005] Consumer devices do not classify device position, for example relative to a user, in an efficient and reliable manner. For example, hand-held and/or wearable consumer devices do not efficiently and reliably determine whether and/or how a device is being held or otherwise carried by a user. Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such approaches with the disclosure as set forth in the remainder of this application with reference to the drawings.
BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS
[0006] Figure 1 shows a block diagram of an example electronic device comprising position classification capability, in accordance with various aspects of the present disclosure.
[0007] Figures 2A-2B show example signal plots, in accordance with various aspects of the present disclosure.
[0008] Figure 3 shows a chart of two signal processing features, in accordance with various aspects of the present disclosure.
[0009] Figure 4 shows an example processing system for determining a position of a device, in accordance with various aspects of the present disclosure.
[0010] Figure 5 shows an example processing system for determining a position of a device, for example incorporating signal selection, in accordance with various aspects of the present disclosure.
[0011] Figure 6 shows an example processing system for determining a position of a device, for example incorporating non-inertial sensor data, in accordance with various aspects of the present disclosure.
[0012] Figure 7 shows an example processing system for determining a position of a device, for example incorporating orientation, in accordance with various aspects of the present disclosure.
[0013] Figure 8 shows an example processing system for determining a position of a device, for example analyzing multiple characteristics of a signal, in accordance with various aspects of the present disclosure.
[0014] Figure 9 shows an example processing system for determining a position of a device, for example analyzing one or more characteristics of a plurality of signals, in accordance with various aspects of the present disclosure.
[0015] Figure 10 shows an example processing system for determining a position of a device, for example analyzing a first characteristic of a signal and a second characteristic of a plurality of signals, in accordance with various aspects of the present disclosure.
[0016] Figure 11 shows an example processing system for determining a position of a device, for example analyzing a first characteristic of a plurality of signals and a second characteristic of a plurality of signals, in accordance with various aspects of the present disclosure.
[0017] Figure 12 shows an example processing system for determining a position of a device, for example analyzing a first characteristic of a selectable first set of signals and a second characteristic of a selectable second set of signals, in accordance with various aspects of the present disclosure.
[0018] Figure 13 shows an example processing system for determining a position of a device, for example analyzing a first characteristic of a selectable first set of signals and a second characteristic of a selectable second set of signals and non-inertial sensor data, in accordance with various aspects of the present disclosure.
SUMMARY
[0019] Various aspects of this disclosure comprise processing one or more respective signal characteristics of one or more signals indicative of device orientation to determine a position of a device relative to a user that is in motion. In a non-limiting example, various aspects of this disclosure comprise analyzing, over time, one or more characteristics of a first signal indicative of the alignment of a first device axis with a reference direction and one or more characteristics of a second signal indicative of the alignment of a second device axis with the reference direction, and determining the position of the device based at least in part on such analysis. One or more analyzed signals may, for example, correspond to and/or be derived from MEMS sensor signals.
DETAILED DESCRIPTION OF VARIOUS ASPECTS OF THE DISCLOSURE
[0020] The following discussion presents various aspects of the present disclosure by providing various examples thereof. Such examples are non-limiting, and thus the scope of various aspects of the present disclosure should not necessarily be limited by any particular characteristics of the provided examples. In the following discussion, the phrases "for example" and "e.g." and "exemplary" are non-limiting and are generally synonymous with "by way of example and not limitation," "for example and not limitation," and the like.
[0021] The following discussion may at times utilize the phrase "A and/or B." Such phrase should be understood to mean just A, or just B, or both A and B. Similarly, the phrase "A, B, and/or C" should be understood to mean just A, just B, just C, A and B, A and C, B and C, or all of A and B and C.
[0022] The following discussion may at times utilize the phrases "operable to," "operates to," and the like in discussing functionality performed by particular hardware, including hardware operating in accordance with software instructions. The phrases "operates to," "is operable to," and the like include "operates when enabled to." For example, a module that operates to perform a particular operation, but only after receiving a signal to enable such operation, is included by the phrases "operates to," "is operable to," and the like.
[0023] The following discussion may at times refer to various system or device functional modules. It should be understood that the functional modules were selected for illustrative clarity and not necessarily for providing distinctly separate hardware and/or software modules. For example, any one or more of the modules discussed herein may be implemented by shared hardware, including for example a shared processor. Also for example, any one or more of the modules discussed herein may share software portions, including for example subroutines. Additionally for example, any one or more of the modules discussed herein may be implemented with independent dedicated hardware and/or software. Accordingly, the scope of various aspects of this disclosure should not be limited by arbitrary boundaries between modules unless explicitly claimed. Additionally, it should be understood that when the discussion herein refers to a module performing a function, the discussion is generally referring to either a pure hardware module implementation and/or a processor operating in accordance with software. Such software may, for example, be stored on a non-transitory machine-readable medium.
[0024] In various example embodiments discussed herein, a chip is defined to include at least one substrate typically formed from a semiconductor material. A single chip may, for example, be formed from multiple substrates, where the substrates are mechanically bonded and electrically connected to preserve the functionality. Multiple chip (or multi-chip) includes at least two substrates, wherein the two substrates are electrically connected but do not require mechanical bonding.
[0025] A package provides electrical connection between the bond pads on the chip (or for example a multi-chip module) to a metal lead that can be soldered to a printed circuit board (or PCB). A package typically comprises a substrate and a cover. An Integrated Circuit (IC) substrate may refer to a silicon substrate with electrical circuits, typically CMOS circuits. A MEMS substrate provides mechanical support for the MEMS structure(s). The MEMS structural layer is attached to the MEMS substrate. The MEMS substrate is also referred to as handle substrate or handle wafer. In some embodiments, the handle substrate serves as a cap to the MEMS structure.
[0026] In the described embodiments, an electronic device incorporating a sensor may, for example, employ a motion tracking module also referred to as Motion Processing Unit (MPU) that includes at least one sensor in addition to electronic circuits. The at least one sensor may comprise any of a variety of sensors, such as for example a gyroscope, a compass, a magnetometer, an accelerometer, a microphone, a pressure sensor, a proximity sensor, a moisture sensor, a temperature sensor, a biometric sensor, or an ambient light sensor, among others known in the art.
[0027] Some embodiments may, for example, comprise an accelerometer, gyroscope, and magnetometer or other compass technology, which each provide a measurement along three axes that are orthogonal relative to each other, and may be referred to as a 9-axis device. Other embodiments may, for example, comprise an accelerometer, gyroscope, compass, and pressure sensor, and may be referred to as a 10-axis device. Other embodiments may not include all the sensors or may provide measurements along one or more axes.
[0028] The sensors may, for example, be formed on a first substrate. Various embodiments may, for example, include solid-state sensors and/or any other type of sensors. The electronic circuits in the MPU may, for example, receive measurement outputs from the one or more sensors. In various embodiments, the electronic circuits process the sensor data. The electronic circuits may, for example, be implemented on a second silicon substrate. In some embodiments, the first substrate may be vertically stacked, attached and electrically connected to the second substrate in a single semiconductor chip, while in other embodiments, the first substrate may be disposed laterally and electrically connected to the second substrate in a single semiconductor package (e.g., both attached to a common packaging substrate or other material). In other embodiments, the sensors may, for example, be formed on different respective substrates (e.g., all attached to a common packaging substrate or other material).
[0029] In an example embodiment, the first substrate is attached to the second substrate through wafer bonding, as described in commonly owned U.S. Patent No. 7,104,129, which is hereby incorporated herein by reference in its entirety, to simultaneously provide electrical connections and hermetically seal the MEMS devices. This fabrication technique advantageously enables technology that allows for the design and manufacture of high performance, multi-axis, inertial sensors in a very small and economical package. Integration at the wafer-level minimizes parasitic capacitances, allowing for improved signal-to-noise relative to a discrete solution. Such integration at the wafer-level also enables the incorporation of a rich feature set which minimizes the need for external amplification.
[0030] In the described embodiments, raw data refers to measurement outputs from the sensors which are not yet processed. Motion data refers to processed raw data. Processing may, for example, comprise applying a sensor fusion algorithm or applying any other algorithm. In the case of a sensor fusion algorithm, data from one or more sensors may be combined and/or processed to provide an orientation of the device. In the described embodiments, an MPU may include processors, memory, control logic and sensors among structures.
[0031] For various device operational characteristics, it may be beneficial to know how the device is presently being used or carried. For example, in a mobile telephone context, it may be beneficial for the operating system of the telephone to know how the user is utilizing the phone while walking. In particular, it may be beneficial for the operating system of the telephone to know whether the phone is presently in a user's pocket while the user is walking, whether the phone is being held in a user's hand in front of the user while the user is walking, whether the phone is being held in a hand at the user's side while the user is walking, etc. For example, when a phone is being carried in a user's pocket, the operating system may safely turn off various phone functionality (e.g., visual display functionality, positioning functionality, etc.) and may also, for example, turn up a volume of audio notifications. Also for example, when a phone is being carried by a user out in front of the user, the operating system may turn on or keep on the visual display and/or other functionality, and may also turn down a volume of audio notifications.
[0032] Accordingly, various aspects of this disclosure comprise processing one or more respective signal characteristics of one or more signals indicative of device orientation to determine a position of a device relative to a user (e.g., a user that is moving). In a non-limiting example, various aspects of this disclosure comprise analyzing, over time, one or more characteristics of a first signal indicative of the alignment of a first device axis with a first reference direction and one or more characteristics of a second signal indicative of the alignment of a second device axis with a second reference direction (e.g., where the first and second reference directions may be the same or different), and determining the position of the device based at least in part on such analysis. The position of the device may, for example, be determined in relation to the user of the device (e.g., in the user's pocket, in the user's hand at the user's side, in the user's hand held in front of the user, etc.). The discussion will now turn to discussing various aspects in view of the attached figures.
[0033] Turning first to Figure 1, such figure shows a block diagram of an example electronic device 100 comprising position classification capability, in accordance with various aspects of the present disclosure. As will be appreciated, the device 100 may be implemented as a device or apparatus, such as a handheld and/or wearable device that can be moved in space by a user, and its motion and/or orientation in space therefore sensed. For example, such a handheld and/or wearable device may comprise a mobile phone (e.g. , a cellular phone, a phone running on a local network, or any other telephone handset), wired telephone (e.g. , a phone attached by a wire and/or optical tether), personal digital assistant (PDA), pedometer, personal activity and/or health monitoring device, video game player, video game controller, navigation device, mobile internet device (MID), personal navigation device (PND), digital still camera, digital video camera, binoculars, telephoto lens, portable music, video, or media player, remote control, or other handheld device, headset, eyeglasses, or a combination of one or more of these devices.
[0034] In some embodiments, the device 100 may be a self-contained device that comprises its own display and/or other user output devices in addition to the user input devices as described below. However, in other embodiments, the device 100 may function in conjunction with another portable device or a non-portable device such as a desktop computer, electronic tabletop device, server computer, smart phone, etc., which can communicate with the device 100, e.g. , via network connections. The device 100 may, for example, be capable of communicating via a wired connection using any type of wire-based communication protocol (e.g., serial transmissions, parallel transmissions, packet-based data communications), wireless connection (e.g. , electromagnetic radiation, infrared radiation or other wireless technology), or a combination of one or more wired connections and one or more wireless connections.
[0035] As shown, the example device 100 comprises an MPU 120, application (or host) processor 112, application (or host) memory 114, and may comprise one or more sensors, such as external sensor(s) 116. The application processor 112 may, for example, be configured to perform the various computations and operations involved with the general function of the device 100 (e.g., running applications, performing operating system functions, performing power management functionality, controlling user interface functionality for the device 100, etc.). The application processor 112 may, for example, be coupled to MPU 120 through a communication interface 118, which may be any suitable bus or interface, such as a peripheral component interconnect express (PCIe) bus, a universal serial bus (USB), a universal asynchronous receiver/transmitter (UART) serial bus, a suitable advanced microcontroller bus architecture (AMBA) interface, an Inter-Integrated Circuit (I2C) bus, a serial digital input output (SDIO) bus, or other equivalent. The application memory 114 may, for example, comprise programs, drivers or other data that utilize information provided by the MPU 120. Details regarding example suitable configurations of the application (or host) processor 112 and MPU 120 may be found in co-pending, commonly owned U.S. Patent Application Serial No. 12/106,921, filed April 21, 2008, which is hereby incorporated herein by reference in its entirety.
[0036] In this example embodiment, the MPU 120 is shown to comprise a sensor processor 130, internal memory 140 and one or more internal sensors 150. The internal sensors 150 comprise a gyroscope 151, an accelerometer 152, a compass 153 (for example a magnetometer), a pressure sensor 154, a microphone 155, and a proximity sensor 156. Though not shown, the internal sensors 150 may comprise any of a variety of sensors, for example, a temperature sensor, light sensor, moisture sensor, biometric sensor, etc. All or some of the internal sensors 150 may, for example, be implemented as MEMS-based motion sensors, including inertial sensors such as a gyroscope or accelerometer, or an electromagnetic sensor such as a Hall effect or Lorentz field magnetometer. As desired, one or more of the internal sensors 150 may be configured to provide raw data output measured along three orthogonal axes or any equivalent structure. The internal memory 140 may store algorithms, routines or other instructions for processing data output by one or more of the internal sensors 150, including the position classification module 142 and sensor fusion module 144, as described in more detail herein. If provided, external sensor(s) 116 may comprise one or more sensors, such as accelerometers, gyroscopes, magnetometers, pressure sensors, microphones, proximity sensors, ambient light sensors, biometric sensors, temperature sensors, and moisture sensors, among other sensors. As used herein, an internal sensor generally refers to a sensor implemented, for example using MEMS techniques, for integration with the MPU 120 into a single chip. Similarly, an external sensor as used herein generally refers to a sensor carried on-board the device 100 that is not integrated into the MPU 120.
[0037] Even though various embodiments may be described herein in the context of internal sensors implemented in the MPU 120, these techniques may be applied to a non-integrated sensor, such as an external sensor 116, and likewise position classification module 142 may be implemented using instructions stored in any available memory resource, such as for example the application memory 114, and may be executed using any available processor, such as for example the application processor 112. Still further, the functionality performed by the position classification module 142 may be implemented using any combination of hardware, firmware and software.
[0038] As will be appreciated, the application (or host) processor 112 and/or sensor processor 130 may be one or more microprocessors, central processing units (CPUs), microcontrollers or other processors, which run software programs for the device 100 and/or for other applications related to the functionality of the device 100. For example, different software application programs such as menu navigation software, games, camera function control, navigation software, and telephone, or a wide variety of other software and functional interfaces, can be provided. In some embodiments, multiple different applications can be provided on a single device 100, and in some of those embodiments, multiple applications can run simultaneously on the device 100. Multiple layers of software can, for example, be provided on a computer readable medium such as electronic memory or other storage medium such as hard disk, optical disk, flash drive, etc., for use with the application processor 112 and sensor processor 130. For example, an operating system layer can be provided for the device 100 to control and manage system resources in real time, enable functions of application software and other layers, and interface application programs with other software and functions of the device 100. In various example embodiments, one or more motion algorithm layers may provide motion algorithms for lower-level processing of raw sensor data provided from internal or external sensors. Further, a sensor device driver layer may provide a software interface to the hardware sensors of the device 100. Some or all of these layers can be provided in the application memory 114 for access by the application processor 112, in internal memory 140 for access by the sensor processor 130, or in any other suitable architecture (e.g., including distributed architectures).
[0039] In some example embodiments, it will be recognized that the example architecture depicted in Figure 1 may provide for position classification to be performed using the MPU 120 and might not require involvement of the application (or host) processor 112 and/or application memory 114. Such example embodiments may, for example, be implemented with one or more internal sensors 150 on a single chip and/or a multi-chip. Moreover, as will be described below, the position classification techniques may be implemented using computationally efficient algorithms to reduce processing overhead and power consumption.
[0040] As mentioned herein, a position classification module may be implemented by a processor (e.g., the sensor processor 130) operating in accordance with software instructions (e.g., the position classification module 142 stored in the internal memory 140), or by a pure hardware solution. The discussion of Figures 2-13 will provide further example details of at least the operation of the sensor fusion software module 144 and/or the position classification module 142 (e.g., when executed by a processor such as the sensor processor 130). It should be understood that any or all of the functional modules discussed herein may be implemented in a pure hardware implementation and/or by a processor operating in accordance with software instructions. It should also be understood that any or all software instructions may be stored in a non-transitory computer-readable medium.
[0041] Various aspects of this disclosure comprise determining or classifying a position of a device by, at least in part, analyzing transformation coefficients that are generally utilized to transform a position, vector, velocity, acceleration, etc., from a first coordinate system to a second coordinate system. Such a transformation may be generally performed by multiplying an input vector expressed in the first coordinate system by a transformation matrix. General transformation matrices comprise translation coefficients and rotation coefficients. For illustrative clarity, the discussion herein will focus on rotation coefficients. Note, however, that the scope of various aspects of this disclosure is not limited to rotation coefficients.
[0042] The rotational aspects of a transformation matrix may, for example, be expressed as a rotation matrix. In general, a rotation matrix (e.g., a direction cosine matrix or DCM) may be utilized to rotationally transform coordinates (e.g. , of a vector, position, velocity, acceleration, force, etc.) expressed in a first coordinate system to coordinates expressed in a second coordinate system. For example, a direction cosine matrix R may look like:
        | R11  R12  R13 |
    R = | R21  R22  R23 |
        | R31  R32  R33 |
[0043] In an example scenario in which a first vector Ab expresses coordinates of a point in a body (or device) coordinate system and a second vector Aw expresses coordinates of a point in a world (or inertial) coordinate system, the following equation may be used to determine Aw from Ab:
Aw = R Ab
[0044] The third row of the rotation matrix R may, for example, be generally concerned with determining the z-axis component of the world coordinate system, Aw,z, as a function of the matrix coefficients R31, R32, and R33 multiplied by respective Ab,x, Ab,y, and Ab,z values of the body coordinate vector Ab, which are then summed.
[0045] For example R33, which may also be referred to herein as gz, is the extent to which the z axis of the body coordinate system is aligned with the z axis of the world coordinate system. In a mobile telephone scenario, the z axis in the body coordinate system may, for example, be defined as extending orthogonally from the face of the telephone. The z axis of the world coordinate system may, for example, be aligned with gravity and point upward from the ground. A value of R33 = 1 means that the z axis in the body coordinate system is perfectly aligned with the z axis of the world coordinate system, and thus there is a 1-to-1 mapping.
[0046] Also for example R32, which may also be referred to herein as gy, is the extent to which the y axis of the body coordinate system is aligned with the z axis of the world coordinate system. In a telephone scenario, the y axis may, for example, be defined as extending out the top of the phone along the longitudinal axis of the phone. A value of R32 = 1 means that the y axis in the body coordinate system is perfectly aligned with the z axis of the world coordinate system, and thus there is a 1-to-1 mapping.
[0047] Additionally for example R31, which may also be referred to herein as gx, is the extent to which the x axis of the body coordinate system is aligned with the z axis of the world coordinate system. In a telephone scenario, the x axis may, for example, be defined as extending out the right side of the phone when looking at the face of the phone along the lateral axis of the phone. A value of R31 = 1 means that the x axis in the body coordinate system is perfectly aligned with the z axis of the world coordinate system, and thus there is a 1-to-1 mapping.
[0048] As a last example, if R31 = 0, R32 = 1/sqrt(2), and R33 = 1/sqrt(2), then:
1) The x axis of the body coordinate system is orthogonal to the z axis of the world coordinate system;
2) The y axis of the body coordinate system is 45° relative to the z axis of the world coordinate system; and
3) The z axis of the body coordinate system is 45° relative to the z axis of the world coordinate system.
[0049] The coefficients of the rotation matrix R express an instantaneous rotational relationship, but as a device moves, the coefficients change over time. In such a scenario, the matrix R coefficients may, for example, be updated on a periodic basis at an update rate that is implementation dependent (e.g., at a sensor update rate, at a user step rate, at a 100 Hz, 10 Hz, 1 Hz, 51 Hz, 200 Hz, 500 Hz, or 1000 Hz rate, etc.) and/or situation dependent. Each of the matrix R coefficients may thus be viewed and/or processed individually and/or in aggregate as a discrete time signal.
[0050] A rotation matrix R may, for example, be output from one or more system modules that integrate information from various sensors (e.g., acceleration sensors, gyroscopes, compasses, pressure sensors, etc.) to ascertain the present orientation of a device. Such a rotation matrix R may, for example in various implementations, be derived from quaternion processing.
[0051] In an example implementation, also discussed elsewhere herein, a Direction Cosine Matrix (DCM) module may receive orientation information as input, for example quaternion information and/or Euler angle information from a sensor fusion module, and process such input orientation information to determine the rotation matrix R. In an example implementation, the DCM module may receive quaternion information that is updated at a sensor rate (or sensor sample rate), for example 51 Hz or a different rate less than or greater than 51 Hz. In an example implementation, the DCM module may, however, determine the rotation matrix R at a rate that is equal to a user step rate, a multiple of the user step rate, a fraction of the user step rate, some other function of the user step rate, etc. In other words, the DCM module may determine the rotation matrix R at a rate that is less than the update rate of the information (e.g. , orientation information) input to the DCM module. For example, in an example implementation, the DCM module may determine the rotation matrix R only when a step has been detected and/or suspected. Thus, when no stepping is detected, no updating of the rotation matrix R occurs. Though the DCM module is not specifically illustrated in the attached figures, the R coefficients of the rotation matrix R may generally be determined by a DCM module. The DCM module may, for example, be a component of the Attitude Determination Modules discussed herein.
[0052] Analyzing the values of the rotation matrix R coefficients, for example as they change over time and/or instantaneously, provides insight into how a user device is positioned, for example providing insight into how a user is utilizing the device. Such analysis may, for example, result in a determined device position (e.g. , in relation to the user thereof).
[0053] For illustrative simplicity, the following discussion will address analyzing various rotation matrix R coefficients over time, for example discrete time signals, to determine device position, for example to classify the device position by one of a finite set of position classifications. The scope of this disclosure is not, however, limited to a particular number of coefficients being analyzed and/or the manner in which a discrete time signal associated with a particular one or more coefficients is analyzed.
[0054] Additionally for example, though the following discussion will generally address analyzing rotation matrix coefficients, other signals indicative of orientation may similarly be analyzed, for example, raw sensor data, motion data, sensor data transformed to the world coordinate system, etc. The analysis of rotational matrix coefficients generally disclosed herein is presented for illustrative convenience and clarity, but the scope of various aspects of this disclosure should not be limited thereby.
[0055] Various aspects of this disclosure refer to a body coordinate system and a world coordinate system. Unless identified more specifically, references to the body coordinate system include a device coordinate system, a component or package coordinate system, a chip coordinate system, a sensor coordinate system, etc. The world coordinate system may also be referred to herein as an inertial coordinate system.
[0056] Empirical evidence has shown a correlation between various signal characteristics (e.g., rotation matrix coefficients over time) and device position. For example, through observation of mobile telephone utilization and rotation matrix coefficient behavior over time, it has been determined that the rotation matrix coefficient R32, which as explained above is indicative of the degree of alignment between the y-axis of the body coordinate system and the z-axis of the world coordinate system, includes information that is highly indicative of device position, for example as a user moves with the device. Similarly, the coefficient R33 has been found to include useful information, along with R31. The following discussion focuses on analysis of the R32 and R33 coefficients, but the scope of this disclosure is not limited to the analysis of such coefficients.
[0057] Various aspects of this disclosure will now be presented by discussion of additional example systems. It should be noted that the systems herein are presented for illustrative clarity and convenience, and the scope of this disclosure should not be limited by any particular characteristics of the example(s) presented herein.
[0058] Turning now to Figure 2A, such figure shows an example signal plot, in accordance with various aspects of this disclosure. For example, Figure 2A shows an example function of y1 = R32(t) as it may look when a device (e.g., a mobile telephone) is held in a hand freely swinging at the user's side while the user is walking. In practice, the function will generally be a discrete time function Y1 = R32(nT), but the general idea remains the same. The scope of the present disclosure should not be limited by characteristics of analog and/or digital implementations. The fundamental frequency of y1 may, for example, be generally aligned with the cadence of the user, with each period or complete cycle corresponding to two user steps. As shown, y1 is generally smooth, for example including a relatively small amount of significant higher frequency components (e.g., signal energy at frequencies higher than the fundamental frequency). Contextually, this may for example correspond to a relatively smooth arm swing by a user that is walking.
[0059] Turning now to Figure 2B, such figure shows an example signal plot, in accordance with various aspects of this disclosure. For example, Figure 2B shows a function of y2 = R32(t) as it may look when a device (e.g., a mobile telephone) is positioned in the user's pocket while the user is walking. In practice, the function will generally be a discrete time function Y2 = R32(nT), but the general idea remains the same. The fundamental frequency of y2 may, for example, be generally aligned with the cadence of the user, with each period or complete cycle corresponding to two user steps. As shown, y2 has similar energy at the fundamental frequency to y1 of Figure 2A. Also, as shown, y2 has more fluctuation than y1 of Figure 2A. This may, for example, be caused by relatively higher amounts of jostling for a device positioned in a pocket compared to a device held in a swinging hand. Signal y2 may, for example, be viewed as having a substantial amount of energy at frequencies higher than the fundamental frequency.
[0060] Turning now to Figure 2C, such figure shows an example signal plot, in accordance with various aspects of this disclosure. For example, Figure 2C shows a function of y3 = R32(t) as it may look when a user is holding a device (e.g., a mobile telephone) out in front of the user's body, for example while interacting with a display of the device. In practice, the function will generally be a discrete time function Y3 = R32(nT), but the general idea remains the same. The fundamental frequency of y3 is generally aligned with the cadence of the user, with each period or complete cycle corresponding to two user steps, but may also in various use scenarios have a substantial frequency component at the single-step cadence (e.g., depending on how the device is being held by the user). As shown, y3 has substantially less energy at the fundamental frequency than y2 shown in Figure 2B or y1 shown in Figure 2A. For example, the amount of device motion in the held-in-front scenario may be less than the respective amounts of motion in the hand-swinging and/or in-pocket scenarios. Also, as shown, y3 has more fluctuation than y1, and for example a similar amount of fluctuation to y2. This may, for example, be caused by relatively higher amounts of jostling for a device being held in front of the user than for a device held in a swinging hand. Signal y3 may, for example, be viewed as having a substantial amount of energy at frequencies higher than the fundamental frequency.
[0061] Turning next to Figure 3, such figure shows a chart 300 of two signal processing results, which may also be referred to herein as features or functions, in accordance with various aspects of the present disclosure. The chart 300 generally shows empirical results, for example regions of scatter plot results obtained through experimentation. At a high level, the F1 feature shown on the horizontal axis of the chart 300 is a reflection of the presence of substantial higher frequency content (e.g., substantial frequency content at frequencies higher than user stepping frequencies, for example higher than the primary frequency of the plots shown in Figures 2A-2C) in a rotation matrix signal (e.g., the R32 signal). For example, it is seen from Figure 3 that a mobile phone (or other device) positioned in a user's pocket while the user is walking (e.g., shown at region 310) or held by the user in front of the user while the user is walking (e.g., shown at region 320) experiences more substantial higher frequency movement (e.g., jostling) than a phone that is held in the user's hand during typical walking with the user's hand swinging naturally (e.g., shown at region 330). From such empirical results, a threshold T1 can be set, below which a phone can be determined to be held in a swinging hand, and above which the phone may be determined to be either in the pocket of the user or held by the user in front of the user.
[0062] At a high level, the F2 feature shown on the vertical axis of the chart 300 is a reflection of signal amplitude in one or more rotation matrix signals (for example, the R32 signal, the R33 signal, the combined amplitude of the R32 and R33 signals, etc.). For example, it is seen from Figure 3 that a mobile phone (or other device) positioned in a user's pocket while the user is walking (e.g., shown at region 310) or held in the user's hand during typical walking with the user's hand swinging naturally (e.g., shown at region 330) experiences higher amounts of movement than a phone that the user is holding in front of the user while walking (e.g., shown at region 320). From such empirical results, a threshold T2 can be set, below which a phone may be determined to be held by the user in front of the user while the user is walking, and above which the phone may be determined to be either in the user's pocket or held in the swinging hand of the user.
[0063] Though Figure 3 only shows three position classifications, more may be added without departing from the scope of this disclosure. For example, one or more additional regions of the chart 300 may empirically be found to correspond to a device positioned in a fanny pack, carried in an arm or leg band, in a stocking, in a pair of glasses, in a backpack, etc.
[0064] Additionally note that there may be confidence regions defined on the chart 300 that are associated with a degree of certainty that a device falls into one of the categories. For example, an F1/F2 result that falls within a particular distance of either T1 and/or T2 may be associated with less certainty than a result that is at least a particular distance away from such thresholds. For example, such certainty thresholds may be offset from the T1 and/or T2 values by an absolute value (e.g., T1 +/- C0), by a relative value (e.g., T1 +/- C1*T1), etc. Graphically, such thresholds may be viewed as horizontal lines above and below the T2 line and vertical lines to the left and right of the T1 line of Figure 3.
[0065] Turning next to Figure 4, such figure shows an example processing system 400 for determining (or classifying) a position of a device, in accordance with various aspects of the present disclosure. Any or all aspects of the example processing system 400 may, for example, share any or all characteristics with the example device 100 illustrated in Figure 1 and discussed herein (for example, implemented by a position classification module). As mentioned herein, the F1 feature is a reflection of higher order frequency content in a signal. In Figure 4, the Critical Point and/or Inflexion Point Estimation (CPIPE) module 410 processes the input signal (e.g., the R32 rotation matrix coefficient as may, for example, be received from a DCM module) over time and outputs the signal F1, which is an indication of the amount of higher order frequency content in the input signal. The CPIPE module 410 may perform such processing in any of a variety of manners, non-limiting examples of which may comprise determining a number of maxima and/or minima of a signal over a period of time (for example, critical points where the first derivative is zero), determining a number of inflexion points over a period of time (for example, points where the curvature changes between convex and concave or at which the second derivative of the signal changes sign), etc. In such a manner, the number of critical and/or inflexion points during a time window may be indicative of an amount of higher order frequency content. Note that the CPIPE module 410 may, for example, comprise a low pass filter to filter out noise.
[0066] As mentioned herein, the F2 feature is a reflection of signal amplitude. In the example shown in Figure 4, processing the R32 signal, a first High-Pass Filter (HPF) module 420 with a low cut-off frequency (e.g., 0.1 Hz) is used to remove the DC bias of the R32 signal (e.g., that which reflects steady-state orientation).
[0067] After being processed by the first HPF module 420, the signal is provided to a first Window module 422 that windows the signal. The window may, for example, comprise static sequential blocks of time, rolling blocks of time, etc. For example, the window may be two seconds in duration, but may also be more or less than two seconds. The duration of the window may also be adjustable during system operation. Note that there are many ways to window a signal. The scope of this disclosure is not limited by characteristics of any particular manner of windowing a signal.
[0068] After being processed by the first Window module 422, the signal is provided to a first ABS module 424. The first ABS module 424 may, for example, determine and output a signal indicative of the amplitude of the signal (e.g. , exactly equal to the amplitude, indicative of the amplitude of the signal scaled or squared, etc.). Note that there are many ways to determine an amplitude of a signal. The scope of this disclosure is not limited by characteristics of any particular manner of determining an amplitude of a signal.
[0069] After being processed by the first ABS module 424, the signal is provided to a first MAX module 426. The first MAX module 426 may, for example, identify a maximum magnitude of the signal (e.g. , over the window). Note that there are many ways to determine a maximum amplitude of a signal. The scope of this disclosure is not limited by characteristics of any particular manner of determining a maximum amplitude of a signal.
[0070] Similarly, the signal R33 is processed by a second HPF module 430, a second Window module 432, a second ABS module 434, and a second MAX module 436 to identify its maximum amplitude, for example during a time window. Such "second" modules (e.g., 430, 432, 434, and 436) may share any or all characteristics with the "first" modules (e.g., 420, 422, 424, and 426) discussed herein. In an example implementation, the "first" and "second" operations may be performed by the same respective modules. For example, an HPF module may process both R32 and R33 (e.g., in a time-multiplexed manner). The "first" and "second" operations may also, for example, be performed by separate distinct modules, for example providing enhanced parallelism for processing.
[0071] Since empirical studies have shown that observing the amplitudes of multiple signals may be beneficial, the system 400 illustrated in Figure 4 adds the maximum amplitudes of the R32 and R33 signals with a Summer module 440. The sum is then low-pass filtered with an LPF module 445 (e.g. , with a cutoff frequency of 0.8 Hz or other value greater or less than 0.8 Hz), for example to reduce the instantaneous effects of anomalies, to generate the F2 feature. As discussed above, the F2 feature may, for example, comprise an indication of movement magnitude.
[0072] Lastly, the Position Determination module 450 analyzes the F1 and F2 features (or signals representative thereof), for example comparing such signals with the thresholds T1 and T2 discussed with regard to the chart of Figure 3, to determine (or classify) the position of the device. For example, the Position Determination module 450 may output a signal indicating whether a user is walking (or otherwise moving) with the user device (e.g., a mobile phone) in the user's swinging hand, in the user's pocket, being held in front of the user, etc. The Position Determination module 450 may, for example, communicate the output signal to an operating system, host processor, etc., of a device incorporating the system 400. As discussed previously, the Position Determination module 450 may also determine a confidence level associated with the position classification (e.g., by comparing the features with respective confidence thresholds on either side of the thresholds T1 and T2).
[0073] The empirical analysis discussed above included particular device (or phone) use scenarios. A user may also, for example, hold or carry a device in a non-typical manner (e.g., a non-typical orientation in the hand, sideways versus upright, sideways or angled in a pocket instead of upright, etc.). Depending on the orientation and/or general movement of the device, particular signals may have relatively more, or more reliable, information than other signals. As an example, depending on the usage scenario, the R31 signal may have more useful characteristics (e.g., amplitude, higher frequency energy, and/or noise characteristics) than the R32 signal. In such a scenario, it may be beneficial to have the system 400 flexibly select signals to analyze, for example to select one or more particular rotation matrix coefficients to analyze. Figure 5 shows an example system 500 like that of Figure 4, but with a Rab Selection module 560 to select the rotation matrix coefficients to be analyzed by the system 500.
[0074] Turning next to Figure 5, such figure shows an example processing system 500 for determining a position of a device, for example incorporating signal selection, in accordance with various aspects of the present disclosure. The system 500 may, for example, share any or all characteristics with the systems 100 and 400 discussed with regard to Figures 1 and 4, and/or with any systems discussed herein. For example, the HPF modules 520 and 530, Window modules 522 and 532, ABS modules 524 and 534, MAX modules 526 and 536, Summer module 541, LPF module 545, CPIPE module 510, and/or Position Determination module 550 may share any or all characteristics with other similarly-named modules discussed herein. Note that also, in various implementations, such modules may have different respective operating parameters.
[0075] The Rab Selection module 560 may, for example, receive a plurality (e.g., some or all) of rotation matrix coefficients as input. An example source of such coefficients is shown as an Attitude Determination module 562. As discussed herein, the Attitude Determination module 562 may comprise a DCM module that forms a rotation matrix based, for example, on various sensor signals. As mentioned herein, signal selection may be based at least in part on characteristics of the selected signals themselves (e.g., amplitude or energy levels, frequency content, noise content, etc.), or on external sources of information (e.g., information from the operating system regarding how the device is currently being utilized, information from non-inertial sensors like light sensors, microphones, thermometers, etc.). For example, the Rab Selection module 560 may select for analysis the Ra1b1 signal as the signal of R32 or R31 with the highest energy, or may select both signals. Focusing the signal analysis on dominant signals may, for example, reduce instances of an incorrect position determination.
[0076] Also for example, the Rab Selection module 560 may select for analysis the Ra2b2 signal as the signal of R33 or another signal (e.g., regarding rotation matrix coefficients and/or other parameters) with the highest energy. Again, focusing the signal analysis on dominant signals may, for example, reduce instances of an incorrect position determination. Though only two signals are shown analyzed by the system 500, note that any number of signals may be analyzed, for example if found to be significant by the Rab Selection module 560.
[0077] As mentioned herein, the system 500 may classify the device position by processing rotation matrix coefficients. Information from any of a variety of sensors and/or the operating system may be analyzed instead of or in addition to the rotation matrix coefficients. Figure 6 shows an example system 600 like that of Figure 5, but with a Non-inertial Sensor Data module 670 added as a source of information gathered from non-inertial sensors.
[0078] Turning next to Figure 6, such figure shows an example processing system 600 for determining a position of a device, for example incorporating non-inertial sensor data, in accordance with various aspects of the present disclosure. The system 600 may, for example, share any or all characteristics with the systems 100, 400 and 500 discussed with regard to Figures 1, 4 and 5, and/or with any systems discussed herein. For example, the HPF modules 620 and 630, Window modules 622 and 632, ABS modules 624 and 634, MAX modules 626 and 636, Summer module 641, LPF module 645, CPIPE module 610, Position Determination module 650, Attitude Determination module 662, and/or Rab Selection module 660 may share any or all characteristics with other similarly-named modules discussed herein. Note also that, in various implementations, such modules may have different respective operating parameters.
[0079] The Non-inertial Sensor Data module 670 may, for example, receive and/or condition signals from one or more of a variety of non-inertial sensors. Example non-inertial sensors may, for example, comprise light sensors, microphones, pressure sensors, biometric sensors, temperature sensors, moisture sensors, clocks, compasses, magnetometers, etc. The Position Determination module 650 may use this additional information to classify the device position. For example, a light sensor may detect relatively low levels of light when in a user's pocket and/or different frequency content based on whether it is swinging or being held mostly stationary. Also for example, a sound sensor may hear different sounds and/or sound characteristics when stored in a user's pocket, when held in the user's hand, when held with two hands, etc. For example, a pocket location will detect fabric noise and/or muffled ambient noise, while a hand-held position will hear less fabric noise and brighter ambient noise. Further for example, a biometric sensor may have little or no signal in a pocket, a medium-quality signal when held in a single hand, a strong signal when held with both hands, etc. Additionally for example, a temperature sensor may detect elevated temperatures when being held in a hand and/or when being exposed to sunlight, as opposed to being carried in a pocket.
[0080] In such scenarios, the Position Determination module 650 may utilize information from such sensors (or from the device O/S, or other source) to augment and/or replace the analysis performed based on rotational matrix coefficients. Such augmentation may be particularly beneficial when a level of certainty in a classification based only on rotation matrix coefficients is relatively low. For example, when relatively uncertain whether a phone is in a pocket or being held by a user, a temperature increase due to the phone being held in the hand and/or exposed to sunlight would support a "hand-held" classification decision. In an example scenario in which the analysis of various sensor signals result in a solution in which the system 600 (e.g. , the Position Determination module 650) is confident, other sensors may be shut down, placed into a power-save mode, etc.
[0081] As discussed herein, high-pass filters may be utilized to filter out steady state (or DC) bias from the signals being analyzed. In some instances, the bias information, which may be indicative of steady state device orientation, may be beneficial in determining device position. For example, in a scenario in which a phone held in front of a user is generally held at an average angle of 45°, information of such average orientation may assist the Position Determination module 650 in determining that a phone is being held in front of the user. Similarly, in a scenario in which a phone held in the user's pocket is vertical on-average, information of such average vertical orientation may assist the Position Determination module 650 in determining that the phone is presently located in the user's pocket. Similarly, in a scenario in which a phone held in the user's hand at the user's side is horizontal on-average, information of such average horizontal orientation may assist the Position Determination module 650 in determining that the phone is presently located in the user's hand at the user's side. Figure 7 shows a system 700 that is generally analogous to the Figure 6 system, but with a second Low-Pass Filter (LPF) module 775 coupling one or more coefficients of the rotation matrix R to the Position Determination module 750.
[0082] In general, Figure 7 shows an example processing system for determining a position of a device, for example incorporating orientation, in accordance with various aspects of the present disclosure. The system 700 may, for example, share any or all characteristics with the systems 100, 400, 500, and 600 discussed with regard to Figures 1, 4, 5, and 6, and/or with any systems discussed herein. For example, the HPF modules 720 and 730, Window modules 722 and 732, ABS modules 724 and 734, MAX modules 726 and 736, Summer module 741, LPF module 745, CPIPE module 710, Position Determination module 750, Attitude Determination module 762, Rab Selection module 760, and Non-inertial Sensor Data module 770 may share any or all characteristics with other similarly-named modules discussed herein. Note also that, in various implementations, such modules may have different respective operating parameters.
[0083] The second LPF module 775 low-pass filters one or more coefficients of the rotation matrix R to, for example, provide an indication of steady-state orientation to the Position Determination module 750.
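A minimal sketch of such a stage follows, assuming a single-pole IIR low-pass filter; the filter structure, the smoothing coefficient ALPHA, and the class name are illustrative assumptions, not details of module 775.

```python
import math

# Hypothetical single-pole IIR low-pass filter over rotation matrix
# coefficients, approximating the role of the second LPF module 775.
ALPHA = 0.01  # assumed smoothing coefficient; small alpha -> long average


class SteadyStateOrientation:
    def __init__(self):
        self.r32_lp = 0.0
        self.r33_lp = 1.0  # flat-and-level initial guess

    def update(self, r32, r33):
        """Feed one sample of R32/R33; return the average tilt in degrees."""
        self.r32_lp += ALPHA * (r32 - self.r32_lp)
        self.r33_lp += ALPHA * (r33 - self.r33_lp)
        # Under the Euler convention sketched above, this recovers the
        # average roll angle, e.g. roughly 45 degrees for a phone held
        # in front of the user.
        return math.degrees(math.atan2(self.r32_lp, self.r33_lp))
```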
[0084] The previous discussion presented a detailed description of various systems. The scope of various aspects of this disclosure is not, however, limited to the details discussed previously. For example, Figures 8-13 present various high-level system diagrams illustrating various general aspects of this disclosure, for example both structural and functional aspects.
[0085] Referring to Figure 8, such figure shows a system 800 in which different characteristics of a same signal are analyzed to determine (or classify) device position. For example, input signal S1 may be a discrete time signal (or series) of rotation matrix coefficient values (e.g., for coefficient R32), but need not be. The system 800 may, for example at the First Characteristic Analysis module 811, analyze a first characteristic of the input signal S1 (e.g., the presence or absence of substantial higher-frequency components) and output the results of such analysis at F1; and the system 800 may also, for example at the Second Characteristic Analysis module 821, analyze a second characteristic of the input signal S1 (e.g., maximum amplitude value in a time window) and output the results of such analysis at F2. The Position Determination module 850 may, as discussed previously, then analyze the F1 and F2 signals to classify device position, for example in a manner generally analogous to the manners discussed elsewhere herein.
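As a sketch of these two analyses, the following uses the concrete measures recited in claims 3 and 5 below: counting critical/inflection points as a proxy for frequency content above the fundamental, and taking the maximum amplitude over a window. The sample values and window length are illustrative assumptions.

```python
def high_frequency_feature(window):
    """F1: count sign changes of the first difference (critical points),
    a proxy for content above the fundamental frequency (cf. claim 3)."""
    diffs = [b - a for a, b in zip(window, window[1:])]
    count = 0
    for d0, d1 in zip(diffs, diffs[1:]):
        if d0 * d1 < 0:  # slope changed sign -> local maximum/minimum
            count += 1
    return count


def max_amplitude_feature(window):
    """F2: maximum absolute amplitude over the window (cf. claim 5)."""
    return max(abs(x) for x in window)


# Example on a window of R32 samples (values are illustrative only):
r32_window = [0.02, 0.35, -0.30, 0.33, -0.28, 0.31, -0.25, 0.05]
f1 = high_frequency_feature(r32_window)  # many critical points
f2 = max_amplitude_feature(r32_window)   # large rotational swings
```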
[0086] Referring next to Figure 9, such figure shows a system 900 in which same or different respective characteristics of different respective signals are analyzed to determine (or classify) device position. For example, input signal S1 may be a discrete time signal (or series) of rotation matrix coefficient values (e.g., for coefficient R32), but need not be. Similarly, input signal S2 may be a discrete time signal (or series) of rotation matrix coefficient values (e.g., for coefficient R33), but need not be. For example, both input signals S1 and S2 may be related to inertial sensors. Also for example, one of such input signals may be related to inertial sensors and the other related to a non-inertial sensor. Additionally for example, both of such input signals may be related to non-inertial sensors. Input signal S1 may, for example, be analyzed by the First Signal Analysis module 912 for one or more of a variety of signal characteristics, various non-limiting examples of which are presented herein (e.g., higher-frequency content, amplitude, bias, etc.). Input signal S2 may, for example, be analyzed by the Second Signal Analysis module 922 for one or more of a variety of signal characteristics, various non-limiting examples of which are presented herein (e.g., higher-frequency content, amplitude, bias, etc.). The Position Determination module 950 may then process the results of such signal analysis to determine or classify the device position.
[0087] Referring now to Figure 10, such figure shows a system 1000 in which a first characteristic of a first signal is analyzed, a second characteristic of the first signal and of a second signal is analyzed, and the results of such analyses are processed to classify device position. For example, input signal S1 may be a discrete time signal (or series) of rotation matrix coefficient values (e.g., for coefficient R32), but need not be. The system 1000 may, for example at the First Signal Analysis Module 1013, analyze a first characteristic of the input signal S1 (e.g., the presence or absence of substantial higher-frequency components) and output the results of such analysis at F1. Also for example, input signal S2 may be a discrete time signal (or series) of rotation matrix coefficient values (e.g., for coefficient R33), but need not be. The system 1000 may also, for example at the Second Signal Analysis Module 1023, analyze a second characteristic of the input signals S1 and S2 (e.g., maximum combined amplitude value in a time window) and output the results of such analysis at F2. The Position Determination module 1050 may, as discussed previously, then analyze the F1 and F2 signals to classify device position, for example in a manner generally analogous to the manners discussed elsewhere herein.
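One possible reading of the "maximum combined amplitude" example is sketched below with a root-sum-square combination; the combination rule is an assumption, as the disclosure does not specify one.

```python
import math

def max_combined_amplitude(s1_window, s2_window):
    """F2: maximum combined amplitude of two coefficient streams over a
    time window. The root-sum-square combination is an assumed example."""
    return max(math.hypot(a, b) for a, b in zip(s1_window, s2_window))
```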
[0088] Referring next to Figure 11, such figure shows a system 1100 similar to the system 1000 of Figure 10, but with the first characteristic (e.g., a frequency content characteristic) of the second signal S2 also being analyzed. For example, input signal S1 may be a discrete time signal (or series) of rotation matrix coefficient values (e.g., for coefficient R32), but need not be. The system 1100 may, for example at the First Signal Analysis Module 1114, analyze a first characteristic of the input signals S1 and S2 (e.g., the presence or absence of substantial higher-frequency components) and output the results of such analysis at F1. Also for example, input signal S2 may be a discrete time signal (or series) of rotation matrix coefficient values (e.g., for coefficient R33), but need not be. The system 1100 may also, for example at the Second Signal Analysis Module 1124, analyze a second characteristic of the input signals S1 and S2 (e.g., maximum combined amplitude value in a time window) and output the results of such analysis at F2. The Position Determination module 1150 may, as discussed previously, then analyze the F1 and F2 signals to classify device position, for example in a manner generally analogous to the manners discussed elsewhere herein.
[0089] Referring to Figure 12, as discussed herein, the signals to be analyzed may, for example at the Signal Selection module 1201, be selected (e.g., at startup, in real time as the device is used, etc.) for processing. For example, in a first use scenario a first set of n (e.g., one or more) signals may be analyzed, for example at the First Signal Analysis Module 1215, for a first signal characteristic (e.g., a frequency or spectral content characteristic), and a second set of m (e.g., one or more) signals may be analyzed, for example at the Second Signal Analysis Module 1225, for a second signal characteristic (e.g., an amplitude characteristic). A second use scenario may, for example, result in one or more different sets of signals being analyzed. For example, a general device orientation may change, causing the Signal Selection module 1201 to select different signals associated with different rotation directions, different inertial or non-inertial sensors, etc. The Position Determination module 1250 may, as discussed previously, then analyze the F1 and F2 signals to classify device position, for example in a manner generally analogous to the manners discussed elsewhere herein.
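A sketch of such scenario-driven selection follows; the scenario names and the particular coefficient choices in the table are hypothetical, not taken from the disclosure.

```python
# Hypothetical signal-selection table: which rotation matrix coefficients
# (or other signals) to analyze in each coarse use scenario.
SELECTION_TABLE = {
    "portrait": ("R32", "R33"),   # e.g., phone generally upright
    "landscape": ("R31", "R33"),  # different rotation directions dominate
}


def select_signals(scenario, available_signals):
    """Return the signals to route to the analysis modules.

    available_signals: dict mapping signal names to sample streams.
    """
    names = SELECTION_TABLE.get(scenario, ("R32", "R33"))
    return [available_signals[name] for name in names]
```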
[0090] Referring to Figure 13, such figure shows a system 1300 that is generally similar to the system 1200 shown in Figure 12, but with the addition of a Non-inertial Sensor Data module 1370 providing information directly to the Position Determination module 1350. Such a system configuration is not meant to exclude non-inertial sensor information from being selected and processed to obtain features F1 and F2. The signals to be analyzed may, for example at the Signal Selection module 1301, be selected (e.g., at startup, in real time as the device is used, etc.) for processing. For example, in a first use scenario a first set of n (e.g., one or more) signals may be analyzed, for example at the First Signal Analysis Module 1316, for a first signal characteristic (e.g., a frequency or spectral content characteristic), and a second set of m (e.g., one or more) signals may be analyzed, for example at the Second Signal Analysis Module 1326, for a second signal characteristic (e.g., an amplitude characteristic). A second use scenario may, for example, result in one or more different sets of signals being analyzed. For example, a general device orientation may change, causing the Signal Selection module 1301 to select different signals associated with different rotation directions, different inertial or non-inertial sensors, etc. The Position Determination module 1350 may, as discussed previously, then analyze the F1 and F2 signals, along with the data from the Non-inertial Sensor Data module 1370, to classify device position, for example in a manner generally analogous to the manners discussed elsewhere herein.
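Finally, a sketch of a rule-based position determination over the extracted features, restricted to the example position set recited in claim 9; every threshold and the light-sensor heuristic are illustrative assumptions only.

```python
def classify_position(f1, f2, light_level=None):
    """Map features to a position label (cf. the claim 9 position set).

    f1: higher-frequency content measure (e.g., critical-point count)
    f2: windowed maximum amplitude
    light_level: optional non-inertial datum (e.g., lux); all thresholds
    below are illustrative assumptions.
    """
    if f2 > 0.5 and f1 > 4:
        # Large, oscillatory rotation: arm swinging at the user's side.
        return "device held at side of user"
    if f2 > 0.5:
        # Large but lower-frequency rotation, consistent with pocket
        # motion, optionally corroborated by a dark light-sensor reading.
        if light_level is None or light_level < 10:
            return "device in pocket"
    return "device held in front of user"
```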
[0091] The systems illustrated in Figures 1 and 4-13 were presented to illustrate various aspects of the disclosure. Any of the systems presented herein may share any or all characteristics with any of the other systems presented herein. Additionally, it should be understood that the various modules were separated out for the purpose of illustrative clarity, and that the scope of various aspects of this disclosure should not be limited by arbitrary boundaries between modules. For example, any one or more of the modules may share hardware and/or software with any one or more other modules.
[0092] As discussed herein, any one or more of the modules and/or functions discussed herein may be implemented by a pure hardware solution or by a processor (e.g., an application or host processor, a sensor processor, etc.) executing software instructions. Similarly, other embodiments may comprise or provide a non-transitory computer readable medium and/or storage medium, and/or a non-transitory machine readable medium and/or storage medium, having stored thereon, a machine code and/or a computer program having at least one code section executable by a machine and/or a computer (or processor), thereby causing the machine and/or computer to perform the methods as described herein.
[0093] In summary, various aspects of the present disclosure provide a system and method for determining device position (e.g. , in relation to a user thereof). While the foregoing has been described with reference to certain aspects and embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the disclosure. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the disclosure without departing from its scope. Therefore, it is intended that the disclosure not be limited to the particular embodiment(s) disclosed, but that the disclosure will include all embodiments falling within the scope of the appended claims.

Claims

CLAIMS
What is claimed is:
1. A system for classifying position of a device, the system comprising:
at least one module operable to, at least:
receive a discrete time signal representation of a rotation matrix coefficient;
analyze a first characteristic of the discrete time signal;
analyze a second characteristic, different from the first characteristic, of the discrete time signal; and
classify the position of the device based, at least in part, on the analysis of the first characteristic and the analysis of the second characteristic.
2. The system of claim 1, wherein:
the discrete time signal is characterized by a fundamental frequency component; and
the first characteristic comprises frequency content of the discrete time signal at higher frequencies than the fundamental frequency component.
3. The system of claim 2, wherein the at least one module is operable to analyze the first characteristic of the discrete time signal by, at least in part, counting at least one of critical points and/or inflection points of the discrete time signal.
4. The system of claim 1, wherein the second characteristic comprises amplitude of the discrete time signal.
5. The system of claim 4, wherein the at least one module is operable to analyze the second characteristic of the discrete time signal by, at least in part, determining a maximum amplitude of the discrete time signal during a window.
6. The system of claim 1, wherein the at least one module is further operable to:
receive a second discrete time signal representation of a second rotation matrix coefficient;
analyze the second characteristic of the second discrete time signal; and
classify the position of the device based further, at least in part, on the analysis of the second characteristic of the second discrete time signal.
7. The system of claim 1, wherein the at least one module is operable to select the rotation matrix coefficient from a plurality of rotation matrix coefficients.
8. The system of claim 1, wherein the at least one module is operable to classify the position of the device based further, at least in part, on non-inertial sensor data.
9. The system of claim 1, wherein the at least one module is operable to classify the position of the device by at least in part selecting the position of the device from a set of positions, the set of positions comprising: device in pocket, device held in front of user, and device held at side of user.
10. The system of claim 1, wherein the rotation matrix coefficient comprises an R32 coefficient.
11. The system of claim 6, wherein:
the rotation matrix coefficient comprises an R32 coefficient; and
the second rotation matrix coefficient comprises an R33 coefficient.
12. A system for classifying position of a device, the system comprising:
at least one module operable to, at least:
analyze a first rotation matrix coefficient over time;
analyze a second rotation matrix coefficient over time; and
classify the position of the device based, at least in part, on the analysis of the first rotation matrix coefficient and the analysis of the second rotation matrix coefficient.
13. The system of claim 12, wherein the at least one module is operable to analyze the first rotation matrix coefficient over time by, at least in part, analyzing frequency content of the first rotation matrix coefficient over time at frequencies above a fundamental frequency of the first rotation matrix coefficient over time.
14. The system of claim 12, wherein the at least one module is operable to analyze the second rotation matrix coefficient over time by, at least in part, analyzing amplitude of the second rotation matrix coefficient over time.
15. The system of claim 12, wherein:
the at least one module is operable to analyze the first rotation matrix coefficient over time by, at least in part:
analyzing frequency content of the first rotation matrix coefficient over time at frequencies above a fundamental frequency of the first rotation matrix coefficient over time; and
analyzing amplitude of the first rotation matrix coefficient over time; and
the at least one module is operable to analyze the second rotation matrix coefficient over time by, at least in part, analyzing amplitude of the second rotation matrix coefficient over time.
16. The system of claim 12, wherein at least one of the first and second rotation matrix coefficients comprises an R32 coefficient.
17. The system of claim 12, wherein at least one of the first and second rotation matrix coefficients comprises an R33 coefficient.
18. A system for classifying position of a device, the system comprising:
at least one module operable to, at least:
receive a signal indicative of orientation of a device;
perform a first analysis of at least a first characteristic of the received signal;
perform a second analysis of at least a second characteristic, different from the first characteristic, of the received signal; and
classify the position of the device based, at least in part, on the first analysis and the second analysis.
19. The system of claim 18, wherein the received signal comprises a fundamental frequency component, and the first analysis comprises analyzing frequency content of the received signal at higher frequencies than the fundamental frequency.
20. The system of claim 18, wherein the second analysis comprises analyzing amplitude of the received signal.
21. The system of claim 18, wherein the at least one module is operable to:
receive a second signal indicative of orientation of the device;
perform a third analysis of at least the second characteristic of the received second signal; and
classify the position of the device based further, at least in part, on the third analysis.
22. The system of claim 18, wherein the received signal comprises a discrete time signal representation of a rotation matrix coefficient.
23. The system of claim 18, wherein the at least one module is operable to select the signal from a plurality of signals.
24. The system of claim 18, wherein the at least one module is operable to classify the position of the device based further, at least in part, on non-inertial sensor data.
25. The system of claim 18, wherein the at least one module is operable to classify the position of the device by at least in part selecting the position from a set of positions, the set of positions comprising: device in pocket, device held in front of user, and device held at side of user.
PCT/US2015/059854 2014-11-10 2015-11-10 System and method for device position classification WO2016077286A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/537,568 US20160131484A1 (en) 2008-04-21 2014-11-10 System and method for device position classification
US14/537,568 2014-11-10

Publications (1)

Publication Number Publication Date
WO2016077286A1 true WO2016077286A1 (en) 2016-05-19

Family

ID=55025315

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/059854 WO2016077286A1 (en) 2014-11-10 2015-11-10 System and method for device position classification

Country Status (1)

Country Link
WO (1) WO2016077286A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7104129B2 (en) 2004-02-02 2006-09-12 Invensense Inc. Vertically integrated MEMS structure with electronics in a hermetically sealed cavity
WO2012135726A1 (en) * 2011-03-31 2012-10-04 Qualcomm Incorporated Devices, methods, and apparatuses for inferring a position of a mobile device
EP2527791A1 (en) * 2011-05-25 2012-11-28 CSR Technology Holdings Inc. Hierarchical context detection method to determine location of a mobile device on a person's body
US20130046505A1 (en) * 2011-08-15 2013-02-21 Qualcomm Incorporated Methods and apparatuses for use in classifying a motion state of a mobile device
WO2014129166A1 (en) * 2013-02-22 2014-08-28 旭化成株式会社 Carry-state determination device and program
US20150358783A1 (en) * 2013-02-22 2015-12-10 Asahi Kasei Kabushiki Kaisha Hold state judging apparatus and computer readable medium
US10692108B1 (en) 2017-04-10 2020-06-23 BoardActive Corporation Platform for location and time based advertising

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019081048A1 (en) * 2017-10-27 2019-05-02 HELLA GmbH & Co. KGaA Method of driving a component of a vehicle, system, computer program product and computer-readable medium

Similar Documents

Publication Publication Date Title
US10534014B2 (en) System and method for drop detection
US10072956B2 (en) Systems and methods for detecting and handling a magnetic anomaly
US10514279B2 (en) System and method for MEMS sensor system synchronization
US10823555B2 (en) Trajectory estimation system
US20160051167A1 (en) System and method for activity classification
US20140244209A1 (en) Systems and Methods for Activity Recognition Training
US20160077166A1 (en) Systems and methods for orientation prediction
US20170191831A1 (en) Systems and methods for synthetic sensor signal generation
US10652696B2 (en) Method and apparatus for categorizing device use case for on foot motion using motion sensor data
US10830606B2 (en) System and method for detecting non-meaningful motion
US10386203B1 (en) Systems and methods for gyroscope calibration
US10239750B2 (en) Inferring ambient atmospheric temperature
US10018481B1 (en) Multi-band pedometer with mobility mode indicator
Windau et al. Situation awareness via sensor-equipped eyeglasses
US20160131484A1 (en) System and method for device position classification
US11395633B2 (en) Systems and methods for determining engagement of a portable device
US20200225949A1 (en) Systems and Methods for Interfacing A Sensor and A Processor
WO2015105007A1 (en) Travel direction determination device, map matching device, travel direction determination method and program
US10551195B2 (en) Portable device with improved sensor position change detection
WO2016077286A1 (en) System and method for device position classification
US9921335B1 (en) Systems and methods for determining linear acceleration
US20150362315A1 (en) Systems and methods for determining position information using environmental sensing
WO2016028420A1 (en) System and method for drop detection
EP3104126B1 (en) Systems and methods for synthetic sensor signal generation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15816907

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15816907

Country of ref document: EP

Kind code of ref document: A1