US20140301666A1 - Geo-coding images - Google Patents
- Publication number
- US20140301666A1 (U.S. application Ser. No. 14/225,778)
- Authority
- US
- United States
- Prior art keywords
- images
- geo
- image
- coded
- map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06F17/30244
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F16/00—Information retrieval; Database structures therefor; File system structures therefor:
  - G06F16/29—Geographical information databases (under G06F16/20—structured data, e.g. relational data)
  - G06F16/50—Information retrieval of still image data
  - G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
  - G06F16/587—Retrieval using geographical or spatial information, e.g. location (under G06F16/58)
Definitions
- images can be annotated or geo-coded based on the geographic locations associated with their content. This can be accomplished in part by selecting one or more images to geo-code. The selected images can then be dragged and dropped onto a map, thus triggering a pointed marker such as a virtual push pin, flag, or thumbtack to appear.
- the marker can provide greater precision and accuracy when pinpointing the desired location. As the marker is moved on the map, corresponding locations can appear to assist the user in identifying them and in knowing where to place the marker.
- the map can be viewed at various zoom levels, and images can be geo-coded at any zoom level.
- various navigation controls can be employed to facilitate viewing the images as they are arranged on the map.
- hovering over the image can cause a thumbnail view of the image (or at least a part of the image) to appear. Clicking on the thumbnail can expand the image to a full view. Hovering over the image can also reveal different types of information about the image such as the image name, date, location name, description of the location, and/or its coordinates.
- Image sharing with family and friends, and in some instances even the general public, has become a more popular practice.
- a privacy or security control can be employed to verify permission or access levels before allowing anyone but the owner to access or view them.
- geo-coded images can be employed to assist with providing driving directions or to assist with telling a visual story using the geo-coded images and time stamps associated with each image.
- FIG. 1 is a block diagram of a system that facilitates geo-based storage and retrieval of images based on annotating the images with location data.
- FIG. 2 is a block diagram of a system that facilitates geo-based storage and retrieval of images through browsing based on the location data associated with each image and on privacy controls.
- FIG. 3 is an exemplary user interface for a browser that facilitates accessing and retrieving stored images that may be employed in the systems of FIG. 1 and/or FIG. 2 .
- FIG. 4 is an exemplary user interface for photo images that can be stored locally or remotely but retrieved and viewed for geo-based annotation.
- FIG. 5 is an exemplary user interface demonstrating a plurality of photos available for geo-code annotation as well as images on the map that previously have been geo-coded.
- FIG. 6 is an exemplary user interface of a map view that results from right clicking on any point on the map which can be employed to geo-code one or more images.
- FIG. 7 is an exemplary user interface of the map view that follows from FIG. 6 where “tag photo” is selected and triggers a box to open for geo-coding one or more images.
- FIG. 8 is an exemplary user interface that demonstrates a hover operation performed on an image marker and an expansion of a thumbnail view to a full view of an image.
- FIG. 9 is a block diagram of a system that facilitates generating map related directions and including one or more geo-coded images where appropriate to serve as landmarks.
- FIG. 10 is a flow diagram of an exemplary method that facilitates geo-based storage and retrieval of images based on annotating the images with location data.
- FIG. 11 is a flow diagram of an exemplary method that facilitates storing and browsing images by location given the requisite permission levels to do so.
- FIG. 12 is a flow diagram of a method that facilitates generating customized directions which incorporate one or more geo-coded images where appropriate for use as landmarks.
- FIG. 13 illustrates an exemplary environment for implementing various aspects of the invention.
- a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and a computer.
- an application running on a server and the server can be a component.
- One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
- the subject application provides a unique system, method, and user interface that facilitate geo-annotating digital content to improve indexing, storing, and retrieval of such content.
- the user interface includes a mechanism that allows users to drag and place digital content such as images or photographs onto specific locations on a map in order to be automatically geo-coded with that particular location information.
- Visualization cues can be employed as well to identify images that have been geo-coded or to assist in selecting a geo-code or location with a high level of granularity.
- Referring initially to FIG. 1, there is a general block diagram of a system 100 that facilitates geo-based storage and retrieval of images based on annotating the images with location data.
- the system 100 includes an image store 110 that can accumulate and save a plurality of images for later viewing and/or modification (e.g., editing).
- the database may be located on a remote server or on a user's local machine.
- a geo-code annotation component 120 can annotate one or more images selected from the image store 110 with a respective geographic location associated with each image. The geo-coded images can then be viewed on a map by way of a map-based display component 130 .
- the selected images corresponding to Milan can be dragged from the general photo view and onto the map. This can trigger a virtual marker, which becomes associated or connected to these photos, to appear on the map. The marker can help the user pinpoint the desired location. Once the correct location is found, a submit control can be clicked, which communicates the geo-code information for these photos back to the image store 110 for storage with the photos. Thus, these photos can forever be associated with and searchable by this particular location.
- the geo-coded images stored in the image store 110 can be queried via a location-based query component 140 .
- location based search input can be entered such as Milan, Italy.
- the query component 140 can search through the metadata maintained in the image store 110 for any matches to the search input.
- the corresponding image can be positioned on the map and viewed according to its location (e.g., Milan, Italy).
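A location-based query of the kind component 140 performs can be sketched as a match over each image's stored location metadata. This is a hypothetical illustration; the store layout and function name are invented, and the coordinates are approximate.

```python
# Hypothetical in-memory image store; each record carries the location
# metadata written at geo-coding time.
image_store = {
    "duomo.jpg":  {"location": "Milan, Italy",  "geolat": 45.4642, "geolong": 9.1900},
    "eiffel.jpg": {"location": "Paris, France", "geolat": 48.8584, "geolong": 2.2945},
}

def query_by_location(store: dict, search_input: str) -> list[str]:
    """Return names of images whose location metadata matches the search
    input, so they can be positioned on the map at that location."""
    needle = search_input.lower()
    return [name for name, meta in store.items()
            if needle in meta["location"].lower()]
```

For example, entering "Milan" would return the Milan photo, which the map-based display component could then position over Milan, Italy.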
- a cluster of markers can appear on top of or over Milan. Even more so, when viewing a map of Europe, there can be markers in Paris, France, and throughout Spain and Germany as well which indicate that there are other stored images related to those locations which also have been geo-coded.
- the markers can appear alone, but when hovering over them, at least a partial view of the image can be viewed.
- an icon for each geo-coded image can appear alongside its respective marker.
- its final position as given by the user can be displayed in the form of geolat: latitude and geolong: longitude.
- the actual name of the location including a street address if applicable can also be provided to the user.
- Referring to FIG. 2, there is a block diagram of a system 200 that facilitates indexing and retrieving geo-based images through browsing based on the location data associated with each image as well as privacy controls.
- the privacy controls manage the access, viewing, and editing of the images and in particular, the metadata, through the use of permissions.
- permission levels include but are not limited to access permission, viewing permission, and editing permission. For instance, some images can be viewed by the public whereas other images can be marked for private viewing only by the owner and/or other designated users.
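The tiered permission levels can be modeled as an ordered enumeration, so that holding a higher level implies the lower ones. A minimal sketch, with invented names and an invented per-image permission table:

```python
from enum import IntEnum

class Permission(IntEnum):
    NONE = 0
    ACCESS = 1   # may locate the image in the store
    VIEW = 2     # may open/display the image and its metadata
    EDIT = 3     # may change the image's geo-code annotations

# Hypothetical permission table: the owner always holds EDIT; other
# users hold whatever level the owner granted (none for private images).
permissions = {
    ("vacation.jpg", "owner"):  Permission.EDIT,
    ("vacation.jpg", "public"): Permission.VIEW,
    ("private.jpg",  "owner"):  Permission.EDIT,  # private: no public entry
}

def allowed(image: str, user: str, needed: Permission) -> bool:
    """Verify a user's permission level before granting an action."""
    return permissions.get((image, user), Permission.NONE) >= needed
```

Under this scheme the public can view `vacation.jpg` but cannot edit its metadata, and `private.jpg` remains visible only to its owner.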
- a privacy control component 210 can control an image browse component 220 .
- the image browse component 220 can browse through the image store 110 by way of one or more maps displayed via the map-based display component 130 .
- a map view of a user's geo-coded images can be viewable by anyone with the requisite permissions.
- the geo-coded images can be presented in the map view for easier visibility and more relevant context. Take for example a group of pictures taken in London, England. A viewer of such pictures can readily understand the context of the pictures (e.g., London) without having to rely on the names of each picture.
- the privacy control component 210 can also control the geo-code annotation component 120 .
- edit permissions can be required in order to annotate any images with location metadata.
- images from the image store can be geo-coded by the annotation component 120 .
- the geo-code data can be stored along with the respective image, and the geo-coded images can be viewed on the map of the relevant region or area (e.g., particular country, state, city, street, continent, etc.).
- Referring to FIGS. 3-8, there is illustrated a series of exemplary user interfaces that can be employed by the system (100, 200) in order to facilitate the geo-coding of images for map-based viewing and browsing.
- the user interface 300 demonstrates an exemplary introductory screen that provides a user with an option of geo-coding images stored on a selected database or browsing those images.
- Security or privacy login data may be requested depending on which choice is selected and/or depending on whether any public or private data exists in the image store.
- this user interface names one image store from which to browse or access images, it should be appreciated that any available image store can be included here.
- the user could select one image store to browse, view, or edit at a time.
- multiple image stores could be accessed at once particularly when some images belonging to the same location are stored in disparate locations.
- FIG. 4 demonstrates an exemplary user interface for photo images that can be accessed and viewed once the relevant image store is selected in FIG. 3 .
- Groups of photos can be represented as a photo set.
- the user can select one or more photo sets to expose the individual images in the Photos screen. Any photo image can then be dragged and dropped onto the map to be geo-coded. For example, suppose a user has a set of photos from various locations within San Francisco. Some were taken from the Golden Gate Bridge, Ghirardelli Square, a trolley, Lombard Street, and Chinatown. The user can drag one or more of these images to the map and then pinpoint them using a virtual push pin or marker to San Francisco or the relevant location on the map. Depending on the zoom view of the map, the user can also pinpoint any image to a street or address as well.
- the zoom view of the map can dictate the type of information readily visible on the map.
- at one zoom level, only the virtual markers may be visible; the user can hover on a marker to reveal its image.
- at a closer zoom level, both the marker and image can be readily visible.
- the marker itself can stem out from a top edge of the image as demonstrated in FIG. 5 .
- only the images may be visible as icons without the markers. In this latter case, the marker can appear when hovering over the image icon and other related image information can be shown as well.
- Referring to FIG. 5, there is a plurality of photo images 510 in the user's view.
- One of the images named Eisa 520 has just been geo-coded on a location in Japan.
- the image above Eisa in the Photos list ( 530 ) has previously been geo-coded as indicated by the symbol 540 (e.g., globe) next to the image name.
- the map view can be panned to various locations by manually scrolling, panning, and zooming to find a desired location.
- a find location operation can be performed. For example, if the user now wants to find San Francisco (after geo-coding images in Japan), rather than manually navigate the map to that location, he can enter the location in a designated field and the map view can change to show California and in particular, San Francisco. That is, the center or focal point of the map view can change as needed.
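The find-location operation amounts to resolving a place name into new map-center coordinates. The sketch below uses a tiny hard-coded gazetteer purely for illustration; a real implementation would query a geocoding service, and the coordinates shown are approximate.

```python
# Hypothetical gazetteer; coordinates are approximate city centers.
gazetteer = {
    "san francisco": (37.7749, -122.4194),
    "okinawa, japan": (26.3344, 127.8056),
}

def find_location(query: str, default=(0.0, 0.0)) -> tuple[float, float]:
    """Return the new center/focal point for the map view, so the map
    can jump to the entered location instead of being panned manually."""
    return gazetteer.get(query.strip().lower(), default)

new_center = find_location("San Francisco")
```

The map view would then be re-centered on `new_center`, showing California and, in particular, San Francisco.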
- Yet another option is demonstrated in FIGS. 6 and 7.
- the user can select any point on the map by right-clicking on that point.
- a set of coordinates can be visualized that correspond to that point as illustrated in FIG. 6 .
- the actual location such as street or city and state can be identified here as well though not included in the figure.
- the user can select tag photo to geo-code any images with these coordinates.
- Selecting tag photo can trigger another window to open as depicted in FIG. 7 that allows the user to drag and drop several images into the window for geo-coding with the same coordinates (31.4122, 98.6106).
- the window in FIG. 7 appears to be large but the top right hand corner 710 (circled in black) indicates the precise point on the map that the geo-code relates to. Any images dropped into the window can be geo-coded with these coordinates after the submit button is selected.
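Tagging several dropped images with the one right-clicked point is a simple bulk update. A hypothetical sketch (invented store layout and function name) using the coordinates from FIGS. 6 and 7:

```python
def bulk_geo_code(store: dict, names: list[str],
                  geolat: float, geolong: float) -> None:
    """Stamp every image dropped into the tag-photo window with the
    same coordinates once the submit button is selected."""
    for name in names:
        store.setdefault(name, {}).update(geolat=geolat, geolong=geolong)

store = {"a.jpg": {}, "b.jpg": {}}
bulk_geo_code(store, ["a.jpg", "b.jpg"], 31.4122, 98.6106)  # point from FIG. 6
```

Annotating the whole group in one operation is what keeps the geo-codes efficient and consistent among related images.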
- Referring to FIG. 8, there is an exemplary user interface 800 that demonstrates a hover operation performed on an image marker and an expansion of a thumbnail view 810 to a full view of an image 820 .
- the user has entered Okinawa, Japan in the Find field 830 , and thus the current map view is of Japan.
- the image icons 840 are circled in black for emphasis.
- a thumbnail view of an image can be obtained by clicking or right clicking on the respective image icon.
- the thumbnail view can be further expanded to the full size view of the image by clicking on the thumbnail.
- the thumbnail can show just a part of the real image; expanding it reveals the complete full-size view rather than only the portion shown in the thumbnail.
- the system 900 includes the image store 110 comprising geo-coded images and a map engine processor 910 .
- the map engine processor 910 can process a query such as for driving directions based in part on the geo-coded images in the image store 110 . For example, people often can follow directions better when physical landmarks are provided together with or in the absence of street names. In this case, the map engine processor 910 can retrieve the most relevant geo-coded images to include in a set of customized driving directions.
- the directions can include the following: Turn right on Main Street—a large giraffe statue is on the corner. A picture of the large giraffe statue can accompany this line or portion of the directions and be viewable by the user.
- geo-coded images can also facilitate the creation of stories or summaries of a particular trip or experience as captured in the images. For instance, imagine that a user has a set of pictures from Washington, D.C. and he wants to share his pictures and his trip with his friends who have never been there. By geo-coding the pictures and ordering them by time taken, the user can create a story of his trip and the sights and tourist attractions he visited can be viewed as he experienced them. Thus, he could walk his friends through his trip by way of his geo-coded pictures.
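Building such a story reduces to ordering the geo-coded images by their time stamps. A hypothetical sketch; the specific photos, locations, and times below are invented for illustration:

```python
from datetime import datetime

# Hypothetical trip photos, each geo-coded and carrying a capture time stamp.
trip = [
    {"name": "lincoln.jpg",  "location": "Lincoln Memorial",
     "taken": datetime(2006, 4, 20, 15, 30)},
    {"name": "capitol.jpg",  "location": "US Capitol",
     "taken": datetime(2006, 4, 20, 10, 0)},
    {"name": "monument.jpg", "location": "Washington Monument",
     "taken": datetime(2006, 4, 20, 12, 45)},
]

def build_story(images: list[dict]) -> list[str]:
    """Order geo-coded images by time taken, so the sights can be
    viewed in the sequence the user experienced them."""
    return [img["location"] for img in sorted(images, key=lambda i: i["taken"])]

story = build_story(trip)
```

The resulting sequence walks the viewer through the trip stop by stop, in capture order rather than in file-name order.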
- the method 1000 involves annotating at least one image with the geographic location data or geo-code associated therewith at 1010 .
- One or more images can be selected from an image store to be annotated individually (one by one) or in bulk to mitigate tedious and repetitive actions.
- the location data refers to the location that is associated with each image. For example, photos taken at the Fort Worth Stockyards can be associated with Fort Worth, Tex. Alternatively, a specific street name or address can be associated with the image and the image can be annotated accordingly.
- the geo-coded images can be displayed on a map according to their respective locations (and geo-codes).
- the images can be displayed as icons that can be clicked on to open the image or view its information.
- the geo-codes and maps can be based on any coordinate system such as latitude, longitude coordinates. Additional information about each image can be obtained by hovering over the image, by right clicking to view a number of different options, or by clicking on it to expand the view of the image.
- the method 1100 involves verifying a permissions level at 1110 in order to control access to any images including geo-coded and non-geo-coded images.
- users can be asked to provide login information in order to freely access, view, and/or edit their images in order to mitigate the unauthorized acts of others.
- geo-coded images can be browsed and/or viewed by selecting or entering a location on a map. Any images can be made public or be kept private depending on user preferences. However, certain actions can be controlled by verifying the permissions level(s) of each user or viewer.
- Referring to FIG. 12, there is a flow diagram of an exemplary method 1200 that facilitates generating customized directions which incorporate one or more geo-coded images where appropriate for use as landmarks.
- the method 1200 involves geo-coding one or more images with location metadata associated with each image at 1210 . Images with the same location metadata can be geo-coded at the same time to make the process more efficient.
- a query for map-related information such as driving directions can be received and processed.
- a customized set of directions can be generated whereby one or more geo-coded images are included and positioned within the directions to operate as visualized landmarks.
- geo-coded images can be viewed in the directions to make it easier for the user to find his way. For some users, this can be very helpful since they are no longer required to only rely on street names. Instead, they can be able to view landmarks or buildings along their route.
- FIG. 13 and the following discussion are intended to provide a brief, general description of a suitable operating environment 1310 in which various aspects of the mapping system and method may be implemented.
- the subject system and method can operate on any computing device, portable or non-portable, including but not limited to desktop computers, laptops, PDAs, smart phones, mobile phones, and tablet PCs on which the mapping system can be accessed and viewed. While the invention is described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices, those skilled in the art will recognize that the invention can also be implemented in combination with other program modules and/or as a combination of hardware and software.
- program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular data types.
- the operating environment 1310 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention.
- Other well known computer systems, environments, and/or configurations that may be suitable for use with the invention include but are not limited to, personal computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include the above systems or devices, and the like.
- an exemplary environment 1310 for implementing various aspects of the invention includes a computer 1312 .
- the computer 1312 includes a processing unit 1314 , a system memory 1316 , and a system bus 1318 .
- the system bus 1318 couples system components including, but not limited to, the system memory 1316 to the processing unit 1314 .
- the processing unit 1314 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit 1314 .
- the system bus 1318 can be any of several types of bus structure(s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any variety of available bus architectures including, but not limited to, 8-bit bus, Industrial Standard Architecture (ISA), Micro-Channel Architecture (MCA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), and Small Computer Systems Interface (SCSI).
- the system memory 1316 includes volatile memory 1320 and nonvolatile memory 1322 .
- the basic input/output system (BIOS) containing the basic routines to transfer information between elements within the computer 1312 , such as during start-up, is stored in nonvolatile memory 1322 .
- nonvolatile memory 1322 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory.
- Volatile memory 1320 includes random access memory (RAM), which acts as external cache memory.
- RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
- Disk storage 1324 includes, but is not limited to, devices like a magnetic disk drive, floppy disk drive, tape drive, Jaz drive, Zip drive, LS-100 drive, flash memory card, or memory stick.
- disk storage 1324 can include storage media separately or in combination with other storage media including, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM), CD recordable drive (CD-R Drive), CD rewritable drive (CD-RW Drive) or a digital versatile disk ROM drive (DVD-ROM).
- a removable or nonremovable interface is typically used such as interface 1326 .
- FIG. 13 describes software that acts as an intermediary between users and the basic computer resources described in suitable operating environment 1310 .
- Such software includes an operating system 1328 .
- Operating system 1328 which can be stored on disk storage 1324 , acts to control and allocate resources of the computer system 1312 .
- System applications 1330 take advantage of the management of resources by operating system 1328 through program modules 1332 and program data 1334 stored either in system memory 1316 or on disk storage 1324 . It is to be appreciated that the subject invention can be implemented with various operating systems or combinations of operating systems.
- Input devices 1336 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect to the processing unit 1314 through the system bus 1318 via interface port(s) 1338 .
- Interface port(s) 1338 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB).
- Output device(s) 1340 use some of the same type of ports as input device(s) 1336 .
- a USB port may be used to provide input to computer 1312 and to output information from computer 1312 to an output device 1340 .
- Output adapter 1342 is provided to illustrate that there are some output devices 1340 like monitors, speakers, and printers among other output devices 1340 that require special adapters.
- the output adapters 1342 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 1340 and the system bus 1318 . It should be noted that other devices and/or systems of devices provide both input and output capabilities such as remote computer(s) 1344 .
- Computer 1312 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 1344 .
- the remote computer(s) 1344 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor based appliance, a peer device or other common network node and the like, and typically includes many or all of the elements described relative to computer 1312 .
- only a memory storage device 1346 is illustrated with remote computer(s) 1344 .
- Remote computer(s) 1344 is logically connected to computer 1312 through a network interface 1348 and then physically connected via communication connection 1350 .
- Network interface 1348 encompasses communication networks such as local-area networks (LAN) and wide-area networks (WAN).
- LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet/IEEE 802.3, Token Ring/IEEE 802.5 and the like.
- WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL).
- Communication connection(s) 1350 refers to the hardware/software employed to connect the network interface 1348 to the bus 1318 . While communication connection 1350 is shown for illustrative clarity inside computer 1312 , it can also be external to computer 1312 .
- the hardware/software necessary for connection to the network interface 1348 includes, for exemplary purposes only, internal and external technologies such as, modems including regular telephone grade modems, cable modems and DSL modems, ISDN adapters, and Ethernet cards.
Abstract
A unique system, method, and user interface are provided that facilitate more efficient indexing and retrieval of images. In particular, the systems and methods involve annotating or geo-coding images with their location metadata. Geo-coded images can be displayed on a map and browsed or queried based on their location metadata. Images can be annotated one by one or in bulk to reduce repetitiveness and inconsistency among related images. More specifically, selected images can be dropped onto a map, thereby triggering a virtual marker to appear. The virtual marker facilitates pinpointing the precise location associated with the images on the map with a higher level of granularity. The system and method can also generate customized directions and include geo-coded images throughout to serve as visual landmarks. Privacy controls can be employed as well to control access and modification of the images.
Description
- This Nonprovisional patent application is a continuation of U.S. patent application Ser. No. 11/379,466, filed Apr. 20, 2006, entitled “GEO-CODING IMAGES,” which is incorporated by reference herein in its entirety.
- Digital image technology has advanced exponentially in recent years, as is evident from consumer demand for higher-quality digital cameras and for digital image processing that is fast, convenient, and inexpensive. In fact, digital photography has become ubiquitous. However, due to the ease and frequency of taking and collecting digital images, substantial storage and indexing issues have arisen. For instance, it is not uncommon for individuals to amass thousands of digital images, which are often stored in several disparate locations. Some may be stored on an office computer, some on a PDA, some on a mobile phone, some on a laptop, some on a home computer, and some online; and within any of these, there may be many different folders, subfolders, and naming conventions used for various sets of images depending on when and where they were stored. Moreover, quick and efficient retrieval of particular images becomes an increasingly difficult problem, especially as the number or type of digital media rises.
- The following presents a simplified summary in order to provide a basic understanding of some aspects of the systems and/or methods discussed herein. This summary is not an extensive overview of the systems and/or methods discussed herein. It is not intended to identify key/critical elements or to delineate the scope of such systems and/or methods. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
- The subject application relates to systems, user interfaces, and/or methods that facilitate geo-based storage and retrieval of images. In particular, images can be annotated or geo-coded based on the geographic locations associated with their content. This can be accomplished in part by selecting one or more images to geo-code. The selected images can then be dragged and dropped onto a map, thus triggering a pointed marker such as a virtual push pin, flag, or thumbtack to appear. The marker can provide greater precision and accuracy when pinpointing the desired location. As the marker is moved on the map, corresponding locations can appear to assist the user in identifying them and in knowing where to place the marker. The map can be viewed at various zoom levels, and images can be geo-coded at any zoom level.
- The geo-based annotation can be performed on individual images or can be done on a group of images in order to make the annotation efficient and consistent among related images. The annotated images can be displayed on a map view according to their respective locations and appear as icons. Once geo-coded, the images retain this information regardless of their storage location. A symbol or some other visualization can appear along with the image name to denote that it has been geo-coded. As desired, images can be retrieved from a database and viewed according to their location such as when searching or browsing through images. For example, images annotated with Corpus Christi, Tex. can be retrieved by entering a query for Corpus Christi, Tex. in a search field. The relevant images and/or their markers can appear on the map of Texas and point to Corpus Christi. Depending on the zoom level, the map view can show just the markers without any corresponding image icons, the image icons alone, or both the markers and the related image icons.
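By way of illustration and not limitation, the group annotation and location-based retrieval described above can be sketched in Python. Every name below (the dictionary layout, `geo_code`, `find_by_place`) is a hypothetical illustration and is not prescribed by the subject application:

```python
# Minimal sketch of group geo-coding and place-based retrieval.
# The subject application describes the behavior, not an implementation.

def geo_code(images, latitude, longitude, place):
    """Annotate each image record in a group with the same location metadata."""
    for img in images:
        img["geo"] = {"lat": latitude, "long": longitude, "place": place}
    return images

def find_by_place(store, query):
    """Return all geo-coded images whose place metadata matches the query."""
    return [img for img in store
            if "geo" in img and query.lower() in img["geo"]["place"].lower()]

# Annotate a related set of images in one step, then retrieve by location.
store = [{"name": "beach.jpg"}, {"name": "marina.jpg"}]
geo_code(store, 27.8006, -97.3964, "Corpus Christi, TX")
matches = find_by_place(store, "corpus christi")
```

Because the geo-code travels with each image record, the same lookup works regardless of where the records are later stored.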
- In addition, various navigation controls can be employed to facilitate viewing the images as they are arranged on the map. In particular, hovering over the image can cause a thumbnail view of the image (or at least a part of the image) to appear. Clicking on the thumbnail can expand the image to a full view. Hovering over the image can also reveal different types of information about the image such as the image name, date, location name, description of the location, and/or its coordinates.
- Sharing images with family and friends, and in some instances even with the general public, has become an increasingly popular practice. Thus, to manage the viewing of images, a privacy or security control can be employed to verify permission or access levels before allowing anyone but the owner to access or view them. Furthermore, geo-coded images can be employed to assist with providing driving directions, or to assist with telling a visual story using the geo-coded images and the time stamps associated with each image.
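By way of illustration and not limitation, such a permission check can gate the annotation operation as in the following Python sketch; the `PERMISSIONS` table, user names, and function names are hypothetical assumptions, since the subject application names access, viewing, and editing levels without prescribing a structure:

```python
# Hypothetical permission gate for geo-coding operations.
PERMISSIONS = {
    "owner": {"access", "view", "edit"},
    "guest": {"view"},
}

def require(user, level):
    """Raise unless the user holds the requested permission level."""
    if level not in PERMISSIONS.get(user, set()):
        raise PermissionError(f"{user} lacks {level} permission")

def annotate(user, image, lat, lon, place):
    """Geo-code an image record; edit permission is required to do so."""
    require(user, "edit")
    image["geo"] = {"lat": lat, "long": lon, "place": place}
    return image
```

Under this sketch, a viewer-only account can still browse geo-coded images, but any attempt to add or change location metadata is refused.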
- To the accomplishment of the foregoing and related ends, certain illustrative aspects of the invention are described herein in connection with the following description and the annexed drawings. These aspects are indicative, however, of but a few of the various ways in which the principles of the invention may be employed and the subject invention is intended to include all such aspects and their equivalents. Other advantages and novel features of the invention may become apparent from the following detailed description of the invention when considered in conjunction with the drawings.
-
FIG. 1 is a block diagram of a system that facilitates geo-based storage and retrieval of images based on annotating the images with location data. -
FIG. 2 is a block diagram of a system that facilitates geo-based storage and retrieval of images through browsing based on the location data associated with each image and on privacy controls. -
FIG. 3 is an exemplary user interface for a browser that facilitates accessing and retrieving stored images that may be employed in the systems of FIG. 1 and/or FIG. 2. -
FIG. 4 is an exemplary user interface for photo images that can be stored locally or remotely but retrieved and viewed for geo-based annotation. -
FIG. 5 is an exemplary user interface demonstrating a plurality of photos available for geo-code annotation as well as images on the map that previously have been geo-coded. -
FIG. 6 is an exemplary user interface of a map view that results from right clicking on any point on the map which can be employed to geo-code one or more images. -
FIG. 7 is an exemplary user interface of the map view that follows from FIG. 6 where “tag photo” is selected and triggers a box to open for geo-coding one or more images. -
FIG. 8 is an exemplary user interface that demonstrates a hover operation performed on an image marker and an expansion of a thumbnail view to a full view of an image. -
FIG. 9 is a block diagram of a system that facilitates generating map related directions and including one or more geo-coded images where appropriate to serve as landmarks. -
FIG. 10 is a flow diagram of an exemplary method that facilitates geo-based storage and retrieval of images based on annotating the images with location data. -
FIG. 11 is a flow diagram of an exemplary method that facilitates storing and browsing images by location given the requisite permission levels to do so. -
FIG. 12 is a flow diagram of a method that facilitates generating customized directions which incorporate one or more geo-coded images where appropriate for use as landmarks. -
FIG. 13 illustrates an exemplary environment for implementing various aspects of the invention. - The subject systems and/or methods are now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the systems and/or methods. It may be evident, however, that the subject systems and/or methods may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing them.
- As used herein, the terms “component” and “system” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
- The subject application provides a unique system, method, and user interface that facilitate geo-annotating digital content to improve indexing, storing, and retrieval of such content. More specifically, the user interface includes a mechanism that allows users to drag and place digital content such as images or photographs onto specific locations on a map in order to be automatically geo-coded with that particular location information. Visualization cues can be employed as well to identify images that have been geo-coded or to assist in selecting a geo-code or location with a high level of granularity. The figures which follow below provide further details regarding various aspects of the subject systems, user interfaces, and methods.
- Referring now to
FIG. 1, there is a general block diagram of a system 100 that facilitates geo-based storage and retrieval of images based on annotating the images with location data. The system 100 includes an image store 110 that can accumulate and save a plurality of images for later viewing and/or modification (e.g., editing). The database may be located on a remote server or on a user's local machine. A geo-code annotation component 120 can annotate one or more images selected from the image store 110 with a respective geographic location associated with each image. The geo-coded images can then be viewed on a map by way of a map-based display component 130. - In practice, for example, imagine that a user has multiple sets of photos corresponding to his vacations over the last few years. In order to organize and store them more efficiently for easier viewing, the user can geo-code each set of photos one photo at a time or in groups. Suppose one set of photos was taken in Milan, Italy. The desired location can be found on the map using a few different approaches. In one approach, the user can enter Milan, Italy in a ‘find’ operation to quickly move the map to a view of Italy. Alternatively, the user can drag, pan, or scroll across the map until Italy is in view.
- Once Italy is in view, the selected images corresponding to Milan can be dragged from the general photo view and onto the map. This can trigger a virtual marker, which becomes associated or connected with these photos, to appear on the map. The marker can help the user pinpoint the desired location. Once the correct location is found, a submit control can be clicked, which communicates the geo-code information for these photos back to the
image store 110 for storage with the photos. Thus, these photos can forever be associated with and searchable by this particular location. - Subsequently, the geo-coded images stored in the
image store 110 can be queried via a location-based query component 140. In particular, location-based search input can be entered, such as Milan, Italy. The query component 140 can search through the metadata maintained in the image store 110 for any matches to the search input. When a match is found, the corresponding image can be positioned on the map and viewed according to its location (e.g., Milan, Italy). Thus, when viewing a map of Italy, a cluster of markers can appear on top of or over Milan. Even more so, when viewing a map of Europe, there can be markers in Paris, France, and throughout Spain and Germany as well, which indicate that there are other stored images related to those locations that also have been geo-coded. - Depending on the zoom view of the map, the markers can appear alone, but when hovering over them, at least a partial view of the image can be viewed. As the map is zoomed in for more detail, an icon for each geo-coded image can appear alongside its respective marker. When hovering over the marker, its final position as given by the user can be displayed in the form of geolat: latitude and geolong: longitude. The actual name of the location, including a street address if applicable, can also be provided to the user.
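By way of illustration, the hover display described above (the marker's final position shown as geolat/geolong, plus the location name when available) can be sketched as follows; the record layout and function name are hypothetical:

```python
def marker_hover_info(image):
    """Build the text revealed when hovering over a marker: the image
    name, its geolat/geolong position, and the location name if known.
    The dictionary layout is illustrative, not prescribed."""
    geo = image["geo"]
    lines = [image["name"], f"geolat: {geo['lat']}", f"geolong: {geo['long']}"]
    if geo.get("place"):
        lines.append(geo["place"])
    return "\n".join(lines)

# Example hover text for a geo-coded image.
info = marker_hover_info(
    {"name": "Eisa", "geo": {"lat": 26.33, "long": 127.8, "place": "Okinawa, Japan"}}
)
```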
- Turning now to
FIG. 2, there is a block diagram of a system 200 that facilitates indexing and retrieving geo-based images through browsing based on the location data associated with each image as well as privacy controls. The privacy controls manage the access, viewing, and editing of the images, and in particular the metadata, through the use of permissions. Examples of permission levels include but are not limited to access permission, viewing permission, and editing permission. For instance, some images can be viewed by the public whereas other images can be marked for private viewing only by the owner and/or other designated users. - As shown in
FIG. 2, a privacy control component 210 can control an image browse component 220. The image browse component 220 can browse through the image store 110 by way of one or more maps displayed via the map-based display component 130. For instance, a map view of a user's geo-coded images can be viewable by anyone with the requisite permissions. The geo-coded images can be presented in the map view for easier visibility and more relevant context. Take for example a group of pictures taken in London, England. A viewer of such pictures can readily understand the context of the pictures (e.g., London) without having to rely on the names of each picture. This can be helpful since digital images or photographs are often named automatically using a convention that is specific to the camera or camera-based device; and oftentimes, changing the names for dozens or hundreds of digital pictures can be an extremely slow and tedious process that many times is left undone. Therefore, by viewing a set of images by location, the viewer is provided with some additional information about the images. - The
privacy control component 210 can also control the geo-code annotation component 120. In particular, edit permissions can be required in order to annotate any images with location metadata. However, when the user verifies permission, such as by entering the correct login information, images from the image store can be geo-coded by the annotation component 120. The geo-code data can be stored along with the respective image, and the geo-coded images can be viewed on the map of the relevant region or area (e.g., a particular country, state, city, street, continent, etc.). - Turning now to
FIGS. 3-8, there are illustrated a series of exemplary user interfaces that can be employed by the system (100, 200) in order to facilitate the geo-coding of images for map-based viewing and browsing. Beginning with FIG. 3, the user interface 300 demonstrates an exemplary introductory screen that provides a user with an option of geo-coding images stored on a selected database or browsing those images. Security or privacy login data may be requested depending on which choice is selected and/or depending on whether any public or private data exists in the image store. Though this user interface names one image store from which to browse or access images, it should be appreciated that any available image store can be included here. Thus, if the user had images stored in multiple remote and/or local locations, they can all be listed on the user interface 300 in some relevant order. The user could select one image store to browse, view, or edit at a time. Alternatively, multiple image stores could be accessed at once, particularly when some images belonging to the same location are stored in disparate locations. -
FIG. 4 demonstrates an exemplary user interface for photo images that can be accessed and viewed once the relevant image store is selected in FIG. 3. Groups of photos can be represented as a photo set. The user can select one or more photo sets to expose the individual images in the Photos screen. Any photo image can then be dragged and dropped onto the map to be geo-coded. For example, suppose a user has a set of photos from various locations within San Francisco. Some were taken from the Golden Gate Bridge, Ghirardelli Square, a trolley, Lombard Street, and Chinatown. The user can drag one or more of these images to the map and then pinpoint them, using a virtual push pin or marker, to San Francisco or the relevant location on the map. Depending on the zoom view of the map, the user can also pinpoint any image to a street or address. - Again, the zoom view of the map can dictate the type of information readily visible on the map. For example, at one zoom level, only the virtual markers may be visible. To view the corresponding image and its related location and image information, the user can hover on the marker. Alternatively, both the marker and image can be readily visible. The marker itself can stem out from a top edge of the image as demonstrated in
FIG. 5. In another view, only the images may be visible as icons without the markers. In this latter case, the marker can appear when hovering over the image icon, and other related image information can be shown as well. - According to
FIG. 5, there is a plurality of photo images 510 in view by the user. One of the images, named Eisa 520, has just been geo-coded on a location in Japan. The image above Eisa in the Photos list (530) has previously been geo-coded, as indicated by the symbol 540 (e.g., a globe) next to the image name. - The map view can be panned to various locations by manually scrolling, panning, and zooming to find a desired location. Alternatively, a find location operation can be performed. For example, if the user now wants to find San Francisco (after geo-coding images in Japan), rather than manually navigate the map to that location, he can enter the location in a designated field and the map view can change to show California and, in particular, San Francisco. That is, the center or focal point of the map view can change as needed.
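The zoom-dependent choice of what to draw (markers alone, icons alongside, or a thumbnail on hover) can be sketched as a simple rule. The threshold below is an illustrative assumption, since the subject application leaves the exact zoom levels unspecified:

```python
def visible_elements(zoom, hovering=False):
    """Decide what to draw for a geo-coded image at a given zoom level.
    Below the (assumed) threshold only markers show, and hovering
    reveals a thumbnail; zoomed in further, the image icon appears
    alongside its marker."""
    if zoom < 5:
        shown = {"marker"}
        if hovering:
            shown.add("thumbnail")
    else:
        shown = {"marker", "icon"}
    return shown
```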
- Yet another option is demonstrated in
FIGS. 6 and 7. Here, the user can select any point on the map by right-clicking on that point. A set of coordinates can be visualized that correspond to that point, as illustrated in FIG. 6. In addition, the actual location, such as street or city and state, can be identified here as well, though not included in the figure. At this time, the user can select “tag photo” to geo-code any images with these coordinates. Selecting “tag photo” can trigger another window to open, as depicted in FIG. 7, that allows the user to drag and drop several images into the window for geo-coding with the same coordinates (31.4122, 98.6106). The window in FIG. 7 appears to be large, but the top right hand corner 710 (circled in black) indicates the precise point on the map that the geo-code relates to. Any images dropped into the window can be geo-coded with these coordinates after the submit button is selected. - Moving on to
FIG. 8, there is an exemplary user interface 800 that demonstrates a hover operation performed on an image marker and an expansion of a thumbnail view 810 to a full view of an image 820. As indicated in the figure, the user has entered Okinawa, Japan in the Find field 830, and thus the current map view is of Japan. In the southern end of Japan, there are a number of image icons 840 (circled in black for emphasis) on the map. A thumbnail view of an image can be obtained by clicking or right-clicking on the respective image icon. The thumbnail view can be further expanded to the full size view of the image by clicking on the thumbnail. It should be appreciated that the thumbnail can be a partial image of the real image, in contrast to the full view of the image. - Turning now to
FIG. 9, there is a block diagram of a system 900 that facilitates generating map related directions and including one or more geo-coded images where appropriate to serve as landmarks. The system 900 includes the image store 110 comprising geo-coded images and a map engine processor 910. The map engine processor 910 can process a query, such as for driving directions, based in part on the geo-coded images in the image store 110. For example, people often can follow directions better when physical landmarks are provided together with or in the absence of street names. In this case, the map engine processor 910 can retrieve the most relevant geo-coded images to include in a set of customized driving directions. In practice, for instance, the directions can include the following: Turn right on Main Street—a large giraffe statue is on the corner. A picture of the large giraffe statue can accompany this line or this portion of the directions and be viewable by the user. - Though not depicted in the figures, geo-coded images can also facilitate the creation of stories or summaries of a particular trip or experience as captured in the images. For instance, imagine that a user has a set of pictures from Washington, D.C. and he wants to share his pictures and his trip with his friends who have never been there. By geo-coding the pictures and ordering them by time taken, the user can create a story of his trip, and the sights and tourist attractions he visited can be viewed as he experienced them. Thus, he could walk his friends through his trip by way of his geo-coded pictures.
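By way of illustration, attaching geo-coded images to nearby direction steps, as the map engine processor 910 might do, can be sketched as follows. The naive lat/long bounding-box distance test, the radius value, and all names are illustrative assumptions rather than the prescribed mechanism:

```python
def add_landmarks(steps, geo_images, radius=0.01):
    """Attach any geo-coded image whose coordinates fall near a step's
    coordinates, so it can serve as a visual landmark in the directions.
    A simple lat/long box stands in for a real distance computation."""
    out = []
    for step in steps:
        near = [img for img in geo_images
                if abs(img["geo"]["lat"] - step["lat"]) < radius
                and abs(img["geo"]["long"] - step["long"]) < radius]
        out.append({**step, "landmarks": near})
    return out

# A step near a geo-coded landmark image picks that image up.
steps = [{"text": "Turn right on Main Street", "lat": 32.75, "long": -97.33}]
imgs = [{"name": "giraffe.jpg", "geo": {"lat": 32.7505, "long": -97.3302}}]
result = add_landmarks(steps, imgs)
```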
- Various methodologies will now be described via a series of acts. It is to be understood and appreciated that the subject system and/or methodology is not limited by the order of acts, as some acts may, in accordance with the subject application, occur in different orders and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all illustrated acts may be required to implement a methodology in accordance with the subject application.
- Referring now to
FIG. 10, there is a flow diagram of an exemplary method 1000 that facilitates geo-based storage and retrieval of images based on annotating the images with their associated location data. The method 1000 involves annotating at least one image with the geographic location data or geo-code associated therewith at 1010. One or more images can be selected from an image store to be annotated individually (one by one) or in bulk to mitigate tedious and repetitive actions. The location data refers to the location that is associated with each image. For example, photos taken at the Fort Worth Stockyards can be associated with Fort Worth, Tex. Alternatively, a specific street name or address can be associated with the image and the image can be annotated accordingly. - At 1020, the geo-coded images can be displayed on a map according to their respective locations (and geo-codes). For example, the images can be displayed as icons that can be clicked on to open the image or view its information. The geo-codes and maps can be based on any coordinate system, such as latitude, longitude coordinates. Additional information about each image can be obtained by hovering over the image, by right clicking to view a number of different options, or by clicking on it to expand the view of the image.
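The mapping from a selected point on the displayed map to latitude, longitude coordinates can be sketched with a simple linear (equirectangular) interpolation over the visible map bounds. This projection choice and the function name are illustrative assumptions; a production map engine would use its actual projection:

```python
def point_to_coords(px, py, width, height, bounds):
    """Convert a clicked pixel (px, py) on a map view of the given pixel
    size to (lat, long), assuming a linear mapping across the visible
    bounds = (north, south, west, east)."""
    north, south, west, east = bounds
    lat = north - (py / height) * (north - south)
    lon = west + (px / width) * (east - west)
    return round(lat, 4), round(lon, 4)
```

For example, the center pixel of a 100x100 view spanning latitudes 40 to 20 and longitudes 90 to 110 resolves to the center of those bounds.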
- Turning now to
FIG. 11, there is a flow diagram of an exemplary method that facilitates storing and browsing images by location, given the requisite permission levels to do so. In particular, the method 1100 involves verifying a permission level at 1110 in order to control access to any images, including geo-coded and non-geo-coded images. Thus, at a minimum, users can be asked to provide login information in order to freely access, view, and/or edit their images and to mitigate the unauthorized acts of others. At 1120, geo-coded images can be browsed and/or viewed by selecting or entering a location on a map. Any images can be made public or be kept private depending on user preferences. However, certain actions can be controlled by verifying the permission level(s) of each user or viewer. - In
FIG. 12, there is a flow diagram of an exemplary method 1200 that facilitates generating customized directions which incorporate one or more geo-coded images where appropriate for use as landmarks. The method 1200 involves geo-coding one or more images with the location metadata associated with each image at 1210. Images with the same location metadata can be geo-coded at the same time to make the process more efficient. At 1220, a query for map-related information, such as driving directions, can be received and processed. At 1230, a customized set of directions can be generated whereby one or more geo-coded images are included and positioned within the directions to operate as visualized landmarks. Thus, geo-coded images can be viewed in the directions to make it easier for the user to find his way. For some users, this can be very helpful since they are no longer required to rely only on street names. Instead, they are able to view landmarks or buildings along their route. - In order to provide additional context for various aspects of the subject mapping system and method,
FIG. 13 and the following discussion are intended to provide a brief, general description of a suitable operating environment 1310 in which various aspects of the mapping system and method may be implemented. The subject system and method can operate on any computing device, portable or non-portable, including but not limited to desktop computers, laptops, PDAs, smart phones, mobile phones, and tablet PCs on which the subject system can be accessed and viewed. While the invention is described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices, those skilled in the art will recognize that the invention can also be implemented in combination with other program modules and/or as a combination of hardware and software. - Generally, however, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular data types. The operating environment 1310 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Other well known computer systems, environments, and/or configurations that may be suitable for use with the invention include but are not limited to, personal computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include the above systems or devices, and the like.
- With reference to
FIG. 13, an exemplary environment 1310 for implementing various aspects of the invention includes a computer 1312. The computer 1312 includes a processing unit 1314, a system memory 1316, and a system bus 1318. The system bus 1318 couples system components including, but not limited to, the system memory 1316 to the processing unit 1314. The processing unit 1314 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit 1314. - The system bus 1318 can be any of several types of bus structure(s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any variety of available bus architectures including, but not limited to, 8-bit bus, Industrial Standard Architecture (ISA), Micro-Channel Architecture (MCA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), and Small Computer Systems Interface (SCSI).
- The
system memory 1316 includes volatile memory 1320 and nonvolatile memory 1322. The basic input/output system (BIOS), containing the basic routines to transfer information between elements within the computer 1312, such as during start-up, is stored in nonvolatile memory 1322. By way of illustration, and not limitation, nonvolatile memory 1322 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory 1320 includes random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). -
Computer 1312 also includes removable/nonremovable, volatile/nonvolatile computer storage media. FIG. 13 illustrates, for example, a disk storage 1324. Disk storage 1324 includes, but is not limited to, devices like a magnetic disk drive, floppy disk drive, tape drive, Jaz drive, Zip drive, LS-100 drive, flash memory card, or memory stick. In addition, disk storage 1324 can include storage media separately or in combination with other storage media including, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM), CD recordable drive (CD-R Drive), CD rewritable drive (CD-RW Drive) or a digital versatile disk ROM drive (DVD-ROM). To facilitate connection of the disk storage devices 1324 to the system bus 1318, a removable or nonremovable interface is typically used, such as interface 1326. - It is to be appreciated that
FIG. 13 describes software that acts as an intermediary between users and the basic computer resources described in suitable operating environment 1310. Such software includes an operating system 1328. Operating system 1328, which can be stored on disk storage 1324, acts to control and allocate resources of the computer system 1312. System applications 1330 take advantage of the management of resources by operating system 1328 through program modules 1332 and program data 1334 stored either in system memory 1316 or on disk storage 1324. It is to be appreciated that the subject invention can be implemented with various operating systems or combinations of operating systems. - A user enters commands or information into the
computer 1312 through input device(s) 1336. Input devices 1336 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect to the processing unit 1314 through the system bus 1318 via interface port(s) 1338. Interface port(s) 1338 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB). Output device(s) 1340 use some of the same type of ports as input device(s) 1336. Thus, for example, a USB port may be used to provide input to computer 1312 and to output information from computer 1312 to an output device 1340. Output adapter 1342 is provided to illustrate that there are some output devices 1340, like monitors, speakers, and printers among other output devices 1340, that require special adapters. The output adapters 1342 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 1340 and the system bus 1318. It should be noted that other devices and/or systems of devices provide both input and output capabilities, such as remote computer(s) 1344. -
Computer 1312 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 1344. The remote computer(s) 1344 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor based appliance, a peer device or other common network node and the like, and typically includes many or all of the elements described relative to computer 1312. For purposes of brevity, only a memory storage device 1346 is illustrated with remote computer(s) 1344. Remote computer(s) 1344 is logically connected to computer 1312 through a network interface 1348 and then physically connected via communication connection 1350. Network interface 1348 encompasses communication networks such as local-area networks (LAN) and wide-area networks (WAN). LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet/IEEE 802.3, Token Ring/IEEE 802.5 and the like. WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL). - Communication connection(s) 1350 refers to the hardware/software employed to connect the
network interface 1348 to the bus 1318. While communication connection 1350 is shown for illustrative clarity inside computer 1312, it can also be external to computer 1312. The hardware/software necessary for connection to the network interface 1348 includes, for exemplary purposes only, internal and external technologies such as modems, including regular telephone grade modems, cable modems and DSL modems, ISDN adapters, and Ethernet cards. - What has been described above includes examples of the subject system and/or method. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the subject system and/or method, but one of ordinary skill in the art may recognize that many further combinations and permutations of the subject system and/or method are possible. Accordingly, the subject system and/or method are intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims. Furthermore, to the extent that the term “includes” is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim.
Claims (9)
1. One or more computer-storage memory storing computer-executable instructions for performing a method that facilitates geo-based storage and retrieval of images, the method comprising:
receiving a user selection of one or more images in a photo sharing application;
receiving a user request to drag and drop the one or more images onto a map;
displaying a marker corresponding to each image or group of images;
annotating the image or the group of images corresponding to the marker with geographic location metadata to create at least one geo-coded image or a group of geo-coded images; and
based on a zoom view of the map, displaying one or more of the marker, the at least one geo-coded image, the group of geo-coded images, or at least one image icon representative of the at least one geo-coded image or the group of geo-coded images on the map at a position indicative of a respective location corresponding to the geographic location metadata associated with the at least one geo-coded image or group of geo-coded images.
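The method recited in claim 1 can be sketched outside the claim language as follows. This is a minimal, hypothetical illustration only: the names (`Image`, `geo_code`, `display_for_zoom`) and the zoom threshold are invented for the sketch and are not part of the disclosure or the claims.

```python
from dataclasses import dataclass, field

@dataclass
class Image:
    name: str
    metadata: dict = field(default_factory=dict)

def geo_code(images, lat, lon):
    """Annotate images dropped onto a map with geographic location metadata."""
    for img in images:
        img.metadata["geo"] = {"lat": lat, "lon": lon}
    return images

def display_for_zoom(images, zoom, icon_zoom=12):
    """Based on the zoom view, display either a single marker for the group
    (zoomed out) or per-image icons (zoomed in) at the geo-coded position."""
    if zoom < icon_zoom:
        return [("marker", len(images))]
    return [("icon", img.name) for img in images]

# Drag-and-drop of two photos onto the map at the drop coordinates:
photos = geo_code([Image("img1.jpg"), Image("img2.jpg")], 47.64, -122.13)
print(display_for_zoom(photos, zoom=5))   # zoomed out: one marker for the group
print(display_for_zoom(photos, zoom=15))  # zoomed in: an icon per geo-coded image
```

The zoom threshold controls the claimed choice among marker, geo-coded image, and image icon at a given map position.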
2. The one or more computer-storage memory of claim 1, wherein the marker comprises one of a virtual pushpin, flag, or thumbtack.
3. The one or more computer-storage memory of claim 1, wherein the one or more images included in the user selection are located in an image store.
4. The one or more computer-storage memory of claim 1, wherein the method further comprises verifying a permission level before providing access, editing, or viewing rights to any stored images.
5. One or more computer-storage memory storing computer-executable instructions for performing a method that facilitates geo-based storage and retrieval of images, the method comprising:
receiving a user selection of one or more images in a photo sharing application;
receiving a user request to drag and drop the one or more images onto a map;
displaying a marker corresponding to each image or group of images;
annotating the image or the group of images corresponding to the marker with geographic location metadata to create at least one geo-coded image or a group of geo-coded images; and
receiving a user selection to display the at least one geo-coded image, the group of geo-coded images, or at least one image icon representative of the at least one geo-coded image or the group of geo-coded images on the map, in addition to the marker, at a position indicative of a respective location corresponding to the geographic location metadata associated with the at least one geo-coded image or group of geo-coded images.
6. The one or more computer-storage memory of claim 5, wherein the marker comprises one of a virtual pushpin, flag, or thumbtack.
7. The one or more computer-storage memory of claim 5, wherein the one or more images included in the user selection are located in an image store.
8. The one or more computer-storage memory of claim 5, wherein the method further comprises verifying a permission level before providing access, editing, or viewing rights to any stored images.
9. One or more computer-storage memory having computer-executable instructions embodied thereon implementing a system that facilitates geo-based storage and retrieval of images, the system comprising:
a geo-code annotation component that annotates at least one image with geographic location metadata to create at least one geo-coded image after the at least one image has been dragged and dropped onto a map from a photo sharing application, wherein the dropping of the at least one image onto the map triggers an appearance of a marker on the map at a location corresponding to the geographic location metadata; and
a map-based display component that displays, based on a zoom view of the map, one or more of:
(1) the marker associated with each of the at least one geo-coded image, and
(2) the at least one geo-coded image or at least one image icon representative of the at least one geo-coded image on the map at a position indicative of the location corresponding to the geographic location metadata.
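The two components recited in claim 9 can likewise be sketched in code. All class and attribute names below (`MapView`, `GeoCodeAnnotationComponent`, `MapBasedDisplayComponent`, the `icon_zoom` threshold) are hypothetical illustrations, not names used in the disclosure.

```python
class MapView:
    """Minimal map surface tracking the current zoom view and markers."""
    def __init__(self, zoom=5):
        self.zoom = zoom
        self.markers = []

class GeoCodeAnnotationComponent:
    """Annotates a dropped image with location metadata; the drop triggers
    the appearance of a marker at the corresponding map location."""
    def __init__(self, map_view):
        self.map_view = map_view

    def on_drop(self, image, lat, lon):
        image["geo"] = {"lat": lat, "lon": lon}   # create the geo-coded image
        self.map_view.markers.append((lat, lon))  # dropping triggers a marker
        return image

class MapBasedDisplayComponent:
    """Displays markers when zoomed out and image icons when zoomed in,
    each positioned by the geographic location metadata."""
    def __init__(self, map_view, icon_zoom=10):
        self.map_view = map_view
        self.icon_zoom = icon_zoom

    def render(self, images):
        if self.map_view.zoom < self.icon_zoom:
            return [("marker", lat, lon) for lat, lon in self.map_view.markers]
        return [("icon", img["name"], img["geo"]["lat"], img["geo"]["lon"])
                for img in images]

view = MapView(zoom=4)
annotator = GeoCodeAnnotationComponent(view)
display = MapBasedDisplayComponent(view)
photo = annotator.on_drop({"name": "beach.jpg"}, 36.6, -121.9)
print(display.render([photo]))  # zoomed out: the marker only
view.zoom = 14
print(display.render([photo]))  # zoomed in: the image icon at the geo-coded position
```

Separating annotation from display mirrors the claim structure: the annotation component owns the metadata write at drop time, while the display component decides per zoom view what to render at each geo-coded position.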
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/225,778 US20140301666A1 (en) | 2006-04-20 | 2014-03-26 | Geo-coding images |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/379,466 US8712192B2 (en) | 2006-04-20 | 2006-04-20 | Geo-coding images |
US14/225,778 US20140301666A1 (en) | 2006-04-20 | 2014-03-26 | Geo-coding images |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/379,466 Continuation US8712192B2 (en) | 2006-04-20 | 2006-04-20 | Geo-coding images |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140301666A1 true US20140301666A1 (en) | 2014-10-09 |
Family
ID=38661223
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/379,466 Active 2029-04-02 US8712192B2 (en) | 2006-04-20 | 2006-04-20 | Geo-coding images |
US14/225,778 Abandoned US20140301666A1 (en) | 2006-04-20 | 2014-03-26 | Geo-coding images |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/379,466 Active 2029-04-02 US8712192B2 (en) | 2006-04-20 | 2006-04-20 | Geo-coding images |
Country Status (1)
Country | Link |
---|---|
US (2) | US8712192B2 (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6943825B2 (en) * | 2001-12-14 | 2005-09-13 | Intel Corporation | Method and apparatus for associating multimedia information with location information |
US7158878B2 (en) * | 2004-03-23 | 2007-01-02 | Google Inc. | Digital mapping system |
US20070055441A1 (en) * | 2005-08-12 | 2007-03-08 | Facet Technology Corp. | System for associating pre-recorded images with routing information in a navigation system |
US7256711B2 (en) * | 2003-02-14 | 2007-08-14 | Networks In Motion, Inc. | Method and system for saving and retrieving spatial related information |
US7551182B2 (en) * | 2005-01-18 | 2009-06-23 | Oculus Info Inc. | System and method for processing map data |
US7617246B2 (en) * | 2006-02-21 | 2009-11-10 | Geopeg, Inc. | System and method for geo-coding user generated content |
US7979204B2 (en) * | 2002-08-05 | 2011-07-12 | Sony Corporation | Electronic guide system, contents server for electronic guide system, portable electronic guide device, and information processing method for electronic guide system |
US8825370B2 (en) * | 2005-05-27 | 2014-09-02 | Yahoo! Inc. | Interactive map-based travel guide |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH01173824A (en) * | 1987-12-28 | 1989-07-10 | Aisin Aw Co Ltd | Navigation device for vehicle with help function |
NL8901695A (en) * | 1989-07-04 | 1991-02-01 | Koninkl Philips Electronics Nv | METHOD FOR DISPLAYING NAVIGATION DATA FOR A VEHICLE IN AN ENVIRONMENTAL IMAGE OF THE VEHICLE, NAVIGATION SYSTEM FOR CARRYING OUT THE METHOD AND VEHICLE FITTING A NAVIGATION SYSTEM. |
US5559707A (en) * | 1994-06-24 | 1996-09-24 | Delorme Publishing Company | Computer aided routing system |
EP0807352A1 (en) * | 1995-01-31 | 1997-11-19 | Transcenic, Inc | Spatial referenced photography |
US6282362B1 (en) * | 1995-11-07 | 2001-08-28 | Trimble Navigation Limited | Geographical position/image digital recording and display system |
US5982298A (en) * | 1996-11-14 | 1999-11-09 | Microsoft Corporation | Interactive traffic display and trip planner |
US6199014B1 (en) * | 1997-12-23 | 2001-03-06 | Walker Digital, Llc | System for providing driving directions with visual cues |
JP4433236B2 (en) * | 1999-12-03 | 2010-03-17 | ソニー株式会社 | Information processing apparatus, information processing method, and program recording medium |
AU2001269902A1 (en) * | 2000-06-16 | 2001-12-24 | Verisae | Enterprise asset management system and method |
US8059815B2 (en) * | 2001-12-13 | 2011-11-15 | Digimarc Corporation | Transforming data files into logical storage units for auxiliary data through reversible watermarks |
WO2005001714A1 (en) * | 2003-06-30 | 2005-01-06 | Koninklijke Philips Electronics, N.V. | Enhanced organization and retrieval of digital images |
- 2006
- 2006-04-20 | US | US11/379,466 | patent/US8712192B2/en | active Active
- 2014
- 2014-03-26 | US | US14/225,778 | patent/US20140301666A1/en | not_active Abandoned
Non-Patent Citations (2)
Title |
---|
"Flickr + Google Maps = Geobloggers", https://txfx.net/2005/05/17/flickr-google-maps-geobloggers/, May 17, 2005 *
Erle et al., Google Maps Hacks, Jan. 2006, O'Reilly *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210233297A1 (en) * | 2009-11-13 | 2021-07-29 | Samsung Electronics Co., Ltd. | Server, user terminal, and service providing method, and control method thereof |
US11776185B2 (en) * | 2009-11-13 | 2023-10-03 | Samsung Electronics Co., Ltd. | Server, user terminal, and service providing method, and control method thereof for displaying photo images within a map |
CN109074372A (en) * | 2016-03-30 | 2018-12-21 | 微软技术许可有限责任公司 | Metadata is applied using drag and drop |
US10650621B1 (en) | 2016-09-13 | 2020-05-12 | Iocurrents, Inc. | Interfacing with a vehicular controller area network |
US11232655B2 (en) | 2016-09-13 | 2022-01-25 | Iocurrents, Inc. | System and method for interfacing with a vehicular controller area network |
Also Published As
Publication number | Publication date |
---|---|
US20070258642A1 (en) | 2007-11-08 |
US8712192B2 (en) | 2014-04-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8712192B2 (en) | Geo-coding images | |
RU2403614C2 (en) | User interface application for managing media files | |
US7978207B1 (en) | Geographic image overlay | |
US8453060B2 (en) | Panoramic ring user interface | |
US8099679B2 (en) | Method and system for traversing digital records with multiple dimensional attributes | |
Toyama et al. | Geographic location tags on digital images | |
KR101083533B1 (en) | System and method for user modification of metadata in a shell browser | |
US7334190B2 (en) | Interactive video tour system editor | |
US8769396B2 (en) | Calibration and annotation of video content | |
US8996305B2 (en) | System and method for discovering photograph hotspots | |
US7840892B2 (en) | Organization and maintenance of images using metadata | |
US7475060B2 (en) | Browsing user interface for a geo-coded media database | |
US20140282099A1 (en) | Retrieval, identification, and presentation of media | |
US20100077355A1 (en) | Browsing of Elements in a Display | |
KR100882025B1 (en) | Method for searching geographic information system images based on web, geographical postion service and blog service and providing regional blog service | |
US20030149939A1 (en) | System for organizing and navigating through files | |
JP2007299172A (en) | Image viewer | |
EP2458512A1 (en) | Mobile data storage | |
JP5419644B2 (en) | Method, system and computer-readable recording medium for providing image data | |
Zhang et al. | Annotating and navigating tourist videos | |
US10885095B2 (en) | Personalized criteria-based media organization | |
TW201339946A (en) | Systems and methods for providing access to media content | |
US20080295010A1 (en) | Systems and Methods for Incorporating Data Into Digital Files | |
Nguyen et al. | TagNSearch: Searching and navigating geo-referenced collections of photographs | |
US20070101275A1 (en) | Network appliance device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034747/0417
Effective date: 20141014
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:039025/0454
Effective date: 20141014
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |