WO2013076478A1 - Interactive media - Google Patents

Interactive media

Info

Publication number
WO2013076478A1
Authority
WO
WIPO (PCT)
Prior art keywords
active
media
zones
active zone
zone
Application number
PCT/GB2012/052876
Other languages
French (fr)
Inventor
Martin Wright
Richard England
Duncan Brown
Laura Brown
Original Assignee
Martin Wright
Richard England
Duncan Brown
Laura Brown
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Martin Wright, Richard England, Duncan Brown, Laura Brown
Publication of WO2013076478A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/93 - Document management systems
    • G06F16/94 - Hypermedia
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 - Assembly of content; Generation of multimedia applications
    • H04N21/858 - Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
    • H04N21/8586 - Linking data to content by using a URL
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 - Structure of client; Structure of client peripherals
    • H04N21/422 - Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204 - User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 - Structure of client; Structure of client peripherals
    • H04N21/422 - Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204 - User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/42206 - User interfaces specially adapted for controlling a client device through a remote control device, characterized by hardware details
    • H04N21/42224 - Touch pad or touch panel provided on the remote control
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 - End-user applications
    • H04N21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4722 - End-user interface for requesting additional data associated with the content
    • H04N21/4725 - End-user interface for requesting additional data associated with the content using interactive regions of the image, e.g. hot spots
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 - Assembly of content; Generation of multimedia applications
    • H04N21/858 - Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
    • H04N21/8583 - Linking data to content by creating hot-spots

Definitions

  • Figure 1 is a diagram showing active zones on still media, in accordance with the present invention.
  • Figure 2 is a screenshot of a video with static active zones shown thereon;
  • Figure 3 is a diagram showing dynamic active zones in accordance with the present invention;
  • Figure 4 is a flow diagram of a method of creating active zones for moving media, in accordance with the present invention;
  • Figure 5 shows a frame from a video enhanced using the present invention.
  • Figure 6 shows a frame from real-time action of a sports match enhanced using the present invention.
  • Figure 1 shows an active layer 10, having an area 12, comprising active zones 14 and 16 within its area 12.
  • the active zones 14 and 16, which might be considered to be buttons overlaid on the media, can be activated or triggered by a user.
  • a control zone 18 is provided within the area of the active layer 10, in which no active zones are located. This control zone provides an area for media controls (not shown) to be provided, to avoid active zones being triggered when a user is controlling the media.
  • Figure 2 shows the active layer 10 of Figure 1, overlaid onto media 20.
  • the area 12 of the active layer 10 conforms to the area 22 of the media 20.
  • Controls 24 are provided at the bottom of the media 20 for adjusting the media 20.
  • the active zones 14 and 16 are static within the active layer 10 and allow the user to 'navigate' around a scene.
  • the media can skip to a later part of the media 20 so as to give the impression that the user has continued 'forward'.
  • the media can either skip to a later part of the media 20, or open another media (not shown) and display that.
  • Figure 3 shows an active layer 40 with an area 42, suitable for placing over media (not shown) comprising moving objects.
  • the media may, for example, be in the form of a moving image, or video, such as an SWF-format Adobe® Flash® video.
  • the active zone 44 which is intended to correspond to an object (not shown) within a video over which the active layer is positioned, moves within the active layer according to where the object is located.
  • the first location of the object is located and the active zone 44 is associated therewith at position 46.
  • the location of the same object within the video is identified and the active zone 44 associated therewith at position 48. The movement of the object is then traced using inbetweening.
  • Figure 4 shows the process of tracking an object and inbetweening according to the present invention.
  • the active zones 14, 16 and 44 are embedded with a macro in the form of an Extensible Markup Language (XML) file. When triggered, the active zones 14, 16 and 44 follow the procedure contained within the XML file to trigger feedback to be provided.
  • XML code for a typical dynamic active zone (DAZ) in a media player is shown below:
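The XML itself does not survive in this text; the following is a minimal illustrative sketch consistent with the fields described in the surrounding paragraphs. Apart from goToTime, which the text quotes, the element names and values are assumptions:

```xml
<!-- Illustrative sketch only: apart from goToTime, the element names
     are assumptions based on the behaviour described in the text. -->
<dynamicActiveZone>
  <x>120</x>                     <!-- X position from the top left-hand corner -->
  <y>80</y>                      <!-- Y position from the top left-hand corner -->
  <width>60</width>              <!-- size of the active zone -->
  <height>40</height>
  <goToVideo>video2</goToVideo>  <!-- separate video opened on activation -->
  <goToTime>20</goToTime>        <!-- jump to 20 seconds into that video -->
  <timeStart>5</timeStart>       <!-- zone appears at 5 seconds -->
  <timeFinish>10</timeFinish>    <!-- zone deactivates at 10 seconds -->
</dynamicActiveZone>
```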
  • the active zone is defined by a location using X and Y coordinates, mapped from the top left-hand corner of the media - a Z coordinate can be added if required - and the size of the active zone is defined thereafter.
  • the user is directed to a separate video named "video2".
  • the user can be taken to a specific part of the second video; for example, <goToTime>20</goToTime> will take the user to 20 seconds into the video.
  • if the active zone is not activated, the video will play a new video. This could be amended to pause the existing video until the user does activate the active zone, or it could perform another instruction, such as looping.
  • a series of videos can be put into a single file and the appropriate part skipped to or paused as required. Tagging the video in appropriate positions may assist in skipping to the correct part.
  • the active zone is active only for the period defined by the start and finish times towards the end of the code. Therefore, an active zone can be displayed when the video is 5 seconds into playing, and then deactivated when the video is at 10 seconds. Times are written in seconds alone, so one and a half minutes will be 90 seconds. Clearly, the time finish value must be greater than the time start value.
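The start/finish behaviour described above can be sketched as a simple predicate; the field names mirror the assumed XML elements:

```python
def zone_is_active(zone, t):
    """True if the video time t (in seconds) falls within the zone's
    activation window (timeStart inclusive, timeFinish exclusive)."""
    return zone["timeStart"] <= t < zone["timeFinish"]

# The example from the text: displayed at 5 seconds, deactivated at 10 seconds.
zone = {"timeStart": 5, "timeFinish": 10}
print(zone_is_active(zone, 7))   # True: inside the window
print(zone_is_active(zone, 90))  # False: one and a half minutes = 90 seconds
```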
  • the user's actions may be stored in a log, stored in a content management system (CMS), or a learning management system (LMS), as a record of the user's actions over time. This can then be used to assess a user's performance, which may be especially useful where the video is used for health and safety training, or risk assessment.
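A minimal sketch of such an action log; the record fields are assumptions, and pushing the records to a CMS or LMS for assessment is left abstract:

```python
import time

def log_activation(log, user, zone_name):
    """Append a timestamped record of an activated zone to the user's log.
    A sketch only: the field names are assumptions."""
    log.append({"user": user, "zone": zone_name, "at": time.time()})

log = []
log_activation(log, "trainee1", "fire-extinguisher")
print(len(log), log[0]["zone"])
```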
  • the present invention may be used for interactive television programs, such as dramas, game-shows and children's programs, and can enhance such programs by providing games and/or information linked to items within the program. It may also be applied to movies. By allowing a user to interact with the film or program the user can experience a closer connection with the media. This, in turn, can promote sales and increase the value of the brand.
  • Extra footage for use in the interactive media can be recorded at the same time as filming the main media. Additionally, with the progression of electronic books and devices for reading the same, the present invention may be applied thereto. Books and the film versions thereof can be linked so that clips are shown at convenient parts in the electronic book if a reader so wishes.
  • the present invention can be used to provide interactive adverts, including online adverts that are connected to the Internet, thereby allowing users to gain further information about a product shown in a video, and/or to purchase the product. Should a viewer particularly like a product shown in the background of a scene, they will be able to see the
  • the present invention may be used for education and training, especially training in languages, customer service, sales, medicine, health and safety, risk assessment and public services.
  • a user can identify objects and they can be swiftly provided with the word and/or pronunciation of the object in a chosen language. This allows a user to identify what is useful and/or interesting to them, thereby making the experience more enjoyable and relevant to the user.
  • human behaviours and reactions can be accurately represented, which can allow a user to become more familiar with how to deal with particular situations. This may be particularly advantageous for delivering bad news or dealing with aggressiveness or conflict. Additionally, it may be used in military training, for example for dealing with civilian populations or combat operations.
  • a user can call up the statistics of a horse's recent performance, which may be useful, especially in the case of in-game betting. Additionally, coverage of ministerial debates can be made interactive to allow a user to view how a minister voted on previous issues.
  • the 'live' video may be broadcast with a time delay to allow the processing of the video in order to provide interactivity.
  • Live action including fashion shows (as shown in Figure 6), may be enhanced by the use of the present invention.
  • By providing a viewer with a multimedia system with a visual display and a video camera, the viewer can be provided with an augmented reality view of the live action, wherein items such as clothing or people can link to further information and statistics by selecting an item over which an active zone is positioned. This may also assist with sales and advertising.
  • the movement of an active zone may be performed on a frame-by-frame basis instead of inbetweening, if required.
  • a different method of tracking an object in moving media may be used and the active zone then associated with the object. For example, manual coding of selected frames may be undertaken. Key frame analysis may be desirable.
  • the active zones may be triggered using a wireless controller, connected by a short-range radio device, for example Bluetooth®, or infra-red, wherein the controller comprises at least one motion sensor, for example an accelerometer, and a control button, so that a user can interact with media in either 2 dimensions or 3 dimensions.
  • Other sensors or connection mechanisms may be desirable either in place of those mentioned or in addition thereto.
  • where the media is viewed on a computer, or the like, the usual selector, such as a mouse-controlled pointer, can be used. It is anticipated that eye-control, that is, tracking the user's eye-gaze and activating the dynamic active zones using one's eyes, may be desirable. Other forms of interface may also be suitable.
  • a level of dominance may be given to each active zone so that only one is activated when triggered, chosen so that the user's choice is more likely to be correct.
  • Alternatively, the user may be given a choice of which zone they wish to activate.
  • Another alternative, where two active zones overlap, is for a time to elapse after triggering the overlapped area, wherein a further triggering activates one zone and a lack of the further triggering activates the other zone.
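The dominance rule for overlapping zones might be sketched as follows; the dominance field and the rectangular bounds are assumptions:

```python
def resolve_trigger(zones, x, y):
    """Of all zones containing the point (x, y), return the one with the
    highest dominance level, or None if no zone contains the point."""
    hits = [z for z in zones
            if z["x"] <= x < z["x"] + z["w"] and z["y"] <= y < z["y"] + z["h"]]
    return max(hits, key=lambda z: z["dominance"]) if hits else None

zones = [
    {"name": "jacket", "x": 0,  "y": 0,  "w": 100, "h": 100, "dominance": 1},
    {"name": "logo",   "x": 40, "y": 40, "w": 20,  "h": 20,  "dominance": 2},
]
print(resolve_trigger(zones, 50, 50)["name"])  # overlapping point: "logo" dominates
```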
  • whilst the active zones may track an object in the media, it may be desirable for them to exist without tracking an object, for example in augmented reality applications.
  • 2 dimensional hyperlinks may be specified by either elliptical or rectangular areas.
  • Interaction with 2 dimensional hyperlinks can be through conventional pointing and clicking.
  • 3 dimensional hyperlinks may be sections of cones projecting inwards to / outwards from the plane of the screen. These cones may have either rectangular or elliptical cross section.
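A sketch of the hit tests these shapes imply: a point-in-ellipse test for the 2-dimensional case, and a depth-scaled elliptical cross-section approximating a cone section for the 3-dimensional case. The linear growth rate is an assumption:

```python
def in_ellipse(px, py, cx, cy, rx, ry):
    """True if (px, py) lies inside the axis-aligned ellipse centred at
    (cx, cy) with semi-axes rx and ry (a rectangle test is analogous)."""
    return ((px - cx) / rx) ** 2 + ((py - cy) / ry) ** 2 <= 1.0

def in_cone_section(px, py, pz, cx, cy, rx, ry, grow):
    """3-dimensional variant: the elliptical cross-section grows linearly
    with the z (depth) coordinate, approximating a cone projecting
    outwards from the plane of the screen."""
    k = 1.0 + grow * pz
    return in_ellipse(px, py, cx, cy, rx * k, ry * k)

print(in_ellipse(0.5, 0.0, 0.0, 0.0, 1.0, 1.0))                 # True
print(in_cone_section(1.5, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0))  # True: semi-axes doubled at z = 1
```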
  • Interaction with 3 dimensional hyperlinks can be through gesture recognition systems, such as Microsoft's Kinect® technology. The system may be applied to video or time-based media and incorporates time-driven hotspots, which are driven by time codes, or times from a start point, of the video.
  • Dynamic active zones may be an XML code set which describes locations of hyperlink zones at a given time, uniquely linked to specific videos. This may be in 2 dimensions or in 3 dimensions, i.e. stereoscopic applications.
  • the system may be applied to any of the following:
  • An XML file describes the properties of the moving hyperlinks, or active zones, in three dimensions and the moving hyperlinks are associated with movement within the image.

Abstract

A method of adapting media to be interactive comprising the steps of: positioning an active layer over the media; identifying a plurality of active zones on the active layer; and linking the active zones to information obtained from a source such that when an active zone of the active layer is triggered, feedback is presented to a user. The invention extends to a device incorporating media adapted to be interactive.

Description

Interactive Media
Field of the Invention
This invention relates to a method of creating interactive media, adapting media to be interactive and a system for displaying the media produced therefrom.
Background to the Invention
When viewers are looking at media, especially video media, it is often desirable to know more about a particular item in the media, for example, where else the model or actor has appeared, or where an item in a scene can be purchased. So-called "interactive media" is becoming more commonplace as technology progresses. However, a problem with existing interactive media systems, especially interactive videos, is that the items shown in the video do not always correspond to the region that can be activated to provide more information about the item. This is especially common when items that the user wishes to know more about move over time within the media.
Further uses of interactive media include learning and training. In such applications, it is particularly important that the items correspond to the link to the information.
Currently available interactive media systems include providing interactive regions of a video within the video itself, created by editing the video to track objects frame-by-frame. This is a very time-consuming process because each frame needs processing, and a standard video having 24 frames per second leads to a large amount of analysis being required for even a short video. Another method of creating interactive videos is via pixel tracking, which can be unreliable.
Summary of the Invention
The invention is directed to a method of adapting a video to be interactive as defined in claim 1. Optional features are disclosed in the dependent claims.
The invention may include adapting stereoscopic video in which images are perceived to extend inwards to / outwards from the screen. This may be obtained by the use of a z-coordinate in combination with x- and y-coordinates. The present invention may be directed to a method of adapting media to be interactive comprising the steps of:
Positioning an active layer over the media;
Identifying a plurality of active zones on the active layer; and
Linking the active zones to information obtained from a source such that when an active zone of the active layer is triggered, feedback is presented to a user.
An active layer is a zone containing moving hyperlinks which is congruent to the video window. The moving hyperlinks, or active zones, are linked to moving images contained within the associated video sequence.
The active layer may be either a physical screen, preferably in the form of a touch-screen, or a transparent (or translucent) digital layer overlaid on top of the media. The active layer carries dynamic active zones, which are related to timecode positions within the underlying video. These active zones specify the clickable, touchable, or otherwise activatable, zones. By presenting an active layer over the media, the location of a user's interaction can be calculated and any, or each, active zone corresponding to that location can be activated accordingly. The feedback may be in the form of a 'pop-up' containing information, a link to a website, or information sourced from a database that is displayed either in a separate part of the visible media, within the media, or in a border region around the outside of the media. Alternatively, or additionally, a list of the objects selected by a user whilst viewing the media may be stored and provided later. The feedback may also be linked to a tag, or marker, in a video such that on activation, the user 'skips' to a later part of the video, or to a different video. The active zone may open an application or program when activated. Because the active zones are within the active layer that is overlaid on the media, processing of the media to create active zones, and changes to the size, shape and location of the active zones, is not necessarily embedded within, or directly connected to, the media itself. This makes processing the media quicker and easier. Additionally, where amendments are made to the media, the equivalent changes can be quickly and easily made to the active layer.
Whilst a single active zone could appear on the media, it is preferable that a plurality exist. The active zone may be effectively invisible to a user by making it transparent, thereby essentially creating an invisible button. Alternatively, it might be visible, by using either a level of translucence or making the zone opaque. It may be in the form of an icon or a logo, if preferred. The media may be in the form of a picture, film, video or games, including video games, or other visual formats.
Preferably, the media is a moving image and an active zone changes position according to where an object in the media moves, in order that the active zone corresponds to the object. The active zone is defined so as to correspond to an object or item in the image. The location of the active zone then moves in the active layer as the object or item moves in the moving media. Because the active zone tracks the object or item, a user is able to activate the active zone at any time by selecting the object in the moving media. When performed on a computer screen, the object may be selected using a conventional 'mouse and pointer' or a touch screen. The media may be in the form of a video or electronic game with active zones laid thereover. As the active zone is a predefined region, it can be easily relocated according to where the item in the moving media is located. Additionally, the shape and size can easily be manipulated to correspond to any changes in orientation or size resulting from the movement of the object or item, or of the viewer's frame of reference. Advantageously, the moving active zone is produced by the steps of:
identifying a first location corresponding to a first position of the object;
identifying a second location corresponding to a second position of the object; and using inbetweening to provide a smoother transition from the first position to the second position of the active zone.
By inbetweening, also called 'tweening', a start and end location can be identified and a path between the two created according to those locations. This may be done manually, or automatically using software to create the most likely path and fill in the necessary information required. This can save time over conventional systems that require frame analysis.
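The inbetweening step can be sketched as linear interpolation between two keyframed zone positions; a real implementation might also use curved paths:

```python
def tween(start, end, t0, t1, t):
    """Linearly interpolate an active zone's (x, y, w, h) tuple between
    its position at time t0 and its position at time t1.  A minimal
    sketch of the inbetweening step described in the text."""
    if not t0 <= t <= t1:
        raise ValueError("t outside the keyframe interval")
    f = (t - t0) / (t1 - t0)
    return tuple(a + f * (b - a) for a, b in zip(start, end))

# Zone moves from (10, 10, 40, 40) at 5 seconds to (110, 60, 40, 40) at 10 seconds.
print(tween((10, 10, 40, 40), (110, 60, 40, 40), 5, 10, 7.5))
# -> (60.0, 35.0, 40.0, 40.0) at the midpoint
```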
It is preferable that the information is software code and, more preferably, that the information is stored in an XML file or similar. The active software code may be in the form of an internet link, a macro, JavaScript or code for other programming environments.
Alternative languages or actions may be desirable, for example a process that places an order for particular items from a supplier upon activation of the active zone.
The invention extends to an interactive multimedia system comprising a visual display upon which media is displayed, wherein an active layer is overlaid onto the media being displayed, the active layer comprising a plurality of active zones therein and wherein when an active zone is activated, a sequence is initiated providing the user with feedback. Again, as with the above method, the active layer may be a virtual, or software, layer, or a physical layer in the form of an overlay to be positioned over media or a touch screen.
In an advantageous construction, an object within the media is identified and an active zone assigned thereto, and wherein the object is subsequently tracked over time and the active zone relocated accordingly so as to correspond to the position of the object to which it is assigned.
Preferably, the system is used to provide augmented reality. The provision of augmented reality may be particularly advantageous for training videos, so that a user is shown information that would otherwise not be present. Additionally, with the ability to access further information or facilities from triggering an active zone, the user can learn more about their surroundings. This may be especially advantageous when used in combination with GPS, to provide information on the user's location and on what can be augmented.
Advantageously, the system is applied to three-dimensional media. Enhanced three-dimensional media with dynamic active zones gives a user the benefit of selecting multiple dynamic active zones that may be 'stacked' upon one another within the active layer. The 'depth' of the user's selection can be determined from a wireless controller, or via speech recognition. Stereoscopic 3-dimensional media can be enhanced using the present invention by adding a "z" coordinate to a 2-dimensional active zone; in such a situation a square active zone becomes a cuboid and a circle becomes a cylinder, although it may be desirable for a circle to become a cone or a sphere, and likewise for a square or triangle to become a pyramid. Other 3-dimensional shapes may also be advantageous according to the situation and desired usage of the active zone.
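By way of illustration only, hit-testing a selection point against such extruded 3-dimensional zones might be sketched as below; the function names and parameter conventions are assumptions rather than part of the specification:

```python
import math

def hit_cuboid(px, py, pz, x, y, z, w, h, d):
    """A square or rectangular zone extruded along the z axis becomes a
    cuboid: the point must lie within the box on all three axes."""
    return x <= px <= x + w and y <= py <= y + h and z <= pz <= z + d

def hit_cylinder(px, py, pz, cx, cy, z, r, d):
    """A circular zone extruded along the z axis becomes a cylinder of
    radius r: check the depth range, then the planar distance."""
    return z <= pz <= z + d and math.hypot(px - cx, py - cy) <= r
```

The 'depth' of the selection point (`pz`) would, as described above, come from a wireless controller or other interface rather than from a conventional pointer.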
Additionally, the active zones, especially 3-dimensional active zones, may change size and shape according to the progression of the media. This may be particularly advantageous in fields such as medical training, especially for surgery, or for engineering techniques that are better learned in three dimensions rather than two. By providing active zones in 3-dimensional media, a user can become more familiar with the task to be undertaken and can enquire, by activating the active zone, what certain parts and elements are and how they work in relation to the other parts that are present.
Brief Description of the Drawings
An embodiment of the invention will now be described, by way of example only, and with reference to the accompanying figures, in which:
Figure 1 is a diagram showing active zones on still media, in accordance with the present invention;
Figure 2 is a screenshot of a video with static active zones shown thereon;
Figure 3 is a diagram showing dynamic active zones in accordance with the present invention;
Figure 4 is a flow diagram of a method of creating active zones for moving media, in accordance with the present invention;
Figure 5 shows a frame from a video enhanced using the present invention; and
Figure 6 shows a frame from real-time action of a sports match enhanced using the present invention.
Detailed Description of Exemplary Embodiments
Figure 1 shows an active layer 10, having an area 12, comprising active zones 14 and 16 within its area 12. The active zones 14 and 16, which might be considered to be buttons overlaid on the media, can be activated or triggered by a user. When the user activates either of the said active zones 14 or 16, instructions are followed and feedback is provided so as to supply the user with information. A control zone 18 is provided within the area of the active layer 10, in which no active zones are located. This control zone provides an area for media controls (not shown) to be provided, to avoid active zones being triggered when a user is controlling the media.
Figure 2 shows the active layer 10 of Figure 1, overlaid onto media 20. The area 12 of the active layer 10 conforms to the area 22 of the media 20. Controls 24 are provided at the bottom of the media 20 for adjusting the media 20.
The active zones 14 and 16 are static within the active layer 10 and allow the user to 'navigate' around a scene. When the active zone 16 is activated, the media can skip to a later part of the media 20 so as to give the impression that the user has continued 'forward'. Should the user activate active zone 14, the media can either skip to a later part of the media 20, or open and display another media (not shown).
Figure 3 shows an active layer 40 with an area 42, which is suitable for placing over media (not shown) comprising moving objects. The media may, for example, be in the form of a moving image, or video, such as an Adobe® Flash® video in SWF format. Within the area 42 of the active layer 40 is an active zone 44 in a first position 46 and a second position 48. The active zone 44, which is intended to correspond to an object (not shown) within a video over which the active layer is positioned, moves within the active layer according to where the object is located. In order to track the object in the media, the first location of the object is located and the active zone 44 is associated therewith at position 46. At the end of a predetermined time, the location of the same object within the video is identified and the active zone 44 associated therewith at position 48. The movement of the object is then traced using inbetweening.
Figure 4 shows the process of tracking an object and inbetweening according to the present invention.
The active zones 14, 16 and 44 are embedded with a macro in the form of an Extensible Markup Language (XML) file. When triggered, the active zones 14, 16 and 44 follow the procedure contained within the XML file to trigger feedback to be provided. An example of XML code for a typical dynamic active zone (DAZ) in a media player is shown below:
<DAZs>
<DAZ>
<name>DAZ1</name>
<imagefile>images/moving_DAZ.swf</imagefile>
<x>0</x>
<y>0</y>
<z>0</z>
<width>100</width>
<height>100</height>
<depth>50</depth>
<goToVid>video2</goToVid>
<goToTime>0</goToTime>
<noclick_vid>Packwhouse_stop1-look_left_click_tools_.flv</noclick_vid>
<noclick_time>0</noclick_time>
<playAction>play</playAction>
<timeStart>5</timeStart>
<timeFinish>10</timeFinish>
<result>User clicked DAZ1</result>
</DAZ>
</DAZs>
As can be seen from the XML data, the active zone is defined by a location using X and Y coordinates, mapped from the top left hand corner of the media - a Z coordinate can be added if required - and the size of the active zone is defined thereafter. In the above example, when the active zone is activated, the user is directed to a separate video named "video2". By entering a time at the "goToTime" step, the user can be taken to a specific part of the second video, for example <goToTime>20</goToTime> will take the user to 20 seconds into the video.
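A minimal sketch of how a media player might read such an XML code set follows; the element names mirror the example above, but the parsing approach shown (and the sample value of 20 for "goToTime", chosen to show the numeric conversion) is only one possible implementation:

```python
import xml.etree.ElementTree as ET

# A minimal dynamic active zone (DAZ) definition following the
# element names of the example above.
daz_xml = """<DAZs>
  <DAZ>
    <name>DAZ1</name>
    <x>0</x><y>0</y><z>0</z>
    <width>100</width><height>100</height><depth>50</depth>
    <goToVid>video2</goToVid><goToTime>20</goToTime>
    <timeStart>5</timeStart><timeFinish>10</timeFinish>
  </DAZ>
</DAZs>"""

def load_dazs(xml_text):
    """Read each <DAZ> element into a plain dictionary, converting the
    coordinate, size and time fields to numbers so a player can test
    positions and times directly."""
    numeric = {"x", "y", "z", "width", "height", "depth",
               "goToTime", "timeStart", "timeFinish"}
    zones = []
    for daz in ET.fromstring(xml_text).iter("DAZ"):
        zones.append({child.tag: float(child.text) if child.tag in numeric
                      else child.text for child in daz})
    return zones

zones = load_dazs(daz_xml)
```

With such a structure in hand, the player can locate each zone from the top left-hand corner of the media and, on activation, direct the user to the video named in "goToVid" at the time given by "goToTime".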
If the active zone is not activated before the finish time, the player will play a new video. This could be amended so that the existing video pauses until the user does activate the active zone, or so that another instruction, such as looping, is performed. By having the media player skip to a different part of the video when no active zones are activated, a series of videos can be put into a single file and the appropriate part skipped to or paused as required. Tagging the video in appropriate positions may assist in skipping to the correct part.
The active zone is active only for the period defined by the start and finish times towards the end of the code. Therefore, as shown above, an active zone can be displayed when the video is 5 seconds into playing, and then deactivated when the video is at 10 seconds. Times are written in seconds alone, so one and a half minutes will be 90 seconds. Clearly, the time finish value must be greater than the time start value.
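The start and finish time behaviour described above can be sketched as follows; the dictionary representation of a zone is an illustrative assumption:

```python
def is_active(zone, t):
    """An active zone may be triggered only between its start and finish
    times, both given in seconds from the start of the video; the finish
    time must be greater than the start time."""
    if zone["timeFinish"] <= zone["timeStart"]:
        raise ValueError("timeFinish must be greater than timeStart")
    return zone["timeStart"] <= t <= zone["timeFinish"]

# The zone from the example above: shown at 5 seconds, removed at 10.
zone = {"timeStart": 5, "timeFinish": 10}
```

A player would evaluate such a check against the current playback time before dispatching any trigger to the zone's instructions.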
The user's actions may be stored in a log, stored in a content management system (CMS), or a learning management system (LMS), as a record of the user's actions over time. This can then be used to assess a user's performance, which may be especially useful where the video is used for health and safety training, or risk assessment.
The present invention may be used for interactive television programs, such as dramas, game-shows and children's programs, and can enhance such programs by providing games and/or information linked to items within the program. It may also be applied to movies. By allowing a user to interact with the film or program the user can experience a closer connection with the media. This, in turn, can promote sales and increase the value of the brand. Extra footage for use in the interactive media can be recorded at the same time as filming the main media.
Additionally, with the progression of electronic books and devices for reading the same, the present invention may be applied thereto. Books and the film versions thereof can be linked so that clips are shown at convenient parts in the electronic book if a reader so wishes.
The present invention can be used to provide interactive adverts, including online adverts that are connected to the Internet, thereby allowing users to gain further information about a product shown in a video, and/or to purchase the product. Should a viewer particularly like a product shown in the background of a scene, they will be able to see the
manufacturer and purchase the item. This has particular advantages where "product placement" is permitted.
The present invention may be used for education and training, especially training in languages, customer service, sales, medicine, health and safety, risk assessment and public services. By creating active zones for items within language teaching media, a user can identify objects and be swiftly provided with the word and/or pronunciation of the object in a chosen language. This allows a user to identify what is useful and/or interesting to them, thereby making the experience more enjoyable and relevant to the user. For interpersonal training, human behaviours and reactions can be accurately represented, which can allow a user to become more familiar with how to deal with particular situations. This may be particularly advantageous for delivering bad news or dealing with aggressiveness or conflict. Additionally, it may be used in military training, for example for dealing with civilian populations or combat operations. By creating a video of a workplace and introducing active zones, training in fire safety and other safety training can be provided that shows the real workplace and assists with a person becoming more familiar with that location.
Whilst the process has been described in respect of post-production creation of the dynamic active zones, it is envisaged that, by the use of software to speed up the process, dynamic active zones can be overlaid on 'live' video. By coding active zones to follow specific real objects, real-time interactivity can be achieved for live broadcasts. As an example, one might access information and statistics during a sports event, as shown in Figure 5 (all trade marks are the property of their respective owners). Film footage of sporting events can be coded to allow spectators to interact and select to view more information as the action unfolds.
For example, in horse racing, a user can call up the statistics of a horse's recent performance, which may be useful, especially in the case of in-game betting. Additionally, coverage of ministerial debates can be made interactive to allow a user to view how a minister voted on previous issues. The 'live' video may be broadcast with a time delay to allow the processing of the video in order to provide interactivity.
Live action, including fashion shows (as shown in Figure 6), may be enhanced by the use of the present invention. By providing a viewer with a multimedia system with a visual display and a video camera, the viewer can be provided with an augmented reality view of the live action, wherein items such as clothing or people can link to further information and statistics by selecting an item over which an active zone is positioned. This may also assist with sales and advertising.
The movement of an active zone may be performed on a frame-by-frame basis instead of inbetweening, if required. Alternatively, a different method of tracking an object in moving media may be used and the active zone then associated with the object. For example, manual coding of selected frames may be undertaken. Key frame analysis may be desirable.
The active zones may be triggered using a wireless controller, connected by a short-range radio device, for example Bluetooth®, or by infra-red, wherein the controller comprises at least one motion sensor, for example an accelerometer, and a control button, so that a user can interact with media in either two or three dimensions. Other sensors or connection mechanisms may be desirable, either in place of those mentioned or in addition thereto. Alternatively, where the media is viewed on a computer, or the like, the usual selector, such as a mouse-controlled pointer, can be used. It is anticipated that eye-control, that is tracking the user's eye-gaze and activating the dynamic active zones using one's eyes, may be desirable. Other forms of interface may also be preferred.
Where one or more active zones overlap, a level of dominance may be given to each active zone so that only one is activated when triggered. By assigning dominance according to which zone is most likely to be selected, or most commonly selected, the choice made on the user's behalf is more likely to be correct. Alternatively, the user may be given a choice of which zone they wish to activate. Another alternative, where two active zones overlap, is for a time to elapse after the overlapped area is triggered, wherein a further triggering activates one zone and a lack of further triggering activates the other zone.
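The dominance-based resolution described above can be sketched as follows; the `dominance` field is an illustrative assumption, as the specification does not prescribe a particular data format:

```python
def resolve_trigger(candidates):
    """Where a trigger falls inside several overlapping active zones,
    activate only the zone carrying the highest dominance value; return
    None when the trigger hits no zone at all."""
    if not candidates:
        return None
    return max(candidates, key=lambda zone: zone["dominance"])

# A click over a horse and its jockey: the jockey zone dominates here.
hits = [{"name": "horse", "dominance": 2}, {"name": "jockey", "dominance": 5}]
```

The dominance values themselves could be set by the author or adjusted over time from logged selection frequencies.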
Whilst the active zones may track an object in the media, it may be desirable for them to exist without tracking an object, for example in augmented reality applications.
2-dimensional hyperlinks may be specified by either elliptical or rectangular areas.
Interaction with 2-dimensional hyperlinks can be through conventional pointing and clicking. 3-dimensional hyperlinks may be sections of cones projecting inwards to, or outwards from, the plane of the screen. These cones may have either a rectangular or an elliptical cross-section. Interaction with 3-dimensional hyperlinks can be through gesture recognition systems, such as Microsoft's Kinect® technology. The system may be applied to video or time-based media and may incorporate time-driven hotspots, which are driven by time codes, or times from a start point, of the video.
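By way of illustration only, hit tests for the rectangular and elliptical 2-dimensional areas, and for a cone section whose elliptical cross-section scales with depth, might be sketched as follows (one possible reading of the description; all names are illustrative, and z1 is assumed greater than z0):

```python
def hit_rect(px, py, x, y, w, h):
    """Rectangular 2-dimensional hyperlink area, top-left corner (x, y)."""
    return x <= px <= x + w and y <= py <= y + h

def hit_ellipse(px, py, cx, cy, rx, ry):
    """Elliptical 2-dimensional hyperlink area, centre (cx, cy),
    radii rx and ry."""
    return ((px - cx) / rx) ** 2 + ((py - cy) / ry) ** 2 <= 1.0

def hit_cone(px, py, pz, cx, cy, rx0, ry0, rx1, ry1, z0, z1):
    """Section of a cone with elliptical cross-section: the radii
    interpolate linearly from (rx0, ry0) at depth z0 to (rx1, ry1)
    at depth z1, so the area widens as it projects from the screen."""
    if not (z0 <= pz <= z1):
        return False
    t = (pz - z0) / (z1 - z0)
    rx = rx0 + (rx1 - rx0) * t
    ry = ry0 + (ry1 - ry0) * t
    return ((px - cx) / rx) ** 2 + ((py - cy) / ry) ** 2 <= 1.0
```

A rectangular cross-section cone would follow the same pattern with the half-widths of the rectangle interpolated instead of the radii.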
Dynamic active zones may be defined by an XML code set which describes the locations of hyperlink zones at a given time, uniquely linked to specific videos. This may be in 2 dimensions or in 3 dimensions, i.e. stereoscopic applications.
There may be more than one active layer applied to the media.
The system may be applied to any of the following:
Connected TV;
Internet pages;
Tablet applications; and
Mobile Apps.
Additionally, or alternatively, it can be overlaid onto any video.
An XML file describes the properties of the moving hyperlinks, or active zones, in three dimensions and the moving hyperlinks are associated with movement within the image.

Claims
1. A method of adapting stereoscopic moving media to be interactive comprising the steps of:
Positioning at least one active layer over the stereoscopic moving media;
Identifying a plurality of active zones on the active layer, linked to the position of objects within the stereoscopic moving media; and
Linking the active zones to information obtained from a source such that when an active zone of the active layer is triggered, feedback is presented to a user,
Wherein the active zone changes position according to where an object in the stereoscopic media moves, in order that the active zone corresponds to the object.
2. A method according to claim 1, wherein the moving active zone is produced by the steps of:
identifying a first location corresponding to a first position of the object;
identifying a second location corresponding to a second position of the object; and using inbetweening to provide a smoother transition from the first position to the second position of the active zone.
3. A method according to any preceding claim, wherein the information is stored in a code file that identifies the location of an active zone within the active layer at a given time using coordinates in three dimensions, and wherein the active zone is a hyperlink.
4. A method according to claim 3, wherein the code file is an XML file.
5. A method according to any preceding claim, wherein the active layer is a virtual layer.
6. A method according to any one of claims 1 to 5, wherein the active layer is a physical layer positioned over a media displaying device and comprises a transparent section containing the active zones.
7. An interactive moving media system comprising a visual display upon which moving media is displayed, wherein the moving media is adapted to be interactive in accordance with the method of claims 1 to 6.
8. An interactive moving media system according to claim 7, wherein a user interacts with the active zones of the stereoscopic media by way of gesture recognition.
9. A system according to claim 7 or claim 8, wherein the system is used to provide stereoscopic augmented reality.
PCT/GB2012/052876 2011-11-21 2012-11-21 Interactive media WO2013076478A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1120049.0A GB2497071A (en) 2011-11-21 2011-11-21 A method of positioning active zones over media
GB1120049.0 2011-11-21

Publications (1)

Publication Number Publication Date
WO2013076478A1 true WO2013076478A1 (en) 2013-05-30

Family

ID=45475466

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2012/052876 WO2013076478A1 (en) 2011-11-21 2012-11-21 Interactive media

Country Status (2)

Country Link
GB (1) GB2497071A (en)
WO (1) WO2013076478A1 (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3014467A4 (en) * 2013-06-26 2017-03-01 Touchcast LLC System and method for providing and interacting with coordinated presentations
US9661256B2 (en) 2014-06-26 2017-05-23 Touchcast LLC System and method for providing and interacting with coordinated presentations
US9666231B2 (en) 2014-06-26 2017-05-30 Touchcast LLC System and method for providing and interacting with coordinated presentations
US9787945B2 (en) 2013-06-26 2017-10-10 Touchcast LLC System and method for interactive video conferencing
US9852764B2 (en) 2013-06-26 2017-12-26 Touchcast LLC System and method for providing and interacting with coordinated presentations
US10075676B2 (en) 2013-06-26 2018-09-11 Touchcast LLC Intelligent virtual assistant system and method
US10084849B1 (en) 2013-07-10 2018-09-25 Touchcast LLC System and method for providing and interacting with coordinated presentations
US10255251B2 (en) 2014-06-26 2019-04-09 Touchcast LLC System and method for providing and interacting with coordinated presentations
US10297284B2 (en) 2013-06-26 2019-05-21 Touchcast LLC Audio/visual synching system and method
US10356363B2 (en) 2013-06-26 2019-07-16 Touchcast LLC System and method for interactive video conferencing
US10523899B2 (en) 2013-06-26 2019-12-31 Touchcast LLC System and method for providing and interacting with coordinated presentations
US10757365B2 (en) 2013-06-26 2020-08-25 Touchcast LLC System and method for providing and interacting with coordinated presentations
US11405587B1 (en) 2013-06-26 2022-08-02 Touchcast LLC System and method for interactive video conferencing
WO2022188733A1 (en) * 2021-03-08 2022-09-15 Hangzhou Taro Positioning Technology Co., Ltd. Scenario triggering and interaction based on target positioning and identification
US11488363B2 (en) 2019-03-15 2022-11-01 Touchcast, Inc. Augmented reality conferencing system and method
US11659138B1 (en) 2013-06-26 2023-05-23 Touchcast, Inc. System and method for interactive video conferencing

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014049311A1 (en) * 2012-09-29 2014-04-03 Gross Karoline Liquid overlay for video content

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997042601A1 (en) * 1996-05-06 1997-11-13 Sas Institute, Inc. Integrated interactive multimedia process
US6496981B1 (en) * 1997-09-19 2002-12-17 Douglass A. Wistendahl System for converting media content for interactive TV use
US20050069225A1 (en) * 2003-09-26 2005-03-31 Fuji Xerox Co., Ltd. Binding interactive multichannel digital document system and authoring tool
US20060012675A1 (en) * 2004-05-10 2006-01-19 University Of Southern California Three dimensional interaction with autostereoscopic displays
US20100225734A1 (en) * 2009-03-03 2010-09-09 Horizon Semiconductors Ltd. Stereoscopic three-dimensional interactive system and method
WO2010141939A1 (en) * 2009-06-05 2010-12-09 Mozaik Multimedia, Inc. Ecosystem for smart content tagging and interaction
US20110137753A1 (en) * 2009-12-03 2011-06-09 Armin Moehrle Automated process for segmenting and classifying video objects and auctioning rights to interactive sharable video objects

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5918012A (en) * 1996-03-29 1999-06-29 British Telecommunications Public Limited Company Hyperlinking time-based data files
KR100321839B1 (en) * 1997-04-02 2002-03-08 포만 제프리 엘 Method and apparatus for integrating hyperlinks in video
US6229541B1 (en) * 1999-09-03 2001-05-08 Isurftv Use of templates for cost-effective secure linking of video stream objects
US7885951B1 (en) * 2008-02-15 2011-02-08 Lmr Inventions, Llc Method for embedding a media hotspot within a digital media file
US8400548B2 (en) * 2010-01-05 2013-03-19 Apple Inc. Synchronized, interactive augmented reality displays for multifunction devices



Also Published As

Publication number Publication date
GB201120049D0 (en) 2012-01-04
GB2497071A (en) 2013-06-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12799252

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12799252

Country of ref document: EP

Kind code of ref document: A1