US20140293014A1 - Video Capture System Control Using Virtual Cameras for Augmented Reality - Google Patents
- Publication number
- US20140293014A1 (US application Ser. No. 14/307,356)
- Authority
- US
- United States
- Prior art keywords
- virtual
- camera
- rendering
- video capture
- feed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/156—Mixing image signals
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
- H04N13/004—
- H04N13/0239—
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
Abstract
There is provided a system and method for integrating a virtual rendering system and a video capture system using flexible camera control to provide an augmented reality. There is provided a method for integrating a virtual rendering system and a video capture system for outputting a composite render to a display, the method comprising obtaining, from the virtual rendering system, a virtual camera configuration of a virtual camera in a virtual environment, programming the video capture system using the virtual camera configuration to correspondingly control a robotic camera in a real environment, capturing a video capture feed using the robotic camera, obtaining a virtually rendered feed using the virtual camera, rendering the composite render by processing the feeds, and outputting the composite render to the display.
Description
- 1. Field of the Invention
- The present invention relates generally to digital video. More particularly, the present invention relates to digital video rendering.
- 2. Background Art
- Modern commodity PC hardware and videogame consoles are often equipped with sufficient processing capability to enable high-resolution real-time three-dimensional graphics rendering. Even portable devices such as mobile phones and handheld gaming systems are often equipped with scaled-down real-time three-dimensional graphics support. Such low-cost commodity graphics processing hardware has enabled a wide variety of entertainment and productivity applications to support enhanced visual presentations for greater user engagement and enjoyment.
- In particular, real-time three-dimensional graphics rendering has come to play a highly supportive role in live broadcast programming. For example, coverage of sports and other live events can be readily enhanced with composite renderings using three-dimensional graphics for alternative or substitute object rendering, strategy simulations, information boxes, alternative viewpoints, and other effects. Although three-dimensional analysis tools that allow for real-time modification of live footage exist, they are limited to modifying existing video camera footage using prior camera paths and viewpoints. As a result, viewer engagement is low, since the scope of analysis is so limited.
- Accordingly, there is a need to overcome the drawbacks and deficiencies in the art by providing a way to create composite renderings using live footage with real-time three-dimensional graphics rendering for high viewer impact and engagement.
- There are provided systems and methods for integrating a virtual rendering system and a video capture system using flexible camera control to provide an augmented reality, substantially as shown in and/or described in connection with at least one of the figures, as set forth more completely in the claims.
- The features and advantages of the present invention will become more readily apparent to those ordinarily skilled in the art after reviewing the following detailed description and accompanying drawings, wherein:
- FIG. 1 presents a system for integrating a virtual rendering system and a video capture system using flexible camera control to provide an augmented reality, according to one embodiment of the present invention;
- FIG. 2 presents a diagram of a robotic camera path configured to match a virtual camera path, according to one embodiment of the present invention;
- FIG. 3 presents a diagram of a composite render being generated, according to one embodiment of the present invention; and
- FIG. 4 shows a flowchart describing the steps, according to one embodiment of the present invention, by which a virtual rendering system and a video capture system may be integrated for outputting a composite render of an augmented reality to a display.
- The present application is directed to a system and method for integrating a virtual rendering system and a video capture system using flexible camera control to provide an augmented reality. The following description contains specific information pertaining to the implementation of the present invention. One skilled in the art will recognize that the present invention may be implemented in a manner different from that specifically discussed in the present application. Moreover, some of the specific details of the invention are not discussed in order not to obscure the invention. The specific details not described in the present application are within the knowledge of a person of ordinary skill in the art. The drawings in the present application and their accompanying detailed description are directed to merely exemplary embodiments of the invention. To maintain brevity, other embodiments of the invention, which use the principles of the present invention, are not specifically described in the present application and are not specifically illustrated by the present drawings.
- FIG. 1 presents a system for integrating a virtual rendering system and a video capture system using flexible camera control to provide an augmented reality, according to one embodiment of the present invention. Diagram 100 of FIG. 1 includes virtual rendering system 110, video capture system 130, master controller 150, composite render 155, live broadcast link 156, and display 157. Virtual rendering system 110 includes rendering engine controller 111, auxiliary rendering engine 112, slave rendering engines 113a-113b, and virtual environment 120. Virtual environment 120 includes virtual cameras 121a-121b. Virtual camera 121a includes data 122a. Virtual camera 121b includes data 122b. Video capture system 130 includes camera motion controller 131 and real environment 140. Real environment 140 includes robotic cameras 141a-141b. Robotic camera 141a includes data 142a. Robotic camera 141b includes data 142b.
- Rendering engine controller 111, auxiliary rendering engine 112, slave rendering engine 113a, and slave rendering engine 113b may each execute on separate servers, each comprising standard commodity PC hardware or a videogame console system. Alternatively, the engines of virtual rendering system 110 may be consolidated into a single server, or distributed remotely across a network to minimize the amount of necessary on-site hardware. Rendering engine controller 111 may coordinate control and data sharing between the different rendering subsystems, as shown in FIG. 1. Auxiliary rendering engine 112 may provide static graphics overlays and other graphical effects that do not require input from virtual environment 120. Slave rendering engines 113a-113b control virtual cameras 121a-121b, respectively, to each receive a virtually rendered feed of virtual environment 120.
- Data 122a-122b, describing the configuration of virtual cameras 121a-121b within virtual environment 120, may each include, for example, position data such as three-dimensional coordinates; camera field-of-view orientation data such as camera angle, focal length, and focus distance; movement data such as a motion path or velocity and acceleration; and camera characteristics such as lens parameters, camera size, center of lens, and other camera modeling details. Three-dimensional coordinates between virtual environment 120 and real environment 140 may be defined using a common frame of reference, such as setting a particular corner of a field or a particular landmark as a common (0, 0, 0) coordinate. The motion path may then describe the changing of the above data parameters with respect to time, such as the three-dimensional coordinates with respect to time or the camera field of view with respect to time. Slave rendering engines 113a-113b may then modify data 122a-122b, respectively, to control the camera paths of virtual cameras 121a-121b.
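By way of illustration only, the configuration data described above can be pictured as a small record type plus a time-parameterized motion path. In the following Python sketch, the field names, units, and the arc_path helper are assumptions of this example, not structures disclosed by the application:

```python
import math
from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class CameraConfiguration:
    """One time-sample of camera configuration data (in the spirit of data 122a)."""
    position: Vec3              # three-dimensional coordinates in the common frame
    orientation: Vec3           # camera angle as (pan, tilt, roll), in degrees
    focal_length_mm: float      # field-of-view data
    focus_distance_m: float
    lens_parameters: Dict[str, float] = field(default_factory=dict)  # other modeling details

# A motion path describes the configuration with respect to time.
MotionPath = Callable[[float], CameraConfiguration]

def arc_path(radius_m: float, height_m: float, duration_s: float) -> MotionPath:
    """Example path: a quarter-circle arc about the common (0, 0, 0) reference point."""
    def at(t: float) -> CameraConfiguration:
        angle = (math.pi / 2) * min(t / duration_s, 1.0)
        return CameraConfiguration(
            position=(radius_m * math.cos(angle), radius_m * math.sin(angle), height_m),
            orientation=(math.degrees(angle) + 180.0, -10.0, 0.0),  # aim back toward origin
            focal_length_mm=35.0,
            focus_distance_m=radius_m,
        )
    return at
```

Because the path is just a function of time, the same object can be sampled by a slave rendering engine for a virtual camera and by a motion controller for a robotic camera, which is what makes the shared-parameter control described below possible.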
- Although virtual rendering system 110 of FIG. 1 depicts only two slave rendering engines, each controlling exactly one virtual camera, alternative embodiments may use any arbitrary number of slave rendering engines to control any arbitrary number of virtual cameras; in particular, each slave rendering engine may control more than one virtual camera. Similarly, although video capture system 130 of FIG. 1 only depicts two robotic cameras, alternative embodiments may use any arbitrary number of robotic cameras controlled by camera motion controller 131 of video capture system 130. In this manner, the composite rendering system shown in FIG. 1 can be scaled to as many camera angles and viewpoints as desired, in either virtual environment 120 or real environment 140.
- Real environment 140 corresponds to an actual physical environment represented by virtual environment 120. For example, real environment 140 might comprise an indoor or outdoor sports field or stadium, a golf course, a natural environment, an urban environment, or any other locale. Although the examples so far have focused on sports entertainment, other applications, such as educational or informational programming, may also benefit from this use of augmented reality. Virtual environment 120 may then be created using manual three-dimensional environmental modeling, automated photographic or video extrapolation, or some other manual or automated method.
- Master controller 150 may direct virtual rendering system 110 to control virtual cameras 121a-121b according to particular parameters, and also direct video capture system 130 to control robotic cameras 141a-141b using the same parameters. The particular parameters of camera behavior might be dictated by manual control, by tracking the motion of a particular object or focusing on a particular scene in either virtual environment 120 or real environment 140, by replaying a previously recorded pattern of movement or another predetermined path, or by using some other criteria. Tracked objects may include, for example, a ball or a participating player of a game such as a sports match, and may be virtual or real. Once the virtual and robotic cameras are properly configured by appropriately programming the motion paths of data 122a-122b and 142a-142b, master controller 150 may then query virtual rendering system 110 for virtually rendered feeds and video capture system 130 for video capture feeds. Master controller 150 may then act as a rendering controller, combining the feeds smoothly using standard broadcast key technology such as chroma key or key/fill to generate composite render 155, which includes real and virtual feed elements arranged in a specific desired manner for broadcast over live broadcast link 156 to display 157. Live broadcast link 156 may comprise, for example, a satellite uplink to a television studio, from where the broadcast material is disseminated to the general public. Display 157 may then represent, for example, a television of a viewer watching the broadcast.
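Functionally, the master controller pushes one set of path parameters to both systems and then pulls the two feeds back for keying. A minimal sketch of that coordination follows; the program_path, get_feed, and chroma_key interfaces are hypothetical names for this example, not an API disclosed by the application:

```python
def produce_composite(master, virtual_system, capture_system, path):
    # Direct both systems to follow the same camera parameters
    # (programming the motion paths of data 122a and data 142a).
    virtual_system.program_path("virtual_camera_121a", path)
    capture_system.program_path("robotic_camera_141a", path)

    # Query each system for its feed once the cameras are configured.
    rendered = virtual_system.get_feed("virtual_camera_121a")   # virtually rendered feed
    captured = capture_system.get_feed("robotic_camera_141a")   # video capture feed

    # Combine smoothly using standard broadcast key technology.
    return master.chroma_key(foreground=rendered, background=captured)
```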
- FIG. 2 presents a diagram of a robotic camera path configured to match a virtual camera path, according to one embodiment of the present invention. Diagram 200 of FIG. 2 includes virtual environment 220 and real environment 240. Virtual environment 220 includes virtual camera 221, virtual camera path 223, virtual object 225, and virtual object path 226. Real environment 240 includes robotic camera 241 and robotic camera path 243. With regard to FIG. 2, it should be noted that virtual environment 220 corresponds to virtual environment 120 from FIG. 1 and that real environment 240 corresponds to real environment 140 from FIG. 1. Moreover, although FIG. 2 only depicts a single virtual camera and a single robotic camera for simplicity, alternative embodiments may use multiple virtual cameras and multiple robotic cameras.
- As previously discussed in FIG. 1, master controller 150 may direct video capture system 130 to control robotic cameras similarly to virtual cameras in virtual rendering system 110. FIG. 2 shows an example of this manner of control, where robotic camera 241 is programmed to follow the movements of virtual camera 221. For example, virtual camera 221 may be programmed to focus on the movement of virtual object 225 following virtual object path 226. Thus, virtual camera 221 may follow virtual camera path 223, with camera orientation following virtual object path 226 as indicated by the dotted arrows. Virtual camera path 223 may then be recorded and programmed into robotic camera 241 of real environment 240, so that robotic camera 241 can follow robotic camera path 243 mirroring virtual camera path 223. Robotic camera 241 may comprise, for example, a gantry-supported fly-by camera, a programmable motion control camera, or another camera system supporting programmable movement.
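Mirroring a recorded virtual camera path on the robotic camera amounts to sampling the path at the motion controller's command rate and streaming waypoints. A sketch under the assumption of a waypoint-style controller, where `path` is a MotionPath as in the earlier sketch and move_to is a hypothetical command rather than a documented interface:

```python
import time

def mirror_virtual_path(path, duration_s, robot, update_hz=50.0):
    """Replay a recorded virtual camera path (e.g. virtual camera path 223)
    on a robotic camera so it traces the matching robotic camera path 243."""
    dt = 1.0 / update_hz
    steps = int(duration_s * update_hz)
    for i in range(steps + 1):
        sample = path(i * dt)        # CameraConfiguration at time i * dt
        robot.move_to(position=sample.position, orientation=sample.orientation)
        time.sleep(dt)               # pace commands to the controller's update rate
```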
- As shown in FIG. 2, the camera orientation of robotic camera 241 moves as if it were following virtual object path 226 within real environment 240, even though there is no corresponding real object for virtual object 225 in real environment 240. By using the system described above in FIG. 1, robotic camera 241 can thus be synchronized to the camera movements of virtual camera 221. Composite rendering of real and virtual environments, also known as "augmented reality", is thus facilitated, as the camera views in virtual environment 220 and real environment 240 can be matched according to any desired virtual camera path, opening up limitless possibilities for dynamic camerawork.
- The example shown in FIG. 2 might be used, for example, to present a dynamic panning camera view showing a hypothetical ball pass defined by virtual object 225 following virtual object path 226, even though such a ball pass never happened in real environment 240. Thus, a composite render might show a real sports field background and real players in a video feed captured from real environment 240, but with a virtual ball rendered in virtual environment 220 as defined by virtual object 225 following virtual object path 226. The composite render can thereby provide a realistic camera fly-by with background elements from real environment 240 and a virtual ball rendered from virtual environment 220. This may be used, for example, to present hypothetical plays and strategy analysis in a realistic and engaging manner for high viewer impact.
- FIG. 3 presents a diagram of a composite render being generated, according to one embodiment of the present invention. Diagram 300 of FIG. 3 includes virtual rendering system 310, virtually rendered feeds 315a-315b, video capture system 330, video capture feeds 335a-335b, master controller 350, composite render 355, live broadcast link 356, and display 357. With regard to FIG. 3, it should be noted that virtual rendering system 310 corresponds to virtual rendering system 110 from FIG. 1, that video capture system 330 corresponds to video capture system 130, that master controller 350 corresponds to master controller 150, that composite render 355 corresponds to composite render 155, that live broadcast link 356 corresponds to live broadcast link 156, and that display 357 corresponds to display 157.
- As shown in FIG. 3, virtual rendering system 310 provides master controller 350 with virtually rendered feeds 315a-315b, while video capture system 330 provides master controller 350 with video capture feeds 335a-335b. For example, video capture feed 335a might correspond to a feed generated by robotic camera 241 in FIG. 2, whereas virtually rendered feed 315a might correspond to a feed generated by virtual camera 221 in FIG. 2. Virtually rendered feed 315b may correspond to a feed created by an overhead virtual camera providing a bird's eye overview of virtual environment 220 from FIG. 2, whereas video capture feed 335b may correspond to a feed created by an overhead robotic camera providing a bird's eye overview of real environment 240 from FIG. 2.
- Master controller 350 may then combine virtually rendered feed 315a and video capture feed 335a for an augmented reality fly-by scene, and also combine virtually rendered feed 315b and video capture feed 335b for an augmented reality bird's eye overview scene. As previously discussed, master controller 350 may use standard broadcast key technologies to combine the different feeds smoothly so that the juxtaposition of real and virtual elements is visually unobtrusive. Master controller 350 may then use these combined scenes in composite render 355 through various presentation methods such as split screen, cascaded or tiled frames, "picture-in-picture", three-dimensional surfaces, and other formatted layouts. Master controller 350 may then forward composite render 355 over live broadcast link 356 for showing on display 357. Master controller 350 may repeat the above process of generating composite render 355 in a periodic manner, such as 24, 30, or 60 times per second or higher, in order to accommodate a desired video broadcasting framerate.
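Regenerating the composite render 24, 30, or 60 times per second is, in effect, a fixed-rate loop around the keying and layout steps. A sketch reusing the assumed interfaces from the earlier examples; compose_layout and is_live are likewise hypothetical names:

```python
import time

def broadcast_loop(master, scenes, link, fps=30.0):
    """Rebuild the composite render periodically and push it over the live link.
    `scenes` holds (virtually_rendered_feed, video_capture_feed) pairs."""
    frame_interval = 1.0 / fps
    while link.is_live():
        start = time.monotonic()
        combined = [master.chroma_key(foreground=v, background=c)
                    for v, c in scenes]
        composite = master.compose_layout(combined)   # e.g. picture-in-picture
        link.send(composite)
        # Sleep off the remainder of the frame budget to hold the framerate.
        time.sleep(max(0.0, frame_interval - (time.monotonic() - start)))
```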
- Although FIG. 3 only shows a single composite render 355, alternative embodiments may use several composite renders. For example, master controller 350 may generate multiple composite renders to provide different camera views for multiple broadcast channels, to customize based on a target broadcast region or audience demographics, to focus on a particular team in a sports match, or to support any other broadcasting application that may require multiple concurrent video streams. By adding additional slave rendering engines and robotic cameras, augmented reality rendering systems can be readily scaled and configured to support large-scale projects.
- FIG. 4 shows a flowchart describing the steps, according to one embodiment of the present invention, by which a virtual rendering system and a video capture system may be integrated for outputting a composite render of an augmented reality to a display. Certain details and features have been left out of flowchart 400 that are apparent to a person of ordinary skill in the art. For example, a step may comprise one or more substeps or may involve specialized equipment or materials, as known in the art. While steps 410 through 460 indicated in flowchart 400 are sufficient to describe one embodiment of the present invention, other embodiments of the invention may utilize steps different from those shown in flowchart 400.
- Referring to step 410 of flowchart 400 in FIG. 4 and diagram 100 of FIG. 1, step 410 of flowchart 400 comprises obtaining, from virtual rendering system 110, data 122a of virtual camera 121a in virtual environment 120. As shown in FIG. 1, master controller 150 may query rendering engine controller 111 for a virtual camera configuration concerning virtual camera 121a. Rendering engine controller 111 may then determine that slave rendering engine 113a controls virtual camera 121a, and correspondingly send a request to slave rendering engine 113a to retrieve data 122a. Data 122a may then be retrieved by slave rendering engine 113a for relay back to master controller 150 via rendering engine controller 111. As previously described, data 122a may contain various information concerning the configuration of virtual camera 121a, such as three-dimensional position and movement, camera focus and view, camera model parameters, and other details.
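The query in step 410 is a two-hop relay: the master controller asks the rendering engine controller, which determines the owning slave engine and relays its data back. A sketch of that lookup with assumed method and attribute names (cameras and retrieve_data are illustrative, not disclosed interfaces):

```python
class RenderingEngineController:
    """Routes virtual-camera configuration queries to the owning slave engine."""

    def __init__(self, slave_engines):
        # Map each virtual camera to the slave rendering engine controlling it.
        self._owner = {camera_id: engine
                       for engine in slave_engines
                       for camera_id in engine.cameras}

    def get_camera_configuration(self, camera_id):
        # Determine which slave engine controls the camera, then relay its
        # data (position and movement, focus and view, model parameters).
        return self._owner[camera_id].retrieve_data(camera_id)
```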
- Referring to step 420 of flowchart 400 in FIG. 4 and diagram 100 of FIG. 1, step 420 of flowchart 400 comprises programming video capture system 130, using data 122a obtained from step 410, to correspondingly control robotic camera 141a in real environment 140. For example, master controller 150 may instruct camera motion controller 131 to program values into data 142a to match data 122a as closely as possible. As previously discussed, data 122a may include three-dimensional coordinates and camera field of view with respect to time. Assuming virtual camera 221 and robotic camera 241 correspond to virtual camera 121a and robotic camera 141a, the result of setting data 142a to match data 122a may be manifested by robotic camera path 243 mimicking virtual camera path 223, as shown in FIG. 2.
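Matching data 142a to data 122a "as closely as possible" suggests clamping requested values to the robotic camera's mechanical envelope before they are programmed. A hypothetical sketch building on the CameraConfiguration record from the first example; the limit values are invented for illustration:

```python
from dataclasses import replace

def clamp(value, low, high):
    return max(low, min(high, value))

def fit_to_robot(sample, pan_range=(-170.0, 170.0), tilt_range=(-60.0, 30.0)):
    """Return a copy of one CameraConfiguration sample whose orientation is
    clamped to the robotic camera's assumed mechanical limits, so that
    data 142a matches data 122a as closely as the hardware allows."""
    pan, tilt, roll = sample.orientation
    return replace(sample, orientation=(clamp(pan, *pan_range),
                                        clamp(tilt, *tilt_range),
                                        roll))
```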
- Referring to step 430 of flowchart 400 in FIG. 4, diagram 200 of FIG. 2, and diagram 300 of FIG. 3, step 430 of flowchart 400 comprises capturing, from video capture system 330, video capture feed 335a of real environment 240 using robotic camera 241. Since the motion path of robotic camera 241 was previously programmed in step 420, step 430 results in master controller 350 receiving video capture feed 335a comprising fly-by footage according to robotic camera path 243.
- Referring to step 440 of flowchart 400 in FIG. 4, diagram 200 of FIG. 2, and diagram 300 of FIG. 3, step 440 of flowchart 400 comprises obtaining, from virtual rendering system 310, virtually rendered feed 315a of virtual environment 220 using virtual camera 221. As previously discussed, virtual camera path 223 may be defined in any number of ways, such as by manual control, object tracking, recorded motion replay, or predetermined paths. As shown in FIG. 2, virtual camera path 223 is defined as an arc with the camera field of view following virtual object 225 as it progresses through virtual object path 226. Thus, master controller 350 may receive virtually rendered feed 315a comprising fly-by footage according to virtual camera path 223, wherein the feed includes a rendering of virtual object 225.
- Referring to step 450 of flowchart 400 in FIG. 4 and diagram 300 of FIG. 3, step 450 of flowchart 400 comprises rendering composite render 355 by processing video capture feed 335a from step 430 and virtually rendered feed 315a from step 440. As previously discussed, master controller 350 may accomplish step 450 using standard broadcast key technology, such as chroma key or key/fill techniques, to isolate and combine components from each feed to produce a result with a smooth visual juxtaposition of real and virtual elements.
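Chroma keying itself reduces to a per-pixel matte: pixels of the rendered feed that match the key color become transparent, letting the captured footage show through. A minimal NumPy sketch; the green key color and tolerance are assumptions, and a broadcast-grade keyer would also handle spill suppression and edge softness:

```python
import numpy as np

def chroma_key(rendered: np.ndarray, captured: np.ndarray,
               key_rgb=(0, 255, 0), tolerance=60) -> np.ndarray:
    """Combine a virtually rendered feed over a video capture feed.

    Both frames are HxWx3 uint8 arrays of the same shape. Pixels in the
    rendered feed near the key color are treated as transparent, letting
    the real-environment footage show through.
    """
    key = np.array(key_rgb, dtype=np.int16)
    distance = np.abs(rendered.astype(np.int16) - key).sum(axis=-1)
    matte = (distance < tolerance)[..., np.newaxis]   # True where key-colored
    return np.where(matte, captured, rendered)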
- Referring to step 460 of flowchart 400 in FIG. 4 and diagram 300 of FIG. 3, step 460 of flowchart 400 comprises outputting composite render 355 from step 450 to display 357. As shown in FIG. 3, master controller 350 may send composite render 355 using live broadcast link 356, which might comprise a satellite uplink to a broadcast station for public dissemination. Eventually, composite render 355 shows on display 357, which might comprise the television of a viewer at home.
- While the above steps 410-460 have been described with respect to a single virtual camera, a single robotic camera, and a single composite render, steps 410-460 may also be repeated as necessary to support multiple virtual cameras, multiple robotic cameras, and multiple composite renders, as previously described. In this manner, the described rendering system can be flexibly scaled to larger projects by increasing the number of slave rendering systems and robotic cameras to handle additional feeds in real time.
- In this manner, live events such as sports can be enhanced with high-impact augmented reality segments by leveraging the cost-effective real-time graphical capabilities of modern commodity PC hardware and game consoles. This can provide networks with a competitive advantage, drawing in and retaining greater viewership with compelling augmented reality content while requiring only minor additional infrastructure outlays over standard rendering systems. Since commodity hardware parts are used and numerous effective virtual rendering systems and engines are available for licensing, expensive proprietary systems and vendor lock-in may be avoided, further reducing total cost of ownership.
- From the above description of the invention it is manifest that various techniques can be used for implementing the concepts of the present invention without departing from its scope. Moreover, while the invention has been described with specific reference to certain embodiments, a person of ordinary skill in the art would recognize that changes can be made in form and detail without departing from the spirit and the scope of the invention. As such, the described embodiments are to be considered in all respects as illustrative and not restrictive. It should also be understood that the invention is not limited to the particular embodiments described herein, but is capable of many rearrangements, modifications, and substitutions without departing from the scope of the invention.
Claims (17)
1-20. (canceled)
21. A method for integrating a virtual rendering system and a video capture system for outputting a composite render to a display, the method comprising:
obtaining, from the virtual rendering system, a first virtual camera configuration of a first virtual camera in a virtual environment, wherein the first virtual camera configuration includes camera position data, camera field of view orientation data, camera movement data and camera characteristics data;
programming the video capture system using the first virtual camera configuration to correspondingly control a first robotic camera in a real environment;
capturing, from the video capture system, a first video capture feed of the real environment using the first robotic camera;
obtaining, from the virtual rendering system, a first virtually rendered feed of the virtual environment using the first virtual camera;
rendering the composite render by processing the first video capture feed and the first virtually rendered feed; and
outputting the composite render to the display.
22. The method of claim 21, wherein the first virtual camera configuration includes a first motion path of the first virtual camera in the virtual environment, and wherein the control of the first robotic camera uses the first motion path in the real environment.
23. The method of claim 22, wherein the first motion path includes a three-dimensional position with respect to time.
24. The method of claim 22, wherein the first motion path includes a camera orientation or field of view with respect to time.
25. The method of claim 22, wherein the first motion path tracks a path of a virtual object.
26. The method of claim 25, wherein the virtual object comprises a virtual player of a virtual game.
27. The method of claim 22, wherein the first motion path is based on a predetermined path.
28. The method of claim 21, further comprising, prior to the rendering of the composite render:
obtaining, from the virtual rendering system, a second virtual camera configuration of a second virtual camera in the virtual environment;
programming the video capture system using the second virtual camera configuration to correspondingly control a second robotic camera in the real environment;
capturing, from the video capture system, a second video capture feed of the real environment using the second robotic camera; and
obtaining, from the virtual rendering system, a second virtually rendered feed of the virtual environment using the second virtual camera;
wherein the rendering of the composite render further processes the second video capture feed and the second virtually rendered feed.
29. A rendering controller for outputting a composite render to a display, the rendering controller comprising:
a processor configured to:
obtain, from a virtual rendering system, a first virtual camera configuration of a first virtual camera in a virtual environment, wherein the first virtual camera configuration includes camera position data, camera field of view orientation data, camera movement data and camera characteristics data;
program a video capture system using the first virtual camera configuration to correspondingly control a first robotic camera in a real environment;
capture, from the video capture system, a first video capture feed of the real environment using the first robotic camera;
obtain, from the virtual rendering system, a first virtually rendered feed of the virtual environment using the first virtual camera;
render the composite render by processing the first video capture feed and the first virtually rendered feed; and
output the composite render to the display.
30. The rendering controller of claim 29, wherein the first virtual camera configuration includes a first motion path of the first virtual camera in the virtual environment, and wherein the control of the first robotic camera uses the first motion path in the real environment.
31. The rendering controller of claim 30, wherein the first motion path includes a three-dimensional position with respect to time.
32. The rendering controller of claim 30, wherein the first motion path includes a camera orientation or field of view with respect to time.
33. The rendering controller of claim 30, wherein the first motion path tracks a path of a virtual object.
34. The rendering controller of claim 33, wherein the virtual object comprises a virtual player of a virtual game.
35. The rendering controller of claim 30, wherein the first motion path is based on a predetermined path.
obtain, from the virtual rendering system, a second virtual camera configuration of a second virtual camera in the virtual environment;
program the video capture system using the second virtual camera configuration to correspondingly control a second robotic camera in the real environment;
capture, from the video capture system, a second video capture feed of the real environment using the second robotic camera; and
obtain, from the virtual rendering system, a second virtually rendered feed of the virtual environment using the second virtual camera;
wherein the processor is configured to render the composite render by further processing the second video capture feed and the second virtually rendered feed.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/307,356 US9794541B2 (en) | 2010-01-04 | 2014-06-17 | Video capture system control using virtual cameras for augmented reality |
US15/723,100 US10582182B2 (en) | 2010-01-04 | 2017-10-02 | Video capture and rendering system control using multiple virtual cameras |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/655,669 US8803951B2 (en) | 2010-01-04 | 2010-01-04 | Video capture system control using virtual cameras for augmented reality |
US14/307,356 US9794541B2 (en) | 2010-01-04 | 2014-06-17 | Video capture system control using virtual cameras for augmented reality |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/655,669 Continuation US8803951B2 (en) | 2010-01-04 | 2010-01-04 | Video capture system control using virtual cameras for augmented reality |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/723,100 Continuation US10582182B2 (en) | 2010-01-04 | 2017-10-02 | Video capture and rendering system control using multiple virtual cameras |
Publications (2)
Publication Number | Publication Date |
---|---|
US20140293014A1 | 2014-10-02
US9794541B2 | 2017-10-17
Family
ID=44224501
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/655,669 Active 2031-03-30 US8803951B2 (en) | 2010-01-04 | 2010-01-04 | Video capture system control using virtual cameras for augmented reality |
US14/307,356 Active 2030-03-27 US9794541B2 (en) | 2010-01-04 | 2014-06-17 | Video capture system control using virtual cameras for augmented reality |
US15/723,100 Active US10582182B2 (en) | 2010-01-04 | 2017-10-02 | Video capture and rendering system control using multiple virtual cameras |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/655,669 Active 2031-03-30 US8803951B2 (en) | 2010-01-04 | 2010-01-04 | Video capture system control using virtual cameras for augmented reality |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/723,100 Active US10582182B2 (en) | 2010-01-04 | 2017-10-02 | Video capture and rendering system control using multiple virtual cameras |
Country Status (1)
Country | Link |
---|---|
US (3) | US8803951B2 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104333747A (en) * | 2014-11-28 | 2015-02-04 | 广东欧珀移动通信有限公司 | Stereoscopic photographing method and stereoscopic photographing equipment |
CN105872521A (en) * | 2016-04-26 | 2016-08-17 | 乐视控股(北京)有限公司 | 2D video playing method and device |
WO2017190351A1 (en) * | 2016-05-06 | 2017-11-09 | SZ DJI Technology Co., Ltd. | Systems and methods for video processing and display |
US11958183B2 (en) | 2020-09-18 | 2024-04-16 | The Research Foundation For The State University Of New York | Negotiation-based human-robot collaboration via augmented reality |
Families Citing this family (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8907941B2 (en) * | 2009-06-23 | 2014-12-09 | Disney Enterprises, Inc. | System and method for integrating multiple virtual rendering systems to provide an augmented reality |
US8683387B2 (en) * | 2010-03-03 | 2014-03-25 | Cast Group Of Companies Inc. | System and method for visualizing virtual objects on a mobile device |
US9934581B2 (en) * | 2010-07-12 | 2018-04-03 | Disney Enterprises, Inc. | System and method for dynamically tracking and indicating a path of an object |
WO2012048252A1 (en) | 2010-10-07 | 2012-04-12 | Aria Glassworks, Inc. | System and method for transitioning between interface modes in virtual and augmented reality applications |
WO2012071463A2 (en) | 2010-11-24 | 2012-05-31 | Aria Glassworks, Inc. | System and method for presenting virtual and augmented reality scenes to a user |
US9017163B2 (en) | 2010-11-24 | 2015-04-28 | Aria Glassworks, Inc. | System and method for acquiring virtual and augmented reality scenes by a user |
US9041743B2 (en) | 2010-11-24 | 2015-05-26 | Aria Glassworks, Inc. | System and method for presenting virtual and augmented reality scenes to a user |
US8953022B2 (en) | 2011-01-10 | 2015-02-10 | Aria Glassworks, Inc. | System and method for sharing virtual and augmented reality scenes between users and viewers |
US9118970B2 (en) | 2011-03-02 | 2015-08-25 | Aria Glassworks, Inc. | System and method for embedding and viewing media files within a virtual and augmented reality scene |
US20120246223A1 (en) * | 2011-03-02 | 2012-09-27 | Benjamin Zeis Newhouse | System and method for distributing virtual and augmented reality scenes through a social network |
CN102915234A (en) * | 2011-08-04 | 2013-02-06 | China Mobile Communications Corporation | Method and device for implementing a program interface in an application program |
KR102010396B1 (en) | 2011-11-29 | 2019-08-14 | Samsung Electronics Co., Ltd. | Image processing apparatus and method |
US20140002615A1 (en) * | 2012-07-02 | 2014-01-02 | Sony Pictures Technologies Inc. | System and method for correcting binocular photography with homographic transformations |
US9838651B2 (en) | 2012-08-10 | 2017-12-05 | Logitech Europe S.A. | Wireless video camera and connection methods including multiple video or audio streams |
US9626799B2 (en) | 2012-10-02 | 2017-04-18 | Aria Glassworks, Inc. | System and method for dynamically displaying multiple virtual and augmented reality scenes on a single display |
US10769852B2 (en) | 2013-03-14 | 2020-09-08 | Aria Glassworks, Inc. | Method for simulating natural perception in virtual and augmented reality scenes |
CN103237166B (en) * | 2013-03-28 | 2016-01-27 | 艾迪普(北京)文化科技股份有限公司 | Camera control method and system based on a robotic pan-tilt head |
US9083860B2 (en) | 2013-10-09 | 2015-07-14 | Motorola Solutions, Inc. | Method of and apparatus for automatically controlling operation of a user-mounted recording device based on user motion and event context |
US10977864B2 (en) | 2014-02-21 | 2021-04-13 | Dropbox, Inc. | Techniques for capturing and displaying partial motion in virtual or augmented reality scenes |
US10600245B1 (en) * | 2014-05-28 | 2020-03-24 | Lucasfilm Entertainment Company Ltd. | Navigating a virtual environment of a media content item |
US9710972B2 (en) * | 2014-05-30 | 2017-07-18 | Lucasfilm Entertainment Company Ltd. | Immersion photography with dynamic matte screen |
US9057508B1 (en) | 2014-10-22 | 2015-06-16 | Codeshelf | Modular hanging lasers to enable real-time control in a distribution center |
US9327397B1 (en) | 2015-04-09 | 2016-05-03 | Codeshelf | Telepresence based inventory pick and place operations through robotic arms affixed to each row of a shelf |
US9262741B1 (en) | 2015-04-28 | 2016-02-16 | Codeshelf | Continuous barcode tape based inventory location tracking |
CN104992205B (en) * | 2015-06-04 | 2018-11-06 | 西安教育文化数码有限责任公司 | Indexed augmented reality system and method based on AR books |
WO2017026193A1 (en) * | 2015-08-12 | 2017-02-16 | Sony Corporation | Image processing device, image processing method, program, and image processing system |
US9679413B2 (en) | 2015-08-13 | 2017-06-13 | Google Inc. | Systems and methods to transition between viewpoints in a three-dimensional environment |
WO2017036953A1 (en) * | 2015-09-02 | 2017-03-09 | Thomson Licensing | Method, apparatus and system for facilitating navigation in an extended scene |
JPWO2017072975A1 (en) * | 2015-10-30 | 2018-08-30 | Olympus Corporation | Imaging system |
US10845188B2 (en) * | 2016-01-05 | 2020-11-24 | Microsoft Technology Licensing, Llc | Motion capture from a mobile self-tracking device |
EP3226213A1 (en) | 2016-03-29 | 2017-10-04 | Roland Judex | Method of computationally augmenting a video feed, data-processing apparatus, and computer program therefor |
WO2018067728A1 (en) * | 2016-10-04 | 2018-04-12 | Livelike Inc. | Picture-in-picture base video streaming for mobile devices |
US10325410B1 (en) * | 2016-11-07 | 2019-06-18 | Vulcan Inc. | Augmented reality for enhancing sporting events |
US10389935B2 (en) * | 2016-12-13 | 2019-08-20 | Canon Kabushiki Kaisha | Method, system and apparatus for configuring a virtual camera |
US11665308B2 (en) * | 2017-01-31 | 2023-05-30 | Tetavi, Ltd. | System and method for rendering free viewpoint video for sport applications |
US10297087B2 (en) * | 2017-05-31 | 2019-05-21 | Verizon Patent And Licensing Inc. | Methods and systems for generating a merged reality scene based on a virtual object and on a real-world object represented from different vantage points in different video data streams |
JP6727515B2 (en) * | 2018-07-31 | 2020-07-22 | 株式会社コナミデジタルエンタテインメント | Game system and computer program used therefor |
US10482678B1 (en) | 2018-12-14 | 2019-11-19 | Capital One Services, Llc | Systems and methods for displaying video from a remote beacon device |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4209852A (en) * | 1974-11-11 | 1980-06-24 | Hyatt Gilbert P | Signal processing and memory arrangement |
US5796426A (en) * | 1994-05-27 | 1998-08-18 | Warp, Ltd. | Wide-angle image dewarping method and apparatus |
US6828962B1 (en) * | 1999-12-30 | 2004-12-07 | Intel Corporation | Method and system for altering object views in three dimensions |
US6774932B1 (en) * | 2000-09-26 | 2004-08-10 | Ewing Golf Associates, Llc | System for enhancing the televised broadcast of a golf game |
US20040104935A1 (en) * | 2001-01-26 | 2004-06-03 | Todd Williamson | Virtual reality immersion system |
AU2003220185B2 (en) * | 2002-03-12 | 2007-05-10 | Menache, Llc | Motion tracking system and method |
WO2004045734A1 (en) * | 2002-11-20 | 2004-06-03 | Sega Corporation | Game image display control program, game device, and recording medium |
US20050071306A1 (en) * | 2003-02-05 | 2005-03-31 | Paul Kruszewski | Method and system for on-screen animation of digital objects or characters |
US7497807B2 (en) * | 2003-07-15 | 2009-03-03 | Cube X Incorporated | Interactive computer simulation enhanced exercise machine |
US7787009B2 (en) * | 2004-05-10 | 2010-08-31 | University Of Southern California | Three dimensional interaction with autostereoscopic displays |
KR101238608B1 (en) | 2004-07-30 | 2013-02-28 | Extreme Reality Ltd. | A system and method for 3D space-dimension based image processing |
US20070236514A1 (en) * | 2006-03-29 | 2007-10-11 | Bracco Imaging Spa | Methods and Apparatuses for Stereoscopic Image Guided Surgical Navigation |
CN101479765B (en) * | 2006-06-23 | 2012-05-23 | 图象公司 | Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition |
JP5448021B2 (en) * | 2007-06-29 | 2014-03-19 | 株式会社セガ | Racing game device program, recording medium storing the program, and racing game device |
JP5157329B2 (en) * | 2007-08-31 | 2013-03-06 | 株式会社セガ | Game device |
JP5390093B2 (en) * | 2007-12-21 | 2014-01-15 | 任天堂株式会社 | GAME PROGRAM AND GAME DEVICE |
JP2009237680A (en) * | 2008-03-26 | 2009-10-15 | Namco Bandai Games Inc | Program, information storage medium, and image generation system |
US20100091036A1 (en) * | 2008-10-10 | 2010-04-15 | Honeywell International Inc. | Method and System for Integrating Virtual Entities Within Live Video |
JP5739674B2 (en) * | 2010-09-27 | 2015-06-24 | 任天堂株式会社 | Information processing program, information processing apparatus, information processing system, and information processing method |
- 2010
  - 2010-01-04: US application US12/655,669 filed; issued as US8803951B2 (status: Active)
- 2014
  - 2014-06-17: US application US14/307,356 filed; issued as US9794541B2 (status: Active)
- 2017
  - 2017-10-02: US application US15/723,100 filed; issued as US10582182B2 (status: Active)
Patent Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060211463A1 (en) * | 1997-04-07 | 2006-09-21 | Takashi Nishiyama | Game machine system |
US8014985B2 (en) * | 1999-03-26 | 2011-09-06 | Sony Corporation | Setting and visualizing a virtual camera and lens system in a computer graphic modeling environment |
US20040219980A1 (en) * | 2003-04-30 | 2004-11-04 | Nintendo Co., Ltd. | Method and apparatus for dynamically controlling camera parameters based on game play events |
US20060152434A1 (en) * | 2003-06-12 | 2006-07-13 | Frank Sauer | Calibrating real and virtual views |
US20060221179A1 (en) * | 2004-04-12 | 2006-10-05 | Stereo Display, Inc. | Three-dimensional camcorder |
US20070296721A1 (en) * | 2004-11-08 | 2007-12-27 | Electronics And Telecommunications Research Institute | Apparatus and Method for Producing Multi-View Contents |
US20060132482A1 (en) * | 2004-11-12 | 2006-06-22 | Oh Byong M | Method for inter-scene transitions |
US20080192116A1 (en) * | 2005-03-29 | 2008-08-14 | Sportvu Ltd. | Real-Time Objects Tracking and Motion Capture in Sports Events |
US7636087B2 (en) * | 2005-03-31 | 2009-12-22 | Namco Bandai Games, Inc. | Program, information storage medium, image generation system, and image generation method |
US20080267450A1 (en) * | 2005-06-14 | 2008-10-30 | Maki Sugimoto | Position Tracking Device, Position Tracking Method, Position Tracking Program and Mixed Reality Providing System |
US20070282564A1 (en) * | 2005-12-06 | 2007-12-06 | Microvision, Inc. | Spatially aware mobile projection |
US20090170600A1 (en) * | 2005-12-19 | 2009-07-02 | Konami Digital Entertainment Co., Ltd. | Game Machine, Game Machine Control Method, And Information Storage Medium |
US20070238981A1 (en) * | 2006-03-13 | 2007-10-11 | Bracco Imaging Spa | Methods and apparatuses for recording and reviewing surgical navigation processes |
US20080263460A1 (en) * | 2007-04-20 | 2008-10-23 | Utbk, Inc. | Methods and Systems to Connect People for Virtual Meeting in Virtual Reality |
US20080280676A1 (en) * | 2007-05-07 | 2008-11-13 | Samsung Electronics Co. Ltd. | Wireless gaming method and wireless gaming-enabled mobile terminal |
US20090005140A1 (en) * | 2007-06-26 | 2009-01-01 | Qualcomm Incorporated | Real world gaming framework |
US20090070093A1 (en) * | 2007-09-11 | 2009-03-12 | Namco Bandai Games Inc. | Program, information storage medium, and game device |
US20090271715A1 (en) * | 2008-01-29 | 2009-10-29 | Tumuluri Ramakrishna J | Collaborative augmented virtuality system |
US20090247250A1 (en) * | 2008-03-31 | 2009-10-01 | Namco Bandai Games Inc. | Program, game system, and movement control method |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104333747A (en) * | 2014-11-28 | 2015-02-04 | 广东欧珀移动通信有限公司 | Stereoscopic photographing method and stereoscopic photographing equipment |
CN105872521A (en) * | 2016-04-26 | 2016-08-17 | 乐视控股(北京)有限公司 | 2D video playing method and device |
WO2017190351A1 (en) * | 2016-05-06 | 2017-11-09 | SZ DJI Technology Co., Ltd. | Systems and methods for video processing and display |
US11019280B2 (en) | 2016-05-06 | 2021-05-25 | SZ DJI Technology Co., Ltd. | Systems and methods for video processing and display |
US11958183B2 (en) | 2020-09-18 | 2024-04-16 | The Research Foundation For The State University Of New York | Negotiation-based human-robot collaboration via augmented reality |
Also Published As
Publication number | Publication date |
---|---|
US9794541B2 (en) | 2017-10-17 |
US10582182B2 (en) | 2020-03-03 |
US20180048876A1 (en) | 2018-02-15 |
US20110164116A1 (en) | 2011-07-07 |
US8803951B2 (en) | 2014-08-12 |
Similar Documents
Publication | Title |
---|---|
US10582182B2 (en) | Video capture and rendering system control using multiple virtual cameras |
US9751015B2 (en) | Augmented reality videogame broadcast programming |
US10121284B2 (en) | Virtual camera control using motion control systems for augmented three dimensional reality |
US8885022B2 (en) | Virtual camera control using motion control systems for augmented reality |
US8907941B2 (en) | System and method for integrating multiple virtual rendering systems to provide an augmented reality |
US8665374B2 (en) | Interactive video insertions, and applications thereof |
EP3238445B1 (en) | Interactive binocular video display |
US9661275B2 (en) | Dynamic multi-perspective interactive event visualization system and method |
CN103051830B (en) | System and method for multi-angle live broadcast of a photographed target |
JP2016519546A (en) | Method and system for producing television programs at low cost |
US20180227501A1 (en) | Multiple vantage point viewing platform and user interface |
US9736462B2 (en) | Three-dimensional video production system |
US20110304735A1 (en) | Method for Producing a Live Interactive Visual Immersion Entertainment Show |
CN106296686A (en) | Frame-by-frame three-dimensional reconstruction method for a moving object using combined static and dynamic cameras |
US20090153550A1 (en) | Virtual object rendering system and method |
CN110730340A (en) | Lens transformation-based virtual auditorium display method, system and storage medium |
US10796723B2 (en) | Spatialized rendering of real-time video data to 3D space |
US20150289032A1 (en) | Main and immersive video coordination system and method |
KR20180021623A (en) | System and method for providing virtual reality content |
KR20210084248A (en) | Method and apparatus for providing a platform for transmitting VR contents |
CN117425044A (en) | Video generation method, first device, second device and storage medium |
Series | Collection of usage scenarios and current statuses of advanced immersive audio-visual systems |
CA2949646A1 (en) | A system for combining virtual simulated images with real footage from a studio |
Badique et al. | Multimedia content creation tools in the EU ACTS Programme |
WO2016032427A1 (en) | Three-dimensional video production system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: DISNEY ENTERPRISES, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: GAY, MICHAEL; THIEL, AARON; REEL/FRAME: 033122/0811. Effective date: 20100104 |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |