`
`EXHIBIT D
`(Part 2 of 2)
`
`US 6,357,042 B2
`
each other only in that the annotations contained therein differ, having been authored at different stations.
Annotation streams 62a–d are generated so as to be synchronous with stream 53. Therefore, it is ideal that all output streams are running in synchronous mode while leaving each authoring station. Previously described conventions, such as the ability of the image tracking software to skip frames, help to assure this ideal.
An authoring server 63 is provided and adapted to combine annotation streams 62a–d into one annotation stream 55, which is analogous to stream 55 of FIG. 7. In this way, all annotations performed separately may be combined and may act in unison at the user's end. Video stream outputs from the
`separate stations converge, or more specifically, remerge
`into video stream 53 as illustrated via a horizontal, left-
`facing arrow labeled element number 53. Video stream 53 is
`the normalized video output from each authoring station and
`typically does not include any annotations.
`If there is a known latency with respect to recombining
`streams 62a-62d in server 63, then video stream 53 must be
`re-synchronized with annotation stream 55 before stream 55
`becomes output. In this case, stream 53 is diverted over path
`65 into server 63 and delayed until it is synchronous with
`stream 55 before it exits server 63 over path 67. In this way,
`streams 55 and 53 remain synchronous on output from the
authoring system. In an alternate embodiment, the synchronization delay may be performed in a separate server (not shown).
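The delay described above amounts to holding video frames in a buffer for the known recombination latency. The following is a minimal sketch of such a fixed delay, assuming the latency is known and constant in frames; all names are illustrative and not from the patent.

```python
from collections import deque

def delay_stream(frames, latency_frames):
    """Hold frames in a buffer so they emerge latency_frames late,
    re-synchronizing this stream with a companion that lags it."""
    buffer = deque()
    for frame in frames:
        buffer.append(frame)
        if len(buffer) > latency_frames:
            yield buffer.popleft()
    while buffer:                 # flush the tail once input ends
        yield buffer.popleft()

# Example: a 3-frame latency shifts each frame three slots later.
print(list(delay_stream([f"frame-{i}" for i in range(6)], 3)))
```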
`The video stream that is output from system 51 (stream 53)
`remains essentially unchanged from the video that is input
`into the system (stream 49) unless the medium of transport
`of the video stream requires a different video resolution or
`frame rate. Although it has been previously described that a
`preferred arrangement for an authoring station such as
`authoring station 61a is a PC/VDU with a CPU running at
`least 266 MHz and a Windows platform, it will be apparent
`to one with skill in the art that other platforms may be used
`such as a Sun Microsystems workstation, UNIX operating
`systems, and so on. In the case of differing platforms,
`differences in functional software architecture will also be
`apparent.
`It will also be apparent to one with skill in the art that
`video stream outputs which ultimately remerge as stream 53
may be transferred to server 63 and delayed for synchronization purposes and so on, without departing from the spirit and
`scope of the present invention. In the latter case, it is
`conceivable as well that if both streams 53 and 55 share
`entry into server 63, they may also be combined therein and
`output as one annotated video stream.
`FIG. 9 is a block diagram illustrating an exemplary
`modular architecture of a single authoring station according
`to an embodiment of the present invention. Authoring sta-
`tion 61 is provided and adapted to track a moving image
`entity in a video presentation and to provide tracking coor-
`dinates as well as other types of annotation for the purpose
of soliciting responses from an end user through an interactive device. Authoring station 61 is, in this embodiment, analogous to station 61a of FIG. 8. Authoring station 61 utilizes various interfacing software modules in performing its stated functions, as is further detailed below.
`The exemplary architecture is just one architecture
`through which the present invention may be practiced. A
`CRT module 81 is provided and adapted to display a
`normalized graphical bitmap image-stream as may be
`viewed by a person involved in an authoring procedure. A
`Filtergraph 72 comprises three software filters that are
`dedicated to performing certain functions. These are input
`filter 73, transform filter 75, and renderer filter 77. These
`three filters are responsible for receiving input video from
`variable sources (input filter), interpreting presented data
`and forming an image (transform filter), and generating and
`displaying the actual viewable video stream (renderer filter)
comprising a series of bitmapped frames. Within the
`domain of filtergraph 72, video frame speed is set at 30 FPS
`(exemplary), and resolution is set at 352 by 240 pixels
`(exemplary). This provides a compatible set of parameters
`for authoring station 61 which is, in this example, a PC/VDU
running Windows as previously described.
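The filtergraph can be pictured as a chain of three processing stages. The sketch below models that chain in Python under stated assumptions; the function names and frame representation are illustrative only and do not correspond to any actual filtergraph API.

```python
FRAME_RATE = 30          # frames per second (exemplary, per the text)
RESOLUTION = (352, 240)  # pixels (exemplary, per the text)

def input_filter(raw):
    """Accept input video from a variable source (here, raw bytes)."""
    return {"data": raw}

def transform_filter(frame):
    """Interpret the presented data and form a normalized bitmap."""
    frame["bitmap"] = frame.pop("data")
    frame["resolution"] = RESOLUTION
    return frame

def renderer_filter(frame):
    """Generate the viewable frame from the normalized bitmap."""
    frame["rendered"] = True
    return frame

def filtergraph(source):
    """Chain the three filters over a source of raw frames."""
    for raw in source:
        yield renderer_filter(transform_filter(input_filter(raw)))

for frame in filtergraph([b"\x00" * 16, b"\xff" * 16]):
    print(frame["resolution"], frame["rendered"])
```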
`Input filter 73 is adapted to accept a video input stream 71
`which may be sourced from a wide variety of either analog
`or digital feeds. Examples are live video feeds from satellite,
`video camera, cable, and prerecorded video feeds from a
VCR, CD-ROM, DVD, Internet server, and so on. In addi-
`tion to video input, filtergraph 72 may also accept input from
`a user interface module 83 adapted to provide certain
`controls relating to filter 73 such as video conversion
`controls, frame rate controls and so on. Control direction-
ality with regard to user interface 83 is illustrated via
`directional arrows emanating from interface 83 and leading
`to other components. Such controls may be initiated via
`keyboard command or other known method such as via
`mouse click, etc. Transform filter 75 interprets data for the
`25 purpose of obtaining bitmap images at a normalized reso-
`lution. Renderer filter 77 then draws the bitmap image-
`stream on CRT monitor 81 for viewing. In another
`embodiment, CRT 81 may be another type of monitor
`wherein pixel graphics may be viewed such as are known in
the art.
`A tracking module 79 (T-module) is provided and adapted
`to track an image and provide frame by frame tracking
`coordinates and to be a vehicle through which additional
`annotations may be provided through user interface 83. For
example, through interface 83, an author may set up the
`parameters for tracking such as are described with reference
`to FIG. 5 above, as well as add additional annotation such as
`static or moving image icons, formatted text, animated
`graphics, sounds and the like. Tracking module 79 is analo-
gous to tracking module 13 of FIG. 1.
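Because the tracking module emits frame-by-frame coordinates along with any authored annotations, its per-frame output can be modeled as a simple record. The following sketch assumes illustrative field names; the patent does not prescribe a data layout.

```python
from dataclasses import dataclass, field

@dataclass
class TrackingRecord:
    """Per-frame output of the tracking stage: a serial frame number,
    the tracking-box coordinates of each tracked entity, and any
    author-supplied annotations tied to that frame."""
    frame_number: int
    boxes: dict = field(default_factory=dict)      # entity -> (x, y, w, h)
    annotations: list = field(default_factory=list)

record = TrackingRecord(frame_number=42,
                        boxes={"player-7": (120, 80, 32, 48)},
                        annotations=["sponsor-logo.png"])
print(record)
```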
`Renderer filter 77 is the driver that drives the video
`display as previously described. Tracking module 79 works
`in conjunction with renderer filter 77 as illustrated via
`opposite-facing arrows between the modules. That is, it is at
this stage that image tracking and annotation operations
`actually take place as previously described. For example, the
`upward facing arrow emanating from renderer filter 77 and
`entering tracking module 79 represents input stream 71 (in
`the form of a series of bitmapped images). The downward
facing arrow emanating from module 79 and re-entering
`filter 77 represents output stream 71 and the additional
`information related to the positions of the entities being
`tracked. The video presentation is simultaneously being
`played on CRT 81 as tracking is occurring and is subse-
quently sent on as output stream 89 from renderer filter 77
`which is analogous to video stream 53 of FIG. 8. An
`annotation manager 85 within renderer 77 converts annota-
`tion data, input during annotation processes and the data
`relating to the tracked entities output from the tracking
module, to metadata for more compact transmission in
`output stream 87. Stream 87 is a data stream containing
`information about the various annotations added by the
`author and the tracking co-ordinates of the tracked entities
`and is analogous to the annotation stream 62b of FIG. 8.
Data tables for such metadata conversion for compact transmission in output stream 87 may be stored elsewhere, accessible to the CPU powering authoring station 61.
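The conversion to metadata for compact transmission might, for example, pack each per-frame record into a fixed binary layout. The sketch below is purely illustrative, assuming 16-bit coordinates and a single tracked entity per record; the patent does not specify a wire format.

```python
import struct

# Hypothetical compact layout: a 32-bit frame number followed by one
# 16-bit x, y, w, h quadruple for a single tracked entity.
RECORD = struct.Struct(">IHHHH")

def pack_record(frame_number, box):
    """Convert one annotation/tracking record to compact metadata."""
    return RECORD.pack(frame_number, *box)

def unpack_record(blob):
    frame_number, x, y, w, h = RECORD.unpack(blob)
    return frame_number, (x, y, w, h)

blob = pack_record(42, (120, 80, 32, 48))
print(len(blob), "bytes ->", unpack_record(blob))   # 12 bytes
```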
User interface 83 provides considerable option and capability for entering commands to add image icons, animated graphics (which may follow tracked objects, remain static, or move independently in the video in a predefined manner), formatted text captions, and so on.
In one embodiment, user interface 83 may be pre-programmed by an author to supply the appropriate pre-
`selected annotations in a reactive fashion. That is, according
`to a specific time interval, a signal could initiate annotation
`inserts and so on. In other embodiments, an author may
physically enter an annotation by pressing a pre-defined key on a keyboard, and so on. There are many known methods for
`inserting annotations.
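Such reactive, pre-programmed insertion could be modeled as a schedule keyed on frame number, as in this minimal sketch (the schedule contents and names are assumptions for illustration).

```python
# Hypothetical schedule: frame number -> annotation to insert.
SCHEDULE = {30: "show sponsor logo", 150: "show text caption"}

def annotate(frames, schedule=SCHEDULE):
    """Attach each pre-selected annotation when its frame comes up."""
    for number, frame in enumerate(frames):
        annotation = schedule.get(number)   # the reactive trigger
        yield (frame, annotation) if annotation else (frame, None)

for frame, note in annotate(f"frame-{i}" for i in range(151)):
    if note:
        print(frame, "->", note)
```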
`It will be apparent to one with skill in the art that other
`software module configurations may be used instead of
`those presented in this example without departing from the
`spirit and scope of the present invention. For example,
`similar functional modules may be provided to be compat-
`ible with alternate platforms such as UNIX or Macintosh.
`It will also be apparent to one with skill in the art that the
`bulk of annotation in the form of inserted text, graphical
`icons, universal resource locators (URL's), interactive
`shapes, and so on will, in many embodiments, be at least
`partly associated with tracking coordinates of an image and
`therefore will depend on those frame by frame coordinates.
`For example, an interactive icon may follow a moving image
entity and be visible to an end user, as in the case of advertisement logos for sponsors of sportspersons in a sporting event. Text blocks and the like may be similarly associated. Hence,
`the specific content of annotations and insertion methods of
`such annotations may be pre-designed based on known facts
`about the video stream such as what image is to be tracked
for what advertiser who has what URL's and so on. Execu-
`tion of those annotations may be automatic according to a
`timed function as described above, or may be performed
`manually, perhaps using a macro or other designed input
`function.
In another embodiment, functionality could be added to user interface 83 allowing an author to adequately identify an image entity to be tracked, so as to be able to place a tracking box such as box 29 of FIG. 5 over the entity at a maximally opportune instant during image motion. In this case, once the tracking box is activated, the software could be adapted to allow the author to manually track the object until such time as the tracking box is placed more or less at the center of the object in the video. A
`synchronization module could be added in authoring server
`63 and adapted to synchronize separate annotation streams
`before combining them and synchronizing them with the
`output video stream which is stream 53 in our example.
`System for Synchronizing Data Streams Delivered Over
`Separate Networks
`According to a preferred embodiment of the present
`invention, a unique synchronization system is provided and
`adapted to overcome unpredictable latency inherent in deliv-
`ery of data-streams that are delivered over separate delivery
`media to end users. The method and apparatus provided and
taught herein for this unique purpose are two-fold. Firstly, a
`video/data stream signature operation is executed after coor-
`dinate tracking and annotation operations are performed in
`an authoring station such as was described above with
`respect to authoring station 61 of FIG. 9. The signature
`streams are then sent to their respective broadcast and/or
`data-transmission systems to be sent to an end user.
`Secondly, a video/annotation stream capture and synchro-
`nization operation, executed via software on customer pre-
`mises equipment (CPE), must be executed at the user's end
`before a single combined stream may be viewed by the user.
`FIG. 10 is a block diagram illustrating a signature appli-
`cation apparatus at the authoring end according to an
`embodiment of the present invention. A signature applica-
`tion module 91 is provided in this embodiment in the form
of a software application module resident in an authoring
`server such as server 63 of FIG. 8. Module 91 is initiated in
`server 63 after tracking and annotation has been performed.
`Separate data streams (video and annotation) are given
`frame-specific identification and marking so that they may
later be synchronized using inserted data corresponding
`to the frame-specific identification.
`A video stream 93 is shown entering signature module 91.
Video stream 93 is analogous to stream 53 of FIG. 8. An
`annotation stream 95 is similarly illustrated as entering
signature module 91. Annotation stream 95 is analogous to
`stream 55 of FIG. 8. Streams 95 and 93 are synchronous as
`they enter module 91. Synchronization has been achieved
`after image tracking and authoring in authoring server 63 of
`FIG. 8, as described in detail above. Synchronization after
separate broadcasting is much more complicated and is
`described in enabling detail below.
`Referring back to FIG. 10, in this embodiment, a frame
`reader/counter module 97 is adapted to read video stream 93
`and annotation stream 95 for the purpose of recording an
association of annotation data to video-frame data using a
`serial count of each frame. Because annotation stream 55 of
`FIG. 8 was generated at the time of tracking an entity within
`video stream 53 of FIG. 8, each stream comprises a same
`number of frames constituting an entire stream length.
Therefore, it is possible to count and associate individual
`frames in serial fashion. A number/time marker-generator
`module 99 generates code to represent frames in annotation
`stream 95 and also to represent time markers in video stream
`93. Further binary numbers are generated for use in a pixel
signature method described more fully below.
`According to a preferred embodiment of the present
`invention, three separate signature methods, each method
`using one sequence of binary numbers described above, are
executed via signature module 91 in the course of its function. Using three separate signatures ensures that at least
`one of the applied signatures will successfully pass on to the
`end user's equipment. All three methods share a common
`goal, which is to record in one of two data streams to be later
`synchronized, at regular intervals, a marker, and information
denoting which frame from the other of the two data streams
`should be displayed at the marker for the two streams to be
`properly synchronized.
In one of the three methods, a number denoting frames in one of the two data streams is inserted into the vertical blanking intervals (VBIs) of the other data stream. Although it is
`possible to insert such a synchronizing number in each VBI
`for the carrying stream, it is not necessary to do so for
`synchronizing purposes. Typically the synchronizing num-
`ber need be inserted only once in several frames, and the fact
of such a number appearing in a VBI can serve also as a
`marker; that is, the appearance of the number in a VBI can
`be taken to mean that the associated frame from the com-
`panion stream is to be displayed with the "next" frame in the
carrying stream. The convention could also be applied to any frame following the "next" frame.
`In a second method the identifying number is inserted in
`one or another of the horizontal blanking intervals (HBI) of
`a frame in the carrying stream. The particular HBI is known
`by convention, and more than one HBI may be used as a
"belt-and-suspenders" approach. In this method the marker may also be by convention, such as the "next" frame, or some number of frames following the "next" frame.
`A third method for synchronization signature according to
`an embodiment of the present invention involves altering
`pixel data in a manner to communicate a binary number to
`a system (described further below) at the user's end pro-
`grammed to decode such a number from a carrying data
`stream. In this method, in the carrying data stream, the data
stream values for an "agreed-upon" pixel are altered. For example, for one particular pixel in a frame, the R, G, and B
`values (or, in appropriate instances, the Y, U, and V values)
`may be arbitrarily set to zero to denote a zero bit in a binary
`signature, and in following frames the values for the same
pixel may be set to maximum value (all 1's) to denote a
`binary 1 bit for the signature. In this manner, over several
`frames, a binary number denoting a particular frame from
`the companion data stream may be inserted.
`In this pixel alteration method, a marker is also needed.
`Again, the marker can be by convention (preferred), such as
`the third frame after the end of a decoded signature, or the
`same sort of coding may be used to insert a binary marker
`signature.
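As a concrete illustration of this pixel-alteration scheme, the sketch below writes one bit of a companion-stream frame number into an agreed-upon pixel of each of several successive frames: all-zero RGB for a 0 bit, maximum RGB for a 1 bit. The 16-bit signature width and the pixel position are assumptions for the example.

```python
BITS = 16           # assumed signature width; the text leaves this open
PIXEL = (10, 10)    # the "agreed-upon" pixel, fixed by convention

def encode_signature(frames, companion_frame_number):
    """Overwrite one pixel per frame, most significant bit first:
    all-zero RGB encodes a 0 bit, maximum RGB encodes a 1 bit."""
    for i, frame in enumerate(frames):
        if i < BITS:
            bit = (companion_frame_number >> (BITS - 1 - i)) & 1
            frame[PIXEL] = (255, 255, 255) if bit else (0, 0, 0)
        yield frame

# Carry companion frame number 43210 across sixteen frames.
frames = [{PIXEL: (12, 34, 56)} for _ in range(BITS)]
for f in encode_signature(frames, 43210):
    print(f[PIXEL])
```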
`In the pixel insertion method, any pixel may be used by
`convention, but some may serve better than others. For
`example, in some instances jitter problems may make pixel
`identification relatively difficult. In a preferred embodiment,
`wherein a logo is used to identify a data stream, such as a
`network logo seen in the lower right of frames for some
`networks, a particular pixel in the logo may be used, which
`would serve to alleviate the jitter problem.
It will be apparent to the skilled artisan, given the above teaching, that there will be a variety of ways pixel data may
`be altered providing a coding system for a synchronization
`signature. For example, the R, G, and B values may be
`altered differently by convention, providing three signature
`bits per pixel, and more than one pixel may be used; so a
`coded number of virtually any binary length may be pro-
`vided with the data for a single frame in a video data stream.
`In a preferred embodiment of the present invention, all
`three methods of stream signature, VBI, HBI, and pixel
alteration, are used. The reason is that it is
`possible that other systems downstream (toward broadcast,
or in some rebroadcast) may use VBI's and HBI's to bear
`certain data, thus overwriting some or all data that may be
`inserted in blanking intervals via methods of the present
`invention. Similarly, a logo or other graphical alteration such
`as a commercial may be inserted into a video stream thus
`overriding a planned pixel alteration in a significant section
of the video. By using all three methods at the authoring end, survival of the synchronization information at the user's end
`is assured.
`Referring back to FIG. 10, a frame writer and pixel
command module 101, comprising sub-modules 101a and
`101b, uses previously generated data to insert time markers
`and binary numbers into frame data of at least one of the data
`streams (93 and 95), as well as causing alteration to one or
`more pixels over a series of frames to create a serial
`transmission or physical marker that may be associated with
`frame numbers assigned to matching frames within annota-
`tion stream 95.
`It will be apparent to the skilled artisan that either data
`stream may be the carrying stream. As a convention the
`primary video data stream is used as the carrying stream
`rather than the annotation stream.
`In some embodiments, a natural screen change conven-
`tion may be used for markers. For example, known software
`may be provided and adapted to detect screen changes
`wherein a majority of pixel values show significant alter-
`ation. These screen changes will happen randomly through-
`out the video and typically are spaced over a number of
`frames.
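Such a detector might, for instance, count pixels whose values change sharply between consecutive frames and report a screen change when a majority do. The threshold values in this sketch are assumptions.

```python
def is_screen_change(prev, curr, pixel_delta=48, majority=0.5):
    """Report a screen change when more than `majority` of the pixels
    differ from the previous frame by at least `pixel_delta`."""
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) >= pixel_delta)
    return changed > majority * len(prev)

# Grayscale toy frames: a hard cut alters most pixel values at once.
frame_a = [10] * 100
frame_b = [10] * 40 + [200] * 60    # 60% of pixels change sharply
print(is_screen_change(frame_a, frame_b))   # True
```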
`It will be apparent to one with skill in the art that module
`91 may be programmed according to predetermined criteria
`without departing from the spirit and scope of the present
`invention. Such criteria may vary according to factors such
as density of annotation data in a particular annotation
`stream, normal frame rate of the video, whether or not it is
`known if there will be any further annotation before
`broadcasting, and so on. For example, a timing marker may
`be taken every 5th frame instead of every 10th frame.
`Screen-change marking may or may not be used. There are
`many variables that may be considered before applying the
`innovative signature methods of the present invention. Pre-
senting the combined signatures ensures that
`re-synchronization remains possible at the user's end as
`previously described.
FIG. 11 is a process flow chart illustrating logical steps for
`providing a synchronization signature at the authoring end
`according to an embodiment of the present invention. At step
`103 the frames of the two streams are identified and moni-
`tored as necessary. The software may determine, for
example, the scope (density) of annotation, the status of
`available VBI and HBI areas, the frequency of frames for
`time marking intervals, and so on. This step also includes
`counting frames for the purpose of generating annotation
`frame numbers for signature association purposes. In step
105, serial binary numbers are generated in separate
`sequences that may be used for time marking, physical
`marking, and frame association.
`In step 107, annotation frame numbers are written into
`VBI and HBI areas associated with video frames as well as
to the appropriate annotation frame headers. If a concerted
`pixel alteration method is pre-determined to be used as a
`marking scheme, then the pixel or pixels are selected,
`altered, and activated in step 109.
`It will be apparent to one with skill in the art of video
editing including knowledge of video-frame structure and
`the techniques for writing data into such video frames that
`there are many variations possible with regards to time
`marking and assigning identifying numbers to data frames
`wherein such numbers are also added to video frames. For
example, differing frame intervals may be chosen as time
`markers, different bit structures may be used such as 16, 24,
`or 32 bit resolutions, and so on.
`With reference to the stated objective of the present
`invention as previously described above, it was mentioned
that the method of the present invention involves a second
`phase wherein separate data streams, marked via the con-
`ventions above, arrive at a user location after being sent via
`alternate mediums, such as one via cable broadcast, and one
`via a wide area network (WAN) delivery wherein, after
receiving the streams, the user's equipment captures,
`re-synchronizes and combines the streams to be displayed
`for viewing as one annotated video stream. Such a CPE
`apparatus and method is provided and taught below.
`FIG. 12 is a block diagram illustrating a data capture and
synchronization system at the user's end according to an
`embodiment of the present invention. System 115 is pro-
`vided and adapted to receive broadcast data-streams from
`varying sources and combine and synchronize the streams so
`the data from the two different streams may be integrally
displayed as authored. System 115 has a central processing
`unit (CPU) 117 that has a cache memory and random access
memory (RAM). System 115 may be integrated with a
`computer or components thereof, a WEB TV or components
`thereof, or another type of receiving station capable of
capturing and displaying broadcast video.
`System 115 further comprises a signal receiving module
`119, illustrated as connected to CPU 117 via bus structure
`121. Bus structure 121 is the assumed connection to other
`illustrated modules within device 115 although an element
`number does not accompany the additional connections.
`Module 119 is shown divided into sub-modules with each
sub-module dedicated to capturing signals from a specific
`type of medium. In this case, there are six sub-modules that
`are labeled according to medium type. From top to bottom
`they are a modem, a satellite receiver, a TV receiver, a first
`optional input port (for plugging in a peripheral device), a
second optional input port (for plugging in a peripheral
`device), and a cable receiver. The optional input ports may
accept input from video cameras, DVD's, VCR's, and the
`like.
`In this particular example, an annotation data stream 125
is illustrated as entering system 115 through a modem, as
`might be the case if an annotation data stream is sent to an
`end user via the Internet or other WAN. A video broadcast
`stream 127 is illustrated as entering system 115 through the
`sub-module comprising a cable receiver. Streams 125 and
127 are analogous to streams 95 and 93, respectively, as
`output from signature application module 91 of FIG. 10.
`Video stream 127 in this example is a live broadcast stream
`in digital form. Annotation stream 125 is delivered via a
`WAN which in a preferred embodiment will be the Internet.
As such, stream 125 arrives as data packets which must be
`sorted as is well-known in the art.
`System 115 further comprises a pipeline module 129
`adapted to accept both streams 125 and 127 for the purpose
`of synchronization. Pipeline 129 is illustrated as having a
time-begin mark of 0 and a time-end mark of T. The span of
`time allowed for buffering purposes may be almost any
increment of time within reason. The inventors have determined that a few seconds is adequate in most instances.
Video stream 127 flows through pipeline 129 via a controllable buffer 133. Similarly, annotation data stream 125
`flows through pipeline 129 via controllable buffer 131. It is
`important to note here that either stream may arrive first to
`pipeline 129 and that neither stream has a predictable
latency. The only constant factor between the two streams at this entry point is that they are both running at the same
`frame rate.
`Innovative software is provided and adapted to read the
`time-marker and data-frame numbers in the carrying stream
`and to compare the indicated frame number for the opposite
stream to the actual frame position relative to the carrying
`stream in the pipeline. The system is adapted to adjust either
`data stream toward synchronization of the two streams. For
example, the CPU, executing the software, may repeat
`frames in a pattern in either data stream to slow that stream
relative to the opposite stream. The software in a preferred
`embodiment performs this calculation for every detected
`time marker in stream 127.
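Repeating frames in a pattern to slow one stream might look like the following sketch, where the repeats are spread evenly over an interval so the correction is gradual rather than a visible jump; the interval chosen is an assumption.

```python
def slow_stream(frames, repeats, interval=30):
    """Repeat one frame every `interval` frames, `repeats` times in
    all, so the stream falls back gradually rather than jumping."""
    done = 0
    for i, frame in enumerate(frames):
        yield frame
        if done < repeats and i % interval == interval - 1:
            yield frame           # the repeat slows this stream one frame
            done += 1

out = list(slow_stream(range(90), repeats=2, interval=30))
print(len(out))   # 92: two repeats spread across the 90-frame span
```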
`Buffering alteration parameters will depend upon the
`frequency of time markers and the extent of error detected
in timing between the two data streams. For example, it is
`desired to produce what is termed in the art to be a soft ramp
`effect so that sudden movement or jumping of annotations
`related to video entities as viewed by a user does not
`noticeably occur. Similarly, latency factors are unpredictable
regarding both streams during the entirety of their transmis-
`sions. Therefore, buffers 131 and 133 are utilized continu-
`ally to synchronize streams 127 and 125 as they pass through
`pipeline 129. Synchronization error toward the end of pipe-
`line 129 is small enough so that the signals may be combined
via a signal combining module 135 before they are sent on as one stream, typically into a video RAM of a display
`module 139.
`A single annotated video-stream 137 is output from
`display module 139 to a suitable connected display monitor
`or screen. An input signal 141 represents user interaction
`with an entity in video stream 137 as it is displayed. Such a
`signal may trigger downloading of additional detailed infor-
`mation regarding the subject of interaction. Interaction sig-
`nal 141 results from a mouse click or other input command
`such as may be initiated via a connected keyboard or the
`like.
`It will be apparent to one with skill in the art that the
`architecture illustrated herein is but one example of a data
`stream capture and synchronization system or device that
`may be integrated with other equipment without departing
`from the spirit and scope of the present invention. In one
`embodiment, system 115 may be part of a computer station.
`In another embodiment, system 115 may be part of a set-top
`box used in conjunction with a TV. There are various
`possibilities. Moreover, there may be differing modular
`components installed in system 115. For example, instead of
`providing a dial-up modem, WAN connection may be via
`satellite and the modem may be wireless.
`In one embodiment, a broadcast video stream without
`audio narration may be synchronized to a separately
`received audio stream. Furthermore, a prerecorded and
`authored video feed from a source connected to an optional
`input module may be synchronized with a previously stored
`and annotated data stream from a source connected to a
`second optional input module as long as the signature
`process was applied to both streams according to the
`embodiment of FIG. 10. Interaction with tracked entities and
`the like associated with the prerecorded streams may be sent
`to a participating Internet server or the like through the
`modem sub-module provided the system is on-line during
`viewing.
FIG. 13 is a process flow chart illustrating logical steps for
`capturing and synchronizing separate video streams for user
`display and interaction according to an embodiment of the
`present invention. In step 143, separate data streams are
`captured and redirected into a synchronization pipeline such
`as pipeline 129 of FIG. 12. Time markers, and if applicable,
`screen-change markers are searched for and detected in step
145. In step 147, data-frame ID numbers are searched for and
`compared to data-frame numbers inserted in marker frames
`of a video stream such as stream 127 of FIG. 12. The data
may be inserted in VBI and HBI areas or as coded numbers
`added previously by pixel manipulation.
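Decoding a pixel-coded number at the user's end mirrors the encoding sketched earlier: read the agreed-upon pixel across the signature-length run of frames and reassemble the bits. The 16-bit width and pixel position are again assumptions.

```python
BITS = 16           # assumed signature width, as in the encoding sketch
PIXEL = (10, 10)    # the "agreed-upon" pixel, fixed by convention

def decode_signature(frames):
    """Reassemble the companion-stream frame number from the agreed
    pixel of the first BITS frames, most significant bit first."""
    value = 0
    for frame in frames[:BITS]:
        r, g, b = frame[PIXEL]
        bit = 1 if (r + g + b) > 384 else 0   # near-maximum vs. near-zero
        value = (value << 1) | bit
    return value

# Frames carrying the number 43210 in the agreed-upon pixel:
bits = [(43210 >> (BITS - 1 - i)) & 1 for i in range(BITS)]
frames = [{PIXEL: (255, 255, 255) if b else (0, 0, 0)} for b in bits]
print(decode_signature(frames))   # -> 43210
```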
In step 149, a timing error is calculated with regard to
`data inserted in a marker frame in the video stream as
`matched to data in an annotation data-frame closest to the
`marker. The error will define an annotation frame as being
`n number of frame intervals ahead of or behind the target
`marker frame. In step 151, the stream determined to be
`running n number of frames ahead is buffered to reduce the
`error. In step 153, the process repeats (steps 145-151) for
`each successive marker in the video stream.
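The per-marker error calculation of step 149 and the buffering decision of step 151 reduce to a signed frame difference, as in this sketch (the sign convention is an assumption).

```python
def timing_error(marker_expected_frame, annotation_actual_frame):
    """n > 0: the annotation stream runs n frames ahead of the marker;
    n < 0: the video stream runs ahead instead."""
    return annotation_actual_frame - marker_expected_frame

def stream_to_buffer(n):
    """Buffer (hold back) whichever stream is running ahead."""
    if n > 0:
        return "annotation"
    if n < 0:
        return "video"
    return None   # already synchronous at this marker

n = timing_error(marker_expected_frame=1200, annotation_actual_frame=1204)
print(n, "->", stream_to_buffer(n))   # 4 -> annotation
```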
`The process steps illustrated in this embodiment are
`intended to be exemplary only. The order and function of
`such process steps may vary according to differing embodi-
`ments. For example, in some embodiments wherein it may
`be known that no further annotation will be performed after
`signature operations, then only time marker intervals with
`VBI inserted data may be used. In another such instance, it
`may be determined that only screen change marking and
`HBI inserted data will be used, and so on.
`In a preferred embodiment, the method and apparatus of
`the present invention is intended for a user or users that will
`receive the video data via broadcast, and the annotation data
`via a WAN, preferably the Internet. This is so that additional
`data obtained by a user through interaction with a tracked
`entity in the video may be personalized and specific to the
`user. In a case such as this a user would, perhaps, obtain a
`subscription to the service. In other embodiments, other
`broadcast and data delivery methods may be used.
`Hypervideo and Scene Video Editor
`In another aspect of the present invention, a video editor
`is provided for editing video streams and corresponding
`annotation streams and creating new video and synchronous
`annotation streams. The editor in a preferred embodiment
`comprises a software suite executable on a computer plat-
`form similar to the various platforms described above
`related to the coordinate tracking and annotating systems
(authoring) of FIG. 1. The editor in some embodiments
`manipulates data streams in the well-known MPEG format,
`and in others in other formats. The format under which the
`editor performs is not limiting to the invention, and in
`various embodiments the system includes filters (translators)
for converting data streams as needed to perform its functions.
`The Editor is termed by the inventors the HoTV!Studio,
`but will be referred to in this specification simply as the
`Editor. The Editor in various embodiments of the present
`invention may operate on computer platforms of various
`different types, such as, for example, a high-end PC having
`a connected high-resolution video monitor. As such plat-
`forms are very familiar to the skilled artisan, no drawings of
`such a platform are provided. Instead, descriptive drawings
`of displays provided in a user interface are used for describ-
`ing preferred embodiments of the invention. It may be
`assumed that the editor platform includes typical apparatus
of such a platform, such as one or more pointer devices and
`a keyboard for user