Hypervideo

Hypervideo, or hyperlinked video, is a displayed video stream that contains embedded, user-clickable anchors,[1] allowing navigation between video and other hypermedia elements. Hypervideo is thus analogous to hypertext, which allows a reader to click on a word in one document and retrieve information from another document, or from another place in the same document. That is, hypervideo combines video with a non-linear information structure, allowing a user to make choices based on the content of the video and the user's interests.

A crucial difference between hypervideo and hypertext is the element of time. Text is normally static, while a video is necessarily dynamic; the content of the video changes with time. Consequently, hypervideo has different technical, aesthetic, and rhetorical requirements than a static hypertext page. For example, hypervideo might involve the creation of a link from an object in a video that is visible for only a certain duration. It is therefore necessary to segment the video appropriately and add the metadata required to link from frames—or even objects—in a video to the pertinent information in other media forms.
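
To make the segmentation-plus-metadata idea concrete, the sketch below (in Python, using hypothetical field names rather than any particular hypervideo format) represents a link as a spatial region that is active for a span of frames and points at another resource:

    from dataclasses import dataclass

    @dataclass
    class HyperlinkRegion:
        """A clickable anchor in a hypervideo: a spatial region that is active
        only for a span of frames and links to some other resource."""
        start_frame: int   # first frame in which the anchor is visible
        end_frame: int     # last frame in which the anchor is visible
        bbox: tuple        # (x, y, width, height) in pixels within the frame
        target_url: str    # hypermedia element the anchor links to

    def active_links(links, frame_index):
        """Return the anchors that should be clickable at a given frame."""
        return [l for l in links if l.start_frame <= frame_index <= l.end_frame]

    # Example: a product visible from frame 120 to frame 300 links to a product page.
    links = [HyperlinkRegion(120, 300, (40, 60, 100, 80), "https://example.com/product")]
    print(active_links(links, 200))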

History of hypervideo

Illustrating the natural progression to hypervideo from hypertext, the software Storyspace,[2] a hypertext writing environment, employs a spatial metaphor for displaying links. Storyspace utilizes 'writing spaces', generic containers for content, which link to other writing spaces. HyperCafe,[3] a popular experimental prototype of hypervideo, made use of this tool to create "narrative video spaces". HyperCafe was developed as an early model of a hypervideo system, placing users in a virtual cafe where the user dynamically interacts with the video to follow different conversations.

Video-to-video linking was demonstrated by the Interactive Cinema Group at the MIT Media Lab. Elastic Charles[4] was a hypermedia journal developed between 1988 and 1989, in which "micons" were placed inside a video, indicating links to other content. When implementing the Interactive Kon-Tiki Museum,[5] Liestøl used micons to represent video footnotes. Video footnotes were a deliberate extension of the literary footnote applied to annotating video, thereby providing continuity between traditional text and early hypervideo.[6] In 1993, Hirata et al.[7] considered media-based navigation for hypermedia systems, in which the same type of media is used for the query as for the media to be retrieved. For example, a part of an image (defined by shape or color, for example) could link to a related image. In this approach, the content of the video becomes the basis for forming links to other related content.

HotVideo was an implementation of this kind of hypervideo, developed at IBM's China Research Laboratory in 1996.[8] Navigation to associated resources was accomplished by clicking on a dynamic object in the video. In 1997, a project of the MIT Media Lab's Object-Based Media Group called HyperSoap developed this concept further. HyperSoap was a short soap-opera program in which a viewer could click with an enhanced remote control on objects in the video to find information on how they could be purchased. The company Watchpoint Media was formed to commercialize the technology involved, resulting in a product called Storyteller, oriented towards interactive television. Watchpoint Media was acquired by Goldpocket in 2003, which was in turn acquired by Tandberg Television in late 2005.[citation needed]

eline Technologies, founded in 1999, developed the first viable hypervideo solution, called VideoClix. Today VideoClix is the most widely used SaaS (Software as a Service) solution for distributing and monetizing clickable video on the web and on mobile devices.[citation needed] Its videos play back in popular video players such as QuickTime and Flash, as well as in multiple OVPs (online video platforms) such as Brightcove, thePlatform and Ooyala, and its technology can be integrated into third-party players based on QuickTime, Flash, MPEG-4 and HTML5. The product has proven to be a commercial success. In 2006, eline Technologies was acquired by VideoClix Inc. The VideoClix client base includes Disney, ESPN, MTV Networks, Dailymotion and Revision3, as well as brands such as Apple, Kraft, Fruit of the Loom and many others.

In 1997, the Israeli software firm Ephyx Technologies released a product called v-active,[9] one of the first commercial object-based authoring systems for hypervideo. The technology was not a success, however; Ephyx changed its name to Veon in 1999, at which time it shifted focus away from hypervideo to the provision of development tools for web and broadband content.[10]

Concepts and technical challenges

Hypervideo is more challenging to create than hyperlinked text because of the difficulty video presents for node segmentation, that is, separating a video into algorithmically identifiable, linkable content.

Video, at its most basic, is a time sequence of images, which are in turn simply two-dimensional arrays of color information. In order to segment a video into meaningful pieces (objects within images, or scenes within videos), it is necessary to provide context, in both space and time, for extracting meaningful elements from this image sequence. Humans perform this task naturally; developing a method to achieve it automatically (that is, algorithmically) is a complex problem.

It is, however, desirable to do this algorithmically. At the NTSC frame rate of 30 frames per second,[11] even a short video of 30 seconds comprises 900 frames. Identifying distinct video elements would be a tedious task if human intervention were required for every frame; even for moderate amounts of video material, manual segmentation is unrealistic.

From the standpoint of time, the smallest unit of a video is the frame (the finest time granularity).[6] Node segmentation could be performed at the frame level—a straightforward task as a frame is easily identifiable. However, a single frame cannot contain video information, since videos are necessarily dynamic. Analogously, a single word separated from a text does not convey meaning. Thus it is necessary to consider the scene, which is the next level of temporal organization. A scene can be defined as the minimum sequential set of frames that conveys meaning. This is an important concept for hypervideo, as one might wish a hypervideo link to be active throughout one scene, though not in the next. Scene granularity is therefore natural in the creation of hypervideo. Consequently, hypervideo requires algorithms capable of detecting scene transitions.
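
A minimal sketch of such a detector is given below. It assumes frames arrive as greyscale NumPy arrays and flags a cut whenever the grey-level histogram changes sharply between consecutive frames; this is only one simple heuristic among many and is not taken from any system described in this article.

    import numpy as np

    def detect_scene_cuts(frames, threshold=0.5, bins=32):
        """Return the indices at which a new scene appears to start.

        `frames` is any iterable of 2-D uint8 arrays (greyscale images).
        A cut is declared when the L1 distance between the normalised
        grey-level histograms of consecutive frames exceeds `threshold`."""
        cuts = []
        prev_hist = None
        for i, frame in enumerate(frames):
            hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
            hist = hist / hist.sum()                  # normalise to sum to 1
            if prev_hist is not None:
                diff = np.abs(hist - prev_hist).sum() # 0 (identical) .. 2 (disjoint)
                if diff > threshold:
                    cuts.append(i)
            prev_hist = hist
        return cuts

    # Synthetic example: 60 dark frames followed by 60 bright frames -> one cut at index 60.
    dark = [np.full((120, 160), 30, dtype=np.uint8)] * 60
    bright = [np.full((120, 160), 200, dtype=np.uint8)] * 60
    print(detect_scene_cuts(dark + bright))           # [60]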

Of course, one can imagine coarser levels of temporal organization. Scenes can be grouped together to form narrative sequences, which in turn are grouped to form a video; from the point of view of node segmentation, however, these concepts are not as critical. Issues of time in hypervideo were considered extensively in the creation of HyperCafe.[3]

Even though the frame is the smallest time unit, one can still spatially segment a video at a sub-frame level, separating the frame image into its constituent objects; this is necessary when performing node segmentation at the object level. Time introduces complexity here as well, for even after an object has been differentiated in one frame, it is usually necessary to follow the same object through a sequence of frames. This process, known as object tracking, is essential to the creation of links from objects in videos. Spatial segmentation of objects can be achieved, for example, through the use of intensity gradients to detect edges, color histograms to match regions,[1] motion detection,[12] or a combination of these and other methods.
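
The sketch below illustrates the color-histogram cue in isolation: it slides an object's bounding box around its previous position and keeps the offset whose color histogram matches best. It is an assumed, highly simplified stand-in for a real tracker (no occlusion handling, scale change, or drift correction), not the method of any system named above.

    import numpy as np

    def color_histogram(region, bins=8):
        """Normalised 3-D color histogram of an H x W x 3 uint8 image region."""
        hist, _ = np.histogramdd(region.reshape(-1, 3), bins=(bins,) * 3,
                                 range=[(0, 256)] * 3)
        return hist / hist.sum()

    def track_by_histogram(prev_frame, next_frame, box, search=16, step=4):
        """Slide the previous bounding box (x, y, w, h) around its old position
        in the next frame and return the offset whose color histogram best
        matches the object's histogram from the previous frame."""
        x, y, w, h = box
        target = color_histogram(prev_frame[y:y + h, x:x + w])
        best, best_dist = box, np.inf
        for dy in range(-search, search + 1, step):
            for dx in range(-search, search + 1, step):
                nx, ny = x + dx, y + dy
                if nx < 0 or ny < 0 or ny + h > next_frame.shape[0] or nx + w > next_frame.shape[1]:
                    continue
                cand = color_histogram(next_frame[ny:ny + h, nx:nx + w])
                dist = np.abs(cand - target).sum()    # L1 distance between histograms
                if dist < best_dist:
                    best, best_dist = (nx, ny, w, h), dist
        return best

    # Synthetic example: a colored square moves 8 pixels to the right between frames.
    f1 = np.zeros((120, 160, 3), dtype=np.uint8); f1[40:80, 40:80] = (200, 50, 50)
    f2 = np.zeros((120, 160, 3), dtype=np.uint8); f2[40:80, 48:88] = (200, 50, 50)
    print(track_by_histogram(f1, f2, (40, 40, 40, 40)))   # (48, 40, 40, 40)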

Once the required nodes have been segmented and combined with the associated linking information, this metadata must be incorporated with the original video for playback. The metadata is placed conceptually in layers, or tracks, on top of the video; this layered structure is then presented to the user for viewing and interaction. Thus the display technology, the hypervideo player, should not be neglected when creating hypervideo content. For example, efficiency can be gained by storing the geometry of areas associated with tracked objects only in certain keyframes and allowing the player to interpolate between these keyframes, as developed for HotVideo by IBM.[13] Furthermore, the creators of VideoClix emphasize that its content plays back in standard players such as QuickTime and Flash. Given that the Flash player alone is installed on over 98% of internet-enabled desktops in mature markets,[14] this is perhaps one reason for the product's success in the current arena.
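
To make the keyframe-plus-interpolation idea concrete, the sketch below (a hypothetical data layout, not IBM's actual format) linearly interpolates an object's clickable rectangle between stored keyframes and tests whether a click lands inside it:

    def interpolate_box(keyframes, frame):
        """Given sparse keyframes {frame_index: (x, y, w, h)}, linearly
        interpolate the clickable rectangle for an arbitrary frame."""
        times = sorted(keyframes)
        if frame <= times[0]:
            return keyframes[times[0]]
        if frame >= times[-1]:
            return keyframes[times[-1]]
        for t0, t1 in zip(times, times[1:]):          # find the bracketing keyframes
            if t0 <= frame <= t1:
                a, b = keyframes[t0], keyframes[t1]
                alpha = (frame - t0) / (t1 - t0)
                return tuple(round((1 - alpha) * av + alpha * bv) for av, bv in zip(a, b))

    def hit_test(box, click_x, click_y):
        """True if a click lands inside the rectangle (x, y, w, h)."""
        x, y, w, h = box
        return x <= click_x <= x + w and y <= click_y <= y + h

    # An object whose geometry is stored only at frames 0 and 90:
    keys = {0: (10, 10, 50, 40), 90: (100, 40, 50, 40)}
    box = interpolate_box(keys, 45)        # half-way between the two keyframes
    print(box, hit_test(box, 60, 45))      # (55, 25, 50, 40) True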

The rise of hypervideo

As the first steps in hypervideo were taken in the late 1980s, hypervideo appears to be taking an unexpectedly long time to realize its potential. Many experiments (HyperCafe, HyperSoap) have not been extensively followed up on, and authoring tools are currently available from only a small number of providers.

However, perhaps owing to the wider availability of broadband internet, this situation is rapidly changing. Interest in hypervideo is increasing, as reflected in popular blogs on the subject[15][16] as well as in the extraordinary rise of YouTube. Furthermore, some estimates have internet downloads claiming over one third of the market for on-demand video by 2010.[17]

As the amount of video content available on the internet increases, the possibilities for linking video increase even faster. Digital libraries, of which video is an important part, are constantly growing. News outlets have amassed vast video archives, which could be useful in education and historical research.[1] Direct searching of pictures or videos, a much harder task than indexing and searching text, could be greatly facilitated by hypervideo methods.

Commentary

User replies to video content have traditionally taken the form of text or image links that are not embedded in the video's playback sequence. Video hosting services such as Viddler allow such replies to be embedded both within the imagery of the video and within portions of the playback (via selected time ranges on the progress slider); this feature has become known as "video comments" or "audio comments".

Commercial exploitation

Perhaps the most significant consequence of hypervideo will result from commercial advertising. Devising a business model to monetize video has proven notoriously difficult. The application of traditional advertising methods, such as inserting ads into the video stream, is likely to be rejected by the online community, while revenue from selling advertising on video-sharing sites has so far not been promising.[18]

Hypervideo offers an alternative way to monetize video, allowing for the creation of video clips in which objects link to advertising or e-commerce sites, or provide more information about particular products. This model of advertising is less intrusive, displaying advertising information only when the user chooses to click on an object in a video. Because it is the user who has requested the product information, this type of advertising is better targeted and likely to be more effective.

Ultimately, as hypervideo content proliferates on the internet, particularly content targeted for delivery via the television set, one can imagine an interlinked web of hypervideo forming in much the same way as the hypertext-based World Wide Web has formed. This hypervideo-based "Web of Televisions" or "TeleWeb" would offer the same browsing and information-mining power as the Web, but be better suited to the experience of viewing from a living-room couch ten feet from the screen. Such an environment could support not only interactive ads but also interactive and non-linear news, information, and even storytelling.

The future of hypervideo

The above-mentioned "Web of Televisions" or "TeleWeb" concepts are likely to become widely adopted as they are implemented by future advanced set-top boxes and game consoles able to provide both the 10-foot TV experience and the 2-foot Web experience. The addition of a wireless display and remote control ties the Web and TV together; in this scenario, clicking objects is not disruptive to movies and TV shows. The full-screen video display provides the 10-foot video experience, while supplemental content, commerce and advertising related to clicked video objects appear on the additional display and remote-control unit that provides the 2-foot PC experience.

References

  1. Smith, Jason and Stotts, David, An Extensible Object Tracking Architecture for Hyperlinking in Real-time and Stored Video Streams, Dept. of Computer Science, University of North Carolina at Chapel Hill
  2. Storyspace
  3. Sawhney, Nitin, Balcom, David and Smith, Ian, HyperCafe: Narrative and Aesthetic Properties of Hypervideo, UK Conference on Hypertext
  4. Elastic Charles, a hypermedia journal developed by the Interactive Cinema Group at the MIT Media Lab, 1988–1989
  5. Liestøl, Gunnar, Aesthetic and Rhetorical Aspects of Linking Video in Hypermedia
  6.
  7. Hirata, K., Hara, Y., Shibata, N., Hirabayashi, F., 1993, Media-based navigation for hypermedia systems, in Hypertext '93 Proceedings.
  8. HotVideo, IBM China Research Laboratory, 1996
  9. v-active, Ephyx Technologies, 1997
  10. Veon (formerly Ephyx Technologies), development tools for web and broadband content
  11. NTSC Basics
  12. Khan, Sohaib and Shah, Mubarak, Object Based Segmentation of Video Using Color, Motion and Spatial Information, Computer Vision Laboratory, University of Central Florida
  13. U.S. Patent 6912726
  14. Adobe - Flash Player Statistics
  15. Andreas Haugstrup Pedersen | solitude.dk
  16. Hyper text, now video hyperlinking
  17. The Economist, 8 February 2007, What's on next
  18. The Economist, 31 August 2006, The trouble with YouTube
