

    Please use this identifier to cite or link to this item: http://asiair.asia.edu.tw/ir/handle/310904400/18932


    Title: Video Attention Ranking using Visual and Contextual Attention Model for Content Driven Sports Videos Mining
    Authors: 黃仲陵;Huang, Chung-Lin
    Contributors: 資訊多媒體應用學系 (Department of Information and Multimedia Applications)
    Date: 2009-02
    Issue Date: 2012-11-26 07:10:52 (UTC+0)
    Abstract: In this paper, we propose a new video attention modeling and content-driven mining strategy that enables users to browse videos according to their preferences. By integrating the object-based visual attention model (VAM) with the contextual attention model (CAM), the proposed scheme not only exploits human perceptual characteristics more reliably but also effectively discriminates which video content attracts users' attention. In addition, extending the Google PageRank algorithm, which ranks websites by their importance, we introduce a so-called content-based attention rank (AR) to measure the user interest (UI) level of each video frame. User feedback is treated as enhanced query data to further improve retrieval accuracy. The proposed algorithm is evaluated on commercial baseball game sequences and produces promising results.
    Relation: IEEE Transactions on Multimedia
    Appears in Collections: [行動商務與多媒體應用學系 (Department of M-Commerce and Multimedia Applications)] Journal Articles
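
    Note: The abstract above describes a content-based attention rank (AR) that extends the Google PageRank idea to score the user-interest level of each video frame. The Python sketch below is only a rough illustration of that kind of PageRank-style iteration over a frame graph; the affinity matrix, the attention prior, and the function name attention_rank are illustrative assumptions and are not taken from the paper, which builds its scores from the VAM/CAM models.

    import numpy as np

    def attention_rank(affinity, prior, damping=0.85, iters=100, tol=1e-9):
        """Rank frames with a personalized-PageRank-style update (illustrative only).

        affinity : (n, n) non-negative matrix; affinity[i, j] links frame i to frame j.
        prior    : (n,) non-negative per-frame attention scores (stand-in for the
                   visual/contextual attention output); used as the teleport vector.
        """
        n = affinity.shape[0]
        # Row-normalize the affinity matrix into a stochastic transition matrix.
        row_sums = affinity.sum(axis=1, keepdims=True)
        row_sums[row_sums == 0] = 1.0
        P = affinity / row_sums
        # Normalize the attention prior into a probability distribution.
        v = prior / prior.sum()
        r = np.full(n, 1.0 / n)
        for _ in range(iters):
            r_new = damping * (P.T @ r) + (1 - damping) * v
            if np.abs(r_new - r).sum() < tol:
                return r_new
            r = r_new
        return r

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n_frames = 6
        affinity = rng.random((n_frames, n_frames))   # stand-in frame similarity graph
        prior = rng.random(n_frames)                  # stand-in per-frame attention scores
        scores = attention_rank(affinity, prior)
        print("frames ranked by attention:", np.argsort(scores)[::-1])

    In this sketch the attention prior plays the role of the teleport vector, so frames that an attention model flags as salient receive proportionally more rank mass, analogous to how PageRank favors important pages.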

    Files in This Item:

    File: index.html    Size: 0Kb    Format: HTML


    All items in ASIAIR are protected by copyright, with all rights reserved.

