Asia University: Item 310904400/26285


    Please use this identifier to cite or link to this item: http://asiair.asia.edu.tw/ir/handle/310904400/26285


    Title: Constructing Human-Friendly 3D Interactive Augmented Reality with Stereo Vision - Main Project and Subproject 1: 3D Human Motion Parameter Capture
    Authors: 黃仲陵
    Contributors: College of Informatics; Department of Applied Informatics and Multimedia
    Keywords: Human motion parameter estimation; Annealed Particle Filter; Local-patch example-based estimation
    Date: 2012
    Issue Date: 2013-07-18 07:53:38 (UTC+0)
    Abstract: Computer-vision-based estimation of human motion parameters has a wide range of applications, and markerless estimation techniques have recently drawn close attention for their convenience. Markerless human motion parameter estimation typically faces two problems: (1) the high dimensionality of the motion parameters, and (2) the loss of observation information when mutual occlusion occurs. The goal of this three-year project is to develop markerless vision-based estimation of body and hand motion parameters.

    In the first year, we propose a human gait parameter estimation method that combines an Annealed Particle Filter (APF) with a pre-trained inter-joint correlation map and a per-joint spatio-temporal correlation to detect walking parameters from various viewing angles. We first construct a 3D human model, divide it into 10 body parts, and represent walking postures with a 12-dimensional gait parameter vector, using shape and color information as estimation cues. Combining the APF with the two correlation constraints, our method runs effectively in both indoor and outdoor environments; compared with the traditional APF, we expect to greatly reduce computation time while improving estimation accuracy.

    In the second year, we propose a real-time human pose estimation method based on hashing of local image patches. This example-based approach estimates joint parameters from local patches extracted along the silhouette contour of the human figure: dozens of patches are sampled along the contour, each described by its shape context, and matched against patches in a database to select the joint-angle parameters whose silhouette best matches the current input. At recognition time, each patch retrieves its nearest neighbors from the database, and the retrieved patches vote by similarity to identify the most similar contour and thereby the joint-angle parameters. To speed up the nearest-neighbor search we use Locality-Sensitive Hashing (LSH), which effectively reduces search time and keeps it from growing with the size of the database.

    In the third year, we analyze hand videos to estimate hand motion parameters. Hand motion consists of global motion of the whole hand (rigid-body motion) and local motion of the fingers (articulated motion); the local finger motion causes fingers to occlude each other in the image, which we resolve by analyzing frontal and side views of the hand from two cameras. We predict two kinds of parameters: (1) hand translation and rotation under a single camera, and (2) finger flexion under two cameras. For the global motion, we use convex-hull analysis to determine the rough hand shape and tilt; for the local motion, we track the fingers with a particle filter. Since a single camera suffers from finger self-occlusion, a second camera provides the additional information needed to compute the hand motion parameters.
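The annealed-search idea behind the first-year method can be sketched in generic form. Everything below is an illustrative stand-in, assuming a toy 12-dimensional gait vector and a synthetic likelihood; the project's real weight function would score a rendered 3D body model against image shape and color cues, and the correlation-map constraints are omitted:

```python
import math
import random

def annealed_particle_filter(weight_fn, dim, n_particles=400, n_layers=10, seed=7):
    """One frame of annealed particle filtering (generic sketch).

    weight_fn(x) -> unnormalized likelihood of pose vector x.
    Each annealing layer sharpens the weights (beta rises toward 1),
    resamples, and diffuses survivors with shrinking noise, so the
    particle cloud settles on the strongest mode of a high-dimensional
    pose posterior instead of being trapped by early local peaks.
    """
    rng = random.Random(seed)
    particles = [[rng.uniform(-1.0, 1.0) for _ in range(dim)]
                 for _ in range(n_particles)]
    for layer in range(n_layers):
        beta = (layer + 1) / n_layers                 # soft -> sharp weighting
        w = [weight_fn(p) ** beta for p in particles]
        total = sum(w) or 1.0
        w = [x / total for x in w]
        if layer == n_layers - 1:                     # final layer: weighted mean
            return [sum(w[i] * particles[i][d] for i in range(n_particles))
                    for d in range(dim)]
        particles = rng.choices(particles, weights=w, k=n_particles)
        sigma = 0.3 * (1.0 - layer / n_layers)        # shrink diffusion per layer
        particles = [[x + rng.gauss(0.0, sigma) for x in p] for p in particles]

# Toy 12-D "walking pose" likelihood peaked at a known parameter vector.
TRUE_POSE = [0.4] * 12
def likelihood(pose):
    return math.exp(-sum((a - b) ** 2 for a, b in zip(pose, TRUE_POSE)))

estimate = annealed_particle_filter(likelihood, dim=12)
```

Gradually sharpening the weights lets early layers survey the whole 12-D space while late layers refine around the strongest mode, which is why the APF tolerates high-dimensional pose spaces better than a plain particle filter.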

    Markerless vision-based human body part tracking and pose estimation have recently attracted intensive attention for their wide applications in human-computer interfaces, augmented reality, and video surveillance. Vision-based approaches to motion parameter capture face two challenges: (1) parameter estimation in a high-dimensional space, and (2) missing observation information due to self-occlusion. In this proposal, we present three methods to extract the motion parameters of a human subject without using any active or passive markers.

    In the 1st year, we propose a markerless human body part tracking and walking motion parameter estimation method. A 3D human model with structural and kinematic constraints is used to track a real walking subject from various viewing directions. Our method modifies the Annealed Particle Filter (APF) by adding a pre-trained spatial correlation map and a temporal constraint to capture the motion parameters of a walking subject. Compared with the traditional APF, our method needs less computation time and generates more accurate results, especially when self-occlusion occurs.

    In the 2nd year, we propose an example-based approach for 3D human body pose estimation using a Microsoft Kinect, which captures the depth image of the human subject. The estimation is based on the similarity between the contour of the input human figure and the models in the database. To reduce estimation time, we apply Locality-Sensitive Hashing to index the global parameters of the human pose, using the shape context of patches extracted from the contour image as the feature vector. In training, we learn a hash function that encodes each feature vector into a hash value for hash-table construction. In testing, the hash value of each input patch retrieves similar patches from the hash table, and a Hough voting scheme is applied; after temporal and prediction constraints, the pose parameters with the highest vote are taken as the estimation result.

    In the 3rd year, we propose a markerless hand motion capture system. Hand motion consists of global motion (rigid-body motion) and local motion (articulated motion): the global motion is the translation and rotation of the entire hand, whereas the local motion is the joint-angle motion of each finger. First, we use a color model to segment the hand region. Second, we analyze the convex hull and convexity defects of the hand region to locate the fingertips, which are then used to estimate the global motion parameters. Third, we use two camera views and particle-filter tracking to find the 3D locations of the fingertips. Finally, we use inverse kinematics and finger-motion constraints to calculate the finger joint angles, i.e., the local motion parameters.
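The second-year retrieval step (hash similar patches, then vote) can be sketched with random-hyperplane LSH. The class name, the 16-D feature vectors standing in for shape contexts, and the two pose labels are all hypothetical; the proposal does not specify the hash family:

```python
import random

class HyperplaneLSH:
    """Random-hyperplane LSH index: feature vectors that are similar in
    cosine distance tend to collide in the same bucket, so lookup cost
    stays roughly flat as the patch database grows."""

    def __init__(self, dim, n_bits=8, seed=1):
        rng = random.Random(seed)
        self.planes = [[rng.gauss(0, 1) for _ in range(dim)]
                       for _ in range(n_bits)]
        self.buckets = {}

    def _key(self, v):
        # One bit per hyperplane: which side of the plane v falls on.
        return tuple(sum(p * x for p, x in zip(plane, v)) >= 0
                     for plane in self.planes)

    def add(self, v, label):
        self.buckets.setdefault(self._key(v), []).append((v, label))

    def query(self, v):
        return self.buckets.get(self._key(v), [])

def estimate_pose(index, query_patches):
    """Each query patch votes for the pose labels of the database patches
    it collides with; the pose with the most votes wins."""
    votes = {}
    for patch in query_patches:
        for _, label in index.query(patch):
            votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get) if votes else None

# Synthetic "shape context" patches for two hypothetical poses.
rng = random.Random(2)
DIM = 16
index = HyperplaneLSH(dim=DIM)
walk_patches = [[rng.gauss(0, 1) for _ in range(DIM)] for _ in range(20)]
stand_patches = [[rng.gauss(0, 1) for _ in range(DIM)] for _ in range(20)]
for v in walk_patches:
    index.add(v, "walk")
for v in stand_patches:
    index.add(v, "stand")
```

Because a lookup only touches the query's own bucket, retrieval time stays roughly constant as patches are added, which is the property the proposal relies on for real-time estimation.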
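The last third-year step, recovering finger joint angles from a tracked fingertip via inverse kinematics, can be illustrated for a planar two-segment finger. This analytic two-link solution is a common textbook form and a stand-in for the project's actual finger model and motion constraints:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics for a planar two-segment finger:
    given fingertip position (x, y) and segment lengths l1, l2,
    return the base and middle joint angles (one of the two solutions)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))           # clamp for numerical safety
    theta2 = math.acos(c2)                  # middle-joint flexion
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """Forward kinematics, used here to check the IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Round-trip check: pick joint angles, compute the fingertip, recover the angles.
tip_x, tip_y = forward(0.3, 0.9, 1.0, 0.8)
theta1, theta2 = two_link_ik(tip_x, tip_y, 1.0, 0.8)
```

In the proposed pipeline the fingertip position would come from the two-view particle-filter tracker, and the finger-motion constraints would pick between the two analytic elbow solutions.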
    Appears in Collections: [Department of Applied Informatics and Multimedia] Ministry of Science and Technology Research Project

    Files in This Item:

    File: index.html (0 Kb, HTML)

    All items in ASIAIR are protected by copyright, with all rights reserved.


    DSpace Software Copyright © 2002-2004 MIT & Hewlett-Packard / Enhanced by NTU Library IR team.