VIDEO QUALITY ANALYSIS FOR AN AUTOMATED VIDEO CAPTURING AND EDITING SYSTEM FOR CONVERSATION SCENES (FriPmPO1)
Author(s) :
Takashi Nishizaki (University of Tsukuba, Japan)
Ryo Ogata (University of Tsukuba, Japan)
Yuichi Nakamura (Kyoto University, Japan)
Yoshinari Kameda (University of Tsukuba, Japan)
Yuichi Ohta (University of Tsukuba, Japan)
Abstract : This paper introduces video quality analysis for automated video capturing and editing. We have previously proposed an automated video capturing and editing system for conversation scenes. In the capturing phase, our system not only produces concurrent video streams with multiple pan-tilt-zoom cameras but also recognizes ``conversation states'', e.g., who is speaking to whom, whether someone is nodding, etc. Since the automated editing phase depends on these conversation states, it is important to clarify how the recognition rate of the conversation attributes affects the quality of the videos our editing system produces. In this paper, we analyze the relationship between the recognition rate of the conversation states and the quality of the resultant videos through subjective evaluation experiments. The results show that the resultant videos were scored almost the same as in the best case, in which recognition was performed perfectly by hand, indicating that the recognition rate of our capturing system is sufficient.