
Making CNNs for Video Parsing Accessible

Extracting sequences of game events from high-resolution e-sport games has traditionally required access to the game's engine, which serves as a barrier to groups without that access. Deep learning can instead derive these logs from gameplay video, but the computational power it requires is an additional barrier. Groups such as small e-sport tournament organizers would benefit from access to these logs, for example to better visualize gameplay for both audiences and commentators. In this paper we present a combined solution that reduces the computational resources and time required to apply a convolutional neural network (CNN) to extract events from e-sport gameplay videos. The solution consists of techniques to train a CNN faster and methods to execute predictions more quickly. This expands the types of machines capable of training and running these models, which in turn extends access to extracting game logs with this approach. We evaluate our approach in the domain of DOTA2, one of the most popular e-sports, and demonstrate that it outperforms standard backpropagation baselines.

Citation

Z. Lou, M. Guzdial, M. Riedl. "Making CNNs for Video Parsing Accessible". International Conference on the Foundations of Digital Games, (ed: Sebastian Deterding, Foaad Khosmood, Johanna Pirker, Thomas Apperley), August 2019.

Category: In Conference
Web Links: ACM Digital Library

BibTeX

@inproceedings{Lou+al:FDG19,
  author = {Zijin Lou and Matthew Guzdial and Mark Riedl},
  title = {Making CNNs for Video Parsing Accessible},
  editor = {Sebastian Deterding and Foaad Khosmood and Johanna Pirker and Thomas Apperley},
  booktitle = {International Conference on the Foundations of Digital Games},
  year = {2019},
}

Last Updated: October 29, 2020
Submitted by Sabina P
