Teach Conversational AI to play Tabletop Games through Active Learning 

 
EEMCS Bachelor’s & Master’s Project Open-Call

AI can play games; it can even beat the best human players. There has been a torrent of successful deep reinforcement learning applications in digital games (Go, Dota, Atari games, etc.). Given enough training time and a well-designed reward system, agents can develop effective strategies for a single game. However, we are not interested in simply watching how well an AI agent plays one game after hours of training. We want an agent we can teach to play any game with us, right out of the box.

In this project, you will build a conversational AI framework that uses active learning to grasp the gameplay of a non-digital game and quickly reach "enough competency" to play with any human partner(s). You will work with PhD candidates in the Designing Intelligence Lab to develop semi-supervised machine learning algorithms, and you will apply natural language understanding (NLU) in a tabletop-gameplay meta-learning context.
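To give a flavour of the active-learning ingredient, below is a minimal, purely illustrative sketch of a pool-based active-learning loop with uncertainty sampling, written in Python with scikit-learn on synthetic data. The model, dataset, and query strategy are placeholder assumptions for illustration only, not the framework you would build; in the project, the "oracle" providing answers would be the human partner explaining the game in conversation.

    # Purely illustrative: a generic pool-based active-learning loop with
    # uncertainty sampling on synthetic data. In the project, the "oracle"
    # answering queries would be the human partner explaining the game.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    # Seed the labelled set with a few examples from each class;
    # everything else forms the unlabelled pool.
    labelled = (list(rng.choice(np.where(y == 0)[0], size=5, replace=False))
                + list(rng.choice(np.where(y == 1)[0], size=5, replace=False)))
    pool = [i for i in range(len(X)) if i not in labelled]

    model = LogisticRegression(max_iter=1000)

    for _ in range(20):
        model.fit(X[labelled], y[labelled])

        # Query the pool example the model is least certain about
        # (smallest margin between the two class probabilities).
        probs = model.predict_proba(X[pool])
        margins = np.abs(probs[:, 0] - probs[:, 1])
        query = pool[int(np.argmin(margins))]

        # "Ask the oracle" (here: simply read the true label) and move
        # the example from the pool into the labelled set.
        labelled.append(query)
        pool.remove(query)

    model.fit(X[labelled], y[labelled])
    print(f"Accuracy after {len(labelled)} queried labels:",
          model.score(X[pool], y[pool]))

The point of the sketch is the query strategy: rather than labelling data at random, the learner chooses the example it is most uncertain about, which is the same principle an agent would use to decide which question about a game's rules is most informative to ask next.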

Related courses: Human-Computer Interaction

Supervision: Catharine Oertel, Eric (Heng) Gu, Interactive Intelligence

Sound like a project that suits your interests and skills? Let’s set up a meeting to get to know each other.