Patrik Jonell

I’m currently a PhD student working on adaptive conversational agents at KTH Royal Institute of Technology in Stockholm, Sweden. My research interests can be broadly described as multimodal interaction. At the moment, my main focus is machine learning for these interactions, specifically non-verbal behavior generation in conversational agents. My goal is to give conversational agents the ability to adapt their non-verbal behavior in response to the behavior of a conversation partner, much as humans do. To work toward that goal, I use data collected from human–human interactions together with deep learning.

Recent publications

  1. IUI
    A large, crowdsourced evaluation of gesture generation systems on common data
    [To appear in] 26th International Conference on Intelligent User Interfaces. 2021
  2. ArXiv
    HEMVIP: Human Evaluation of Multiple Videos in Parallel
    ArXiv. 2021
  3. Frontiers in Computer Science
    Multimodal capture of patient behaviour for improved detection of early dementia: clinical feasibility and preliminary results
    Patrik Jonell, Birger Moëll, Krister Håkansson, Gustav Eje Henter, Taras Kucherenko, Olga Mikheeva, Göran Hagman, Jasper Holleman, Miia Kivipelto, Hedvig Kjellström, Joakim Gustafson, and Jonas Beskow
    [In review] Frontiers in Computer Science - Human-Media Interaction. 2020
  4. IVA Best Paper
    Let’s Face It: Probabilistic Multi-Modal Interlocutor-Aware Generation of Facial Gestures in Dyadic Settings
    In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents. 2020
  5. ICMI Best Paper
    Gesticulator: A framework for semantically-aware speech-driven gesture generation
    In Proceedings of the 22nd ACM International Conference on Multimodal Interaction. 2020