cognition, language
and computation lab

We try to understand the computational principles underlying natural language understanding by humans and machines. We investigate the neural implementation of these principles in the human brain, their evolutionary origins and their usefulness in language technology.

Our research

news

Charlotte Pouw joined us as a new PhD student! She will work on interpretability methods as part of the InDeep consortium.
We have a paper accepted at ISMIR 2021 in which we introduce a new representation for melodic contour: cosine contours. The representation is motivated by an interesting observation: the principal components of melodies are shaped like cosines.
We are very happy that Marianne de Heer Kloots is continuing as a PhD student in our lab! Welcome, Marianne!
We presented a paper at DLfM 2020 in which we introduce Chant21, a Python library for working with plainchant in music21, along with two large datasets, CantusCorpus and GregoBaseCorpus. The paper also presents two case studies: one on the melodic arch hypothesis, and one on melodic predictability in the parts of chants known as differentiae.
We won the best paper award for multi/interdisciplinary research at ISMIR 2020 for our paper on mode classification in plainchant.
We now have a simple online demo that accompanies our 2018 JAIR paper, Visualisation and ‘diagnostic classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure, by Dieuwke Hupkes, Sara Veldhoen and Willem Zuidema.
All news
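The cosine-contour observation mentioned in the news above can be illustrated with a minimal sketch. This is not the paper's code: instead of real melodic data, it uses a smooth synthetic covariance matrix (an illustrative stand-in, with an assumed smoothness scale) and shows that the resulting principal axes closely resemble low-frequency cosine (DCT) basis vectors.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): for smooth, contour-like
# data, the principal components tend to be cosine-shaped. We stand in for
# real melodic data with a smooth synthetic covariance matrix and compare
# its eigenvectors (the principal axes) to the DCT-II cosine basis.
N = 64                      # number of time points per contour
idx = np.arange(N)
length_scale = 8.0          # assumed smoothness scale (illustrative choice)
cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / length_scale)

eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
pcs = eigvecs[:, ::-1]                   # principal axes, largest variance first

def dct_basis(k, n):
    """k-th DCT-II basis vector, unit norm."""
    v = np.cos(np.pi * k * (np.arange(n) + 0.5) / n)
    return v / np.linalg.norm(v)

for k in range(3):
    sim = abs(pcs[:, k] @ dct_basis(k, N))
    print(f"|similarity| of PC{k} and cosine basis vector {k}: {sim:.3f}")
```

The point is qualitative: under a smoothness assumption like the one above, the leading principal axes oscillate like low-frequency cosines, which is exactly the structure a cosine-based contour representation can exploit.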
blogposts
Transformers, capsules, or both?

From Attention in Transformers to Dynamic Routing in Capsule Nets

Samira Abnar

In this post, we walk through the main building blocks of transformers and capsule networks and draw connections between the components of the two models. Our main goal is to understand whether these models are inherently different and, if not, how they relate.

Read on...