Jonas Groschwitz

That's me

Google Scholar | Semantic Scholar | CV

Email: j.d.groschwitz (at) uva.nl



About me

I’m a computational linguist but also a mathematician (and I’m afraid it might show sometimes). Welcome to my humble online home!

I am currently a recipient of a DFG Walter Benjamin fellowship for a project on neurosymbolic graph-to-text generation. As part of that project, I am spending a year at the University of Amsterdam in the group of Raquel Fernández (the project's first year was at the University of Edinburgh).

Research Interests

Generally speaking, I am interested in the intersection of symbolic and neural methods. Specifically, I spent my PhD developing a new algebra for semantic composition (the AM algebra) and a neural semantic graph parser that does not predict the semantic graph directly, but instead predicts an AM dependency tree that serves as a compositional structure, constructing the graph from its lexical parts. And it works!
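To give a flavour of how this kind of composition works, here is a toy sketch in Python. It is my own simplified illustration, not code from the thesis: the names SGraph and app and the example graphs are invented for this page, and real AM-algebra constants additionally carry semantic types that constrain which operations may apply. The APP operation fills an open argument slot of a head graph with the root of an argument graph:

from dataclasses import dataclass, field

@dataclass
class SGraph:
    """A toy graph fragment: `edges` holds (node, label, node) triples,
    `root` is the fragment's root, and `sources` maps source names
    (open argument slots such as "s" for subject) to node ids."""
    root: str
    edges: set = field(default_factory=set)
    sources: dict = field(default_factory=dict)

def rename(g: SGraph, old: str, new: str) -> SGraph:
    """Return a copy of g with node `old` renamed to `new`."""
    fix = lambda n: new if n == old else n
    return SGraph(root=fix(g.root),
                  edges={(fix(a), lbl, fix(b)) for a, lbl, b in g.edges},
                  sources={s: fix(n) for s, n in g.sources.items()})

def app(src: str, head: SGraph, arg: SGraph) -> SGraph:
    """APP_src: plug the root of `arg` into the `src` slot of `head`,
    then forget that slot (the argument position is now filled)."""
    head = rename(head, head.sources[src], arg.root)
    return SGraph(root=head.root,
                  edges=head.edges | arg.edges,
                  sources={s: n for s, n in head.sources.items() if s != src})

# "the cat sleeps": the constant for "sleeps" has an open subject slot
sleeps = SGraph(root="e",
                edges={("e", "instance", "sleep-01"), ("e", "ARG0", "<s>")},
                sources={"s": "<s>"})
cat = SGraph(root="c", edges={("c", "instance", "cat")})

print(app("s", sleeps, cat).edges)
# {('e', 'instance', 'sleep-01'), ('e', 'ARG0', 'c'), ('c', 'instance', 'cat')}

Running this composes the constants for "sleeps" and "cat" into the graph for "the cat sleeps", with the cat node filling the subject (ARG0) slot.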

Current research interests include compositional methods for natural language generation, as well as semi- and unsupervised methods for learning symbolic latent variables. My favourite part of research is looking at a model in detail, especially at how it handles specific examples, to better understand how it works.

PhD Thesis

My thesis develops the AM dependency parser, a semantic parser for Abstract Meaning Representation (AMR; Banarescu et al., 2013) that owes its strong performance to its effective combination of neural and compositional methods. The AM dependency parser drops the restrictive syntactic constraints of classic compositional approaches, instead relying only on semantic types and meaningful semantic operations as structural guides. The ability of neural networks to encode contextual information allows the parser to make correct decisions in the absence of hard syntactic constraints.

Central to the thesis are terms over algebras of semantic representations, which serve as algebraic ‘building instructions’ for graphs. The thesis first examines the HR algebra (a general tool for building graphs; Courcelle and Engelfriet, 2012) for this purpose, and finds that it admits too many terms per graph to be practical as a latent structure. It then develops the linguistically motivated AM algebra; representing terms over the AM algebra as dependency trees further simplifies the semantic construction. In particular, the move from the HR algebra to the AM algebra, and then to AM dependency trees, drastically reduces the ambiguity of the latent structural information required for training the model. Using AM dependency trees as latent structures, we obtain a simple compositional semantic parser in which neural tagging and dependency models predict interpretable, meaningful operations that construct the AMR.
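To make the ‘building instructions’ idea concrete, here is a continuation of the toy sketch above (it reuses SGraph, rename, app, sleeps and cat from there; mod and evaluate are again my own simplified stand-ins, not the thesis implementation). An AM dependency tree pairs each word with a graph constant and labels each edge with an operation; evaluating the tree bottom-up yields the AMR:

def mod(src: str, head: SGraph, modifier: SGraph) -> SGraph:
    """MOD_src: fuse the `src` slot of `modifier` with the root of
    `head`; the result keeps the head's root."""
    modifier = rename(modifier, modifier.sources[src], head.root)
    return SGraph(root=head.root,
                  edges=head.edges | modifier.edges,
                  sources={**head.sources,
                           **{s: n for s, n in modifier.sources.items()
                              if s != src}})

OPS = {"APP": app, "MOD": mod}

def evaluate(node) -> SGraph:
    """An AM dependency tree node is (constant, children), where each
    child is (operation, source, subtree). Evaluate bottom-up."""
    graph, children = node
    for op, src, child in children:
        graph = OPS[op](src, graph, evaluate(child))
    return graph

# "the cat sleeps soundly": "sleeps" heads APP_s("cat") and MOD_m("soundly")
soundly = SGraph(root="a",
                 edges={("a", "instance", "sound"), ("<m>", "manner", "a")},
                 sources={"m": "<m>"})
tree = (sleeps, [("APP", "s", (cat, [])),
                 ("MOD", "m", (soundly, []))])
print(evaluate(tree).edges)

In this picture, the tagging model's job would be to choose the constants for the words, and the dependency model's job to choose the tree and its operation labels; the graph itself then follows deterministically.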

Publications

2015–2021: for the full list, see the Google Scholar and Semantic Scholar links above.