Deep Learning for Natural Language Processing

In the last few years, a machine learning approach called deep learning has driven large improvements in natural language processing (NLP). In this approach, discrete elements such as words are embedded in high-dimensional numeric vector spaces. Deep learning algorithms can then automatically infer complex relationships between these embeddings in a very general way. Given large amounts of data and computing power, such models can automatically derive abstract knowledge about language that would be difficult to capture with manual methods.
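The core idea of embedding words as numeric vectors can be illustrated with a minimal sketch. The vectors and words below are hypothetical toy values chosen for illustration; real systems learn embeddings with hundreds of dimensions from large corpora:

```python
import math

# Toy 4-dimensional word embeddings (hypothetical values for illustration);
# real models learn much higher-dimensional vectors from data.
embeddings = {
    "king":  [0.8, 0.6, 0.1, 0.2],
    "queen": [0.7, 0.7, 0.1, 0.3],
    "apple": [0.1, 0.0, 0.9, 0.8],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: close to 1.0 means similar direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return dot / (norm_u * norm_v)

# Related words end up close together in the vector space,
# unrelated words far apart.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```

Once words live in such a space, geometric operations (distances, angles, vector arithmetic) become a proxy for semantic relationships, which is what downstream deep learning models exploit.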

The goal of our project is to improve our understanding of what word embeddings can represent, how they can be adapted to deliver optimal results for different NLP tasks, and how to model these tasks in the deep learning framework. The problems we study include machine translation, named entity recognition, Chinese character segmentation, part-of-speech tagging, chunking and syntactic parsing.

To understand and improve deep learning methods for NLP, we design, run and compare a large number of computational experiments for various tasks, creating models to recognise regularities in training data and make predictions on fresh, unseen data sets. We also monitor and visualise the intermediate representations generated by our models to understand better how they can be improved to become more meaningful for the tasks we wish to solve.
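The train-and-predict cycle described above can be sketched with a deliberately simple baseline: a most-frequent-tag lookup for part-of-speech tagging, one of the tasks listed earlier. The tiny corpus here is hypothetical, and real experiments use large annotated treebanks and far stronger models:

```python
from collections import Counter, defaultdict

# Tiny hypothetical tagged corpus of (word, part-of-speech tag) pairs.
training_data = [
    ("the", "DET"), ("cat", "NOUN"), ("sat", "VERB"),
    ("the", "DET"), ("dog", "NOUN"), ("ran", "VERB"),
    ("a", "DET"), ("cat", "NOUN"), ("ran", "VERB"),
]
# Held-out data includes "mat", a word never seen in training.
held_out = [("the", "DET"), ("dog", "NOUN"), ("sat", "VERB"), ("mat", "NOUN")]

# "Training": record the most frequent tag observed for each word.
counts = defaultdict(Counter)
for word, tag in training_data:
    counts[word][tag] += 1
model = {word: c.most_common(1)[0][0] for word, c in counts.items()}

# Evaluation: accuracy on fresh, unseen data.
correct = sum(model.get(word) == tag for word, tag in held_out)
accuracy = correct / len(held_out)
print(accuracy)  # → 0.75: the baseline fails on the unseen word "mat"
```

The gap on unseen words is exactly where embedding-based models help: because similar words get similar vectors, a neural tagger can generalise to words it has never observed in training, which a lookup table cannot.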

Research group

PI: Dr. Christian Hardmeier
Dept of Linguistics and Philology, Uppsala University
Prof. Joakim Nivre
Dept of Linguistics and Philology, Uppsala University
Yan Shao
Dept of Linguistics and Philology, Uppsala University
Ali Basirat
Dept of Linguistics and Philology, Uppsala University