Faculty of Engineering, LTH



CS MSc Thesis Zoom Presentation 1 October 2021


From: 2021-10-01 13:15 to 14:00
Place: Online via:
Contact: birger [dot] swahn [at] cs [dot] lth [dot] se

One Computer Science MSc thesis to be presented on 1 October via Zoom

Friday, 1 October there will be a master thesis presentation in Computer Science at Lund University, Faculty of Engineering.

The presentation will take place via Zoom at:

Note to potential opponents: Register as an opponent to the presentation of your choice by sending an email to the examiner for that presentation. Do not forget to specify the presentation you register for! Note that the number of opponents may be limited (often to two), so you might have to choose another presentation if you register too late. Registrations are individual, just as the oppositions are! More instructions are found on this page.


Presenters: Anas Mofleh, Mohammad Al Masri
Title: Language-Agnostic Sentiment Classifier for Messaging
Examiner: Elin Anna Topp
Supervisor: Pierre Nugues (LTH)

In this thesis, we evaluate the classification performance of different machine learning models on multilingual datasets. We start the evaluation with simple logistic regression as a baseline and end with fine-tuned transformers on binary and multi-label datasets. We also evaluate the prediction time of the different fine-tuned models. The evaluation was performed on two public datasets and one private dataset provided by Sinch AB, where this project took place. Our results show that fine-tuning transformer-based models could improve on the model currently used by the company. For the multi-label dataset, we outperform the state-of-the-art results for both languages using XLM-RoBERTa-Large, with macro F1 ranging from 0.6460 to 0.6973. We also obtain results consistent with the state of the art on the binary dataset, using XLM-RoBERTa-Large with macro F1 ranging from 0.7720 to 0.9186. However, we found that the XLM-RoBERTa-Base results are one percent lower than the top result, while its inference time was much faster than that of the best model on both GPU and CPU.
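The macro F1 scores reported in the abstract are the unweighted mean of per-class F1 scores, so rare sentiment labels count as much as frequent ones. A minimal sketch of that metric, using a hypothetical toy label set rather than the thesis data:

```python
# Macro F1: average the per-class F1 scores without class weighting,
# so minority sentiment labels contribute as much as majority ones.

def macro_f1(y_true, y_pred):
    labels = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for c in labels:
        # Per-class counts of true positives, false positives, false negatives.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)

# Toy three-label sentiment example (illustrative data only):
y_true = ["pos", "neg", "neu", "pos", "neg", "neu"]
y_pred = ["pos", "neg", "pos", "pos", "neu", "neu"]
print(round(macro_f1(y_true, y_pred), 4))  # → 0.6556
```

In practice this is what `sklearn.metrics.f1_score(..., average="macro")` computes; the hand-rolled version above only makes the averaging explicit.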

Link to presentation:

Link to popular science summary: