Offensive language identification is a classification task in natural language processing (NLP) whose aim is to moderate and minimise offensive content on social media. It has been an active area of research in both academia and industry for the past two decades. There is an increasing demand for offensive language identification on social media texts, which are largely code-mixed. Code-mixing is a prevalent phenomenon in multilingual communities, and code-mixed texts are sometimes written in non-native scripts. Systems trained on monolingual data fail on code-mixed data due to the complexity of code-switching at different linguistic levels in the text. This shared task presents a new gold-standard corpus for offensive language identification of code-mixed text in Dravidian languages (Tamil-English and Malayalam-English).

The Tamil language is spoken by Tamil people in India and Sri Lanka, and by the Tamil diaspora around the world, with official recognition in India, Sri Lanka, and Singapore. Malayalam is a Dravidian language spoken predominantly by the people of Kerala, India. The Tamil script evolved from the Tamili script, the Vatteluttu alphabet, and the Chola-Pallava script. It has 12 vowels, 18 consonants, and 1 āytam (voiceless velar fricative). Minority languages such as Saurashtra, Badaga, Irula, and Paniya are also written in the Tamil script. The Tamil script is described in the Tolkappiyam, an ancient grammar of Tamil, where "Eluttu" means "sound, letter, phoneme"; the text covers the sounds of the Tamil language and how they are produced (phonology). It also describes punarcci (lit. "joining"), the combination of sounds, covering orthography, graphemics, and phonetics, treating sounds as they are produced and perceived. The Malayalam script is alphasyllabic, belonging to the abugida family of writing systems, which are partially alphabetic and partially syllable-based. However, social media users often use the Roman script because it is easier to type. Hence, the majority of the data available on social media for these under-resourced languages is code-mixed.

The goal of this task is to identify the offensive language content of a code-mixed dataset of comments/posts in Dravidian languages (Tamil-English and Malayalam-English) collected from social media. A comment/post may contain more than one sentence, but the average number of sentences per comment/post in the corpora is one. Each comment/post is annotated at the comment/post level. The dataset also exhibits class imbalance, reflecting real-world scenarios.
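Because the label distribution is skewed, systems for this task often re-weight classes by inverse frequency during training. A minimal stdlib sketch of that idea (the label list below is an illustrative toy example, not the actual corpus statistics):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class by total / (num_classes * count),
    so rarer classes receive larger weights."""
    counts = Counter(labels)
    total = len(labels)
    n_classes = len(counts)
    return {c: total / (n_classes * n) for c, n in counts.items()}

# Toy, illustrative label distribution (not real corpus statistics)
labels = ["Not-offensive"] * 8 + ["offensive-untargeted"] * 2
weights = inverse_frequency_weights(labels)
# The minority class gets a higher weight than the majority class:
print(weights["offensive-untargeted"] > weights["Not-offensive"])  # -> True
```

These weights can then be passed to a classifier's loss function so that errors on rare offensive classes are penalised more heavily than errors on the dominant non-offensive class.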

Participants will be provided with training, development, and test datasets.


This is a comment/post-level classification task. Given a YouTube comment, systems have to classify it as Not-offensive, offensive-untargeted, offensive-targeted-individual, offensive-targeted-group, offensive-targeted-other, or Not-in-intended-language.
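To make the expected system interface concrete, a trivial majority-class baseline over the six labels might look like the sketch below (function names and example data are hypothetical; a real submission would train an actual classifier):

```python
from collections import Counter

# The six labels of the task
LABELS = [
    "Not-offensive",
    "offensive-untargeted",
    "offensive-targeted-individual",
    "offensive-targeted-group",
    "offensive-targeted-other",
    "Not-in-intended-language",
]

def majority_baseline(train_labels):
    """Return a classifier that always predicts the most frequent training label."""
    majority = Counter(train_labels).most_common(1)[0][0]
    return lambda comment: majority

# Toy training labels (illustrative only)
train = ["Not-offensive", "Not-offensive", "offensive-untargeted"]
predict = majority_baseline(train)
print(predict("some code-mixed comment"))  # -> Not-offensive
```

Given the class imbalance noted above, such a baseline can score deceptively well on accuracy, which is why macro-averaged metrics are commonly reported for tasks of this kind.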

Last year's task: Offensive Language Identification in Dravidian Code-Mixed Text 2020. The sister task: Sentiment Analysis in Dravidian Code-Mixed Text 2020. This task is co-located with the 13th meeting of the Forum for Information Retrieval Evaluation (FIRE 2021), which will be held virtually in Hyderabad, India.

Paper submission:

Paper titles for Offensive Language Identification for Dravidian Languages in Code-Mixed Text should follow the format TEAM_NAME@Dravidian-CodeMix-HASOC2021: Title of the paper. For example: NUIG_ULD@Dravidian-CodeMix-HASOC2021: Offensive Language Identification on Multilingual Code Mixing Text.

The following are some general guidelines to keep in mind while submitting the working notes.

  • Perform a basic sanity check for grammatical errors and reported results.
  • Papers should contain sufficient information to reproduce the reported results. Papers should follow the appropriate style (we will use the CEUR style; details below).
  • Check your paper for text reuse/plagiarism, including self-plagiarism. We would like to stress this point, as CEUR is quite strict about it. Any paper found to contain plagiarized content will be rejected without further consideration.
  • It has been commonly observed that some teams write more than one set of working notes (e.g., separate submissions for separate subtasks) and reuse a substantial portion of the text across these submissions. Keeping this in mind, we will NOT allow multiple working notes from the same set of authors; such teams will be asked to merge them into one.
  • Please ensure the author names do not include salutations such as Dr., Prof., etc. in the final version.
  • Each paper should include a copyright clause (see the "Author agreement variants" at
  • Each author should also submit a copyright agreement signed by the authors. (A partially filled agreement will be shared shortly.)

All submissions should be in the single-column CEUR format. Authors should use one of the CEUR templates below:

In general, apart from plagiarism-related concerns, we will not be rejecting any paper.

Email:

Bibtex of our paper:

@inproceedings{hasoc-dravidiancodemix-2021,
  title={{Overview of the HASOC-DravidianCodeMix Shared Task on Offensive Language Detection in Tamil and Malayalam}},
  author={Chakravarthi, Bharathi Raja and Kumaresan, Prasanna Kumar and Sakuntharaj, Ratnasingam and Madasamy, Anand Kumar and Thavareesan, Sajeetha and B, Premjith and Chinnaudayar Navaneethakrishnan, Subalalitha and McCrae, John P. and Mandl, Thomas},
  booktitle={Working Notes of FIRE 2021 - Forum for Information Retrieval Evaluation},
  publisher={CEUR},
  year={2021}
}

Paper Submission Link: