Evaluation of large language models using an Indian language LGBTI+ lexicon

Authors

  • Aditya Joshi University of New South Wales, Sydney
  • Shruta Rawat Humsafar Trust
  • Alpana Dange Humsafar Trust

DOI:

https://doi.org/10.47289/AIEJ20231109

Keywords:

Language models, fairness, queerphobia, evaluation, natural language processing, Indian languages

Abstract

Large language models (LLMs) are typically evaluated on task-based benchmarks such as MMLU. Such benchmarks do not examine the behaviour of LLMs in specific contexts. This is particularly true in the LGBTI+ context, where social stereotypes may result in variation in LGBTI+ terminology. Therefore, domain-specific lexicons or dictionaries may be useful as a representative list of words against which an LLM's behaviour can be evaluated. This paper presents a methodology for evaluating LLMs using an LGBTI+ lexicon in Indian languages. The methodology consists of four steps: formulating NLP tasks relevant to the expected behaviour, creating prompts that test the LLMs, using the LLMs to obtain the outputs and, finally, manually evaluating the results. Our qualitative analysis shows that the three LLMs we experiment with are unable to detect underlying hateful content. Similarly, we observe limitations in using machine translation as a means of evaluating natural language understanding in languages other than English. The methodology presented in this paper can be useful for LGBTI+ lexicons in other languages as well as for other domain-specific lexicons. The work done in this paper opens avenues for responsible behaviour of LLMs in the Indian context, especially given prevalent social perceptions of the LGBTI+ community.
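The four-step methodology in the abstract can be sketched as a small prompt-generation pipeline. This is a minimal illustrative sketch only: the task templates, lexicon entries, and `query_llm` stub below are hypothetical placeholders, not the paper's actual tasks, prompts, or Indian-language lexicon.

```python
# Sketch of the four-step evaluation methodology: (1) formulate tasks,
# (2) create prompts, (3) obtain LLM outputs, (4) evaluate manually.
# All task templates and lexicon terms here are hypothetical examples.

def build_prompts(lexicon, templates):
    """Step 2: instantiate each task template with every lexicon term."""
    return [(task, term, template.format(term=term))
            for task, template in templates.items()
            for term in lexicon]

# Step 1: NLP tasks relevant to the expected behaviour (hypothetical).
TEMPLATES = {
    "definition": "What does the term '{term}' mean?",
    "hate_detection": "Is the following sentence hateful? "
                      "'People described as {term} should be avoided.'",
}

# Placeholder lexicon entries standing in for the actual lexicon terms.
LEXICON = ["term_a", "term_b"]

def query_llm(prompt):
    """Step 3: obtain the model's output (stubbed; a real run would
    call the LLM's API here)."""
    return "<model output for: %s>" % prompt

prompts = build_prompts(LEXICON, TEMPLATES)
outputs = [(task, term, query_llm(p)) for task, term, p in prompts]
# Step 4: 'outputs' would then be inspected manually by annotators,
# e.g. checking whether hateful content is correctly flagged.
```

Each lexicon term is crossed with each task template, so the number of prompts grows as |lexicon| × |tasks|; the final manual-evaluation step is deliberately left outside the code, as the paper performs it qualitatively.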

Author Biographies

Aditya Joshi, University of New South Wales, Sydney

Aditya is a data scientist at SEEK, Melbourne. He is an incoming lecturer in natural language processing at the School of Computer Science and Engineering at UNSW Sydney. Aditya's research interests lie in natural language processing. His papers have been published at leading NLP conferences such as ACL, EMNLP and COLING, and in computer science journals such as ACM Computing Surveys. His TEDx talk, titled 'Detecting sarcasm, combating hate', interleaved his PhD thesis on computational sarcasm with his experience as an out PhD student at an Indian university.

Shruta Rawat, Humsafar Trust

Shruta Rawat is a Research Manager at the Humsafar Trust, an NGO in India that works in the LGBTI+ space. Shruta's research spans several aspects of the LGBTI+ community in India.

Alpana Dange, Humsafar Trust

Alpana Dange is a Consultant Research Director at the Humsafar Trust, Mumbai, a key NGO in India that works in the LGBTI+ space.

Published

2023-11-09

How to Cite

Joshi, A., Rawat, S., & Dange, A. (2023). Evaluation of large language models using an Indian language LGBTI+ lexicon. The AI Ethics Journal, 3(1). https://doi.org/10.47289/AIEJ20231109

Section

Research Papers