STYLE VARIATION AND POLITENESS STRATEGIES IN LARGE LANGUAGE MODEL-BASED CHATBOTS

Chukwuma Livinus Ndububa
Bibian Ugoala

Abstract

Human social existence relies heavily on pragmatics. Consequently, failure to understand certain communicative features leads to unsuccessful interactions, as interlocutors' communicative needs remain unmet, especially in the digital age, where communication occurs through or with machines. This study therefore investigated key pragmatic aspects of the language use of selected LLM-based chatbots, including how they vary their language style across prompts and contexts, the consistency of their politeness strategies, and the influence of prompt genre on stylistic features. Grounded in the Speech Adaptation in Human–Computer Interaction theory, the study employed a comparative qualitative method to analyze 36 screenshots, selected through stratified purposive sampling, from five notable LLM-based chatbots. The results show that the chatbots differ in sentence length, phrasing, formality, prompt adaptation, humour, human simulation, idioms, and structural signposting, as well as in the frequency of contractions and passive constructions. The study also revealed that the chatbots consistently respond to face-threatening acts with respect, empathy, self-criticism, and willingness to cooperate. Significant findings include: Perplexity has the lowest frequency of contractions and the least human simulation; Claude produces the longest responses; only ChatGPT withholds silence, shows the highest adherence to clear prompts, and cannot tell time; Gemini is the least versatile stylistically; and Copilot employs more semiotic devices but cannot generate specific APA 7th edition references using Digital Object Identifiers.


JEL Classification Codes: O35, Y80, Z13.

Author Biographies

Chukwuma Livinus Ndububa, Researcher, Department of English, The National Open University of Nigeria, Nigeria

Chukwuma Livinus Ndububa is a researcher in the Department of English at the National Open University of Nigeria (NOUN). His research interests include English language studies, language technology, and the applications of artificial intelligence in linguistic research.

Bibian Ugoala, Senior Lecturer, Department of English, The National Open University of Nigeria, Nigeria

Dr. Bibian Ugoala is a Senior Lecturer in the Department of English at the National Open University of Nigeria. Her research interests centre on how multiple semiotic elements interact to make meaning across different media platforms. Her work has been published widely in various journals.

How to Cite

Ndububa, C. L., & Ugoala, B. (2025). STYLE VARIATION AND POLITENESS STRATEGIES IN LARGE LANGUAGE MODEL-BASED CHATBOTS. American International Journal of Multidisciplinary Scientific Research, 16(1), 1-13. https://doi.org/10.46281/aijmsr.v16i1.2572
