Bridging Bytes and Cultures: The Impact of AI on Linguistic and Cultural Nuances in Online Conversations

Ley’ah Mcclain-Perez and Ivan Pantoja Tinoco

The digital era is marked by the rise of artificial intelligence. This presentation delves into the transformative influence of ChatGPT on the online communication landscape, particularly within the microcosm of X. This AI-driven tool, created by OpenAI, not only redefines user interactions but also molds the linguistic contours of digital discourse. Our inquiry is rooted in a critical analysis of ChatGPT’s integration into social platforms, assessing its impact on the quality of communication, user perceptions and attitudes, and the ensuing ethical dilemmas.

Our research navigates the multifaceted ramifications of ChatGPT, exploring its syntactic coherence and semantic relevance alongside the occasional pitfalls that may lead to misinterpretation. It highlights the diverse demographics engaging on X, who use ChatGPT for purposes ranging from casual interaction to more substantial exchanges, painting a broad spectrum of digital human-AI interaction.

This exploration is not merely an academic exercise but a pivotal discourse that contributes to understanding the nuanced dynamics of digital communication in the AI era. It poses critical questions about the future of online interactions, the role of AI in shaping public discourse, and the ethical boundaries of AI integration into social platforms.

Figure 1: Demographics showing the potential of AI in the case of ChatGPT

[expander_maker id="1" more="Read more" less="Read less"]

Introduction

In the fast-paced world of online communication, the integration of artificial intelligence (AI) technologies has brought a significant shift in how people interact and engage with digital platforms such as email and Twitter. At the forefront of this transformation is ChatGPT, an advanced AI language model developed by OpenAI. With its widespread adoption, particularly on platforms like Twitter, ChatGPT has sparked a wave of curiosity and inquiry into its impact on interpersonal communication dynamics.

ChatGPT’s presence on Twitter is hard to miss. Its ability to generate text responses that closely mimic human speech has made it a staple tool for many users across the platform. From casual conversations to more formal discussions, ChatGPT has integrated seamlessly into the platform, which can be deceiving for users who have not incorporated AI into their own lives.

Twitter, renowned for its role in facilitating global connectivity, serves as a hub for real-time conversations and idea exchange. Users from diverse backgrounds come together on the platform to share thoughts, opinions, and news. With ChatGPT now part of the conversation, the dynamics of communication have undergone a subtle yet significant transformation, prompting researchers to delve deeper into its implications.

Our study seeks to address several key research questions:

  1. How does the integration of ChatGPT influence the quality and nature of communication within social media and discussion forums?
  2. What are the perceptions and attitudes of users toward ChatGPT-generated content in online interactions?
  3. What ethical considerations arise from the use of ChatGPT in facilitating online communication, and how do users navigate these concerns?
Figure 2: The Inside Story of ChatGPT’s Astonishing Potential | Greg Brockman | TED 

Methods

Our study involved a qualitative analysis of tweets gathered from X users. We collected tweets related to ChatGPT usage from teachers, recruiters, job applicants, and corporate employees; these backgrounds are worth noting because users from them most commonly tweeted about our topic. Keywords we used to find tweets included “ChatGPT email,” “ChatGPT job,” and “ChatGPT ethics.” We filtered each search to tweets from the past year with the most engagement. Once we obtained these tweets, we analyzed each user’s profile holistically to ensure the user was not a bot. These keywords and filters provided a wide range of perspectives on ChatGPT integration in online communication.

Our analysis categorized tweets by stance (for or against ChatGPT usage) and by the author’s employment background in order to discern users’ attitudes toward ChatGPT-generated content. To accomplish this, we examined the tone and vernacular of each tweet. We also examined any memes or emoticons in a user’s tweets, as these helped us interpret the tone behind the tweet. Analyzing tone, vernacular, memes, and emoticons was crucial for detecting sarcasm.
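To make the workflow concrete, the keyword filtering and stance tallying described above can be sketched in a few lines of code. The tweet records, field names, and keyword list below are hypothetical illustrations of the process, not the actual dataset or tooling we used.

```python
# Hypothetical sketch of the keyword filtering and stance-tallying workflow.
# Tweet records and field names are illustrative, not the real dataset.
from collections import Counter

KEYWORDS = ["chatgpt email", "chatgpt job", "chatgpt ethics"]

tweets = [
    {"text": "ChatGPT email drafts save me hours", "stance": "for", "background": "corporate"},
    {"text": "Every ChatGPT job application reads the same", "stance": "against", "background": "recruiter"},
    {"text": "We need to talk about ChatGPT ethics in class", "stance": "against", "background": "teacher"},
]

def matches_keywords(text):
    """True if the tweet text contains any search keyword (case-insensitive)."""
    lowered = text.lower()
    return any(k in lowered for k in KEYWORDS)

relevant = [t for t in tweets if matches_keywords(t["text"])]

# Tally stance overall and per employment background.
stance_counts = Counter(t["stance"] for t in relevant)
by_background = Counter((t["background"], t["stance"]) for t in relevant)

print(stance_counts)   # stance totals across all matching tweets
print(by_background)   # stance totals broken out by background
```

In the actual study the stance and background labels were assigned by hand from tone, vernacular, memes, and emoticons; only the filtering and counting steps lend themselves to automation like this.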

Results and Analysis

Following ChatGPT’s introduction into online discourse, our analysis shows notable changes in the tone and linguistic expression of conversations. The language used blends formal and casual registers, and ChatGPT-generated comments frequently reflect the conversational tone that permeates online interactions. Difficulties in interpreting linguistic nuances, including humor, sarcasm, and cultural references, highlight the limitations of AI-generated material in preserving contextual nuance.

Discussion

The results of our research provide a more comprehensive view of how online conversation is changing in the era of artificial intelligence. Although ChatGPT makes communication easier and improves accessibility, its effects on language use and cultural dynamics need to be carefully considered. Through adept handling of AI-generated content, we can optimize its potential to enhance digital connections while reducing the likelihood of misunderstandings and cultural insensitivity.

The key to maximizing AI-generated content’s ability to promote meaningful conversation while avoiding communicative dangers is to manage it strategically. Through the development of cultural sensitivity and contextual cue awareness, users may skillfully negotiate the complex landscape of online communication, utilizing AI’s augmentative powers without sacrificing human connection.

An ongoing conversation about the development of digital literacy and appropriate AI use is essential to this project. Giving consumers the means to understand language nuances and make sense of subtle implications helps guarantee that the emergence of AI-driven communication will continue to be a positive force for change.

References

Carrie Bradshaw Hater. “My Students Are Using CHATGPT for Their Essays and Everyone Is Turning in the Same Essay!!!” Twitter, Twitter, 4 Mar. 2024, twitter.com/nancytaughtyou/status/1764491684446900521.

Courtney Glory to Heroes Wells. “I Used CHATGPT When I Was Overtired and Needed to Get an Email to My Daughter’s Teacher That Made Sense and Didn’t Trust My Own Editing. I Don’t Care If She Knows — It’s Better than the Soup She Would Have Gotten If I Did It on My Own.” Twitter, Twitter, 28 Feb. 2024, twitter.com/ndesquiress/status/1762909572807708894.

Kepha, Brian. “Maaan ,CHATGPT Is Great at Corporate Lingo and I Love This.with It, It Is so Easy to Communicate a Serious and Business Tone Especially on Emails and Job Applications. This Is a CHATGPT Appreciation Tweet.” Twitter, Twitter, 29 Feb. 2024, twitter.com/AngelofVerdant/status/1763065674501394463.

Gawne, Lauren, and Gretchen McCulloch. “Emoji as Digital Gestures.” Language@Internet, vol. 17, 2019.

Posts, Hannah. “It’s Hard to Tell. English Isn’t Her First Language, but the Communication Program I’m Using (I Lied, It’s Not Actually Email) Has a Translate Function She Could Use. in This Particular Instance She Was Just Replying, so All She Needed to Say Was ‘OK.’” Twitter, Twitter, 28 Feb. 2024, twitter.com/HannahPosted/status/1762946495324500456.

Sharma, S., & Yadav, R. (2023). Chat GPT – A Technological Remedy or Challenge for Education System. Global Journal of Enterprise Information System, 14(4), 46-51. Retrieved from https://www.gjeis.com/index.php/GJEIS/article/view/698

[/expander_maker]

Body Language and Technology: AI Expressing Human Emotional Body Language

Kissan Desai, Elizabeth Reza, Aaron Zarrabi

Within today’s society, artificial intelligence has reached levels that were once deemed unimaginable, advancing from simple computer programming to performing tasks such as mimicking human emotional body language (EBL). However, the question at hand is: to what extent can artificial intelligence “accurately” express human EBL? We answered this question through our own research on UCLA undergraduate juniors and seniors. We first asked participants to fill out a survey gathering their demographic information, followed by a Zoom interview for the experimental portion. Each participant was shown twelve images (six AI and six human) depicting EBL. Through our examinations, we discovered that AI does have the ability to accurately mimic specific human bodily emotions; however, humans are better able to identify emotions when expressed by other humans than by AI. When it came to ethnicity, culture, and gender, participants had split opinions on their effect on responses, as only some believed these factors played a role in their ability to correctly identify the EBL of humans and AI. Our research can help technology continue to evolve, possibly to a point where society can no longer distinguish between AI and humans.

[expander_maker id="1" more="Read more" less="Read less"]

Figure 1: The Dangers of Misinterpreting Body Language! (A scene in which nonverbal communication is completely misinterpreted, showing the consequences of the inability to understand.)

Introduction and Background

In order to communicate with others, humans utilize both verbal and nonverbal modes of communication. One mode of nonverbal communication is emotional body language (EBL). In our research, we defined EBL as physical behaviors, mannerisms, and facial expressions (whether purposeful or subconscious) that are perceived and treated as meaningful gestures relaying emotional significance to the onlooker (de Gelder, 2006). The progressive development of artificial intelligence allows AI to mimic human behaviors (Embgen et al., 2012), as well as to assess individuals’ communication, including EBL, as seen in the use of AI in job interviews (Nordmark, 2020). These combined factors led us to ask: to what extent can current AI “accurately” express human EBL?

Our target population for this research was college students, as they may potentially work alongside AI in their future workplaces. Our sample population was made up of UCLA juniors and seniors. The “accuracy” of AI’s EBL was based on the ability of our participants to identify AI expressing the emotions of anger, disgust, fear, happiness, sadness, and surprise, as compared to their ability to identify the same EBL expressed by humans. Through our research, we aimed to answer the question: To what extent can university students understand the emotions expressed by artificial intelligence through its utilization of EBL to communicate nonverbally?

We hypothesized that our participants would be able to identify AI EBL, though not as well as they would be able to identify human EBL. An additional caveat to our hypothesis was our belief that differences in EBL interpretations between our participants would be due to cultural differences harbored by our participants, as that would be a main difference between them, given they would all be around the same age range and go to the same school.

Through our project, we aimed to analyze differences in individuals’ interpretation of human and AI EBL, in the hope of drawing correlations between how different groups interpret certain gestures, or of identifying some universal EBL. Expanding on that, we were curious whether AI would be able to capture EBL that is naturally seen in humans (Beck et al., 2012). Finally, we recognize the importance of different cultural norms regarding the body and were attentive to this when asking others to study EBL, viewing not only the “multimodality in human interactions” but also the effects different cultures have through the lens of technology and tradition colliding with one another (Macfayden, 2023).

Methods

Our experiment utilized qualitative research and thematic analysis, as we looked for common themes within human-to-human and human-to-technology interaction. As previous experiments have shown (Embgen et al., 2012), humans are capable of identifying emotions through AI EBL. For our experiment, we surveyed and interviewed 14 UCLA juniors and seniors.

Each participant was asked to fill out a Google Form providing their demographic information.

Figure 2: Link to Google Form filled out by participants

https://docs.google.com/forms/d/e/1FAIpQLSf01ZG57zuMVyc99y3Ew1HtaRvEIxOj8lXhSUCt7u4VdX-xhg/viewform?vc=0&c=0&w=1&flr=0

Figure 3: Participants – Gender and Culture/Ethnicity (Left column contains 7 male participants and their respective ethnic/racial identities. Right column contains 7 female participants and their respective ethnic/racial identities.)

After the survey, we met with our participants via Zoom for an interview. During the interview, our camera and audio were off while the participants’ were on; this precaution was taken to avoid any possible bias from participants seeing our body language or hearing tonal changes in our voices. We presented the participants with two sets of six images depicting the emotions of anger, disgust, fear, happiness, sadness, and surprise. The images were sorted randomly, alternating between human and AI. The human images were created by us, while the AI images were of Kobian, a robot created at Waseda University in Japan that is able to display numerous human emotions (Takanishi Laboratory, 2015).
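The image-ordering step above (randomly sorted, alternating between human and AI, one image per emotion per source) can be sketched as follows. The emotion labels match those in the study, but the shuffling-and-interleaving logic is our illustrative assumption of one way to produce such a sequence, not the exact procedure used.

```python
# Illustrative sketch: build a randomized presentation order that alternates
# between human and AI images, one image per emotion per source.
import random

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def build_sequence(seed=None):
    """Return 12 (source, emotion) slides, strictly alternating human and AI."""
    rng = random.Random(seed)
    human = [("human", e) for e in EMOTIONS]
    ai = [("ai", e) for e in EMOTIONS]
    rng.shuffle(human)
    rng.shuffle(ai)
    # Randomly choose which source is shown first, then interleave the two sets.
    first, second = (human, ai) if rng.random() < 0.5 else (ai, human)
    sequence = []
    for pair in zip(first, second):
        sequence.extend(pair)
    return sequence

order = build_sequence()
print(order)  # twelve slides, sources alternating, each emotion appearing twice
```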

Figure 4: Emotional Sequences (The images on top are of the Kobian robot expressing human emotional body language. On the bottom are humans expressing human emotional body language.)

After each image, participants identified the emotion, elaborating on why they chose it and how they would express it themselves; they were then told which emotion had been shown. At the end of the interview, we asked participants whether they felt there were any discrepancies in their results due to their culture, ethnicity, or gender, something examined in the work of Miramar Damanhouri, who asserts that the use of body language and other forms of nonverbal communication can lead to misinterpretation, as different cultures and ethnicities have different rules and verbal cues (2018).

Figure 5: Link to Examine Interview Powerpoint and Process

https://docs.google.com/presentation/d/1ocxmTIu7mlOzKg7Y7jGVJn5i3BOIRVXSQpSu7iSWZxQ/edit?usp=sharing

Figure 6: Results – Total AI Correct (Black) vs. Total Human Correct (Red) (Each row shows the results for a single participant and the total AI/human images they identified correctly or incorrectly. At the bottom, the cumulative number of correctly identified AI images across all participants was 40, while the cumulative number of correctly identified human images was 61.)

Our results revealed which emotions participants identified with ease or difficulty. Participants had difficulty identifying the emotional expressions of Happiness (only 4 correct), Fear (only 1 correct), and Anger (only 5 correct) when expressed by AI. On the other hand, 14 participants correctly identified Happiness and 10 correctly identified Surprise when examining AI. When examining human EBL, participants had difficulty identifying Surprise (5 correct) and ease identifying Happiness (13 correct), Sadness (11 correct), Disgust (11 correct), and Anger (10 correct). On the whole, though, participants took more time to identify AI EBL, even for images they got correct, than human EBL, some of which they identified instantly.
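Per-emotion tallies like those above can be computed from the raw yes/no identification table with a short counting script. The response records below are hypothetical stand-ins for our interview data, included only to illustrate the counting.

```python
# Hypothetical sketch of tallying correct identifications per emotion and source.
# The response records are illustrative stand-ins for the real interview data.
from collections import Counter

# Each record: (participant, source, emotion, identified_correctly)
responses = [
    ("P1", "ai", "happiness", True),
    ("P1", "human", "happiness", True),
    ("P1", "ai", "fear", False),
    ("P2", "ai", "happiness", True),
    ("P2", "human", "surprise", False),
    ("P2", "human", "fear", True),
]

# Count correct identifications, keyed by (source, emotion).
correct = Counter(
    (source, emotion)
    for _, source, emotion, ok in responses
    if ok
)

# Cumulative correct counts per source, analogous to the 40 vs. 61 totals.
total_ai = sum(n for (source, _), n in correct.items() if source == "ai")
total_human = sum(n for (source, _), n in correct.items() if source == "human")

print(correct[("ai", "happiness")])  # 2
print(total_ai, total_human)         # 2 2
```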

Figure 7: Results Per Emotion (red is human; black is AI) (Table containing the results for each participant identifying the EBL displayed by both AI and humans, AI in black and human in red. A “yes” means it was correctly identified, while a “no” means it was not correctly identified.)

We hypothesized that differences in EBL interpretation would be due, in part, to the cultural, ethnic, and gender differences among our participants. When asked, however, many felt that their ethnicity, gender, or culture had not played a part in their responses: 6 said it had, 2 said maybe, and 7 said no. For example, one participant explained that individuals within their culture do not show much emotion and tend to keep a straight face, which affected their responses.

Overall, our results showed that humans identify emotions expressed by other humans better than those expressed by AI. It is important to note, however, that participants identified certain AI emotions more accurately than their human counterparts, showing at least some level of shared understanding between humans and AI, one that continues to grow as AI advances in its attempts to mimic human emotions.

Figure 8: Link to Results https://docs.google.com/spreadsheets/d/1TuvKT66sn2laSej50YjwplEUTSAX-DYyMMHUxYrVDMA/edit?usp=sharing

Discussion and Conclusions

In conversations with our participants, we found certain human tendencies in EBL that participants felt were lacking in the AI representations, such as smile lines around the eyes or flushed cheeks, both of which they considered integral cues for identifying certain emotions. These were characteristics the AI was unable to mimic in the images we used. We also found discrepancies in how participants viewed EBL based on how they personally expressed the emotion, including difficulty identifying the human “surprise” example and the AI “fear” example. We asked participants whether they felt they displayed the emotion in the same way as the image; when shown images they had difficulty identifying, a majority shared that they did not express the emotion in the same way. One factor that may have contributed to these difficulties is that the human images were made by us, the researchers, based on our own individual perceptions of the EBL. Not all EBL is universal, so differences of this kind could have affected those answers.

In addition, the split in how much our participants’ race, ethnicity, and gender affected how accurately they identified the displayed EBL was too close to draw any formal conclusions, so our hypothesis that discrepancies between their responses and the correct answers would be due to these factors can at this time be neither proven nor disproven. Further research is needed to analyze that question more thoroughly.

Overall, these findings can help fine-tune AI imitations of body language as AI continues to develop, since it has not yet mastered human EBL. Our findings also emphasize how both differences and uniformity in expressing oneself through body language create meaning through gestures that can be understood (or misunderstood) by others who express themselves in similar or different ways.

References

Damanhouri, M. (2018). The advantages and disadvantages of body language in Intercultural communication. Khazar Journal of Humanities And Social Sciences, 21(1), 68–82. https://doi.org/10.5782/2223-2621.2018.21.1.68

de Gelder, Beatrice. “Towards the Neurobiology of Emotional Body Language.” Nature Reviews Neuroscience, vol. 7, no. 3, Mar. 2006, pp. 242–49. www.nature.com, https://doi.org/10.1038/nrn1872.

Embgen, S., Luber, M., Becker-Asano, C., Ragni, M., Evers, V., & Arras, K. O. (2012). Robot-Specific Social Cues in Emotional Body Language. 2012 IEEE RO-MAN: The 21st IEEE International Symposium On Robot and Human Interactive Communication, 1019–1025. https://doi.org/10.1109/ROMAN.2012.6343883.

Beck, A., Stevens, B., Bard, K. A., & Cañamero, L. (2012, March 1). Emotional body language displayed by artificial agents. ACM Transactions on Interactive Intelligent Systems. Retrieved January 27, 2023, from https://dl.acm.org/doi/abs/10.1145/2133366.2133368

Macfayden, Leah, Virtual ethnicity: The new digitization of place, body, language, and … (n.d.). Retrieved January 27, 2023, from https://open.library.ubc.ca/media/download/pdf/52387/1.0058425/

Nordmark, V. (2020, March 16). What are ai interviews? Hubert. https://www.hubert.ai/insights/what-are-ai-interviews#%3A~%3Atext%3DAI%20assessment%20filters%20the%20applications%2Cof%20behavior%20and%20team%20fit 

Takanishi Laboratory. (2015, June 24). Emotion Expression Biped Humanoid Robot KOBIAN-RIII. Takanishi.mech.waseda.ac.jp. Retrieved February 21, 2023, from http://www.takanishi.mech.waseda.ac.jp/top/research/kobian/KOBIAN-R/index.htm

YouTube. (2015, November 5). Nonverbal communication- gestures. YouTube. Retrieved February 14, 2023, from https://www.youtube.com/watch?v=0cIo0PkBs2c

 [/expander_maker]