At the forefront of AI strategy…
Our Research
At AIXosphere, our research is purposeful, forward-looking, and actionable. We work to meet the needs of our clients. Our research produces unique insights, innovation in new products and services, and custom policy frameworks.
For more information: Link to Paper
Explosive growth in big data technologies and artificial intelligence (AI) applications has led to the increasing pervasiveness of information facets and a rapidly growing array of information representations.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information and consequently affect human performance. Extant research in cognitive fit, which preceded the big data and AI era, focused on the effects of aligning information representation and task on performance, without sufficient consideration of information facets and the attendant cognitive challenges.
Therefore, there is a compelling need to understand the interplay of these dominant information facets with information representations and tasks, and their influence on human performance. We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary for these complex information environments. To this end, we propose and test a novel “Adaptive Cognitive Fit” (ACF) framework that explains the influence of information facets and AI-augmented information representations on human performance. We draw on information processing theory and cognitive dissonance theory to advance the ACF framework and a set of propositions. We empirically validate the ACF propositions with an economic experiment that demonstrates the influence of information facets, and a machine learning simulation that establishes the viability of using AI to improve human performance.
For more information: Link to Paper
Contemporary public discourse surrounding artificial intelligence (AI) often displays disproportionate fear and confusion relative to AI’s actual potential.
This study examines how the use of alarmist and fear-inducing language by news media contributes to negative public perceptions of AI. Nearly 70,000 AI-related news headlines were analyzed using natural language processing (NLP), machine learning (ML), and large language models (LLMs) to identify dominant themes and sentiment patterns. The theoretical framework draws on existing literature that posits the power of fear-inducing headlines to influence public perception and behavior, even when such headlines represent a relatively small proportion of total coverage.
This research applies topic modeling and fear sentiment classification using BERT, LLaMA, and Mistral, alongside supervised ML techniques. The findings show a persistent presence of emotionally negative and fear-laden language in AI news coverage. This portrayal of AI as dangerous to humans or as an existential threat profoundly shapes public perception, fueling AI phobia that leads to behavioral resistance toward AI, which is ultimately detrimental to the science of AI. Furthermore, this can have an adverse impact on AI policies and regulations, leading to a stunted growth environment for AI. The study concludes with implications and recommendations to counter fear-driven narratives and suggests ways to improve public understanding of AI through responsible news media coverage, broad AI education, democratization of AI resources, and the drawing of clear distinctions between AI as a science versus commercial AI applications, to promote enhanced fact-based mass engagement with AI while preserving human dignity and agency.
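The headline-labeling task described above can be conveyed with a deliberately simple sketch. The study's actual classifiers are BERT, LLaMA, and Mistral; the fear lexicon, threshold, and example headlines below are invented stand-ins that only illustrate the shape of fear-sentiment classification.

```python
# Toy fear-sentiment labeler for AI news headlines. The word list and the
# 0.1 threshold are illustrative assumptions, not the study's method.
FEAR_TERMS = {"threat", "danger", "destroy", "apocalypse", "risk", "fear",
              "takeover", "extinction"}

def fear_score(headline: str) -> float:
    """Fraction of headline tokens that match the fear lexicon."""
    tokens = [t.strip(".,!?\"'").lower() for t in headline.split()]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in FEAR_TERMS)
    return hits / len(tokens)

def label(headline: str, threshold: float = 0.1) -> str:
    """Binary fear/neutral label based on lexicon density."""
    return "fear" if fear_score(headline) > threshold else "neutral"
```

A transformer classifier replaces this lexicon lookup with learned contextual representations, but the input-to-label contract is the same.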
For more information: Link to Paper
The rapid rise of generative artificial intelligence (AI) and generative AI-based chatbots has prompted growing concerns over their true economic value.
While industry sources are reporting projected or anticipated productivity gains and cost savings, a dependable analysis of these claims remains absent. This paper reviews extant research, economic studies, industry reports, and other pertinent information to evaluate the impacts of generative AI and chatbots, especially on return on investment (ROI) measures. We offer an analytical view of the financial impacts of generative AI, especially chatbots, with cases and pricing information from generative AI service providers. Our goal is to provide policymakers, business leaders, and researchers with an exploratory understanding of how business value can be unlocked with generative AI. This is an abbreviated version of the paper to meet the page limit of the conference.
For more information: Link to Paper
An understudied area in the field of social media research is the design of decision support systems that can aid the manager by way of automated message component generation.
Recent advances in this form of artificial intelligence have been suggested to allow content creators and managers to transcend their tasks from creation toward editing, thus overcoming a common problem: the tyranny of the blank screen. In this research, we address this topic by proposing a novel system design that will suggest engagement-driven message features as well as automatically generate critical and fully written unique Tweet message components, with the goal of maximizing the probability of relatively high engagement levels. Our multi-methods design relies on the use of econometrics, machine learning, and Bayesian statistics, all of which are widely used in the emerging fields of Business and Marketing Analytics. Our system design is intended to analyze Tweet messages for the purpose of generating the most critical components and structure of Tweets. We propose econometric models to judge the quality of written Tweets by way of engagement-level prediction, as well as a generative probability model for the auto-generation of Tweet messages. Testing of our design demonstrates the need to take into account the contextual, semantic, and syntactic features of messages, while controlling for individual user characteristics, so that generated Tweet components and structure maximize the potential engagement levels.
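As a rough illustration of what a generative probability model for Tweet text does, a bigram model samples each next word from those observed to follow the current word. This is only a sketch of the idea; the paper's actual design also draws on econometrics and Bayesian statistics, and the training tweets below are invented.

```python
import random
from collections import defaultdict

# Invented mini-corpus standing in for a real Tweet training set.
corpus = [
    "big savings on new arrivals today",
    "big news today for our loyal fans",
    "new arrivals for summer are here today",
]

def train_bigrams(tweets):
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model, start, max_len=8, seed=0):
    """Sample a word sequence by repeatedly drawing an observed next word."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_len and out[-1] in model:
        out.append(rng.choice(model[out[-1]]))
    return " ".join(out)
```

An engagement-prediction model would then score candidate generations, keeping only those with high predicted engagement.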
For more information: Link to Paper
Artificial Intelligence (AI) has become ubiquitous in human society, and yet vast segments of the global population have no, little, or counterproductive information about AI.
It is necessary to teach AI topics on a mass scale. While there is a rush to implement academic initiatives, scant attention has been paid to the unique challenges of teaching AI curricula to a global and culturally diverse audience with varying expectations of privacy, technological autonomy, risk preference, and knowledge sharing. Our study fills this void by focusing on AI elements in a new framework titled Culturally Adaptive Thinking in Education for AI (CATE-AI) to enable teaching AI concepts to culturally diverse learners. Failure to contextualize and sensitize AI education to culture and other categorical human-thought clusters can lead to several undesirable effects, including confusion, AI-phobia, cultural biases toward AI, and increased resistance toward AI technologies and AI education. We discuss and integrate human behavior theories, AI applications research, educational frameworks, and human-centered AI principles to articulate CATE-AI. In the first part of this paper, we present the development of a significantly enhanced version of CATE. In the second part, we explore textual data from AI-related news articles to generate insights that lay the foundation for CATE-AI and support our findings. The CATE-AI framework can help learners study artificial intelligence topics more effectively by serving as a basis for adapting and contextualizing AI to their sociocultural needs.
For more information: Link to Paper
Advanced artificial intelligence (AI) techniques have led to significant developments in optical character recognition (OCR) technologies.
OCR applications, using AI techniques to transform images of typed text, handwritten text, or other forms of text into machine-encoded text, provide a fair degree of accuracy for general text. However, even after decades of intensive research, creating OCR with human-like abilities has remained elusive. One of the challenges has been that OCR models trained on general text do not perform well on localized or personalized handwritten text due to differences in the writing style of alphabets and digits. This study discusses the steps needed to create an adaptive framework for OCR models, exploring a reasonable method to customize an OCR solution for a unique dataset of English-language numerical digits developed for this study. We develop a digit recognizer by training our model on the MNIST dataset with a convolutional neural network and contrast it with multiple models trained on combinations of the MNIST and custom digits. Using our methods, we observed results comparable with the baseline and provide recommendations for improving OCR accuracy for localized or personalized handwritten text. This study also provides an alternative perspective to generating data using conventional methods, which can serve as a gold standard for custom data augmentation to help address the challenges of scarce data and data imbalance.
For more information: Link to Paper
The birth of a child brings immense joy to a mother’s life. However, the reality can be different for mothers experiencing Postpartum Depression (PPD).
According to the World Health Organization (WHO), around 13% of women experience postpartum mental health disorders, with rates rising to nearly 20% in developing countries. PPD is a condition that affects many women worldwide, but because of the social stigma and the lack of accessible mental health support, it often goes undiagnosed or untreated. This paper presents MOMCare, a chatbot designed to support mothers navigating the challenges of PPD. MOMCare has a retrieval-augmented architecture with an end-to-end pipeline from data preprocessing to response generation. It employs hybrid classification, a dual embedding system, a dual verification guardrail, and a medical domain-specific reranking mechanism to generate empathetic and relevant PPD responses.
This refined design of Retrieval-Augmented Generation (RAG) ensures fast and factual responses by reducing noise in retrieval and providing abundant context to gpt-3.5-turbo. MOMCare was evaluated using both automated and human metrics. Results show strong performance in both evaluations, which underlines the potential for chatbot interventions in the postpartum mental health domain. The system is robust enough to take in new data and incorporate it into the conversation-generation pipeline. Expanding the knowledge base using conversation history with users is also in development. The MOMCare chatbot and its features were built on sound ethical principles of healthcare and Artificial Intelligence (AI) and present a strong design emphasis on safety and fairness. Note: This is the accepted manuscript of a paper accepted for publication in the Springer proceedings (Smart Innovation, Systems and Technologies series) of the 10th International Conference on Information and Communication Technology for Intelligent Systems (ICTIS 2025), held in New York on May 23, 2025. The final version will be published on SpringerLink.
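The retrieve-then-rerank step described above can be sketched minimally. This is not MOMCare's implementation: the knowledge entries, the keyword-overlap retrieval, and the term-boost "reranker" below are invented stand-ins for the system's embedding-based retrieval and medical domain-specific reranking mechanism.

```python
# Invented placeholder knowledge base and medical-term list.
KNOWLEDGE_BASE = [
    "Postpartum depression is treatable; therapy and support groups help.",
    "Sleep deprivation is common for new mothers.",
    "PPD symptoms include persistent sadness and loss of interest.",
]
MEDICAL_TERMS = {"postpartum", "depression", "ppd", "symptoms", "therapy"}

def overlap(query: str, doc: str) -> int:
    """Crude relevance: count of shared words between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def rerank_score(query: str, doc: str) -> float:
    """Boost documents rich in medical vocabulary, mimicking a
    domain-specific reranking pass over the initial retrieval."""
    boost = sum(1 for w in doc.lower().split()
                if w.strip(".,;") in MEDICAL_TERMS)
    return overlap(query, doc) + 0.5 * boost

def retrieve(query: str, k: int = 1):
    """Return the top-k documents after reranking."""
    return sorted(KNOWLEDGE_BASE,
                  key=lambda d: rerank_score(query, d), reverse=True)[:k]
```

In a full RAG pipeline, the top-ranked passages become the context handed to the generator model.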
For more information: Link to Paper
Along with the Coronavirus pandemic, another crisis has manifested itself in the form of mass fear and panic phenomena, fueled by incomplete and often inaccurate information.
There is therefore a tremendous need to address and better understand COVID-19’s informational crisis and gauge public sentiment, so that appropriate messaging and policy decisions can be implemented. In this research article, we identify public sentiment associated with the pandemic using Coronavirus-specific Tweets and R statistical software, along with its sentiment analysis packages. We demonstrate insights into the progress of fear sentiment over time as COVID-19 approached peak levels in the United States, using descriptive textual analytics supported by necessary textual data visualizations. Furthermore, we provide a methodological overview of two essential machine learning classification methods in the context of textual analytics and compare their effectiveness in classifying Coronavirus Tweets of varying lengths. We observe a strong classification accuracy of 91% for short Tweets with the Naïve Bayes method. We also observe that the logistic regression classification method provides a reasonable accuracy of 74% with shorter Tweets, and both methods showed relatively weaker performance for longer Tweets. This research provides insights into Coronavirus fear-sentiment progression and outlines associated methods, implications, limitations, and opportunities.
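The Naïve Bayes method referenced above can be shown from scratch. The study itself used R's text-analytics packages; this Python sketch, trained on a few invented example Tweets, only illustrates the multinomial Naïve Bayes computation with Laplace smoothing.

```python
import math
from collections import Counter, defaultdict

# Invented labeled Tweets standing in for the study's training data.
train = [
    ("i am so scared of this virus", "fear"),
    ("terrified about the outbreak", "fear"),
    ("grateful for our health workers", "positive"),
    ("feeling hopeful and thankful today", "positive"),
]

def fit(data):
    """Count per-class word frequencies and class sizes."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for text, y in data:
        class_counts[y] += 1
        for w in text.split():
            word_counts[y][w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def predict(text, word_counts, class_counts, vocab):
    """Pick the class with the highest smoothed log-likelihood."""
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for y in class_counts:
        lp = math.log(class_counts[y] / total)          # log prior
        denom = sum(word_counts[y].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[y][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = y, lp
    return best
```

With realistic corpora, the same computation is what reaches the accuracy levels reported above for short Tweets.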
For more information: Link to Paper
Mental health is a growing concern across demographics, with one in five adults (National Institute of Mental Health, 2022) and one in seven children aged three to seventeen (Centers for Disease Control and Prevention, 2023) in the United States diagnosed with a mental health condition.
Despite it being a prevalent issue, access to mental health support remains limited for many people, a challenge exacerbated by the pandemic (Lattie, 2022). In recent years, AI chatbots have emerged as a potential avenue to overcome these obstacles. With the rise of the development and use of such mental health support chatbots, it has been integral to have evaluation frameworks that ensure that these chatbots consistently provide empathetic, safe, and effective responses to the users.
For this purpose, this paper introduces ESHRO, an innovative evaluation framework to analyze the LLM-generated responses on five critical metrics: Empathy, Safety, Helpfulness, Relevance, and Overall Quality. By incorporating multidimensional metrics and integrating both automated and human evaluation, ESHRO overcomes many limitations of existing frameworks. Moreover, to showcase its application, we developed ELY Chatbot, an AI-driven mental health chatbot developed to deliver emotional support and motivation. We utilized the ESHRO framework to evaluate it. The ESHRO framework demonstrates the potential to improve evaluations of mental health chatbots. The paper concludes by discussing limitations and highlighting opportunities for future research, ultimately paving the way for safer, more empathetic, and more impactful mental health solutions.
For more information: Link to Paper
Models represent reality, and the models we as humans create have traditionally been designed to produce some output.
Artificial intelligence (AI) can be viewed as a model of human intelligence’s capabilities, at least in part. In this sense, AI ‘machines’ have been generative since the field’s inception in the 1950s, and we should not have been surprised by what we are now seeing in the form of “generative AI” (gen AI) applications, but we are! The recent widespread appreciation of the generative aspects of AI applications stems from the easy availability of such AIs to the masses (all that is needed is a connected browser on any device!), the increased speeds at which gen AI outputs are being churned out, and the impressive usefulness of such rapidly created output. Gen AI has achieved fast-food status on a consumer level, and it can be industrialized, commoditized, and woven into the socioeconomic fabric of human society. Combined with the power of strategic human-enhancive AI architectures such as adaptive cognitive fit (ACF), we can anticipate gen AI helping to unleash iterations of rapid and complex advancements, with purported benefits that will be treated as hyper-value creation opportunities, and hitherto obscure risks (Samuel et al., 2022). The focus should eventually shift to ACF and similar architectures, which will help nurture a society that supports mass-human ascendancy over AIs, as opposed to the converse.
For more information: Link to Paper
Artificial intelligence (AI) is expected to transform the future, and conversational AI is a critical part of this transformation (Samuel et al., 2024). Emojis have become integral to digital communication, conveying nuanced emotional cues that enrich textual meaning (Bai et al., 2019). However, current large language model (LLM) chatbots struggle with emojis: they often misinterpret, or fail to recognize, the meaning of emojis, making their replies irrelevant (Delobelle et al., 2019; Xie et al., 2025). In our experiments, LLMs demonstrate a mere 55% accuracy in emoji classification. To address this, we propose an emoji-augmented AI chatbot (EACh) that incorporates natural language understanding (NLU) and natural language generation (NLG) for emoji-aware social communication, through a design that enhances the LLM’s ability to interpret emojis and generate appropriate ones.
Our approach utilizes a Retrieval-Augmented Generation (RAG) architecture to incorporate emoji-specific knowledge into the LLM’s reasoning (Jiang et al., 2023). EACh performs two functions: emoji interpretation (when a user message contains an emoji, the system retrieves its entry from an emoji-specific knowledge database to help the model accurately interpret its intended meaning and emotional context) and emoji generation (when formulating a response, the model queries the database for an emoji that aligns with the desired tone or sentiment of the message, ensuring it is contextually appropriate). Sentiment and emotion analysis, along with generative modeling, have been critical facets of NLU (Samuel et al., 2020; Garvey et al., 2021).
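The two retrieval functions described above can be sketched as plain lookups. The emoji knowledge entries below are invented placeholders for the emoji-specific database; in the actual design, retrieved entries are passed into the LLM's context rather than returned directly.

```python
# Invented placeholder entries standing in for the emoji knowledge database.
EMOJI_KB = {
    "😂": "face with tears of joy: amusement, something is very funny",
    "🔥": "fire: excitement, something is excellent or trending",
    "😢": "crying face: sadness or disappointment",
}

def interpret(message: str):
    """Emoji interpretation: retrieve KB entries for emojis in the message,
    so the model can ground its reading of their intended meaning."""
    return {ch: EMOJI_KB[ch] for ch in message if ch in EMOJI_KB}

def pick_emoji(tone: str) -> str:
    """Emoji generation: return an emoji whose KB entry matches the
    desired tone or sentiment of the reply."""
    for emoji, entry in EMOJI_KB.items():
        if tone in entry:
            return emoji
    return ""
```

In the full system, embedding-based retrieval would replace the exact-match lookups shown here.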
For more information: Link to Paper
Wildfires have continued to be a significant threat to communities globally. Artificial Intelligence (AI) technologies hold tremendous promise for the future in many domains, and we posit that they can play a critical role in predicting and preventing wildfires (Samuel et al., 2024).
By harnessing data analysis and machine learning, AI can detect high-risk areas, predict fire behavior, and provide early warnings (Western Fire Chiefs Association, 2024). The NOAA reported significant drought and heat with wildfires that devastated several western states from 2020 to 2022. Each of those years exceeded the 1.2 million acres burned since 2016 (NOAA, 2023). In 2023, the devastating Lahaina fire in Maui killed 114 and left about 850 missing, while the Olinda and Kula fires burned 1,081 and 202 acres, respectively (WFCA, 2023). On January 7, 2025, deadly wildfires in Los Angeles killed 29 people, including residents defending their homes. The Palisades fire burned 23,448 acres and destroyed over 6,800 structures, while the Eaton Fire reached 14,021 acres and wiped out 10,491 structures (Stelloh et al., 2025).
AI assistants like Alexa provide weather forecasts, and a wildfire AI chatbot could similarly assess wildfire risk and offer emergency guidance using AI techniques, big data, and high-performance computing (Samuel et al., 2022). This paper presents an efficient solution using a wildfire AI chatbot to help local firefighters, law enforcement, and the public detect wildfire threats and access critical information during such emergencies for evacuation, home protection, preparing a disaster supply kit, developing a family communication plan, and resource allocation (Habitat for Humanity, 2025). The research develops a personalized chatbot for wildfire risk classification that enhances community preparedness, empowers residents with tailored safety measures, and supports first responders during emergencies using AI. Users can ask the chatbot any wildfire-related question, and the chatbot will process the content and context to provide helpful and accurate information. The goal is to design an intelligent system that understands user needs and delivers tailored information, empowering residents to take timely safety measures.
For more information: Link to Paper
The Coronavirus pandemic has created complex challenges and adverse circumstances.
This research identifies public sentiment amidst the problematic socioeconomic consequences of the lockdown and explores four ensuing potential public-sentiment scenarios. The severity and brutality of COVID-19 have led to the development of extreme feelings and to emotional and mental healthcare challenges. This research focuses on emotional consequences: the presence of extreme fear, confusion, and volatile sentiments, mixed along with trust and anticipation. It is necessary to gauge dominant public sentiment trends for effective decisions and policies. This study analyzes public sentiment using Twitter data, time-aligned to the COVID-19 reopening debate, to identify dominant sentiment trends associated with the push to reopen the economy. The present research uses textual analytics methodologies to analyze public sentiment support for two potential divergent scenarios, an early opening and a delayed opening, and the consequences of each. On the basis of textual data analytics, including textual data visualization and statistical validation, we conclude that Tweet data from American Twitter users shows more positive than negative sentiment support for reopening the US economy. This research develops a novel sentiment-polarity-based public sentiment scenarios (PSS) framework, which will remain useful for future crisis analysis, well beyond COVID-19. With additional validation, this research stream could present valuable time-sensitive opportunities for state governments, the federal government, corporations, and societal leaders to guide local and regional communities, and the nation, into a successful new normal future.
For more information: Link to Paper
As artificial intelligence (AI) technologies cross over a vital threshold of competitiveness with human intelligence, it is necessary to properly frame critical questions in the service of shaping policy and governance while sustaining human values and identity.
Given AI’s vast socioeconomic implications, government actors and technology creators must proactively address the unique and emerging ethical concerns that are inherent to AI’s many uses.
AI can be viewed as an adaptive “set of technologies that mimic the functions and expressions of human intelligence, specifically cognition, and logic.” In the AI field, foundation models (FMs) are more or less what they sound like: large, complex models that have been trained on vast quantities of digital general information that may then be adapted for more specific uses. Two notable features of foundation models include a propensity to gain new and often unexpected capabilities as they increase in scale (“emergence”), and a growing predisposition to serve as a common “intelligence base” for differing specialized functions and AI applications (“homogenization”). Large language models (LLMs) that power applications like ChatGPT are foundation models with a focus on modeling human language, knowledge, and logic. Advanced AIs and foundation models have the potential to replace multiple task-specific or narrow AIs due to their scale and flexibility, which increases the risk of a few powerful persons or entities who control these advanced AIs gaining extraordinary socioeconomic power, creating conditions for mass exploitation and abuse.
For more information: Link to Paper
In this Editorial, we highlight the emerging dominance of AI + Big Data; here are some excerpts: We have entered the age of Artificial Intelligence (AI).
Everything around us is becoming artificially intelligent: from business applications to healthcare, education to finance, and governance to art, music, and entertainment. The fact that AI has gripped public attention is evident from the steep rise in public engagement with artificial intelligence applications, the explosive increase in news media coverage of AI, increasing volumes of social media posts, and the mushrooming of a range of AI ecosystem initiatives. We at JBDAI (formerly JBDTP) hope to encourage and foster much high-quality research, rigor, and innovative thought leadership on big data and artificial intelligence in the years ahead, supporting human well-being, the sustainability of our natural resources, and balanced societal progress. Please contribute to JBDAI and be a part of this exciting intellectual adventure!
For more information: Link to Paper
Generative Artificial Intelligence (Gen AI) has transformative potential in healthcare to enhance patient care, personalize treatment options, train healthcare professionals, and advance medical research.
This paper examines various clinical and non-clinical applications of Gen AI. In clinical settings, Gen AI supports the creation of customized treatment plans, generation of synthetic data, analysis of medical images, nursing workflow management, risk prediction, pandemic preparedness, and population health management. By automating administrative tasks such as medical documentation, Gen AI has the potential to reduce clinician burnout, freeing more time for direct patient care. Furthermore, the application of Gen AI may enhance surgical outcomes by providing real-time feedback and automation of certain tasks in operating rooms. The generation of synthetic data opens new avenues for disease model training and simulation, enhancing research capabilities and improving predictive accuracy. In non-clinical contexts, Gen AI improves medical education, public relations, revenue cycle management, and healthcare marketing. Its capacity for continuous learning and adaptation enables it to drive ongoing improvements in clinical and operational efficiencies, making healthcare delivery more proactive, predictive, and precise.
For more information: Link to Paper
This study develops a generative artificial intelligence (AI) university support system (GenAI-USS) by improving upon retrieval-augmented generation (RAG) architecture to raise the performance of large language models (LLMs) in a way that supports stepwise transparency. We aim to achieve better transparency and flexibility, and improved accuracy of responses to queries based on university data assimilated from university webpages and knowledge sources. We use RAG to develop a plug-and-play mechanism, along with prompt selection to boost LLM accuracy. One of the key components in our GenAI-USS is the capture and integration of real-time information via live retrieval into the generative AI process. This domain-specific knowledge assimilation with real-time updates to capture changes and new information serves as a specialized dynamic expert knowledge database for RAG. Our RAG mechanism pulls in relevant, up-to-date information from the dynamic database, which draws real-time data from targeted, predetermined knowledge sources.
The other key component in our GenAI-USS design is the deliberately designed information processing visibility at each stage of the process to ensure full transparency, and this includes the following: overview, data collection, storage encoding, testing, chatbot interaction, and search. The testing module allows for interactive viewing of generated responses and their sources. Our strategy is expected to lead to higher-quality AI-generated output via targeted information retrieval, hallucination mitigation, accuracy improvement, and timely data updates. Essentially, on the submission of a query, the RAG-dependent GenAI-USS first identifies the most relevant information from the specialized expert knowledge database and then factors this into the generative AI response development process. This results in a successful implementation of our primary objectives of a transparent and flexible user-choice–driven RAG-based generative AI system, which also provided heuristically notable improvements in the quality of output produced.
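The query path described above (retrieve from the knowledge store, then generate, with visibility at each stage) can be walked through schematically. The store contents and the stubbed generate() below are invented; the real GenAI-USS calls an LLM and draws on live, dynamically updated sources.

```python
# Invented stand-in for the specialized dynamic expert knowledge database.
KNOWLEDGE_STORE = {
    "registration deadline": "Fall registration closes on the date posted by the registrar.",
    "library hours": "The library publishes current hours on its webpage.",
}

def retrieve(query: str) -> str:
    """Find the most relevant stored passage (toy substring match)."""
    for key, passage in KNOWLEDGE_STORE.items():
        if key in query.lower():
            return passage
    return ""

def generate(query: str, context: str) -> str:
    """Stub standing in for the LLM call that conditions on retrieved context."""
    return f"Based on university sources: {context}" if context else "No source found."

def answer(query: str):
    """RAG pipeline with a stage-by-stage trace for transparency."""
    trace = [("search", query)]
    context = retrieve(query)
    trace.append(("retrieval", context))
    response = generate(query, context)
    trace.append(("generation", response))
    return response, trace
```

The returned trace mirrors the design's stage-level visibility: a user can inspect what was searched, what was retrieved, and what the generator produced from it.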
For more information: Link to Paper
A new era of artificial intelligence has begun, wherein artificial intelligence (AI) has emerged as a dominant societal paradigm that increasingly influences nearly every sphere of human life (Samuel et al., 2024a).
While AI holds great promise, it also gives rise to hitherto unidentified problems and uncertainties – the emerging complexities of socio-technical challenges associated with human-like AI are increasing and are not expected to be resolved in the foreseeable future (Brynjolfsson, 2022). Extant research posits that the broad and explosive development of AI technologies, while advantageous, is also fraught with risks and the emergence of sophisticated new threats across domains such as medicine, education, law and governance, and military, among others (Hashimoto et al., 2018; Jensen et al., 2020; Köbis et al., 2022; Hendrycks et al., 2023; Park et al., 2023). While technological revolutions are often marked by chaos, confusion, and fear, these challenges have been amplified by a combination of the unprecedented potential for rapid transformation of human society and the fragmented, and often AI-phobic public information about AI (Samuel et al., 2024b).
To counter the potentially destabilizing effects of AI on society, it is necessary to establish research, policy, education, and practice initiatives that avoid harms and minimize the risks associated with the deployment of AI technologies. Additionally, we must ensure that we preserve the core values that guide individuals, cultures, and societies while supporting rapid advancements in AI. As the famous philosopher Alfred N. Whitehead eloquently stated, “The art of progress is to preserve order amid change and to preserve change amid order,” reminding us of the delicate balance needed during times of rapid innovation.
For more information: Link to Paper
In this era of artificial intelligence (AI), ambiguity presents a significant challenge for information and communication management, often leading to misinterpretations and inefficiencies.
The rise of generative AI (Gen AI) has further amplified this issue by producing text that is often unclear or open to multiple interpretations, which could impact decision-making in many critical areas. To address these challenges, we propose AICMA: a framework for AI-driven Identification, Classification, and Mitigation of Ambiguity. The AICMA framework consists of three core stages: Identification, which determines whether a given sentence is ambiguous or not; Classification, which classifies a sentence based on ambiguity level (High – Ambiguous, Low – Ambiguous, or Not – Ambiguous); and Mitigation, which utilizes large language models (LLMs) to adapt and regenerate ambiguous sentences for enhanced clarity while preserving their original intent. By improving textual interpretability, AICMA offers significant value across various domains, including education, healthcare, policy-making, and beyond.
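The three AICMA stages can be sketched as a skeleton pipeline. The cue-word heuristics and the rewrite rule below are invented stand-ins; the framework itself uses trained models and LLMs for each stage.

```python
# Invented cue words used only to make the three stages runnable.
AMBIGUOUS_CUES = {"it", "they", "this", "that", "some", "thing"}

def identify(sentence: str) -> bool:
    """Stage 1 (Identification): flag a sentence as potentially ambiguous."""
    return any(w in AMBIGUOUS_CUES for w in sentence.lower().split())

def classify(sentence: str) -> str:
    """Stage 2 (Classification): bucket by ambiguity level via cue density."""
    words = sentence.lower().split()
    hits = sum(1 for w in words if w in AMBIGUOUS_CUES)
    if hits == 0:
        return "Not-Ambiguous"
    return "High-Ambiguous" if hits / len(words) > 0.2 else "Low-Ambiguous"

def mitigate(sentence: str, referent: str) -> str:
    """Stage 3 (Mitigation): regenerate with the referent made explicit.
    An LLM would do this while preserving the original intent."""
    return " ".join(referent if w.lower() == "it" else w
                    for w in sentence.split())
```

The contract between stages, flag, grade, then regenerate for clarity, is the point; each heuristic body is a placeholder for a learned component.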
This framework is based on our theoretical framing of ambiguity, particularly in AI-generated text. It has the potential to contribute to developing more robust and reliable AI systems that produce clearer and more interpretable outputs. Its adaptability allows it to be integrated into existing AI systems, making it a versatile framework for developers and researchers aiming to enhance information and communication effectiveness. Ultimately, AICMA represents a significant step forward in addressing the complexities of ambiguity in AI-generated text, paving the way for more transparent and effective AI-driven text and speech generation solutions, from conversational agents to critical decision-support systems.
For more information: Link to Paper
Chatbots, or AI-powered conversational systems, are increasingly embedded in daily life, from customer support and scheduling to education, budgeting, and emotional companionship.
As adoption grows across sectors, they reshape human–technology interaction while raising societal concerns. This study examines public perceptions and social impacts of chatbots through a literature review, expert insights, case studies, and analysis of discussions on a popular subreddit. Sentiment and thematic analysis reveal mixed experiences shaped by trust, usability, ethics, and emotional reliance. Findings highlight both opportunities and risks, underscoring the importance of transparent, inclusive, and value-aligned chatbot design. We conclude with policy recommendations grounded in principles of human-enhancing AI at organizational, individual, and societal levels.
For more information: Link to Paper
Globally, there has been a rapid increase in the availability of online gambling. As online gambling has increased in popularity, there has been a corresponding increase in online communities that discuss gambling.
The movement of gambling and communities interested in gambling to online spaces presents new challenges to harm reduction. The current study analyses a forum from a popular online forum hosting website (reddit.com) to determine its suitability as a source for data to inform gambling harm reduction in online spaces.
The current study provides an exploratory analysis of 1,141 unique posts and 11,668 comments collected from the online forum r/onlinegambling. The dataset covers posts and comments from August 5, 2015, to October 30. Natural language processing (NLP) techniques were used to identify common terms and phrases, identify topics with high rates of participant engagement, and perform a sentiment analysis of posts and comments.
Sentiment analysis results showed that the majority of posts and comments were positive, but there were substantial numbers of negative and neutral content. Positive content was often congratulatory and focused on winning, neutral posts more commonly focused on practical advice, and negative posts were more commonly concerned with avoiding operators perceived as illegitimate by forum participants.
Unlocking the Potential of AI: Our research extends to additional areas, including AI for agriculture and human-centric robotics!