Article Type: Research Article (Qualitative)
Authors
1 Assistant Professor, Department of Public Administration, Payame Noor University, Tehran, Iran
2 PhD Student, Department of Management, Ukraine International Branch, Payame Noor University, Ukraine
Abstract
The present study aims to provide a model for developing employees' cognitive trust in artificial intelligence. The statistical population comprises senior managers of companies in Tehran province, 17 of whom were selected via non-probability purposive sampling. The study was conducted with a qualitative approach using grounded theory. Data were collected through in-depth semi-structured interviews and analyzed with open and axial coding. Interviews continued until data saturation and were then analyzed using MAXQDA 2022 software. The results show that the model comprises 16 subcategories grouped into six main classes. The causal factors, which include transparency, education and awareness, ethical compliance, and the definition of shared roles and goals, help strengthen employee trust. Contextual factors are organizational culture and organizational resources, and intervening factors are employee resistance and system complexity. Strategies include employee training and empowerment as key tools for improving human-machine interaction. Finally, the outcomes of this process include AI adoption, improved human-machine interaction, and increased organizational performance. The analysis emphasizes the importance of creating transparency, reducing system complexity, and improving employees' understanding of the mechanisms and benefits of AI in order to increase cognitive trust.
Introduction
The creation of value through digital technologies depends on users’ trust in these technologies. Trust in this context is recognized as a key factor for the acceptance and utilization of new digital technologies.
Numerous previous studies have examined the relationship between the transparency of AI systems and human trust in this technology and have obtained mixed results. Some studies have reported a positive relationship. For example, transparency in music recommendation systems can increase user trust (Mehrotra et al., 2024).
Given the significant advances in the field of artificial intelligence, how employees interact with these technologies and the level of trust they have in them has become one of the key challenges in workplaces. Cognitive trust is recognized as one of the fundamental pillars in human-machine relationships, which is based on employees' rational and logical assessments of the capabilities and competencies of artificial intelligence systems. This type of trust has a great impact on decision-making processes, group collaboration, and overall organizational performance (Lukyanenko et al., 2022).
Cognitive trust in artificial intelligence is of great importance because employees need to be confident in the capabilities of artificial intelligence in order to benefit from it in their decision-making processes and in performing their tasks. Research has shown that when employees trust the capabilities of artificial intelligence systems, their acceptance and effective use of these systems in the workplace increase. This trust is particularly important in environments where AI is used as a decision-support tool (Yu & Li, 2022).
Consequently, developing and strengthening employees' cognitive trust in AI in the workplace, especially given the technical and psychological complexities of this technology, is a key pillar in its effective adoption and use. The existing evidence and recent research make clear that transparency, continuous training, and realistic assessments of AI performance can help increase this trust. A more accurate understanding of how cognitive trust affects employee interactions with AI can therefore not only improve productivity and job satisfaction but also facilitate decision-making processes and improve organizational performance more broadly. Future research should thus examine these components in more detail and provide more practical models for improving cognitive trust in interactions with AI. The present study seeks to answer the question: what is the model for developing cognitive trust in AI?
Theoretical Framework
Artificial Intelligence and Trust in Organizations
Artificial intelligence, sometimes called machine intelligence, refers to intelligence displayed by machines in various situations, in contrast to the natural intelligence of humans (Bagheri et al., 2024). The use of AI in organizations can generate great value and significantly improve the productivity and effectiveness of organizational performance. In particular, AI can improve the accuracy of recommender systems, build user trust in these systems, and provide a better user experience (Cicek et al., 2025).
Employee Cognitive Trust
Cognitive trust is one of the main pillars of organizational relationships, formed on the basis of employees' rational assessments and conscious analyses of the capabilities, integrity, and predictability of others' behavior. In contrast to affective trust, which is based on emotions, cognitive trust focuses more on competence and professional capability (Cicek et al., 2025). This type of trust forms when people evaluate the capabilities and integrity of others through rational evidence and previous experience. Research has shown that cognitive trust has a significant impact on job performance and team interactions and, in complex organizational situations, plays a fundamental role in reducing conflict and promoting cooperation (Rajabi-Farjad & Atapour, 2021). In various theoretical models, cognitive trust typically comprises the dimensions of competence, integrity, and predictability: competence refers to an individual's ability to perform tasks, integrity refers to fair and ethical behavior, and predictability refers to the expectation that others' behavior will be consistent across situations (Choudhury, 2022). These dimensions jointly shape the formation and strengthening of cognitive trust in workplaces.
Research Methodology
The present research takes a qualitative approach, and its strategy is based on grounded theory. Within this method, a systematic procedure was followed to arrive at a paradigm model. The statistical population of this study included all senior managers of companies in Tehran province, consultants in this field, and academic experts. Sampling was conducted through non-probability purposive and snowball sampling.
Research findings
Seventeen interviews were analyzed. In the open coding stage, after reviewing the data and merging similar concepts, 613 initial concepts were reduced to 90 primary open codes and 45 secondary open codes. In the second stage, the secondary codes were classified by their relationship to similar topics and placed into 16 subcategories (components). In the final stage of open coding, these subcategories were grouped into more abstract main categories based on similarities, conceptual connections, and common characteristics among the open codes and concepts. In the axial coding stage, the components obtained from open coding were linked together as causal conditions, the core phenomenon, contextual factors, intervening factors, strategies, and consequences in a paradigm model. It should be noted that, given the length of the open coding stages, only secondary open codes are reported for each category.
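To make the reported paradigm model easier to scan, the following is a minimal sketch in Python of its structure. The six classes and the subcategory labels are taken from the abstract and conclusion; the data structure itself, the variable names, and the printout are illustrative assumptions, not output of the study or of MAXQDA.

```python
# Illustrative sketch of the paradigm model reported in the findings.
# Class labels and subcategories come from the paper's abstract and
# conclusion; only the subcategories named in the text are listed here
# (the full model contains 16), and the layout is an assumption.

paradigm_model = {
    "causal_conditions": [
        "transparency and explainability",
        "education and awareness",
        "ethical compliance",
        "defining shared roles and goals",
    ],
    # The core phenomenon is inferred from the study's stated aim.
    "core_phenomenon": ["employee cognitive trust in AI"],
    "contextual_factors": ["organizational culture", "organizational resources"],
    "intervening_factors": ["employee resistance", "system complexity"],
    "strategies": ["employee training", "employee empowerment"],
    "consequences": [
        "AI adoption",
        "improved human-machine interaction",
        "increased organizational performance",
    ],
}

listed = sum(len(v) for v in paradigm_model.values())
print(f"{len(paradigm_model)} classes, {listed} of 16 subcategories listed")
```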
Conclusion
The findings of this study indicate that employee training and empowerment are key tools for improving human-machine interaction, and that the outcomes of this process include the adoption of artificial intelligence, improved human-machine interaction, and increased organizational performance. At the causal level, a set of key components was identified that provides the context for the formation of cognitive trust. Transparency and explainability of AI performance play an important role in employees' accurate and reassuring understanding of these systems, because users trust systems more easily when they are aware of how decisions are made.
Contextual factors, namely cultural and organizational contexts, also have a profound impact on the formation or weakening of this type of trust. A learning-oriented, collaborative, and technology-oriented organizational culture facilitates the faster adoption of new technologies and trust in AI systems. In addition, organizational resources, including expert human resources, budget, technical infrastructure, and senior management support, facilitate the path to trust building when present and act as an obstacle when absent. With regard to ethical compliance and bias prevention, the results also showed that employees trust AI systems more when those systems use accurate and transparent data for decision-making and comply with ethical rules and principles.
Based on the results of the study, the following suggestions are offered:
Designing formal and informal communication platforms, such as inter-unit meetings, organizational social networks, and collaborative software, to facilitate effective communication between employees and different departments
Establishing standard information security frameworks and periodically implementing data quality controls to ensure the accuracy, completeness, and currency of information used in decision-making
Designing user-friendly interfaces and training employees to work with intelligent systems, so that understanding, controlling, and predicting system behavior becomes simple and reliable for users