Purpose – The main objective of this study is to analyse the barriers that limit the adoption of artificial intelligence (AI) in small and medium-sized enterprises (SMEs). To do so, seven risk categories are examined: functional (FUR), hedonic (HR), security (SR), social (SOR), financial (FR), performance (PR) and psychological (PYR). These seven categories were chosen on the basis of a comprehensive review of the literature on technology adoption and perceived risk, and are considered to capture the main risks associated with AI as perceived by SMEs. In addition, the influence of technostress (TE), distrust (DT) and hesitancy (HES) on the adoption process is investigated.

Design/methodology/approach – This study contributes to the analysis of the barriers limiting the adoption of AI in SMEs through a dual methodological approach that integrates PLS-SEM and fsQCA techniques. The seven categories of perceived risk (functional, hedonic, security, social, financial, performance and psychological), along with psychological and organisational factors such as technostress, distrust and hesitancy, were examined using a representative sample of 395 SMEs. This dual approach allowed us to identify not only significant causal relationships but also complex configurations that determine technological adoption.

Findings – The results reveal that distrust is a decisive factor in the intention not to adopt AI, highlighting the importance of strategies that increase trust in automated processes. Functional risk and hedonic risk were also identified as significant barriers, showing how perceived usefulness and the subjective experience of pleasure influence willingness to adopt the technology. In contrast, financial risk and performance risk were not significant, suggesting that SMEs may be driven more by knowledge of and familiarity with technologies than by strictly economic or performance concerns. These findings challenge previous perceptions and open new perspectives on how organisational capabilities modulate technology decisions.

Research limitations/implications – The results underline the need for initiatives that reduce hesitancy and communicate the benefits of AI effectively, especially in key sectors of the economy. Taken together, these findings are fundamental for the design of public policies and business strategies aimed at fostering digital transformation in this strategic sector.

Practical implications – In managerial terms, companies adopting AI must first address hesitancy and distrust. Implementation must be communicated clearly and accurately throughout the business structure, with managers reaching a proper understanding of the technology. Any attempt made without adequate knowledge, or one that generates hesitancy, may constitute a major failure that, through an inadequate strategy, ends up affecting the client. Companies providing AI services must develop a comprehensive strategy for enriching and enhancing the product, establishing not only the advantages of its application in business strategies but also its positive effects on the company's value proposition.

Social implications – Differences in the perception of risks such as financial, functional and psychological risk may intensify the technology gap between social groups, especially those with differing levels of access to technology education. This could lead to significant disparities in how different sectors of society benefit from AI applications, limiting their inclusive reach and positive impact on the general population. Moreover, there are growing concerns about the impact of AI on employment and labour skills, as the perception that this technology could create dependency or replace human functions negatively influences its uptake.

Originality/value – This study is one of the first to comprehensively address the risks associated with firms' use of AI and the direct bearing of those risks on firms' behaviour. Its focus on non-adoption behaviour is an innovation in the academic approach, allowing for more comprehensive conceptual modelling of the findings and providing a theoretical basis for a systematic analysis of the risks affecting AI adoption in firms.
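The dual methodology pairs PLS-SEM, which estimates causal paths, with fsQCA, which identifies configurations of conditions associated with the outcome. As a minimal sketch of the set-theoretic measures fsQCA rests on, the snippet below computes consistency and coverage for one hypothetical configuration; the membership scores and condition names are illustrative assumptions, not the study's data or results.

```python
import numpy as np

# Hypothetical fuzzy-set membership scores (NOT the study's sample).
# Conditions: distrust and functional risk; outcome: intention not to adopt AI.
distrust = np.array([0.9, 0.7, 0.2, 0.8, 0.4])
functional_risk = np.array([0.8, 0.6, 0.3, 0.9, 0.5])
non_adoption = np.array([0.7, 0.6, 0.3, 0.8, 0.4])

# Fuzzy-set intersection (logical AND) is the element-wise minimum,
# following standard fsQCA conventions.
X = np.minimum(distrust, functional_risk)  # configuration membership
Y = non_adoption                           # outcome membership

# Consistency: extent to which the configuration is a subset of the outcome.
consistency = np.sum(np.minimum(X, Y)) / np.sum(X)

# Coverage: extent to which the configuration accounts for the outcome.
coverage = np.sum(np.minimum(X, Y)) / np.sum(Y)

print(f"consistency = {consistency:.3f}, coverage = {coverage:.3f}")
```

In practice a consistency threshold (often around 0.8 or higher) is applied before a configuration is treated as sufficient for the outcome; the sketch only illustrates the arithmetic behind those measures.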
Pablo Ledesma Chaves
Eloy Gil-Cordero
Antonio Navarro-García
European Journal of Innovation Management
Universidad de Sevilla
DOI: https://doi.org/10.1108/ejim-06-2025-0719