Detecting Governance Risk In Generative AI Adoption: A Predictive Analysis Of Organizational Misalignment And AI Failure Signals
DOI: https://doi.org/10.63278/mme.v30i2.1965

Abstract
Generative artificial intelligence is being adopted rapidly in organizations, but governance systems tend to develop more slowly than the technology itself. This lag has raised growing concern about governance risk, particularly where weak supervision, lack of accountability, inadequate compliance readiness, and novel AI failure signals undermine responsible adoption. Investigating how organizational conditions and operational warning signs influence governance risk has therefore become a significant research priority. This paper examines AI failure signals and organizational governance factors as predictors of governance risk in generative AI adoption. Specifically, the study aims to assess governance preparedness, examine the interplay between governance practices and failure indicators, and identify the factors most predictive of governance risk. A quantitative cross-sectional design was followed, with primary data gathered through a structured questionnaire. The study included 100 participants drawn from various industries, including technology, healthcare, retail, finance, education, and the public sector. Data were analyzed in SPSS using descriptive statistics, Cronbach's alpha reliability testing, Pearson correlation, and multiple linear regression. The results indicate that organizations were, on the whole, only moderately prepared to adopt generative AI. Governance risk was lower where stronger governance practices were in place, including policy clarity, leadership support, accountability structures, human oversight, compliance readiness, and strategic alignment. Conversely, AI failure signals, especially low user trust, bias and fairness concerns, misuse risk, and workflow disruption, were associated with greater governance risk.
The findings also reveal that strong governance environments tend to exhibit fewer failure signals. The paper concludes that governance risk in generative AI adoption is both measurable and manageable. It offers practical guidance for managers, compliance teams, and policy designers seeking to strengthen governance frameworks and make AI adoption safer, more compliant, and less risky.
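The analytical approach described in the abstract can be illustrated with a minimal sketch. This is not the authors' actual SPSS workflow or data: the variable names, effect sizes, and synthetic responses below are assumptions for illustration only, showing how a Pearson correlation and a multiple linear regression of the kind reported would be computed.

```python
# Illustrative sketch of the abstract's analysis pipeline on synthetic data.
# All variables and effect sizes here are hypothetical, not study results.
import numpy as np

rng = np.random.default_rng(42)
n = 100  # matches the study's sample size

# Hypothetical 5-point Likert composite scores
governance_score = rng.uniform(1, 5, n)  # policy clarity, oversight, alignment
failure_signals = rng.uniform(1, 5, n)   # low trust, bias concerns, misuse risk
noise = rng.normal(0, 0.3, n)
# Simulated outcome: risk falls with governance, rises with failure signals
governance_risk = 4.0 - 0.6 * governance_score + 0.5 * failure_signals + noise

# Pearson correlation (analogue of SPSS bivariate correlation)
r = np.corrcoef(governance_score, governance_risk)[0, 1]

# Multiple linear regression via ordinary least squares
X = np.column_stack([np.ones(n), governance_score, failure_signals])
beta, *_ = np.linalg.lstsq(X, governance_risk, rcond=None)

print(f"Pearson r (governance vs. risk): {r:.2f}")
print(f"Regression: intercept={beta[0]:.2f}, "
      f"governance={beta[1]:.2f}, failure_signals={beta[2]:.2f}")
```

Under these assumed effect directions, the correlation between governance score and governance risk comes out negative and the regression recovers a negative coefficient on governance practices and a positive one on failure signals, mirroring the pattern the study reports.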
Copyright (c) 2024 Ahmad Jamal, Zeeshan Akbar, Salman Akbar, Sikander Niaz, Fatima Tauseef

This work is licensed under a Creative Commons Attribution 4.0 International License.



