Detecting Governance Risk In Generative AI Adoption: A Predictive Analysis Of Organizational Misalignment And AI Failure Signals

Authors

  • Ahmad Jamal
  • Zeeshan Akbar
  • Salman Akbar
  • Sikander Niaz
  • Fatima Tauseef

DOI:

https://doi.org/10.63278/mme.v30i2.1965

Abstract

Generative artificial intelligence is being implemented rapidly in organizations, but governance systems tend to develop more slowly than the implementation. This raises growing concern about governance risk, especially where poor supervision, absence of accountability, inadequate compliance preparedness, and novel AI failure indicators undermine responsible adoption. Investigating how organizational conditions and operational warning signs influence governance risk has therefore become a significant research priority. This paper explores the relationship between signs of AI failure and organizational aspects of governance as predictors of governance risk in the adoption of generative AI. The study aims in particular to assess governance preparedness, to examine the interplay between governance processes and failure indicators, and to determine which factors are most predictive of governance risk. A quantitative cross-sectional design was followed, with primary data gathered through a structured questionnaire. The study's 100 participants were drawn from various industries, including technology, healthcare, retail, finance, education, and the public sector. Data were analyzed and interpreted in SPSS using descriptive statistics, Cronbach's alpha reliability testing, Pearson correlation, and multiple linear regression. The results indicate that organizations were, in general, only moderately ready to adopt generative AI. Governance risk was mitigated by stronger governance practices, including policy clarity, leadership support, accountability structures, human oversight, compliance preparedness, and strategic alignment. Conversely, AI failure indicators, especially low user trust, bias and fairness issues, risk of misuse, and workflow disruption, were associated with greater governance risk.
The findings also reveal that strong governance environments tend to exhibit fewer failure signals. The paper concludes that governance risk in the use of generative AI is quantifiable and controllable. It offers practical guidance to managers, compliance teams, and policy designers seeking to make governance frameworks more robust and to ensure that AI adoption is less damaging, more compliant, and less risky.
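The analysis pipeline described above (Pearson correlations between predictors and governance risk, followed by multiple linear regression) can be sketched in Python. This is an illustrative sketch only: the data below are synthetic, the variable names and effect sizes are assumptions, and the paper's actual analysis was performed in SPSS on its own survey data.

```python
import numpy as np

# Synthetic stand-in data (NOT the paper's dataset): governance-practice
# scores, failure-signal scores, and a governance-risk outcome for n = 100
# respondents, matching the study's sample size.
rng = np.random.default_rng(0)
n = 100

governance = rng.normal(3.5, 0.8, n)       # e.g. policy clarity, leadership support
failure_signals = rng.normal(2.5, 0.9, n)  # e.g. low user trust, bias issues

# Assumed relationship for illustration: better governance lowers risk,
# failure signals raise it, plus random noise.
risk = 3.0 - 0.5 * governance + 0.6 * failure_signals + rng.normal(0, 0.4, n)

# Pearson correlations between each predictor and governance risk
r_gov = np.corrcoef(governance, risk)[0, 1]
r_fail = np.corrcoef(failure_signals, risk)[0, 1]

# Multiple linear regression: risk ~ intercept + governance + failure_signals
X = np.column_stack([np.ones(n), governance, failure_signals])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)

print(f"r(governance, risk) = {r_gov:.2f}")   # negative by construction
print(f"r(failure, risk)    = {r_fail:.2f}")  # positive by construction
print(f"regression coefficients: {beta.round(2)}")
```

With data generated this way, the correlation and regression signs mirror the paper's qualitative finding: governance practices correlate negatively with governance risk, while failure signals correlate positively.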

Published

2024-07-15

How to Cite

Jamal, Ahmad, Zeeshan Akbar, Salman Akbar, Sikander Niaz, and Fatima Tauseef. 2024. “Detecting Governance Risk In Generative AI Adoption: A Predictive Analysis Of Organizational Misalignment And AI Failure Signals”. Metallurgical and Materials Engineering 30 (2):122-40. https://doi.org/10.63278/mme.v30i2.1965.

Issue

Section

Research