Thursday, 26 October 2023

NEW AI STUDY REVEALS HIGH USAGE DESPITE FEW POLICIES

Global digital trust association ISACA surveyed more than 2,300 professionals who work in audit, risk, security, data privacy and IT governance to get their take on the current state of generative AI. (Graphic: ISACA)


KUALA LUMPUR, Oct 26 (Bernama) -- A new poll of global digital trust professionals has revealed a high degree of uncertainty around generative artificial intelligence (AI), few company policies around its use, lack of training, and fears around its exploitation by bad actors.

In Generative AI 2023: An ISACA Pulse Poll, digital trust professionals worldwide weighed in on employee use of generative AI, training, attention to ethical implementation, risk management, exploitation by adversaries, and impact on jobs.

The poll found that many employees at respondents’ organisations are using generative AI, even without policies in place for its use, according to a statement.

Only 28 per cent of respondents said their organisations expressly permit the use of generative AI, only 10 per cent said a formal, comprehensive policy is in place, and more than one in four said no policy exists and none is planned.

These employees are using generative AI in a number of ways, including to create written content (65 per cent); increase productivity (44 per cent); automate repetitive tasks (32 per cent); provide customer service (29 per cent); and improve decision making (27 per cent).

However, despite employees quickly moving forward with use of the technology, only six per cent of respondents’ organisations are providing training to all staff on AI, and more than half (54 per cent) indicated that no AI training at all is provided, even to teams directly impacted by AI.

The poll also explored the ethical concerns and risks associated with AI, with 41 per cent saying that not enough attention is being paid to ethical standards for AI implementation.

Furthermore, when asked which roles are responsible for the safe deployment of AI, respondents pointed to security (47 per cent), information technology operations (42 per cent), and risk and compliance (tied at 35 per cent each).

Despite the uncertainty and risk surrounding AI, 80 per cent of respondents believe AI will have a positive or neutral impact on their industry, 81 per cent believe it will have a positive or neutral impact on their organisations, and 82 per cent believe it will have a positive or neutral impact on their careers.

ISACA equips individuals and enterprises with the knowledge, credentials, education, training and community to progress their careers, transform their organisations, and build a more trusted and ethical digital world.

-- BERNAMA
