
AI Policies are Low, Use is High, and Adversaries are Taking Advantage, Says New AI Study

A new poll of global digital trust professionals, Generative AI 2023: An ISACA Pulse Poll, reveals a high degree of uncertainty around generative artificial intelligence (AI), few company policies governing its use, a lack of training, and fears about its exploitation by bad actors.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20231025898760/en/

Global digital trust association ISACA surveyed more than 2,300 professionals who work in audit, risk, security, data privacy and IT governance to get their take on the current state of generative AI. (Graphic: ISACA)

Digital trust professionals from around the globe (those who work in cybersecurity, IT audit, governance, privacy and risk) weighed in on generative AI, artificial intelligence that can generate text, images and other media, in a new pulse poll from ISACA that explores employee use, training, attention to ethical implementation, risk management, exploitation by adversaries, and impact on jobs.

Diving in, even without policies

The poll found that many employees at respondents' organizations are using generative AI even without policies in place for its use. Only 28 percent of respondents say their organizations expressly permit the use of generative AI, only 10 percent say a formal, comprehensive policy is in place, and more than one in four say no policy exists and there is no plan for one. Despite this, over 40 percent say employees are using it anyway, and the true percentage is likely much higher, given that an additional 35 percent aren't sure.

These employees are using generative AI in a number of ways, including to:

  • Create written content (65%)
  • Increase productivity (44%)
  • Automate repetitive tasks (32%)
  • Provide customer service (29%)
  • Improve decision making (27%)

Lack of familiarity and training

However, despite employees quickly moving forward with use of the technology, only six percent of respondents’ organizations are providing training to all staff on AI, and more than half (54 percent) say that no AI training at all is provided, even to teams directly impacted by AI. Only 25 percent of respondents indicated they have a high degree of familiarity with generative AI.

“Employees are not waiting for permission to explore and leverage generative AI to bring value to their work, and it is clear that their organizations need to catch up in providing policies, guidance and training to ensure the technology is used appropriately and ethically,” said Jason Lau, ISACA board director and CISO at Crypto.com. “With greater alignment between employers and their staff around generative AI, organizations will be able to drive increased understanding of the technology among their teams, gain further benefit from AI, and better protect themselves from related risk.”

Risk and exploitation concerns

The poll also explored the ethical concerns and risks associated with AI: 41 percent of respondents say that not enough attention is being paid to ethical standards for AI implementation. Fewer than one-third of their organizations consider managing AI risk an immediate priority, 29 percent say it is a longer-term priority, and 23 percent say their organization has no plans to consider AI risk at the moment, even though respondents rank the following as the technology's top risks:

  1. Misinformation/Disinformation (77%)
  2. Privacy violations (68%)
  3. Social engineering (63%)
  4. Loss of intellectual property (IP) (58%)
  5. Job displacement and widening of the skills gap (tied at 35%)

More than half (57 percent) of respondents indicated they are very or extremely worried about generative AI being exploited by bad actors. Sixty-nine percent say that adversaries are using AI as successfully or more successfully than digital trust professionals.

“Even digital trust professionals report a low familiarity with AI—a concern as the technology iterates at a pace faster than anything we’ve seen before, with use spreading rampantly in organizations,” said John De Santis, ISACA board chair. “Without good governance, employees can easily share critical intellectual property on these tools without the correct controls in place. It is essential for leaders to get up to speed quickly on the technology’s benefits and risks, and to equip their team members with that knowledge as well.”

Impact on jobs

Examining how current roles are involved with AI, respondents believe that security (47 percent), IT operations (42 percent), and risk and compliance (35 percent each) are responsible for the safe deployment of AI. Looking ahead, one in five organizations (19 percent) are opening AI-related job roles in the next 12 months. Forty-five percent believe a significant number of jobs will be eliminated because of AI, but digital trust professionals remain optimistic about their own jobs, with 70 percent saying AI will have some positive impact on their roles. To realize that positive impact, 80 percent think they will need additional training to retain their jobs or advance their careers.

Optimism in the face of challenges

Despite the uncertainty and risk surrounding AI, 80 percent of respondents believe AI will have a positive or neutral impact on their industry, 81 percent believe it will have a positive or neutral impact on their organizations, and 82 percent believe it will have a positive or neutral impact on their careers. Eighty-five percent of respondents also say AI is a tool that extends human productivity, and 62 percent believe it will have a positive or neutral impact on society as a whole.

Learn More

Read more in the infographic and other AI resources, including the AI Fundamentals Certificate, the complimentary white paper The Promise and Peril of the AI Revolution: Managing Risk, and a free guide to AI policy considerations, at www.isaca.org/resources/artificial-intelligence.

About ISACA

ISACA® (www.isaca.org) equips individuals and enterprises with the knowledge, credentials, education, training and community to progress their careers, transform their organizations, and build a more trusted and ethical digital world. ISACA leverages the expertise of its more than 165,000 members who work in digital trust fields such as information security, governance, assurance, risk, privacy and quality. It has a presence in 188 countries, including 225 chapters worldwide. Through its foundation One In Tech, ISACA supports IT education and career pathways for underresourced and underrepresented populations.

Contacts
