Industry | May 2024 Issue

Using Generative AI Starts with Trust

Q&A with Michael Bondar, Principal, Deloitte Risk & Financial Advisory, and Leader, U.S. Enterprise Trust; and Kate Graeff, VP for Enterprise Trust, Deloitte Risk & Financial Advisory
By Zach West Posted on April 30, 2024

The promises made by generative AI's boosters are enticing: create a whole new season of a cancelled television show you love, says one; cast any actor you can think of, living or dead, in a recreation of your favorite movie, says another.

Of course, the reality is more complex. Outside of the tech industry, generative AI is widely mistrusted.

Hallucinations, a phenomenon in which generative AI presents false information as fact, are a persistent problem. According to a recent study of nine of the most popular generative AI models published in the Natural Language Processing Journal, “46.4% of the texts generated had factual errors; 52.0% had discourse flaws, such as self-contradictory statements; 31.3% contained logical fallacies; and 15.4% generated…incorrect claims and papers misattributed to living scientists.”

AI companies also scrape copyrighted material from the wider internet to use in their model-training data sets with no compensation for copyright holders, a practice seen by many as outright theft. Research released in March 2024 by Patronus AI, an AI model evaluation company, showed that AI models regularly reproduce copyrighted content in response to prompts, with OpenAI’s GPT-4 regurgitating verbatim copyrighted text 44% of the time.

Nevertheless, Deloitte’s Kate Graeff and Michael Bondar are optimistic about the opportunities AI-based tools can offer businesses for efficiency and productivity, among other things. But they temper that optimism with clear-eyed and thoughtful consideration of the unique risks that come with these new tools. In this Q&A they also discuss the necessity of fostering new skills in employees to equip them to properly identify and analyze AI-generated information and how building and maintaining trust must come first as businesses integrate AI-based tools into their model.

Trust in technology is earned when the technology demonstrates consistent and reliable results while having a positive impact on people.
Michael Bondar, principal, Deloitte Risk & Financial Advisory, and leader, U.S. Enterprise Trust
How can employers mitigate trust issues in generative AI? Do you think the unpopularity of generative AI will pose problems for employers looking to use AI for data analytics or other similar applications?

GRAEFF: The advent of generative AI represents a hugely transformative period for businesses, but it is also raising many trust-related questions for the workforce regarding the reliability of AI-generated outputs, how tools will change jobs and affect skills gaps, general uncertainty around new AI-supported tools, and more. As a result, organizations have an immense opportunity to put trust at the heart of every generative AI implementation.

For organizations implementing generative AI tools, this means embedding trust in the design process and carefully considering how humans and machines will work together, which tasks will be automated with the help of AI, and where AI assistants can best support humans to be more creative or more efficient.

Employers can also engender trust by establishing robust AI governance with clearly defined principles, processes, and guidelines; including workers’ inputs during the governance-building processes; providing transparency about the benefits and risks of AI; and continuously evaluating—and communicating out—the impact of generative AI on the business and its people.

Investments in training to provide the workforce with the digital skills and digital literacy needed to responsibly create, use, and interpret content created by generative AI tools can also help employers build trust with the workforce.

Of course, the benefits and risks of generative AI, which vary by use case, industry, and even user, all need to be carefully evaluated to ease adoption efforts. Organizations that can stand up mature trust-evaluation and trust-building capabilities will be more likely to position themselves for success when adopting new generative AI tools.

The term “synthetic information” has been used in some contexts instead of “AI-generated.” Do you feel there might be a danger in blurring the line between actual information and what generative AI might produce?

GRAEFF: Multimodal generative AI can produce and interpret data across a range of formats, including text, images, audio, and video, resulting in what we call “AI-generated” or “synthetic information.”

It is important to acknowledge that both AI and humans have limitations when creating or interpreting information. Without proper guardrails in place, AI can produce content that is inaccurate, biased, hateful, or toxic. In a similar vein, humans are prone to recency and confirmation biases, which could result in blind spots and poor judgment.

Assuming AI has been designed with the necessary guardrails in place, it has the potential to help humans overcome these limitations to provide more objective and faster results. However, to achieve these benefits and create a positive future, society will need to rigorously evaluate the provenance and impact of the information we create and consume. Early efforts are already being made on this front to create standards and labels for synthetically generated content that aim to provide greater transparency and protect trust in human-generated information.

Relatedly, do you think generative AI might exacerbate already growing levels of mistrust of institutions and experts? Google already turns up pages full of AI-generated, SEO-optimized results, and AI-generated images are among the top results you get from Google Images. With all this threatening to drown out sources of truth, are we on a path to a “trustless” society?

BONDAR: Trust is at a deficit in our society, and we see concerning trends across industries that point to low—and in some cases decreasing—levels of trust. For example, [according to Deloitte research] less than 40% of consumers strongly trust technology brands to display competence and good intent.

The ease with which malicious actors can use AI—generative or otherwise—to create disinformation, spread false narratives, sway public opinion, or perpetrate fraud is exacerbating concerns about the erosion of trust in business and society. In a recent Deloitte survey, 49% of respondents said they believe the rise of generative AI will erode the overall level of trust in national and global institutions. Like any other new technology, generative AI tools without adequate checks and balances could very well contribute to further trust erosion; however, it is our perspective that well-governed generative AI tools could enhance trust in potentially significant ways.

There’s an urgency for all of us to become more discerning about the content we consume and produce. From a business standpoint, C-suite executives across industries increasingly view trust as a differentiator in a world where trust is a premium. Some organizations, largely in the technology industry, have established the role of chief trust officer to operationalize trust within the organization. This new C-suite role is responsible for establishing a trust charter for the organization, evaluating trust across multiple stakeholders on an ongoing basis, and driving governance processes that mitigate risk, enable compliance, and earn loyalty.

AI will likely contribute to large amounts of fraud—some predict around half a trillion dollars in 2024 due to voice cloning alone. What can be done to address this? Do you think the government should step in with regulation?

GRAEFF: As with any technology, there are positive and negative applications. Unfortunately, bad actors can use generative AI to easily generate content with the intent to impersonate and defraud. It is important for individuals and organizations to recognize this risk and continuously evaluate the authenticity of information.

Organizations can address fraud risks by reviewing existing processes and controls that can be more easily compromised due to increased access to AI, such as voice authentication. In some cases, deep-fake detection technologies can be deployed to recognize and flag AI-generated content in audio, video, images, or text.

Given potential escalation in both the number and complexity of AI-enabled fraud schemes, organizations have an opportunity to review their fraud detection and protection processes and to provide training to the workforce to build awareness of these risks.

From a regulatory standpoint, state-level laws have started to emerge banning—or requiring disclosures on—synthetic content. It remains to be seen whether we will see the emergence of additional regulations focused specifically on fraud perpetrated using generative AI. That said, even with increased regulatory protection, the threat environment will remain asymmetric if bad actors are determined to commit fraud.

How do you think using AI might affect the risk profile of a company? I have seen several examples of people breaking chatbots to get them to provide sensitive information. And there are also legal questions over generative AI’s use of copyrighted material in training data.

BONDAR: The way an organization designs, deploys, and scales AI tools can either earn or erode the trust of its customers, workers, shareholders, regulators, and community. Trust in technology is earned when the technology demonstrates consistent and reliable results while having a positive impact on people. The trustworthy AI framework Deloitte emphasizes with clients focuses on a number of features, including but not limited to privacy, safety, transparency, fairness, accountability, and reliability.

As organizations deploy generative AI tools, they have an opportunity to evaluate the tools' impact on their overall enterprise risk profile. The increase in the volume and complexity of generative AI implementations, along with external generative AI-enabled risks, can have a direct impact on operational, technological, and reputational risk for the enterprise. These considerations need to be addressed within organizations' risk management frameworks and processes.

This starts with robust cross-functional AI governance, supported by appropriate operating models and risk measures that calibrate risks based on their severity and likelihood of occurrence and implementing the necessary guardrails and controls. Strong data integrity and compliance enables organizations to train their models responsibly. Finally, ongoing evaluation of the impact of generative AI on the organization and its various stakeholders can identify where adjustments may be needed in the technology or the processes surrounding its implementation.

What are some important AI trends in 2024 companies and customers alike should be aware of and prepare for? Any current interesting use cases being worked on?

GRAEFF: According to Deloitte’s latest “State of Generative AI in the Enterprise” report, business leaders are projecting a number of AI trends for 2024. For one, 48% expect AI to substantially transform their organization in one to three years; respondents report feeling most prepared in strategy (40%) and least prepared in talent (22%). A majority of leaders, 56%, see improved efficiency and productivity as the top benefits of AI, above innovation and growth.

Of surveyed leaders with “very high expertise in AI,” 39% rank trust as their number-one emotion concerning AI, more so than uncertainty; they also feel more pressure to adopt AI even though most see it as a threat to their business and operating model. These same leaders are primarily adopting AI within product development, marketing, sales and customer service, and IT and cybersecurity.

Collectively, these trends indicate that customers should be prepared to see greater influence of AI in communications and customer service experiences in the next few years.

Organizations, on the other hand, should continue to invest in and prepare for the impact AI technology will have on talent and risk and governance as it is deployed at scale. As AI technology and its impact on society mature, organizations may face growing pressure to not only “self-regulate” how they deploy AI but to also comply with new and emerging regulations at state and federal levels.

As AI technology and its impact on society matures, organizations may face growing pressure to not only ‘self-regulate’ how they deploy AI but to also comply with new and emerging regulations at state and federal levels.
Kate Graeff, VP for Enterprise Trust, Deloitte Risk & Financial Advisory
Is there anything else you would like to touch on?

BONDAR: Through research, we’ve found that there are several measurable benefits of strengthening trust for organizations. In particular, building trust can result in stronger financial results, increased customer loyalty, greater workforce engagement, and stronger brand protection and resilience.

These benefits of trust-building efforts are gaining recognition and prioritization within the boardroom: 94% of global board members [according to Deloitte research] believe building trust is important to performance, while 83% believe action on trust is needed within six months.

Business leaders have an opportunity to be proactive on trust efforts to authentically earn stakeholder trust, whether that concerns trust-building for generative AI specifically, technologies across the enterprise, or something else entirely. Whatever the focus of trust-building efforts, it requires adding trust to the leadership agenda, aligning the enterprise on its trust goals, operationalizing trust with the necessary processes and technology capabilities, and conducting ongoing trust measurement to inform strategic priorities across the business.

Zach West, Content Specialist
