Business
Majority of Bank Staff Use Unapproved AI Tools, Survey Finds
A recent survey by AI vendor DeepL found that roughly 65% of UK finance professionals use unapproved AI tools for customer interactions, raising significant cybersecurity and regulatory concerns in the financial sector. The findings point to a disconnect between the tools organizations provide and what front-line employees actually need, potentially exposing sensitive information shared during customer engagements.
The study indicates that 70% of respondents believe AI has made customer support faster and more accessible, and many experts expect the technology to become essential for cross-border banking. AI already powers 37% of banking interactions, with multilingual communication the most common application, followed by chatbots and transaction monitoring for fraud detection.
The use of unapproved AI tools, often termed “shadow AI,” can also undermine the adoption of officially sanctioned technologies. A separate study from Cybernews found that 59% of workers in the US use unapproved AI tools as well, with executives and managers identified as the worst offenders.
“If employees use unapproved AI tools for work, there’s no way to know what kind of information is shared with them,” said Mantas Sabeckis, a security researcher at Cybernews. Because tools such as ChatGPT feel like casual conversation, he stressed, users need to stay aware of what data they are sharing.
According to DeepL, shadow IT typically emerges when teams lack access to the tools they need: employees turn to general-purpose AI applications instead of secure, purpose-built solutions, particularly for tasks such as translation. Mitigating the risk requires closer collaboration between customer-facing teams and IT departments when selecting technology.
In the words of David Parry-Jones, Chief Revenue Officer at DeepL, “In financial services, where every interaction is highly regulated and reputational risk is acute, staff will inevitably look for workarounds if the tools provided don’t meet their needs.” He noted that the real danger lies not in employees experimenting with AI, but in companies failing to provide secure and effective solutions.
A collaborative approach between IT and frontline teams lets organizations tackle shadow AI directly, protecting against cybersecurity threats while capturing the benefits of trusted AI technologies. As the financial sector continues to evolve, addressing these issues will be essential for maintaining compliance and keeping customer data safe.
