Finance AI - AI News https://www.artificialintelligence-news.com/categories/ai-in-action/finance-ai/

SAP brings agentic AI to human capital management https://www.artificialintelligence-news.com/news/sap-brings-agentic-ai-human-capital-management/ Tue, 14 Apr 2026 12:55:09 +0000

The post SAP brings agentic AI to human capital management appeared first on AI News.

According to SAP, integrating agentic AI into core human capital management (HCM) modules helps target operational bloat and reduce costs.

SAP’s SuccessFactors 1H 2026 release aims to anticipate administrative bottlenecks before they stall daily operations by embedding a network of AI agents across recruiting, payroll, workforce administration, and talent development. Behind the user interface, these agents monitor system states, identify anomalies, and prompt human operators with context-aware solutions.

Data synchronisation failures between distributed enterprise systems routinely require dedicated IT support teams to diagnose. When employee master data fails to replicate due to a missing attribute, downstream systems like access management and financial compensation halt.

The agentic approach uses analytical models to cross-reference peer data, identify the missing variable based on organisational patterns, and prompt the administrator with the required correction. This automated troubleshooting dramatically reduces the mean time to resolution for internal support tickets.
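
SAP hasn’t detailed the underlying mechanics, but the pattern described, cross-referencing peers to suggest a missing value for a human to confirm, can be sketched in a few lines of Python (the record structure and field names here are hypothetical):

```python
from collections import Counter

def infer_missing_attribute(record, peers, attribute):
    """Suggest a value for a missing attribute by majority vote among
    peers in the same organisational unit. Returns (value, confidence)
    for an administrator to confirm: the agent prompts, it never
    silently writes."""
    candidates = [
        p[attribute] for p in peers
        if p.get("org_unit") == record.get("org_unit")
        and p.get(attribute) is not None
    ]
    if not candidates:
        return None, 0.0
    value, count = Counter(candidates).most_common(1)[0]
    return value, count / len(candidates)

# A replication failure: the new record is missing its cost centre.
incomplete = {"id": "E1042", "org_unit": "platform-eng", "cost_center": None}
peers = [
    {"id": "E0901", "org_unit": "platform-eng", "cost_center": "CC-310"},
    {"id": "E0875", "org_unit": "platform-eng", "cost_center": "CC-310"},
    {"id": "E0512", "org_unit": "sales-emea", "cost_center": "CC-870"},
]
suggestion, confidence = infer_missing_attribute(incomplete, peers, "cost_center")
```

The important design choice is that the function returns a suggestion with a confidence score rather than writing the correction itself, which keeps the administrator in the loop.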

Implementing this level of autonomous monitoring requires rigorous engineering discipline. Integrating modern semantic search mechanisms with highly structured legacy relational databases demands extensive middleware configuration.

Running large language models in the background to continuously scan millions of employee records for inconsistencies consumes massive compute resources. CIOs must carefully balance the cloud infrastructure costs of continuous algorithmic monitoring against the operational savings generated by reduced IT ticket volumes.

To mitigate the risk of algorithmic hallucinations altering core financial data, engineering teams are forced to build strict guardrails. These retrieve-and-generate architectures must be firmly anchored to the company’s verified data lakes, ensuring the AI only acts upon validated corporate policies rather than generalised internet training data.
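
The article doesn’t specify SAP’s implementation, but the guardrail pattern it describes, retrieval anchored to a verified store with a refusal path when nothing matches, looks roughly like this sketch (simple keyword overlap stands in for real semantic search, and the store contents are invented):

```python
def grounded_answer(question, verified_store, generate):
    """Guardrail: retrieve from the verified store first, and refuse to
    generate at all when no approved passage matches, so answers can't
    fall back on generalised training data."""
    terms = [t for t in question.lower().split() if len(t) > 3]
    passages = [
        doc for doc in verified_store
        if any(term in doc["text"].lower() for term in terms)
    ]
    if not passages:
        return {"answer": None, "sources": [], "refused": True}
    context = " ".join(doc["text"] for doc in passages)
    return {"answer": generate(question, context),
            "sources": [doc["id"] for doc in passages],
            "refused": False}

store = [{"id": "HR-POL-7",
          "text": "Relocation expenses are reimbursed up to the policy limit."}]
fake_llm = lambda question, context: "Answer grounded in: " + context
ok = grounded_answer("what does the relocation policy cover", store, fake_llm)
refused = grounded_answer("weekend football scores", store, fake_llm)
```

Returning the source document IDs alongside every answer is what makes the output auditable; the explicit `refused` path is what stops the model improvising when the corpus has nothing to say.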

The SAP release attempts to streamline this knowledge retrieval by introducing intelligent question-and-answer capabilities within its learning module. This functionality delivers instant, context-aware responses drawn directly from an organisation’s learning content, allowing employees to bypass manual documentation searches entirely. The integration also introduces a growing workforce knowledge network that pulls trusted external employment guidance into daily workflows to support confident decision-making.

How SAP is using agentic AI to consolidate the HCM ecosystem

The updated architecture focuses on unified experiences that adapt to operational needs. For example, the delay between a candidate signing an offer letter and the new employee reaching full productivity is a drag on profit margins.

Native integration combining SmartRecruiters solutions, SAP SuccessFactors Employee Central, and SAP SuccessFactors Onboarding streamlines the data flow from initial candidate interaction through to the new hire phase.

A candidate’s technical assessments, background checks, and negotiated terms pass automatically into the core human resources repository. Enterprises accelerate the onboarding timeline by eliminating the manual re-entry of personnel data—allowing new technical hires to begin contributing to active commercial projects faster.

Technical leadership teams understand that out-of-the-box software rarely matches internal enterprise processes perfectly. Customisation is necessary, but hardcoded extensions routinely break during cloud upgrade cycles, creating vast maintenance backlogs.

To manage this tension, the software introduces a new extensibility wizard. This tool provides guided, step-by-step support for building custom extensions directly on the SAP Business Technology Platform within the SuccessFactors environment.

By containing custom development within a governed platform environment, technology officers can adapt the interface to unique business requirements while preserving strict governance and ensuring future update compatibility.

Algorithmic auditing and margin protection

The 1H 2026 release incorporates pay transparency insights directly into the People Intelligence package within SAP Business Data Cloud, supporting compliance with strict regulatory regimes such as the EU’s pay transparency directive, which requires organisations to provide detailed, auditable justifications for wage discrepancies.

Manual compilation of compensation data across multiple geographic regions and currency zones is highly error-prone. Using the People Intelligence package, organisations can analyse compensation patterns and potential pay gaps across demographics.

Automating this analysis provides a data-driven defence against compliance audits and aligns internal pay practices with evolving regulatory expectations, protecting the enterprise from both litigation costs and brand damage.

Preparing for future demands requires trusted and consistent skills data that leadership can rely on across talent deployment and workforce planning. Inconsistent data, where one department labels a capability using different terminology from another, breaks automated resource allocation models.

The update strengthens the SAP talent intelligence hub by introducing enhanced skills governance to provide administrators with a centralised interface for managing skill definitions, applying corporate standards, and ensuring data aligns across internal applications and external partner ecosystems. 

Standardising this data improves overall system quality and allows resource managers to make deployment decisions without relying on fragmented spreadsheets or guesswork. This inventory prevents organisations from having to outsource to expensive external contractors for capabilities they already possess internally.

By bringing together data, AI, and connected experiences, SAP’s latest enhancements show how agentic AI can help organisations reduce daily friction. For professionals looking to explore these types of enterprise AI integrations and connect directly with the company, SAP is a key sponsor of this year’s AI & Big Data Expo North America.

See also: IBM: How robust AI governance protects enterprise margins

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

Canada’s Scotiabank preps for its AI future https://www.artificialintelligence-news.com/news/canadas-scotiabank-preps-for-its-ai-future/ Tue, 14 Apr 2026 11:20:00 +0000

The post Canada’s Scotiabank preps for its AI future appeared first on AI News.

Scotiabank has launched an AI framework, Scotia Intelligence, for data and AI operations that brings its platforms, data oversight, and software tools together in a single environment.

According to a press release from the bank, the stated purpose of Scotia Intelligence is to give employees, especially client-facing teams, access to AI under the bank’s existing governance and security rules. Scotiabank has also published a short data ethics commitment paper, which the bank says is unique in Canada.

Tim Clark, Scotiabank’s group head and chief information officer, said Scotia Intelligence is a new approach that combines the bank’s existing infrastructure with AI abilities that connect computing environments, governance, and security so employees can use the technology more confidently.

The difficult problem in the financial sector is how to make AI tools available at enterprise scale without creating new operational and regulatory risks for the organisation. Scotiabank’s response comes in the form of Scotia Navigator, the employee-focused component of Scotia Intelligence. It provides assistive AI for staff in multiple business units in support of decision-making and software development, and is the means by which staff can build and deploy their own AI assistants within the company’s governance rules.

There’s particular weight on AI software development, with automated coding in play across the bank’s technical teams. Code generation in a regulated environment has to conform to set standards for product quality, so checking generated code for security and auditability is a business imperative.
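
Scotiabank hasn’t published how its checks work; as a rough illustration of the pattern, a minimal and entirely hypothetical pre-acceptance gate for AI-generated code might scan a diff for obvious policy violations and require a traceable ticket reference before anything merges:

```python
import re

# Hypothetical policy gates a regulated institution might apply to
# AI-generated code before acceptance: no hard-coded secrets, no
# dynamic evaluation, and a traceable ticket reference in the
# change description.
CHECKS = {
    "hardcoded_secret": re.compile(r"(password|secret|api[_-]?key)\s*=\s*['\"]", re.I),
    "dynamic_eval": re.compile(r"\b(eval|exec)\s*\("),
}

def review_generated_change(diff_text, description):
    findings = [name for name, rx in CHECKS.items() if rx.search(diff_text)]
    if not re.search(r"\b[A-Z]+-\d+\b", description):  # e.g. "PAY-1234"
        findings.append("missing_ticket_reference")
    return {"approved": not findings, "findings": findings}

result = review_generated_change(
    'password = "hunter2"\nrun(eval(user_input))',
    "refactor login flow",  # no ticket reference
)
```

In practice such regex checks only supplement proper static analysis and human review, but the shape, a deterministic gate that generated code must clear before it enters the audit trail, is the point.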

The bank has presented performance figures it says support the case for a wider rollout of AI, citing contact centres where AI now handles more than 40 per cent of client queries, a fact that has led to industry recognition for its efforts in digital transformation. It says AI automatically routes around 90% of commercial emails addressed to the bank, cutting the associated manual work by 70%. In digital banking, Scotiabank points to Scotia Intelligence giving predictive payment prompts to customers via a mobile app, helping them manage recurring bills, send email money transfers, and move money between their Scotiabank accounts.

Phil Thomas, the bank’s Group Head and Chief Strategy & Operating Officer, described the launch as a step in the company’s AI strategy focused on client-centred experiences, and said AI tools would allow the bank’s workforce to spend more time on higher-value work. All AI uses are reviewed internally on grounds of fairness, transparency, and accountability before they are launched. Employees working with Scotia Intelligence get mandatory training and annual attestations.

For CIOs, CTOs, and enterprise architecture leaders, Scotiabank’s combination of platform standardisation and formal governance sends a clear message: controls on AI have to exist as AI moves into production, and demonstrating those controls matters before incidents make their absence obvious. The success of AI deployment at scale will depend at least partly on safety and observability. The examples in the bank’s statements suggest a programme of AI rollout where every function’s effectiveness can be measured in terms of reduced handling time, high levels of automation, and customer engagement.

In its public statements, Scotiabank hasn’t given detail on architecture, cost, or model strategy, nor provided external benchmarks, so total ROI is unclear. However, should its existing AI projects continue to deliver cost reductions, more code, and better customer experiences, it seems likely that Scotiabank will apply the technology elsewhere in its business.

Scotiabank envisages future use of agents for research and analytics, and says there’s scope for “more autonomous, context-aware, and action-oriented capabilities over time.”

(Image source: Pixabay under licence.)


Strengthening enterprise governance for rising edge AI workloads https://www.artificialintelligence-news.com/news/strengthening-enterprise-governance-for-rising-edge-ai-workloads/ Mon, 13 Apr 2026 13:02:01 +0000

The post Strengthening enterprise governance for rising edge AI workloads appeared first on AI News.

Models like Google Gemma 4 are increasing enterprise AI governance challenges for CISOs as they scramble to secure edge workloads.

Security chiefs have built massive digital walls around the cloud, deploying advanced cloud access security brokers and routing every piece of traffic heading to external large language models through monitored corporate gateways. The logic was sound to boards and executive committees: keep the sensitive data inside the network, police the outgoing requests, and intellectual property remains safe from external leaks.

Google just obliterated that perimeter with the release of Gemma 4. Unlike massive parameter models confined to hyperscale data centres, this family of open weights targets local hardware. It runs directly on edge devices, executes multi-step planning, and can operate autonomous workflows right on a local device.

On-device inference has become a glaring blind spot for enterprise security operations. Security analysts cannot inspect network traffic if the traffic never hits the network in the first place. Engineers can ingest highly classified corporate data, process it through a local Gemma 4 agent, and generate output without triggering a single cloud firewall alarm.

Collapse of API-centric defences

Most corporate IT frameworks treat machine learning tools like standard third-party software vendors. You vet the provider, sign a massive enterprise data processing agreement, and funnel employee traffic through a sanctioned digital gateway. This standard playbook falls apart the moment an engineer downloads an Apache 2.0 licensed model like Gemma 4 and turns their laptop into an autonomous compute node.

Google paired this new model rollout with the Google AI Edge Gallery and a highly optimised LiteRT-LM library. These tools drastically accelerate local execution speeds while providing highly structured outputs required for complex agentic behaviours. An autonomous agent can now sit quietly on a local machine, iterate through thousands of logic steps, and execute code locally at impressive speed.

European data sovereignty laws and strict global financial regulations mandate complete auditability for automated decision-making. When a local agent hallucinates, makes a catastrophic error, or inadvertently leaks internal code across a shared corporate Slack channel, investigators require detailed logs. If the model operates entirely offline on local silicon, those logs simply do not exist inside the centralised IT security dashboard.
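
One way to close that gap, sketched below as a hypothetical Python wrapper around whatever local inference call is in use, is to make every on-device call emit a structured audit record, logging hashes rather than raw text so the trail doesn’t itself become a second copy of the sensitive data:

```python
import hashlib
import json
import time

def audited(model_call, log_path):
    """Wrap a local inference function so every call appends a
    structured audit record to an append-only JSONL file."""
    def wrapper(prompt):
        output = model_call(prompt)
        record = {
            "ts": time.time(),
            "model": getattr(model_call, "__name__", "local-model"),
            # Hashes allow later correlation without storing content.
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        }
        with open(log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
        return output
    return wrapper

def toy_local_model(prompt):
    # Stand-in for an on-device inference call.
    return prompt.upper()

guarded = audited(toy_local_model, "inference_audit.jsonl")
answer = guarded("summarise the incident report")
```

A real deployment would ship these records to a central, tamper-evident store rather than a local file, but the principle is the same: if inference happens on the endpoint, the audit record has to be generated on the endpoint too.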

Financial institutions stand to lose the most from this architectural adjustment. Banks have spent millions implementing strict API logging to satisfy regulators investigating generative machine learning usage. If algorithmic trading strategies or proprietary risk assessment protocols are parsed by an unmonitored local agent, the bank violates multiple compliance frameworks simultaneously.

Healthcare networks face a similar reality. Patient data processed through an offline medical assistant running Gemma 4 might feel secure because it never leaves the physical laptop. The reality is that unlogged processing of health data violates the core tenets of modern medical auditing. Security leaders must prove how data was handled, what system processed it, and who authorised the execution.

The intent-control dilemma

Industry researchers often refer to this current phase of technological adoption as the governance trap. Management teams panic when they lose visibility. They attempt to rein in developer behaviour by throwing more bureaucratic processes at the problem, mandate sluggish architecture review boards, and force engineers to fill out extensive deployment forms before installing any new repository.

Bureaucracy rarely stops a motivated developer facing an aggressive product deadline; it just forces the entire behaviour further underground. This creates a shadow IT environment powered by autonomous software.

Real governance for local systems requires a different architectural approach. Instead of trying to block the model itself, security leaders must focus intensely on intent and system access. An agent running locally via Gemma 4 still requires specific system permissions to read local files, access corporate databases, or execute shell commands on the host machine.

Access management becomes the new digital firewall. Rather than policing the language model, identity platforms must tightly restrict what the host machine can physically touch. If a local Gemma 4 agent attempts to query a restricted internal database, the access control layer must flag the anomaly immediately.
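
In code, that enforcement point is a policy lookup keyed on workload identity rather than on network traffic. A minimal, entirely hypothetical sketch:

```python
# Hypothetical policy map: what each workload identity may touch.
# The identity layer, not the model, is the enforcement point.
POLICY = {
    "dev-laptop-agent": {"public-docs", "sandbox-db"},
    "ci-runner": {"build-cache"},
}

def authorise(identity, resource, audit_trail):
    allowed = resource in POLICY.get(identity, set())
    if not allowed:
        # Flagged at the access layer even though the agent itself runs
        # offline and emits no network traffic for a gateway to inspect.
        audit_trail.append(
            {"identity": identity, "resource": resource, "action": "denied"}
        )
    return allowed

trail = []
permitted = authorise("dev-laptop-agent", "sandbox-db", trail)
blocked = authorise("dev-laptop-agent", "hr-payroll-db", trail)
```

Because the check hangs off what the host is allowed to touch, it works identically whether the requester is a human developer or an autonomous local agent.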

Enterprise governance in the edge AI era

We are watching the definition of enterprise infrastructure expand in real-time. A corporate laptop is no longer just a dumb terminal used to access cloud services over a VPN; it’s an active compute node capable of running sophisticated autonomous planning software.

The cost of this new autonomy is deep operational complexity. CTOs and CISOs face a requirement to deploy endpoint detection tools specifically tuned for local machine learning inference. They desperately need systems that can differentiate between a human developer compiling standard code, and an autonomous agent rapidly iterating through local file structures to solve a complex prompt.

The cybersecurity market will inevitably catch up to this new reality. Endpoint detection and response vendors are already prototyping quiet agents that monitor local GPU utilisation and flag unauthorised inference workloads. However, those tools remain in their infancy today.

Most corporate security policies written in 2023 assumed all generative tools lived comfortably in the cloud. Revising them requires an uncomfortable admission from the executive board that the IT department no longer dictates exactly where compute happens.

Google designed Gemma 4 to put state-of-the-art agentic skills directly into the hands of anyone with a modern processor. The open-source community will adopt it with aggressive speed. 

Enterprises now face a very short window to figure out how to police code they do not host, running on hardware they cannot constantly monitor. It leaves every security chief staring at their network dashboard with one question: What exactly is running on endpoints right now?

See also: Companies expand AI adoption while keeping control


Companies expand AI adoption while keeping control https://www.artificialintelligence-news.com/news/companies-expand-ai-adoption-while-keeping-control/ Mon, 13 Apr 2026 10:00:00 +0000

The post Companies expand AI adoption while keeping control appeared first on AI News.

Many companies are taking a slower, more controlled approach to autonomous systems as AI adoption grows. Rather than deploying systems that act on their own, they are focusing on tools that assist human decision-making and keep control over outputs. This approach is especially clear in sectors where errors carry real financial or legal risk.

One example comes from S&P Global Market Intelligence, which builds AI tools into its Capital IQ Pro platform. The system is used by analysts to review company filings, earnings calls, and market data. Its AI features are designed to stay grounded in source material.

According to S&P Global Market Intelligence, its AI tools extract insights from structured and unstructured data, including transcripts and reports, while working with verified source data.

AI adoption ahead of autonomy

The current wave of AI tools in business is often described as a step toward autonomous agents. Systems may eventually plan tasks and act without direct human input. But most companies are not there yet. AI adoption is already widespread, with a majority of organisations using AI in at least one part of their business, according to research from McKinsey & Company. Many organisations have yet to scale AI in the enterprise, showing a disconnect between initial use and broader deployment.

Instead, AI helps with tasks like summarising documents or answering queries, but it does not act independently.

S&P Global Market Intelligence’s tools let users query large datasets through a chat interface, but the results are tied to verified financial content. In many cases, users can refer back to the underlying documents, lowering the risk of errors or unsupported outputs.

In its research, the company outlines AI governance as a process in which systems are designed and monitored, with attention to fairness and accountability.

AI in high-risk sectors

In finance, small errors can have large consequences. That shapes how AI is built and used. Tools like Capital IQ Pro are designed to support analysts, not replace them. The system may help surface insights or highlight trends, but final decisions still rest with human users.

The gap between adoption and value is becoming clearer. Many organisations report a gap between AI deployment and measurable business outcomes, according to findings from McKinsey & Company.

While autonomous systems may be able to handle certain tasks, companies often need clear accountability. When decisions affect investments, compliance, or reporting, there must be a way to explain how those decisions were made.

Research from S&P Global notes that organisations are increasingly focused on building governance frameworks to manage AI risks, including data quality issues and model bias.

Toward future systems

The difference between today’s controlled AI tools and future autonomous systems remains wide. Interest in more autonomous and agent-driven systems is also growing, even as most organisations remain in early stages of deployment. Systems that can explain their outputs, show their sources, and operate in defined limits are more likely to be trusted.

Autonomous agents may one day handle tasks like financial analysis or supply chain planning with minimal input. But without clear control mechanisms, their use will remain limited.

These themes will feature at AI & Big Data Expo North America 2026 on May 18 – 19. S&P Global Market Intelligence is listed as a bronze sponsor of the event. The agenda features topics like AI governance and the use of AI in regulated industries.

Balancing ability and control

The push toward autonomous AI is unlikely to slow down. Advances in large language models and agent-based systems continue to expand what AI can do.

Enterprise users are asking the question of how to keep those systems under control. S&P Global Market Intelligence’s approach reflects that concern. By keeping AI grounded in verified data and placing humans at the centre of decision-making, it prioritises trust over autonomy.

As systems grow more capable, the ability to govern and control them could become just as important as the tasks they perform.

(Photo by Hitesh Choudhary)

See also: Why companies like Apple are building AI agents with limits


Experian uncovers fraud paradox in financial services’ AI adoption https://www.artificialintelligence-news.com/news/experian-ai-fraud-detection-financial-services-2026/ Thu, 02 Apr 2026 10:00:00 +0000

The post Experian uncovers fraud paradox in financial services’ AI adoption appeared first on AI News.

The same technology that financial institutions are deploying is being weaponised against them. That is the core tension running through Experian’s 2026 Future of Fraud Forecast, and it’s a tension the company is in a position to name because it sits on both sides of it.

According to FTC data cited in the forecast, consumers lost more than US$12.5 billion to fraud in 2024. As per Experian’s own data accompanying the report, nearly 60% of companies reported an increase in fraud losses from 2024 to 2025. Experian’s fraud prevention solutions helped clients avoid an estimated US$19 billion in fraud losses globally in 2025, a figure that underscores the scale of the problem and how much defence now depends on AI matching the speed and autonomy of attacks.

The agentic AI issue

The most pressing finding in Experian’s forecast is what the company calls machine-to-machine mayhem, the point at which agentic AI systems, designed to transact autonomously on behalf of users, become indistinguishable from the bots fraudsters deploy for the same purpose.

According to Experian’s forecast, as organisations strive to integrate AI agents capable of independent decision-making, fraudsters are exploiting those same systems to run high-volume digital fraud at a scale and speed no human operation could sustain. The core challenge, as per the report, is that machine-to-machine interactions carry no clear ownership of liability; when an AI agent initiates a transaction that turns out to be fraudulent, the question of who is responsible has no settled answer.

Kathleen Peters, chief innovation officer for Fraud and Identity at Experian North America, framed the problem: “Technology is accelerating the evolution of fraud, making it more sophisticated and harder to detect. By combining differentiated data with advanced analytics and cutting-edge technology, businesses can strengthen fraud defences, safeguard consumers, and deliver secure, seamless experiences.”

Experian predicts that this will reach a tipping point in 2026, forcing substantive industry conversations around liability and the governance of agentic AI in commerce. Some organisations are already making preemptive moves. Amazon, for instance, has stated it blocks third-party AI agents from browsing and transacting on its platform, citing security and privacy concerns.

Four other threats the forecast identifies

Beyond the agentic AI issue, Experian’s forecast identifies four additional trends that financial institutions need to consider in 2026.

Deepfake candidates infiltrating remote workforces: Generative AI tools can now produce tailored CVs and real-time deepfake video capable of passing job interviews. According to the forecast, employers will onboard individuals who are not who they claim to be, granting bad actors access to internal systems. The FBI and Department of Justice issued multiple warnings in 2025 about documented instances of North Korean operatives using this approach to gain employment at US companies.

Website cloning overwhelms fraud teams: AI tools have made it easier to create replicas of legitimate sites, and harder to eliminate them permanently. As per the forecast, even after takedown requests are actioned, spoofed domains continue to resurface, forcing fraud teams into reactive patterns.

Emotionally intelligent scam bots: Generative AI means bots can conduct complex romance fraud and relative-in-need scams without human operators. According to Experian’s forecast, such bots respond convincingly, build trust over extended periods, and are becoming increasingly difficult to distinguish from genuine human interaction.

Smart home vulnerabilities: Devices including virtual assistants, smart locks, and connected appliances create new entry points for fraudsters. Experian forecasts that bad actors will exploit these devices to access personal data and monitor household activity as the connected home becomes a greater part of everyday financial behaviour.

Financial institutions’ responses

According to Experian’s Perceptions of AI Report, drawing on responses from more than 200 decision-makers at leading financial institutions, 84% identify AI as a critical or high priority for their business strategy over the next two years. A further 89% say AI will play an important role in the lending lifecycle.

The governance dimension, however, is where institutions struggle. According to the same report, 73% of respondents are concerned about the regulatory environment around AI, and 65% identify AI-ready data as one of their biggest deployment challenges. Data quality was rated the single most important factor in choosing an AI vendor, which positions Experian’s data-first positioning at the intersection of what financial institutions say they need most.

On the compliance side, Experian’s AI-powered Assistant for Model Risk Management addresses one of the most resource-intensive requirements facing institutions deploying AI. According to a 2025 Experian study of more than 500 global financial institutions, 67% struggle to meet their country’s regulatory requirements, 79% report more frequent supervisory communications from regulators than a year ago, and 60% still use manual compliance processes. In Experian’s announcement, the company states that more than 70% of larger institutions report model documentation compliance involves over 50 people, a figure that signals the scale of the automation opportunity.

Vijay Mehta, EVP of Global Solutions and Analytics at Experian Software Solutions, described the challenge the product addresses: “The AI-enabled speed of data analytics and model development is driving unprecedented business opportunities for financial institutions, but it comes with a challenge: global regulations that require time-consuming documentation. Experian Assistant for Model Risk Management helps solve this labour and resource-intensive requirement with end-to-end model documentation automation.”

The data quality foundation

Running underneath Experian’s fraud and compliance products is the same structural argument that surfaced in both IBM’s and Salesforce’s AI narratives this week: AI is only as reliable as the data it runs on. As per Experian’s Perceptions of AI Report, 65% of financial institution decision-makers consider AI-ready data one of their biggest challenges, and data quality is the most critical factor influencing trust in AI vendors.

That is not a coincidence of messaging. It reflects a constraint facing financial services institutions as they move AI from pilots into production credit decisioning, fraud detection, and regulatory reporting; functions where explainability and auditability are not optional.

Experian’s CDAO Paul Heywood is among the confirmed speakers at the AI & Big Data Expo, part of TechEx North America, taking place 18 – 19 May 2026 at the San Jose McEnery Convention Centre, California. Experian is a Platinum Sponsor at TechEx Global.

See also: Hershey applies AI in its supply chain operations

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Experian uncovers fraud paradox in financial services’ AI adoption appeared first on AI News.

]]>
KPMG: Inside the AI agent playbook driving enterprise margin gains https://www.artificialintelligence-news.com/news/kpmg-inside-ai-agent-playbook-enterprise-margin-gains/ Wed, 01 Apr 2026 15:24:01 +0000 https://www.artificialintelligence-news.com/?p=112839 Global AI investment is accelerating, yet KPMG data shows the gap between enterprise AI spend and measurable business value is widening fast. The headline figure from KPMG’s first quarterly Global AI Pulse survey is blunt: despite global organisations planning to spend a weighted average of $186 million on AI over the next 12 months, only […]

The post KPMG: Inside the AI agent playbook driving enterprise margin gains appeared first on AI News.

]]>
Global AI investment is accelerating, yet KPMG data shows the gap between enterprise AI spend and measurable business value is widening fast.

The headline figure from KPMG’s first quarterly Global AI Pulse survey is blunt: despite global organisations planning to spend a weighted average of $186 million on AI over the next 12 months, only 11 percent have reached the stage of deploying and scaling AI agents in ways that produce enterprise-wide business outcomes.

However, the central finding is not that AI is failing; 64 percent of respondents say AI is already delivering meaningful business outcomes. The problem is that “meaningful” is doing a lot of heavy lifting in that sentence, and the distance between incremental productivity gains and the kind of compounding operational efficiency that moves the needle on margin is, for most organisations, still substantial.

The architecture of a performance gap

KPMG’s report distinguishes between what it labels “AI leaders” (i.e. organisations that are scaling or actively operating agentic AI) and everyone else. The gap in outcomes between these two cohorts is striking.

Headshot of Steve Chase, Global Head of AI and Digital Innovation at KPMG International.

Steve Chase, Global Head of AI and Digital Innovation at KPMG International, said: “The first Global AI Pulse results reinforce that spending more on AI is not the same as creating value. Leading organisations are moving beyond enablement, deploying AI agents to reimagine processes and reshape how decisions and work flow across the enterprise.”

Among AI leaders, 82 percent report that AI is already delivering meaningful business value. Among their peers, that figure drops to 62 percent. That 20-percentage-point spread might look modest in isolation, but it compounds quickly when you consider what it reflects: not just better tooling, but fundamentally different deployment philosophies.

The organisations in that 11 percent are deploying agents that coordinate work across functions, route decisions without human intermediation at every step, surface enterprise-wide insights from operational data in near real-time, and flag anomalies before they escalate into incidents.

In IT and engineering functions, 75 percent of AI leaders are using agents to accelerate code development versus 64 percent of their peers. In operations, where supply-chain orchestration is the primary use case, the split is 64 percent versus 55 percent. These are not marginal differences in tool adoption rates; they reflect different levels of process re-architecture.

Most enterprises that have deployed AI have done so by layering models onto existing workflows (e.g. a co-pilot here, a summarisation tool there…) without redesigning the process those tools sit inside. That produces incremental gains.

The organisations closing the performance gap have inverted this approach: they are redesigning the process first, then deploying agents to operate within the redesigned structure. The difference in return on AI spend between these two approaches, over a three-to-five-year horizon, is likely to be the defining competitive variable in several industries.

What $186 million actually buys—and what it does not

The investment figures in the KPMG data deserve scrutiny. A weighted global average of $186 million per organisation sounds substantial, but the regional variance tells a more interesting story.

ASPAC leads at $245 million, ahead of the Americas at $178 million and EMEA at $157 million. Within ASPAC, organisations in China and Hong Kong are investing $235 million on average; within the Americas, US organisations average $207 million.

These figures represent planned spend across model licensing, compute infrastructure, professional services, integration, and the governance and risk management apparatus needed to operate AI responsibly at scale.

The question is not whether $186 million is too much or too little; it is what proportion of that figure is being allocated to the operational infrastructure required to derive value from the models themselves. The survey data suggests that most organisations are still underweighting this latter category.

Compute and licensing costs are visible and relatively easy to budget for. The friction costs – the engineering hours spent integrating AI outputs with legacy ERP systems, the latency introduced by retrieval-augmented generation pipelines built on top of poorly structured data, and the compliance overhead of maintaining audit trails for AI-assisted decisions in regulated industries – tend to surface late in deployment cycles and often exceed initial estimates.

Vector database integration is a useful example. Many agentic workflows depend on the ability to retrieve relevant context from large, unstructured document repositories in real time. Building and maintaining the infrastructure for this – selecting between providers such as Pinecone, Weaviate, or Qdrant, embedding and indexing proprietary data, and managing refresh cycles as underlying data changes – adds meaningful engineering complexity and ongoing operational cost that rarely appears in initial AI investment proposals. 

When that infrastructure is absent or poorly maintained, agent performance degrades in ways that are often difficult to diagnose, as the model’s behaviour is correct relative to the context it receives, but that context is stale or incomplete.
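The staleness failure mode described above can be made concrete. Below is a minimal, illustrative sketch, not tied to any particular vector database, with all names and timestamps invented: a freshness check that flags indexed chunks whose source documents have changed since they were embedded.

```python
class IndexedChunk:
    def __init__(self, doc_id, text, embedded_at):
        self.doc_id = doc_id
        self.text = text
        self.embedded_at = embedded_at  # epoch seconds at indexing time

def filter_fresh(chunks, source_last_modified):
    """Split chunks into fresh and stale based on source modification time."""
    fresh, stale = [], []
    for c in chunks:
        if source_last_modified.get(c.doc_id, 0) > c.embedded_at:
            stale.append(c)   # source changed after embedding: re-embed first
        else:
            fresh.append(c)
    return fresh, stale

chunks = [
    IndexedChunk("loan-policy", "LTV cap is 80%", embedded_at=1_700_000_000),
    IndexedChunk("rate-sheet", "Base rate 5.25%", embedded_at=1_700_000_000),
]
# Last-modified timestamps from the document store (hypothetical values)
modified = {"loan-policy": 1_690_000_000, "rate-sheet": 1_710_000_000}

fresh, stale = filter_fresh(chunks, modified)
```

In practice such a check would sit in the retrieval path, with stale chunks queued for re-embedding rather than passed to the model as context.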

Governance as an operational variable, not a compliance exercise

Perhaps the most practically useful finding in the KPMG survey is the relationship between AI maturity and risk confidence.

Among organisations still in the experimentation phase, just 20 percent feel confident in their ability to manage AI-related risks. Among AI leaders, that figure rises to 49 percent. Some 75 percent of global leaders cite data security, privacy, and risk as ongoing concerns regardless of maturity level, but maturity changes how those concerns are operationalised.

This is an important distinction for boards and risk functions that tend to frame AI governance as a constraint on deployment. The KPMG data suggests the opposite dynamic: governance frameworks do not slow AI adoption among mature organisations; they enable it. The confidence to move faster – to deploy agents into higher-stakes workflows, to expand agentic coordination across functions – correlates directly with the maturity of the governance infrastructure surrounding those agents.

In practice, this means that organisations treating governance as a retrospective compliance layer are doubly disadvantaged. They are slower to deploy, because every new use case triggers a fresh governance review, and they are more exposed to operational risk, because the absence of embedded governance mechanisms means that edge cases and failure modes are discovered in production rather than in testing.

Organisations that have embedded governance into the deployment pipeline itself (e.g. model cards, automated output monitoring, explainability tooling, and human-in-the-loop escalation paths for low-confidence decisions) are the ones operating with the confidence that allows them to scale.
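The human-in-the-loop escalation path mentioned above can be sketched as simple confidence-threshold routing. This is an illustration only, not any vendor's API; the threshold value and action names are assumptions.

```python
AUTO_APPROVE_THRESHOLD = 0.90  # assumed policy value, set by the risk function

def route_decision(decision, confidence):
    """Route a model decision: act automatically only above the threshold."""
    if confidence >= AUTO_APPROVE_THRESHOLD:
        return ("auto", decision)
    # Low confidence: park the decision in a human review queue with context
    return ("human_review", {"proposed": decision, "confidence": confidence})

channel, payload = route_decision("approve_limit_increase", 0.72)
```

The point of embedding this in the pipeline, rather than bolting it on afterwards, is that every new use case inherits the escalation path automatically instead of triggering a fresh governance review.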

“Ultimately, there is no agentic future without trust and no trust without governance that keeps pace,” explains Steve Chase, Global Head of AI and Digital Innovation at KPMG International. “The survey makes clear that sustained investment in people, training and change management is what allows organisations to scale AI responsibly and capture value.”

Regional divergence and what it signals for global deployment

For multinationals managing AI programmes across regions, the KPMG data flags material differences in deployment velocity and organisational posture that will affect global rollout planning.

ASPAC is advancing most aggressively on agent scaling; 49 percent of organisations there are scaling AI agents, compared with 46 percent in the Americas and 42 percent in EMEA. ASPAC also leads on the more complex capability of orchestrating multi-agent systems, at 33 percent.

The barrier profiles also differ in ways that carry real operational implications. In both ASPAC and EMEA, 24 percent of organisations cite a lack of leadership trust and buy-in as a primary barrier to AI agent deployment. In the Americas, that figure drops to 17 percent.

Agentic systems, by definition, make or initiate decisions without per-instance human approval. In organisational cultures where decision accountability is tightly concentrated at the senior level, this can generate institutional resistance that no amount of technical capability resolves. The fix is governance design; specifically, defining in advance what categories of decision an agent is authorised to make autonomously, what triggers escalation, and who carries accountability for agent-initiated outcomes.
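One way to express such an authorisation policy is as explicit configuration rather than prose. The sketch below is purely illustrative, with hypothetical action names and owners; the key design choice is that anything not explicitly authorised escalates by default.

```python
# Hypothetical action names and owners; a real policy would live in
# version-controlled configuration with sign-off from the risk function.
AGENT_POLICY = {
    "reissue_card":        {"autonomous": True,  "owner": "ops_lead"},
    "waive_fee_under_50":  {"autonomous": True,  "owner": "ops_lead"},
    "approve_credit_line": {"autonomous": False, "owner": "credit_committee"},
}

def is_authorised(action):
    """Unknown or unlisted actions are never autonomous: escalate by default."""
    entry = AGENT_POLICY.get(action)
    return bool(entry and entry["autonomous"])
```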

The expectation gap around human-AI collaboration is also worth noting for anyone designing agent-assisted workflows at a global scale.

East Asian respondents anticipate AI agents leading projects at a rate of 42 percent. Australian respondents prefer human-directed AI at 34 percent. North American respondents lean toward peer-to-peer human-AI collaboration at 31 percent. These differences will affect how agent-assisted processes need to be designed in different regional deployments of the same underlying system, adding localisation complexity that is easy to underestimate in centralised platform planning.

One data point in the KPMG survey that deserves particular attention from CFOs and boards: 74 percent of respondents say AI will remain a top investment priority even in the event of a recession. This is either a sign of genuine conviction about AI’s role in cost structure and competitive positioning, or it reflects a collective commitment that has not yet been tested against actual budget pressure. Probably both, in different proportions across different organisations.

What it does indicate is that the window for organisations still in the experimentation phase is not indefinite. If the 11 percent of AI leaders continue to compound their advantage (and the KPMG data suggests the mechanisms for doing so are in place), the question for the remaining 89 percent is not whether to accelerate AI deployment, but how to do so without compounding the integration debt and governance deficits that are already constraining their returns.

See also: Hershey applies AI across its supply chain operations


The post KPMG: Inside the AI agent playbook driving enterprise margin gains appeared first on AI News.

]]>
Secure governance accelerates financial AI revenue growth https://www.artificialintelligence-news.com/news/secure-governance-accelerates-financial-ai-revenue-growth/ Mon, 30 Mar 2026 15:54:58 +0000 https://www.artificialintelligence-news.com/?p=112817 Financial institutions are learning to deploy compliant AI solutions for greater revenue growth and market advantage. For the better part of ten years, financial institutions viewed AI primarily as a mechanism for pure efficiency gains. During that era, quantitative teams programmed systems designed to discover ledger discrepancies or eliminate milliseconds from automated trading execution times. […]

The post Secure governance accelerates financial AI revenue growth appeared first on AI News.

]]>
Financial institutions are learning to deploy compliant AI solutions for greater revenue growth and market advantage.

For the better part of ten years, financial institutions viewed AI primarily as a mechanism for pure efficiency gains. During that era, quantitative teams programmed systems designed to discover ledger discrepancies or eliminate milliseconds from automated trading execution times. As long as the quarterly balance sheets reflected positive gains, stakeholders outside the core engineering groups rarely scrutinised the actual maths driving these returns.

The arrival of generative applications and highly complex neural networks completely dismantled that widespread state of comfortable ignorance. Today, it’s not acceptable for banking executives to approve new technology rollouts based simply on promises of accurate predictive capabilities.

Across Europe and North America, lawmakers are aggressively drafting legislation aimed at punishing institutions that utilise opaque algorithmic decision-making processes. Consequently, the dialogue within corporate boardrooms has narrowed intensely to focus on safe AI deployment, ethics, model oversight, and legislation specific to the financial industry.

Institutions that choose to ignore this impending regulatory reality actively place their operational licenses in jeopardy. However, treating this transition purely as a compliance exercise ignores the immense commercial upside. Mastering these requirements creates a highly efficient operational pipeline where good governance functions as a massive accelerant for product delivery rather than an administrative handbrake.

Commercial lending and the price of opacity

The mechanics of retail and commercial lending perfectly illustrate the tangible business impact of proper algorithmic oversight.

Consider a scenario where a multinational bank introduces a deep learning framework to process commercial loan applications. This automated system evaluates credit scores, market sector volatility, and historical cash flows to generate an approval decision in a matter of milliseconds. The resulting competitive edge is immediate and obvious, as the institution reduces administrative overhead while clients secure necessary liquidity exactly when they require it.

However, the inherent danger of this velocity resides entirely within the training data. If the deployed model unknowingly utilises proxy variables that discriminate against a specific demographic or geographic area, the ensuing legal consequences are swift and punishing.

Modern regulators demand total explainability and categorically refuse to accept the complexity of neural networks as an excuse for discriminatory outcomes. When an external auditor investigates why a regional logistics enterprise was denied funding, the bank must possess the capability to trace that exact denial directly back to the specific mathematical weights and historical data points that caused the rejection.

Investing capital into ethics and oversight infrastructure is essentially how modern banks purchase speed-to-market. Constructing an ethically-sound and thoroughly vetted pipeline enables an institution to release new digital products without constantly looking over its shoulder out of fear. Guaranteeing fairness from the absolute beginning prevents nightmarish scenarios that involve delayed product rollouts and retrospective compliance audits. This level of operational confidence translates directly into sustained revenue generation while entirely avoiding massive regulatory penalties.

Engineering unbroken information provenance

Achieving this high standard of safety is impossible without adopting a brutal and uncompromising approach toward internal data maturity. Any algorithm merely reflects the information it consumes. 

Unfortunately, legacy banking institutions are infamous for maintaining highly fractured information architectures. It remains incredibly common to discover customer details resting on thirty-year-old mainframe systems, transaction histories floating in public cloud environments, and risk profiles gathering dust within entirely separate databases. Attempting to navigate this disjointed landscape makes achieving regulatory compliance physically impossible.

To rectify this, data officers must enforce the widespread adoption of comprehensive metadata management across the entire enterprise. Implementing strict data lineage tracking represents the only viable path forward. For example, if a live production model suddenly exhibits bias against minority-owned businesses, engineering teams require the exact capability to surgically isolate the specific dataset responsible for poisoning the results.

Constructing this underlying infrastructure mandates that every single byte of ingested training data becomes cryptographically signed and tightly version-controlled. Modern enterprise platforms must maintain an unbroken chain of custody for every input, stretching all the way from a customer’s initial interaction to the final algorithmic ruling.
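A minimal sketch of such a chain of custody, assuming SHA-256 content hashing with each dataset version linked to its predecessor (dataset names and figures are hypothetical): altering any earlier version changes every subsequent hash, making tampering evident.

```python
import hashlib
import json

def record_version(prev_hash, payload):
    """Hash a dataset version together with its predecessor's hash."""
    blob = json.dumps({"prev": prev_hash, "data": payload}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def verify(prev_hash, payload, expected):
    """Recompute the hash and confirm the record has not been altered."""
    return record_version(prev_hash, payload) == expected

h0 = record_version("genesis", {"dataset": "loans_v1", "rows": 120_000})
h1 = record_version(h0, {"dataset": "loans_v2", "rows": 125_500})
```

Production systems would add cryptographic signatures over these hashes and tie each model artefact to the exact chain entry it was trained on.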

Beyond data storage, integration issues arise when connecting advanced vector databases to these legacy systems. Vector embeddings require massive compute resources to process unstructured financial documents. If these databases are not perfectly synchronised with real-time transactional feeds, the AI risks generating severe hallucinations, presenting outdated or entirely fabricated financial advice as absolute fact.

Furthermore, as we’re currently all too aware, economic environments change at a rapid pace. A model trained on interest rates from three years ago will fail spectacularly in today’s market. Technology teams refer to this specific phenomenon as concept drift.

To combat this, developers must wire continuous monitoring systems directly into their live production algorithms. These specialised tools observe the model’s output in real-time, actively comparing results against baseline expectations. If the system begins to drift outside approved ethical parameters, the monitoring software automatically suspends the automated decision-making process.

Exceptional predictive accuracy means absolutely nothing without real-time observability; without it, a highly-tuned model becomes a corporate liability waiting to explode.
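The monitoring loop described above can be sketched as a rolling comparison against a baseline. This simplified example (thresholds and rates invented, not production-grade) latches a suspension flag once the observed approval rate drifts beyond tolerance, leaving reinstatement to humans.

```python
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_rate, tolerance, window=100):
        self.baseline = baseline_rate
        self.tolerance = tolerance
        self.window = deque(maxlen=window)  # rolling record of outcomes
        self.suspended = False

    def observe(self, approved):
        """Record an outcome; suspend automation if the rate drifts too far."""
        self.window.append(1 if approved else 0)
        rate = sum(self.window) / len(self.window)
        if abs(rate - self.baseline) > self.tolerance:
            self.suspended = True  # latches until humans intervene
        return self.suspended

monitor = DriftMonitor(baseline_rate=0.60, tolerance=0.15, window=20)
for outcome in [True] * 10:  # a sustained run of approvals breaches tolerance
    status = monitor.observe(outcome)
```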

Defending the mathematical perimeter

Of course, implementing governance over financial algorithms introduces an entirely new category of operational headaches for CISOs. Traditional cybersecurity disciplines focus primarily on building protective walls around endpoints and corporate networks. Securing advanced AI, however, requires actively defending the actual mathematical integrity of the deployed models. This represents a complex discipline that most internal security operations centres barely understand.

Adversarial attacks present a very real and present danger to modern financial institutions. In a scenario known as a data poisoning attack, malicious actors subtly manipulate the external data feeds that a bank relies upon to train its internal fraud detection models. By doing so, they essentially teach the algorithm to turn a blind eye to specific and highly-lucrative types of illicit financial transfers.

Consider also the threat of prompt injection, where attackers utilise natural language inputs to trick generative customer service bots into freely handing over sensitive account details. Model inversion represents another nightmare scenario for executives, occurring when outsiders repeatedly query a public-facing algorithm until they successfully reverse-engineer the highly confidential financial data buried deep within its training weights.

To counter these evolving threats, security teams are forced to bury zero-trust architectures deep within the machine learning operations pipeline. Absolute device trust becomes non-negotiable. Only fully-authenticated data scientists, working exclusively on locked-down corporate endpoints, should ever possess the administrative permissions required to tweak model weights or introduce new data to the system.

Before any algorithm touches live financial data, it must successfully survive rigorous adversarial testing. Internal red teams must intentionally attempt to break the algorithm’s ethical guardrails using sophisticated simulation techniques. Surviving these simulated corporate attacks serves as a mandatory prerequisite for any public deployment.

Eradicating the engineering and compliance divide

The highest barrier to creating safe AI is rarely the underlying software itself; rather, it is the entrenched corporate culture.

For decades, a very thick wall separated software engineering departments from legal compliance teams. Developers were heavily incentivised to chase speed and rapid feature delivery. Conversely, compliance officers chased institutional safety and maximum risk mitigation. These groups typically operated from entirely different floors, used different software applications, and followed entirely different performance incentives.

That division has to come down. Data scientists can no longer construct models in an isolated engineering vacuum and then carelessly toss them over the fence to the legal team for a quick blessing. Legal constraints, ethical guidelines, and strict compliance rules must dictate the exact architecture of the algorithm starting on day one. Leaders need to actively force this internal collaboration by establishing cross-functional ethics boards. Banks should pack these specific committees with lead developers, corporate counsel, risk officers, and external ethicists.

When a particular business unit pitches a new automated wealth management application, this ethics board dissects the entire project. They must look past the projected profitability margins to deeply interrogate the societal impact and regulatory viability of the proposed tool.

By retraining software developers to view compliance as a core design requirement rather than annoying red tape, a bank actively builds a lasting culture of responsible innovation.

Managing vendor ecosystems and retaining control

The enterprise technology market recognises the urgency surrounding compliance and is aggressively pumping out algorithmic governance solutions.

The major cloud service providers now bake sophisticated compliance dashboards directly into their AI platforms. These tech giants offer banks automated audit trails, reporting templates designed to satisfy global regulators, and built-in bias-detection algorithms.

Simultaneously, a smaller ecosystem of independent startups offers highly specialised governance services. These agile firms focus entirely on testing model explainability or spotting complex concept drift exactly as it happens.

Purchasing these vendor solutions is highly tempting. Buying off-the-shelf software offers operational convenience and allows the enterprise to deploy governed algorithms without writing heavy auditing infrastructure from scratch. Startups are rapidly building application programming interfaces that plug directly into legacy banking systems, providing instant, third-party validation of internal models.

Despite these advantages, relying entirely on outsourced governance introduces a risk of vendor lock-in. If a bank ties its entire compliance architecture to one hyperscale cloud provider, migrating those specific models later to satisfy a new local data sovereignty law becomes an expensive and multi-year nightmare. 

A hard line must be drawn regarding open standards and system interoperability. The specific tools tracking data lineage and auditing model behaviour have to be completely portable across different environments. The bank must retain absolute control over its compliance posture, regardless of whose physical servers actually hold the algorithm.

Vendor contracts require ironclad provisions guaranteeing data portability and safe model extraction. A financial institution must always own its core intellectual property and internal governance frameworks. 

By fixing internal data maturity, securing the development pipeline against adversarial threats, and forcing legal and engineering teams to actually speak to one another, leaders can safely deploy modern algorithms. Treating strict compliance as the absolute foundation of engineering guarantees that AI drives secure and sustainable growth.

See also: Ocorian: Family offices turn to AI for financial data insights


The post Secure governance accelerates financial AI revenue growth appeared first on AI News.

]]>
Glia wins Excellence Award for safer AI in banking https://www.artificialintelligence-news.com/news/glia-wins-excellence-award-for-safer-ai-in-banking/ Mon, 30 Mar 2026 14:12:59 +0000 https://www.artificialintelligence-news.com/?p=112814 Glia, a customer service platform providing AI-powered interactions for the banking sector, has been named a winner in the Banking and Financial Services Category at the 2026 Artificial Intelligence Excellence Awards. The awards recognises achievements in a range of industries and use cases, spotlighting “companies and leaders moving AI beyond experimentation and into practical, accountable […]

The post Glia wins Excellence Award for safer AI in banking appeared first on AI News.

]]>
Glia, a customer service platform providing AI-powered interactions for the banking sector, has been named a winner in the Banking and Financial Services Category at the 2026 Artificial Intelligence Excellence Awards. The awards recognise achievements in a range of industries and use cases, spotlighting “companies and leaders moving AI beyond experimentation and into practical, accountable deployment.”

Speaking on the awards, Russ Fordyce, Chief Recognition Officer at Business Intelligence Group commented, “AI has arrived! 2026 is about execution and results. Glia stood out because its work in banking reflects where the market is headed: practical AI that solves real problems, earns trust, and delivers measurable value. The recognition highlights a team that is not participating in the AI shift, but helping define what meaningful progress looks like.”

Glia’s Banking AI platform helps financial institutions navigate security and regulatory risks common in generative AI. It was chosen by a panel of AI experts and analysts as a platform that deploys AI trained specifically for banking workflows. It helps banks and credit unions automate up to 80% of all interactions, according to Glia. For the customer service and member care functions, this can free up time for other tasks, including strengthening client relationships and expanding lending and deposit portfolios; in other words, doing what humans can do and AI can’t.

Dan Michaeli, CEO and co-founder of Glia, said: “The award celebrates the future of banking in a time where AI is everywhere. With consumers in every demographic now using AI to manage their lives, the pressure on financial institutions to provide instant, intelligent service has never been higher.”

“Our platform is designed to help banks and credit unions lead this transition, using secure, banking-specific AI to amplify their efficiency while protecting the human connection that defines their brand,” he said.

Glia has enjoyed positive business momentum, recently announcing it will be the first to contractually promise to resist AI hallucinations and circumvent prompt injections for its clients’ use of the platform.

As AI becomes increasingly complex, particularly in financial institutions, Glia’s focus on AI safety provides a model that banks and credit unions might rely on to help them use AI effectively and securely.

(Image source: “Space Invaders does cones and safety barriers” by Gene Hunt is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0)


The post Glia wins Excellence Award for safer AI in banking appeared first on AI News.

]]>
Assessing AI powered price forecasting tools in currency markets https://www.artificialintelligence-news.com/news/assessing-ai-powered-price-forecasting-tools-in-currency-markets/ Mon, 30 Mar 2026 11:49:38 +0000 https://www.artificialintelligence-news.com/?p=112804 As artificial intelligence becomes a driving force in financial prediction, the reliability of its forecasting tools faces increasing scrutiny. Many traders question whether claims of high accuracy translate into consistent results under live market conditions. Understanding how these AI systems are evaluated reveals important distinctions between performance in theory and practice. Few financial domains are […]

The post Assessing AI powered price forecasting tools in currency markets appeared first on AI News.

]]>
As artificial intelligence becomes a driving force in financial prediction, the reliability of its forecasting tools faces increasing scrutiny. Many traders question whether claims of high accuracy translate into consistent results under live market conditions. Understanding how these AI systems are evaluated reveals important distinctions between performance in theory and practice.

Few financial domains are as dependent on accurate prediction as forex trading, where slight changes in exchange rates can have significant consequences for participants. The surge of AI powered price forecasting tools has brought new capabilities, but it has also raised questions about what constitutes meaningful accuracy. In this rapidly evolving landscape of predictive technology, traders want clarity on how well these tools perform and which factors should inform their assessment of forecasts in live environments.

Scrutinising claims of accuracy in predictive tools

Accuracy claims regarding AI forecasting in currency markets are often presented optimistically, particularly when based on controlled demonstrations. These scenarios typically reflect historical data or optimised backtests, which can differ sharply from the volatility and unpredictability seen in live trading environments. The central issue lies in the gap between demonstration results and how models react to real-time market changes. While technical accuracy metrics are frequently referenced, their practical meaning for financial decision-making can remain ambiguous.

When evaluating the accuracy of AI powered price forecasting tools, it is crucial to clarify what “accuracy” represents in this context. For some, accuracy might mean correctly predicting the direction of currency moves, while for others, it could relate to the exact magnitude or timing of price changes. The complexity of forex, with its fast-moving variables and interdependencies, underscores why simplistic accuracy scores rarely provide the full picture. Professional users often demand both statistical rigour and domain expertise to interpret results effectively.

Understanding the mechanics behind AI market predictions

AI powered price forecasting tools commonly employ machine learning models specialised for time series prediction. These tools typically use advanced architectures like recurrent neural networks, convolutional neural networks, or transformer-based models designed to capture sequential patterns in financial data. They rely on inputs ranging from historical pricing and trading volumes to macroeconomic indicators and alternative data sources, including geopolitical events or sentiment analysis from news and social media.

There are varied approaches in predictive modeling, with some systems focusing on point predictions that offer specific future prices, while others generate probabilistic forecasts reflecting outcome likelihoods in confidence intervals. The distinction affects how users interpret and trust model outputs. Although probabilistic methods can better accommodate market uncertainty, understanding distributional forecast accuracy and related concepts requires additional expertise. This complexity highlights why headline accuracy figures alone are not sufficient for assessing a system’s practical value.
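
To make the distinction concrete, here is a minimal Python sketch – purely illustrative, with invented figures and a hypothetical helper function, not taken from any vendor's tool – that wraps a point forecast in an empirical prediction interval derived from past forecast errors:

```python
# Hypothetical sketch: turning a point forecast into a rough probabilistic
# one by attaching an empirical interval built from historical one-step
# forecast errors. All numbers are invented for illustration.

def empirical_interval(point_forecast, past_errors, coverage=0.8):
    """Return (low, high) bounds around a point forecast using
    nearest-rank quantiles of past forecast errors (forecast minus actual)."""
    errors = sorted(past_errors)
    alpha = (1.0 - coverage) / 2.0

    def quantile(q):
        # Nearest-rank quantile of the pre-sorted error list
        return errors[round(q * (len(errors) - 1))]

    return point_forecast + quantile(alpha), point_forecast + quantile(1.0 - alpha)

# Ten illustrative past errors for a currency pair, in price units
past_errors = [-0.0012, -0.0007, -0.0003, 0.0001, 0.0004,
               0.0006, 0.0009, 0.0011, 0.0015, 0.0021]
low, high = empirical_interval(1.0850, past_errors, coverage=0.8)
print(f"80% interval around 1.0850: [{low:.4f}, {high:.4f}]")
```

A user of such output reads it as "the next price should fall inside this band about 80% of the time" – exactly the distributional framing that a single headline accuracy figure omits.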

Evaluating model performance with robust accuracy metrics

Practitioners typically assess AI powered price forecasting tools using a range of evaluation metrics, each shedding light on different facets of prediction quality. Directional accuracy measures whether forecasts correctly predict upward or downward movement of currency pairs, while metrics like mean absolute error or root mean squared error focus on the magnitude of prediction errors. Calibration, which reflects how well predicted probabilities align with actual market occurrences, adds another important dimension.
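
As an illustration of what these metric families actually compute, the following self-contained sketch uses toy prices and forecasts (invented, not real market data):

```python
# Illustrative only - toy numbers showing directional accuracy, MAE and RMSE.
import math

def directional_accuracy(prev, forecast, actual):
    """Share of periods where forecast and realised move (relative to the
    previous price) point the same way."""
    hits = sum(1 for p, f, a in zip(prev, forecast, actual)
               if (f - p) * (a - p) > 0)
    return hits / len(prev)

def mae(forecast, actual):
    """Mean absolute error: average size of the miss."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

def rmse(forecast, actual):
    """Root mean squared error: penalises large misses more heavily."""
    return math.sqrt(sum((f - a) ** 2 for f, a in zip(forecast, actual)) / len(actual))

prev     = [1.1000, 1.1010, 1.1005, 1.0990]   # price at forecast time
forecast = [1.1012, 1.1004, 1.0988, 1.0985]   # model's next-period call
actual   = [1.1010, 1.1005, 1.0990, 1.0995]   # what the market did

da = directional_accuracy(prev, forecast, actual)
print(da, mae(forecast, actual), rmse(forecast, actual))
```

Note how the metrics can disagree: here the direction hit rate is 75% even though one large miss dominates the RMSE. Calibration would additionally compare predicted probabilities with realised frequencies, which requires probabilistic output.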

Meaningful assessment requires benchmarks and rigorous out-of-sample testing, because models effective on past data may not remain reliable as markets change. Overfitting, where models treat noise as signal, can cause high-scoring tools to lose effectiveness once deployed. Similarly, regime shifts and nonstationarity in forex can quickly undermine predictive accuracy, highlighting the importance of ongoing monitoring and validation. It is recognised that participants benefit from understanding both the strengths and limitations of these tools before integrating them into operational processes.
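
The out-of-sample discipline described above can be sketched as a walk-forward loop. In this hypothetical example the "model" is a deliberately naive last-value (persistence) forecaster – the point is the splitting logic, not the model:

```python
# Walk-forward evaluation sketch: at each step, fit/score using only data
# available up to that point, then score on the next observation.

def walk_forward_errors(series, min_train=3):
    errors = []
    for t in range(min_train, len(series)):
        train = series[:t]            # strictly past data - no look-ahead
        forecast = train[-1]          # naive persistence forecast
        errors.append(abs(forecast - series[t]))
    return errors

prices = [1.10, 1.11, 1.12, 1.11, 1.13, 1.12]
errs = walk_forward_errors(prices)
print(errs)
```

A real evaluation would re-fit the model at each step and compare it against such a naive baseline – a tool that cannot beat persistence out of sample adds little practical value.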

Navigating real world frictions and effective risk controls

When AI powered price forecasting tools are integrated into live strategies, various real world frictions become significant. Issues like latency – the delay between signal and execution – along with slippage, spread widening, and inconsistent execution quality, may degrade results observed in backtesting. Data quality concerns and the risk of look-ahead bias present ongoing challenges, particularly if datasets inadvertently include future information unavailable at decision time. As algorithmic signals become more prevalent, financial markets may adapt, reducing the effectiveness of commonly used forecasting techniques.

Effective deployment requires a blend of quantitative insight and robust risk management. Rather than relying solely on single-point forecasts, applying confidence intervals and scenario analysis can yield greater operational stability. Position sizing rules and drawdown controls, combined with continuous stress testing during volatile periods, help mitigate the effects of erroneous predictions. Ongoing review and adaptation, grounded in an understanding of model limitations and maintained with human oversight, are essential for the sustainable application of AI powered price forecasting tools in currency markets.
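
One of the controls mentioned, a drawdown limit, can be sketched in a few lines; the threshold and equity values below are invented purely for illustration:

```python
# Hedged sketch of a drawdown control: halt trading once equity falls a
# set fraction below its running peak.

def drawdown_halt_index(equity_curve, max_drawdown=0.10):
    """Return the index at which trading would be halted, or None if the
    curve never breaches the drawdown limit."""
    peak = float("-inf")
    for i, equity in enumerate(equity_curve):
        peak = max(peak, equity)       # track the running high-water mark
        if peak > 0 and (peak - equity) / peak > max_drawdown:
            return i
    return None

curve = [100.0, 104.0, 103.0, 97.0, 92.0, 95.0]
halt_at = drawdown_halt_index(curve, max_drawdown=0.10)
print(halt_at)
```

In practice the response might be scaling positions down rather than a hard stop, but the running-peak bookkeeping is the same.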

(Image source: Bazoom)

 


The post Assessing AI powered price forecasting tools in currency markets appeared first on AI News.

]]>
JPMorgan begins tracking how employees use AI at work https://www.artificialintelligence-news.com/news/jpmorgan-begins-tracking-how-employees-use-ai-at-work/ Mon, 30 Mar 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=112785 Banking house JPMorgan Chase is asking its roughly 65,000 engineers and technologists to use AI tools as part of their regular workflow. Business Insider reported that managers are tracking how often staff use these tools. That use may also influence performance reviews. The report states employees are encouraged to use tools like ChatGPT and Claude […]

The post JPMorgan begins tracking how employees use AI at work appeared first on AI News.

]]>
Banking house JPMorgan Chase is asking its roughly 65,000 engineers and technologists to use AI tools as part of their regular workflow. Business Insider reported that managers are tracking how often staff use these tools. That use may also influence performance reviews.

The report states employees are encouraged to use tools like ChatGPT and Claude Code when writing code, reviewing documents, or handling routine tasks. Internal systems then classify workers based on their level of use. Some are labelled “light users,” while others fall into a “heavy user” category.

JPMorgan has been using AI in areas like fraud detection and risk analysis for some time. What stands out here is not the technology itself, but how it is being woven into day-to-day expectations for staff.

According to internal materials cited by Business Insider, managers are paying close attention to how employees use AI tools.

What JPMorgan shows about AI adoption in banks

Many companies have spent the past two years rolling out AI tools across departments. In most cases, adoption has been uneven. Some teams experiment heavily, while others stick to existing workflows.

JPMorgan is treating AI as a standard part of the job. That creates a more uniform level of adoption across teams. In the past, performance reviews focused on output and accuracy. Now, they may also include how effectively employees use AI tools to reach those results.

That raises a practical question for large organisations. If AI can reduce the time needed for certain tasks, should employees be expected to produce more work in the same amount of time?

Keeping pace with internal change

By tracking use, the bank may be trying to avoid a familiar problem in enterprise software rollouts. Tools are deployed, but adoption is slow, limiting their impact. Making AI part of performance reviews creates a stronger incentive to engage with the technology. It also suggests that AI literacy is becoming a baseline skill, similar to how spreadsheets or code tools became standard over time.

New challenges include employees feeling pressure to use AI even in cases where it does not clearly improve the outcome. There is also the matter of how to measure “good” use, as opposed to simply frequent use.

JPMorgan’s AI risks and efficiency gains

Banks operate in a regulated environment, where introducing AI into more workflows increases the need for oversight.

Tools like ChatGPT and Claude Code can help summarise information or generate drafts, but they can also produce incorrect or incomplete results. That means employees still need to verify outputs before using them in decision-making or client-facing work.

JPMorgan has developed internal controls for AI systems in areas like trading and risk. Expanding use across a broader group of employees may require similar safeguards, leaving the bank balancing its push for efficiency against the need to ensure that heavier AI use does not introduce new risks.

Other financial institutions are likely watching closely. If tying AI use to performance leads to measurable gains in productivity, similar models may spread across the sector.

The bank’s approach may reshape how companies hire and train employees, and skills like prompt writing and output checking could become part of standard job requirements. JPMorgan’s move suggests that this change is already underway, at least in banking.

(Photo by IKECHUKWU JULIUS UGWU)

See also: RPA matters, but AI changes how automation works

Want to experience the full spectrum of enterprise technology innovation? Join TechEx in Amsterdam, California, and London. Covering AI, Big Data, Cyber Security, IoT, Digital Transformation, Intelligent Automation, Edge Computing, and Data Centres, TechEx brings together global leaders to share real-world use cases and in-depth insights. Click here for more information.


The post JPMorgan begins tracking how employees use AI at work appeared first on AI News.

]]>
Ocorian: Family offices turn to AI for financial data insights https://www.artificialintelligence-news.com/news/ocorian-family-offices-ai-for-financial-data-insights/ Wed, 25 Mar 2026 14:58:29 +0000 https://www.artificialintelligence-news.com/?p=112774 To gain financial data insights, the majority of family offices now turn to AI, according to new research from Ocorian. The global study reveals 86 percent of these private wealth groups are utilising AI to improve their daily operations and data analysis. Representing a combined wealth of $119.37 billion, these organisations want machine learning to […]

The post Ocorian: Family offices turn to AI for financial data insights appeared first on AI News.

]]>
To gain financial data insights, the majority of family offices now turn to AI, according to new research from Ocorian. The global study reveals 86 percent of these private wealth groups are utilising AI to improve their daily operations and data analysis.

Representing a combined wealth of $119.37 billion, these organisations want machine learning to modernise their workflows. The technology offers practical benefits for institutions handling complex portfolios, particularly in detecting anomalies, streamlining reporting, and navigating strict regulatory frameworks.

Securing financial data insights via AI and system governance

Implementing these tools requires careful alignment with existing enterprise architectures. Financial institutions frequently rely on major cloud ecosystems, such as Microsoft Azure or Google Cloud, to provide the necessary computing power and security protocols for advanced data processing. By using these platforms, operations teams can deploy machine learning models that identify potential fraud patterns or compliance breaches much faster than manual reviews allow.
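
As a toy illustration of the kind of anomaly flagging described (the article does not disclose the actual models in use), a median-based "modified z-score" can flag transaction amounts that sit far from the bulk of historical values:

```python
# Toy illustration only - not Ocorian's or any cloud vendor's real model.
# 3.5 is a common rule-of-thumb cutoff for the modified z-score.
import statistics

def flag_anomalies(amounts, threshold=3.5):
    med = statistics.median(amounts)
    # Median absolute deviation: robust to the very outliers we want to catch
    mad = statistics.median([abs(a - med) for a in amounts])
    if mad == 0:
        return []
    return [a for a in amounts if 0.6745 * abs(a - med) / mad > threshold]

txns = [120.0, 98.5, 110.0, 105.2, 99.9, 4750.0, 101.3]
print(flag_anomalies(txns))
```

Production systems would layer far richer features (merchant, time, counterparty) and learned models on top, but the principle is the same: score deviation from expected behaviour, then route the flags to a human reviewer.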

While 26 percent of surveyed wealth executives strongly agree that AI will reshape administration and boost performance within the next year, 72 percent expect the broader effects to materialise over a two to five-year horizon.

This cautious timeline reflects the reality of integrating complex algorithms into highly regulated environments. Introducing new systems without disrupting daily client services presents a major challenge. Legacy data architectures often require heavy re-engineering before they can fully support predictive analytics.

Michael Harman, Commercial Director for the UK and Channel Islands at Ocorian, said: “Family offices are gradually adopting AI and technology as part of their operations and are particularly using it for data insights … there is a realisation that it will have a major impact and family offices need to start exploring the sector and will need support in making the transition.”

Balancing operational upgrades with capital exposure

Despite high operational adoption rates, direct capital allocation into the AI sector remains low. Only seven percent of respondents across 16 territories – including the UK, US, UAE, and Singapore – are currently seeking direct investment opportunities in such technology firms.

This current hesitation highlights a preference for using proven enterprise solutions rather than absorbing the venture-style risks associated with emerging startups. Leaders are focused on immediate operational stability and verifiable returns on investment.

However, this dynamic is likely to change rapidly over the next three years, as 74 percent of these organisations expect to increase their investments in digital assets. Within that group, 20 percent plan to increase their financial commitment to the sector dramatically.

Outsourcing the technical burden to established service providers allows institutions to benefit from enhanced fraud detection and compliance monitoring without directly managing the algorithmic infrastructure. Success will depend on establishing clean data pipelines and ensuring cross-functional teams understand how to interpret algorithmic outputs for risk assessment.

By prioritising secure and scalable cloud platforms, and focusing on specific operational pain points like regulatory reporting, financial leaders can effectively use these AI capabilities to bolster their data insights while maintaining the necessary oversight required in modern wealth management.

See also: AI agents enter banking roles at Bank of America



The post Ocorian: Family offices turn to AI for financial data insights appeared first on AI News.

]]>
AI agents enter banking roles at Bank of America https://www.artificialintelligence-news.com/news/ai-agents-enter-banking-roles-at-bank-of-america/ Wed, 25 Mar 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=112768 AI agents are starting to take on a more direct role in how financial advice is delivered, as large banks move into systems that support client interactions. Bank of America is now deploying an internal AI-powered advisory platform to a subset of financial advisers, rolled out to around 1,000 financial advisers, according to Banking Dive. […]

The post AI agents enter banking roles at Bank of America appeared first on AI News.

]]>
AI agents are starting to take on a more direct role in how financial advice is delivered, as large banks move into systems that support client interactions.

Bank of America is now deploying an internal AI-powered advisory platform, with an initial rollout to around 1,000 financial advisers, according to Banking Dive. The move is one of the clearer early examples of how AI is being used in core banking roles, where systems support decision-making in real time.

The platform is based on Salesforce’s Agentforce, which enables the creation of AI agents to handle defined tasks. It is designed to help advisers handle client queries, prepare recommendations, and manage daily workflows. According to Banking Dive, the system is part of a wider push among major banks to test how AI agents can work alongside human staff.

Bank of America has been expanding its use of AI across its business. The bank says its virtual assistant Erica handles work equivalent to about 11,000 employees, while 18,000 software developers use AI coding tools that have improved productivity by around 20%.

AI agents move to financial decision-making

The approach differs from earlier deployments of AI in banking, which focused mainly on chatbots or internal productivity tools. In those cases, AI was used to answer simple questions or automate routine tasks. The newer systems are built to handle more complex work, including analysing client data.

Firms like JPMorgan, Wells Fargo, and Goldman Sachs are also testing AI tools aimed at improving productivity and helping staff in client-facing roles, though these efforts vary and are not always focused on adviser-specific AI agent systems. While each bank is taking a different approach, the common goal is to increase output without expanding headcount.

Based on industry reporting and early deployment feedback, banks are seeing gains in how quickly advisers can access information or prepare for meetings. Yet there are ongoing concerns about accuracy and oversight, especially when AI systems are used to suggest financial decisions.

Some analysts remain cautious about how quickly AI is changing banking. Wells Fargo analyst Mike Mayo wrote that recent developments have yet to produce major new products, describing the current phase as “a little boring from a product standpoint”.

Human oversight

Bank of America’s rollout stands out because of its scale. Financial advisers sit at the centre of the bank’s relationship with clients, particularly in wealth management. Introducing AI into that role suggests a growing level of trust in the technology. It also shows a willingness to let it influence how advice is formed and delivered.

Industry executives acknowledge AI is unlikely to completely replace expert roles when dealing with complex financial decisions or high-value clients, particularly in workflows where context and judgement matter.

This hybrid model is becoming more common in the sector. Firms are treating AI as a part of the workforce, with staff expected to work alongside systems day-to-day.

The limits of progress

There are also practical challenges. AI systems depend on clean, structured data, which is not always easy to achieve in large organisations. Integration with existing tools can take time, and staff may need training to use new systems effectively.

Regulation adds another layer of complexity. Financial institutions must ensure that AI-driven recommendations meet compliance standards and explain decisions if questioned by regulators. This requirement may limit the amount of autonomy provided to AI systems, particularly in areas like lending or investment advice.

Some estimates imply that up to one-third of banking jobs, or parts of those roles, could eventually be handled by AI. The introduction of AI agents into advisory roles raises questions about how the job itself may change. If systems can handle more of the analytical work, advisers may spend more time on client relationships and less on preparation. Over time, this could shift the skills required for the role.

Reliance on AI introduces new risks. Errors in data or model output could affect recommendations, and over-reliance on automated systems may reduce critical review by human staff. The issues are still being studied as deployments expand.

Bank of America’s rollout offers a view into how an AI transition may play out. It shows a large institution testing how far AI can be integrated into everyday work. As more banks follow a similar path, the focus is likely to shift to how AI can be managed once it becomes part of core operations.

See also: Visa prepares payment systems for AI agent-initiated transactions


The post AI agents enter banking roles at Bank of America appeared first on AI News.

]]>