TechEx Events - AI News https://www.artificialintelligence-news.com/categories/techex-events/ Mon, 13 Apr 2026 10:59:57 +0000

Companies expand AI adoption while keeping control https://www.artificialintelligence-news.com/news/companies-expand-ai-adoption-while-keeping-control/ Mon, 13 Apr 2026 10:00:00 +0000

The post Companies expand AI adoption while keeping control appeared first on AI News.

Many companies are taking a slower, more controlled approach to autonomous systems as AI adoption grows. Rather than deploying systems that act on their own, they are focusing on tools that assist human decision-making and keep control over outputs. This approach is especially clear in sectors where errors carry real financial or legal risk.

One example comes from S&P Global Market Intelligence, which builds AI tools into its Capital IQ Pro platform. The system is used by analysts to review company filings, earnings calls, and market data. Its AI features are designed to stay grounded in source material.

According to S&P Global Market Intelligence, its AI tools extract insights from structured and unstructured data, including transcripts and reports, while working with verified source data.

AI adoption ahead of autonomy

The current wave of AI tools in business is often described as a step toward autonomous agents. Systems may eventually plan tasks and act without direct human input. But most companies are not there yet. AI adoption is already widespread, with a majority of organisations using AI in at least one part of their business, according to research from McKinsey & Company. Many organisations have yet to scale AI in the enterprise, showing a disconnect between initial use and broader deployment.

Instead, AI helps with tasks like summarising documents or answering queries, but it does not act independently.

S&P Global Market Intelligence’s tools let users query large datasets through a chat interface, but the results are tied to verified financial content. In many cases, users can refer back to the underlying documents, lowering the risk of errors or unsupported outputs.
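The grounded-output pattern described here can be sketched as a small data structure that keeps each answer tied to the documents it was drawn from. The class and field names below are illustrative assumptions, not Capital IQ Pro's actual API.

```python
# Illustrative sketch: an answer object that carries references back to the
# verified documents it was drawn from, so users can check the underlying
# source. All names here are hypothetical, not Capital IQ Pro's actual API.

from dataclasses import dataclass, field

@dataclass
class GroundedAnswer:
    text: str
    sources: list[str] = field(default_factory=list)  # verified document IDs

    def is_supported(self) -> bool:
        """An answer with no source references should be treated as unsupported."""
        return len(self.sources) > 0

answer = GroundedAnswer(
    text="Revenue grew 8% year on year.",
    sources=["10-K-2025", "Q4-earnings-call-transcript"],
)
```

The point of the design is that an answer without source references is detectable as unsupported, rather than silently presented as fact.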

In its research, the company outlines AI governance as a process in which systems are designed and monitored, with attention to fairness and accountability.

AI in high-risk sectors

In finance, small errors can have large consequences. That shapes how AI is built and used. Tools like Capital IQ Pro are designed to support analysts, not replace them. The system may help surface insights or highlight trends, but final decisions still rest with human users.

The gap between adoption and value is becoming clearer. Many organisations report a gap between AI deployment and measurable business outcomes, according to findings from McKinsey & Company.

While autonomous systems may be able to handle certain tasks, companies often need clear accountability. When decisions affect investments, compliance, or reporting, there must be a way to explain how those decisions were made.

Research from S&P Global notes that organisations are increasingly focused on building governance frameworks to manage AI risks, including data quality issues and model bias.

Toward future systems

The difference between today’s controlled AI tools and future autonomous systems remains wide. Interest in more autonomous and agent-driven systems is also growing, even as most organisations remain in early stages of deployment. Systems that can explain their outputs, show their sources, and operate in defined limits are more likely to be trusted.

Autonomous agents may one day handle tasks like financial analysis or supply chain planning with minimal input. But without clear control mechanisms, their use will remain limited.

The themes will feature at AI & Big Data Expo North America 2026 on May 18 – 19. S&P Global Market Intelligence is listed as a bronze sponsor of the event. The agenda features topics like AI governance and the use of AI in regulated industries.

Balancing ability and control

The push toward autonomous AI is unlikely to slow down. Advances in large language models and agent-based systems continue to expand what AI can do.

Enterprise users are asking how to keep those systems under control. S&P Global Market Intelligence’s approach reflects that concern. By keeping AI grounded in verified data and placing humans at the centre of decision-making, it prioritises trust over autonomy.

As systems grow more capable, the ability to govern and control them could become just as important as the tasks they perform.

(Photo by Hitesh Choudhary)

See also: Why companies like Apple are building AI agents with limits

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. This comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

IBM: How robust AI governance protects enterprise margins https://www.artificialintelligence-news.com/news/ibm-how-robust-ai-governance-protects-enterprise-margins/ Fri, 10 Apr 2026 13:57:15 +0000

The post IBM: How robust AI governance protects enterprise margins appeared first on AI News.

To protect enterprise margins, business leaders must invest in robust AI governance to securely manage AI infrastructure.

When evaluating enterprise software adoption, a recurring pattern dictates how technology matures across industries. As Rob Thomas, SVP and CCO at IBM, recently outlined, software typically graduates from a standalone product to a platform, and then from a platform to foundational infrastructure, altering the governing rules entirely.

At the initial product stage, exerting tight corporate control often feels highly advantageous. Closed development environments iterate quickly and tightly manage the end-user experience. They capture and concentrate financial value within a single corporate entity, an approach that functions adequately during early product development cycles.

However, IBM’s analysis highlights that expectations change entirely when a technology solidifies into a foundational layer. Once other institutional frameworks, external markets, and broad operational systems rely on the software, the prevailing standards adapt to a new reality. At infrastructure scale, embracing openness ceases to be an ideological stance and becomes a highly practical necessity.

AI is currently crossing this threshold within the enterprise architecture stack. Models are increasingly embedded directly into the ways organisations secure their networks, author source code, execute automated decisions, and generate commercial value. AI functions less as an experimental utility and more as core operational infrastructure.

The recent limited preview of Anthropic’s Claude Mythos model brings this reality into sharper focus for enterprise executives managing risk. Anthropic reports that this specific model can discover and exploit software vulnerabilities at a level matching few human experts.

In response to this power, Anthropic launched Project Glasswing, a gated initiative designed to place these advanced capabilities directly into the hands of network defenders first. From IBM’s perspective, this development forces technology officers to confront immediate structural vulnerabilities. If autonomous models possess the capability to write exploits and shape the overall security environment, Thomas notes that concentrating the understanding of these systems within a small number of technology vendors invites severe operational exposure.

With models achieving infrastructure status, IBM argues the primary issue is no longer exclusively what these machine learning applications can execute. The priority becomes how these systems are constructed, governed, inspected, and actively improved over extended periods.

As underlying frameworks grow in complexity and corporate importance, maintaining closed development pipelines becomes exceedingly difficult to defend. No single vendor can successfully anticipate every operational requirement, adversarial attack vector, or system failure mode.

Implementing opaque AI structures introduces heavy friction across existing network architecture. Connecting closed proprietary models with established enterprise vector databases or highly sensitive internal data lakes frequently creates massive troubleshooting bottlenecks. When anomalous outputs occur or hallucination rates spike, teams lack the internal visibility required to diagnose whether the error originated in the retrieval-augmented generation pipeline or the base model weights.

Integrating legacy on-premises architecture with highly gated cloud models also introduces severe latency into daily operations. When enterprise data governance protocols strictly prohibit sending sensitive customer information to external servers, technology teams are left attempting to strip and anonymise datasets before processing. This constant data sanitisation creates enormous operational drag. 

Furthermore, the spiralling compute costs associated with continuous API calls to locked models erode the exact profit margins these autonomous systems are supposed to enhance. The opacity prevents network engineers from accurately sizing hardware deployments, forcing companies into expensive over-provisioning agreements to maintain baseline functionality.

Why open-source AI is essential for operational resilience

Restricting access to powerful applications is an understandable human instinct that closely resembles caution. Yet, as Thomas points out, at massive infrastructure scale, security typically improves through rigorous external scrutiny rather than through strict concealment.

This represents the enduring lesson of open-source software development. Open-source code does not eliminate enterprise risk. Instead, IBM maintains it actively changes how organisations manage that risk. An open foundation allows a wider base of researchers, corporate developers, and security defenders to examine the architecture, surface underlying weaknesses, test foundational assumptions, and harden the software under real-world conditions.

Within cybersecurity operations, broad visibility is rarely the enemy of operational resilience. In fact, visibility frequently serves as a strict prerequisite for achieving that resilience. Technologies deemed highly important tend to remain safer when larger populations can challenge them, inspect their logic, and contribute to their continuous improvement.

Thomas addresses one of the oldest misconceptions regarding open-source technology: the belief that it inevitably commoditises corporate innovation. In practical application, open infrastructure typically pushes market competition higher up the technology stack. Open systems transfer financial value rather than destroying it.

As common digital foundations mature, the commercial value relocates toward complex implementation, system orchestration, continuous reliability, trust mechanics, and specific domain expertise. IBM’s position asserts that the long-term commercial winners are not those who own the base technological layer, but rather the organisations that understand how to apply it most effectively.

We have witnessed this identical pattern play out across previous generations of enterprise tooling, cloud infrastructure, and operating systems. Open foundations historically expanded developer participation, accelerated iterative improvement, and birthed entirely new, larger markets built on top of those base layers. Enterprise leaders increasingly view open-source as highly important for infrastructure modernisation and emerging AI capabilities. IBM predicts that AI is highly likely to follow this exact historical trajectory.

Looking across the broader vendor ecosystem, leading hyperscalers are adjusting their business postures to accommodate this reality. Rather than engaging in a pure arms race to build the largest proprietary black boxes, highly profitable integrators are focusing heavily on orchestration tooling that allows enterprises to swap out underlying open-source models based on specific workload demands. Highlighting its ongoing leadership in this space, IBM is a key sponsor of this year’s AI & Big Data Expo North America, where these evolving strategies for open enterprise infrastructure will be a primary focus.

This approach completely sidesteps restrictive vendor lock-in and allows companies to route less demanding internal queries to smaller and highly efficient open models, preserving expensive compute resources for complex customer-facing autonomous logic. By decoupling the application layer from the specific foundation model, technology officers can maintain operational agility and protect their bottom line.
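That routing idea can be illustrated with a short sketch: cheap internal queries go to a small open model, and only demanding work reaches a larger one. The model names and the complexity heuristic below are hypothetical; a real deployment would use its own cost and capability signals.

```python
# Hypothetical sketch of workload-based model routing: routine internal
# queries go to a small open model, complex ones to a larger model.
# Model names and the complexity heuristic are illustrative only.

def estimate_complexity(query: str) -> int:
    """Crude proxy for workload demand: longer, multi-part queries score higher."""
    score = len(query.split())
    score += 10 * query.count("?")  # multi-question prompts cost more
    return score

def route(query: str, threshold: int = 25) -> str:
    """Return the model tier a query would be sent to."""
    if estimate_complexity(query) <= threshold:
        return "small-open-model"   # cheap, swappable, fine for routine queries
    return "large-model"            # reserved for complex, customer-facing work
```

Because the application layer only depends on the `route` decision, the underlying models can be swapped without touching the callers, which is the lock-in-avoidance point the paragraph above makes.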

The future of enterprise AI demands transparent governance

Another pragmatic reason for embracing open models revolves around product development influence. IBM emphasises that narrow access to underlying code naturally leads to narrow operational perspectives. Who gets to participate directly shapes which applications are eventually built.

Providing broad access enables governments, diverse institutions, startups, and varied researchers to actively influence how the technology evolves and where it is commercially applied. This inclusive approach drives functional innovation while simultaneously building structural adaptability and necessary public legitimacy.

As Thomas argues, once autonomous AI assumes the role of core enterprise infrastructure, relying on opacity can no longer serve as the organising principle for system safety. The most reliable blueprint for secure software has paired open foundations with broad external scrutiny, active code maintenance, and serious internal governance.

As AI permanently enters its infrastructure phase, IBM contends that identical logic increasingly applies directly to the foundation models themselves. The stronger the corporate reliance on a technology, the stronger the corresponding case for demanding openness.

If these autonomous workflows are truly becoming foundational to global commerce, then transparency ceases to be a subject of casual debate. According to IBM, it is an absolute, non-negotiable design requirement for any modern enterprise architecture.

See also: Why companies like Apple are building AI agents with limits


As AI agents take on more tasks, governance becomes a priority https://www.artificialintelligence-news.com/news/as-ai-agents-take-on-more-tasks-governance-becomes-a-priority/ Mon, 06 Apr 2026 10:00:00 +0000

The post As AI agents take on more tasks, governance becomes a priority appeared first on AI News.

AI systems are starting to move beyond simple responses. In many organisations, AI agents are now being tested to plan tasks, make decisions, and carry out actions with limited human input. It is no longer just about whether a model gives the right answer. It is about what happens when that model is allowed to act.

Autonomous systems need clear boundaries. They need rules that define what they can access, what they are allowed to do, and how their actions are tracked. Without those controls, even well-trained systems can create problems that are hard to detect or reverse.

One company working on this problem is Deloitte. The firm has been developing governance frameworks and advisory approaches to help organisations manage AI systems.

From tools to AI agents

Most AI systems in use today still depend on human prompts. They generate text, analyse data, or make predictions, but a person usually decides what happens next. Agentic AI changes that pattern. These systems can break down a goal into steps, choose actions, and interact with other systems to complete tasks.

That added independence brings new challenges. When a system acts on its own, it may take paths that were not fully expected or use data in ways that were not intended.

Deloitte’s work focuses on helping organisations prepare for these risks. Rather than treating AI as a standalone tool, the firm looks at how it fits into business processes, including how decisions are made and how data flows through systems.

Building governance into the lifecycle

Governance should not be added after deployment. It needs to be built into the full lifecycle of an AI system.

This starts at the design stage. Organisations need to define what a system is allowed to do and where its limits are. This may include setting rules around data use and outlining how the system should respond in uncertain situations.

The next stage is deployment. At this point, governance focuses on access and control, including who can use the system and what it can connect to. Once the system is live, monitoring becomes the main concern. Autonomous systems can change over time as they interact with new data. Without regular checks, they may drift away from their original purpose.
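The three lifecycle stages above can be captured in a declarative policy defined before deployment. This is a minimal sketch with made-up field names, not any vendor's governance schema.

```python
# Illustrative only: a minimal declarative policy for one AI system,
# covering the design, deployment, and monitoring stages described above.
# Field names and values are hypothetical.

POLICY = {
    "design": {
        "allowed_actions": ["summarise", "answer_query"],
        "data_sources": ["internal_reports"],
        "on_uncertainty": "escalate_to_human",   # defined fallback behaviour
    },
    "deployment": {
        "allowed_roles": ["analyst"],            # who can use the system
        "allowed_connections": ["reporting_db"], # what it can connect to
    },
    "monitoring": {
        "review_interval_days": 30,              # regular checks for drift
        "drift_metric": "output_distribution",
    },
}

def is_permitted(action: str, role: str) -> bool:
    """Check a requested action against the design and deployment rules."""
    return (action in POLICY["design"]["allowed_actions"]
            and role in POLICY["deployment"]["allowed_roles"])
```

Keeping the rules in one declarative object means the same policy that constrained the design can be checked at runtime and audited later.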

The role of transparency and accountability

As AI systems take on more responsibility, it becomes more difficult to trace how decisions are made. This creates a demand for stronger transparency. Deloitte’s work highlights the importance of keeping track of how systems operate. This includes logging actions and documenting decisions. These records help organisations determine what happened if something goes wrong. If an autonomous system takes an action, there needs to be clarity about who is responsible.

Research from Deloitte shows that adoption of AI agents is moving faster than the controls needed to manage them. Around 23% of companies already use them, and that figure is expected to reach 74% within two years. Only 21% report having strong safeguards in place to oversee how they behave.

Real-time oversight for AI agents

Once an autonomous system is active, the focus shifts to how it behaves in real-world conditions. Static rules are not always enough, and systems need to be observed as they operate.

Deloitte’s approach includes real-time monitoring, allowing organisations to track what an AI system is doing as it performs tasks. If the system behaves in an unexpected way, teams can step in quickly. This may involve pausing certain actions or adjusting permissions. Real-time oversight also helps with compliance. In regulated industries, companies need to show that systems follow rules and standards.
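A minimal version of this oversight loop can be sketched as a wrapper that logs every proposed action and holds anything outside the agent's permitted set for human review instead of executing it. All names here are hypothetical, not Deloitte's tooling.

```python
# Hypothetical sketch of real-time agent oversight: every proposed action is
# logged with a timestamp, and anything outside the permitted set is held
# for human review rather than executed.

from datetime import datetime, timezone

AUDIT_LOG = []  # in practice this would be durable, append-only storage

def oversee(action: str, permitted: set) -> str:
    """Record the proposed action, then execute or hold it per the permission set."""
    decision = "executed" if action in permitted else "held_for_review"
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "decision": decision,
    })
    return decision
```

Because the log entry is written before any outcome, the audit trail survives even when an action is blocked, which supports the compliance reporting the paragraph above describes.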

In practice, these controls are starting to appear in operational settings. Deloitte describes scenarios where AI systems monitor equipment performance across sites. Sensor data can signal early signs of failure, which can trigger maintenance workflows and update internal systems. Governance frameworks define what actions the system can take, when human approval is required, and how decisions are recorded. The process runs across multiple systems, but from a user’s point of view, it appears as a single action.

Governance is part of discussions at AI & Big Data Expo North America 2026, taking place on May 18–19 in Santa Clara, California. Deloitte is listed as a Diamond Sponsor for the event, placing it among the firms contributing to conversations around how autonomous systems are deployed and controlled in practice.

The challenge is not just building smarter systems, but ensuring they behave in ways organisations can understand, manage, and trust over time.

(Photo by Roman)

See also: Autonomous AI systems depend on data governance


Experian uncovers fraud paradox in financial services’ AI adoption https://www.artificialintelligence-news.com/news/experian-ai-fraud-detection-financial-services-2026/ Thu, 02 Apr 2026 10:00:00 +0000

The post Experian uncovers fraud paradox in financial services’ AI adoption appeared first on AI News.

The same technology that financial institutions are deploying is being weaponised against them. That is the core tension running through Experian’s 2026 Future of Fraud Forecast, and it’s a tension the company is in a position to name because it sits on both sides of it.

According to FTC data cited in the forecast, consumers lost more than US$12.5 billion to fraud in 2024. As per Experian’s own data accompanying the report, nearly 60% of companies reported an increase in fraud losses from 2024 to 2025. Experian’s fraud prevention solutions helped clients avoid an estimated US$19 billion in fraud losses globally in 2025, a figure that underscores the scale of the problem and how much defence now depends on AI matching the speed and autonomy of attacks.

The agentic AI issue

The most pressing finding in Experian’s forecast is what the company calls machine-to-machine mayhem, the point at which agentic AI systems, designed to transact autonomously on behalf of users, become indistinguishable from the bots fraudsters deploy for the same purpose.

According to Experian’s forecast, as organisations strive to integrate AI agents capable of independent decision-making, fraudsters are exploiting those same systems to run high-volume digital fraud at a scale and speed no human operation could sustain. The core challenge, as per the report, is that machine-to-machine interactions carry no clear ownership of liability; when an AI agent initiates a transaction that turns out to be fraudulent, the question of who is responsible has no settled answer.

Kathleen Peters, chief innovation officer for Fraud and Identity at Experian North America, framed the problem: “Technology is accelerating the evolution of fraud, making it more sophisticated and harder to detect. By combining differentiated data with advanced analytics and cutting-edge technology, businesses can strengthen fraud defences, safeguard consumers, and deliver secure, seamless experiences.”

Experian predicts that this will reach a tipping point in 2026, forcing substantive industry conversations around liability and the governance of agentic AI in commerce. Some organisations are already making preemptive moves. Amazon, for instance, has stated it blocks third-party AI agents from browsing and transacting on its platform, citing security and privacy concerns.

Four other threats the forecast identifies

Beyond the agentic AI issue, Experian’s forecast identifies four additional trends that financial institutions need to consider in 2026.

Deepfake candidates infiltrating remote workforces: Generative AI tools can now produce tailored CVs and real-time deepfake video capable of passing job interviews. According to the forecast, employers will onboard individuals who are not who they claim to be, granting bad actors access to internal systems. The FBI and Department of Justice issued multiple warnings in 2025 about documented instances of North Korean operatives using this approach to gain employment at US companies.

Website cloning overwhelms fraud teams: AI tools have made it easier to create replicas of legitimate sites, and harder to eliminate them permanently. As per the forecast, even after takedown requests are actioned, spoofed domains continue to resurface, forcing fraud teams into reactive patterns.

Emotionally intelligent scam bots: Generative AI means bots can conduct complex romance fraud and relative-in-need scams without human operators. According to Experian’s forecast, such bots respond convincingly, build trust over extended periods, and are becoming increasingly difficult to distinguish from genuine human interaction.

Smart home vulnerabilities: Devices including virtual assistants, smart locks, and connected appliances create new entry points for fraudsters. Experian forecasts that bad actors will exploit these devices to access personal data and monitor household activity as the connected home becomes a greater part of everyday financial behaviour.

Financial institutions’ responses

According to Experian’s Perceptions of AI Report, drawing on responses from more than 200 decision-makers at leading financial institutions, 84% identify AI as a critical or high priority for their business strategy over the next two years. A further 89% say AI will play an important role in the lending lifecycle.

The governance dimension, however, is where institutions struggle. According to the same report, 73% of respondents are concerned about the regulatory environment around AI, and 65% identify AI-ready data as one of their biggest deployment challenges. Data quality was rated the single most important factor in choosing an AI vendor, which positions Experian’s data-first positioning at the intersection of what financial institutions say they need most.

On the compliance side, Experian’s AI-powered Assistant for Model Risk Management addresses one of the most resource-intensive requirements facing institutions deploying AI. According to a 2025 Experian study of more than 500 global financial institutions, 67% struggle to meet their country’s regulatory requirements, 79% report more frequent supervisory communications from regulators than a year ago, and 60% still use manual compliance processes. In Experian’s announcement, the company states that more than 70% of larger institutions report model documentation compliance involves over 50 people, a figure that signals the scale of the automation opportunity.

Vijay Mehta, EVP of Global Solutions and Analytics at Experian Software Solutions, described the challenge the product addresses: “The AI-enabled speed of data analytics and model development is driving unprecedented business opportunities for financial institutions, but it comes with a challenge: global regulations that require time-consuming documentation. Experian Assistant for Model Risk Management helps solve this labour and resource-intensive requirement with end-to-end model documentation automation.”

The data quality foundation

Running underneath Experian’s fraud and compliance products is the same structural argument made in both IBM’s and Salesforce’s AI narratives this week: AI is only as reliable as the data it runs on. As per Experian’s Perceptions of AI Report, 65% of financial institution decision-makers consider AI-ready data one of their biggest challenges, and data quality is the most critical factor influencing trust in AI vendors.

That is not a coincidence of messaging. It reflects a constraint facing financial services institutions as they move AI from pilots into production credit decisioning, fraud detection, and regulatory reporting: functions where explainability and auditability are not optional.

Experian’s CDAO Paul Heywood is among the confirmed speakers at the AI & Big Data Expo, part of TechEx North America, taking place 18 – 19 May 2026 at the San Jose McEnery Convention Centre, California. Experian is a Platinum Sponsor at TechEx Global.

See also: Hershey applies AI in its supply chain operations


Autonomous AI systems depend on data governance https://www.artificialintelligence-news.com/news/autonomous-ai-systems-depend-on-data-governance/ Thu, 02 Apr 2026 10:00:00 +0000

The post Autonomous AI systems depend on data governance appeared first on AI News.

Much of the current focus on AI safety has centred on models – how they are trained and monitored. But as systems become more autonomous, attention is shifting toward the data those systems depend on. If the data feeding an AI system is fragmented, outdated, or lacks oversight, the system’s behaviour can become more unpredictable.

Data governance is becoming a core part of how autonomous systems are controlled. Denodo is one of the companies working in this area, focusing on how organisations access and manage data across different sources.

Autonomous AI systems carry out tasks with limited supervision, retrieving information, making decisions based on that information, and triggering actions in business workflows. The challenge is that these systems depend on a steady flow of data. In regulated industries, unpredictable results can create compliance risks. In customer-facing systems, it might result in poor decisions or incorrect responses.

How data alters AI behaviour

Data is often spread across multiple systems. Large organisations store information in cloud platforms, internal databases, and third-party services. This creates silos, where different parts of the business operate on different versions of the same data.

Denodo addresses this problem by providing a way to access data without moving it into a single repository. Its platform creates a unified view of data from different sources for applications, including AI systems.

It lets organisations apply consistent policies across all data sources. Access rules, compliance requirements, and use limits can be defined in one place. It also supports approaches that allow AI systems to query enterprise data using defined structures and policies.

The platform logs how data is queried and what is returned, creating an audit trail. This can help organisations understand how an AI system reached a decision and support compliance requirements. It can also help teams monitor data use in real time and identify unusual activity.
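
To make the idea concrete, here is a minimal sketch of a governed access layer that records an audit entry for every query, whether or not it is allowed. This is an illustration of the general pattern, not Denodo's actual API; all class, field, and role names here are hypothetical.

```python
import time
from dataclasses import dataclass, field

@dataclass
class GovernedDataLayer:
    """Hypothetical governed access layer: policies and the audit
    log live in one place, so every AI system querying through it
    leaves a reviewable trail."""
    policies: dict                 # e.g. {"sales_db": {"allowed_roles": {"forecast_agent"}}}
    audit_log: list = field(default_factory=list)

    def query(self, caller: str, source: str, request: str):
        policy = self.policies.get(source, {})
        allowed = caller in policy.get("allowed_roles", set())
        # Every attempt is recorded, allowed or not, so unusual
        # activity can be spotted after the fact.
        self.audit_log.append({
            "ts": time.time(),
            "caller": caller,
            "source": source,
            "request": request,
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"{caller} may not query {source}")
        return {"source": source, "rows": []}  # placeholder result

layer = GovernedDataLayer(policies={"sales_db": {"allowed_roles": {"forecast_agent"}}})
layer.query("forecast_agent", "sales_db", "SELECT region, total FROM sales")
print(len(layer.audit_log))  # → 1
```

Because denied attempts are logged too, the same trail that explains how an AI system reached a decision also surfaces attempts to reach data it should not see.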

If multiple AI systems rely on the same governed data layer, they are more likely to produce aligned results, which can help reduce the risk of conflicting outputs across different parts of the business.

Governance in the stack

As autonomous AI systems become more common, governance is being applied at several levels. Data governance, which sits underneath models and applications, helps ensure that the inputs to those systems are reliable. Even a well-governed model can produce poor results if it relies on flawed data. Strong data governance can support better outcomes even when systems operate with some degree of independence.

This is why data-focused companies are becoming part of the broader AI governance conversation. By controlling how data is accessed and used, they help alter how autonomous systems behave in practice.

At AI & Big Data Expo North America 2026, discussions around AI include oversight and system behaviour. Denodo is among the companies taking part in those discussions, particularly around data management and enterprise AI. Early deployments often focused on what AI systems could do. Current discussions are more concerned with how those systems should be managed once they are in use.

From ability to control

The next stage of AI adoption is likely to depend less on new model features and more on how well organisations manage the systems around them. Governance is not an added feature, but a requirement for systems that are expected to act on their own.

(Photo by Hyundai Motor Group)

See also: SAP and ANYbotics drive industrial adoption of physical AI


The post Autonomous AI systems depend on data governance appeared first on AI News.

]]>
DeepL’s Borderless Business report reveals 83% of enterprises are still behind on language AI https://www.artificialintelligence-news.com/news/deepls-borderless-business-report-reveals-83-of-enterprises-are-still-behind-on-language-ai/ Wed, 01 Apr 2026 12:00:00 +0000 https://www.artificialintelligence-news.com/?p=112828 AI is everywhere in the enterprise. The translation workflow often is not. That is the core finding of DeepL’s 2026 Language AI report, “Borderless Business: Transforming Translation in the Age of AI,” published on March 10. Despite broad AI investment across business functions, the report reveals that language and multilingual operations–workflows that touch sales, legal, customer support, and […]

The post DeepL’s Borderless Business report reveals 83% of enterprises are still behind on language AI appeared first on AI News.

]]>
AI is everywhere in the enterprise. The translation workflow often is not. That is the core finding of DeepL’s 2026 Language AI report, “Borderless Business: Transforming Translation in the Age of AI,” published on March 10. Despite broad AI investment across business functions, the report reveals that language and multilingual operations – workflows that touch sales, legal, customer support, and global expansion – remain the most underautomated part of the enterprise technology stack.

The automation gap hiding in plain sight

According to DeepL’s Borderless Business report, 35% of international businesses still handle translation entirely through manual processes, while a further 33% rely on traditional automation paired with systematic human review. Only 17% have implemented next-generation AI tools – large language models or agentic AI – for multilingual operations.

That means, as per the report’s findings, 83% of enterprises have not transitioned to modern language AI capabilities despite investing in AI across other parts of the business. The report, which draws on survey data from business leaders across the United States, United Kingdom, France, Germany, and Japan, also found that enterprise content volume has grown 50% since 2023, yet 68% of companies still rely on workflows built for a different era.

Jarek Kutylowski, CEO and founder of DeepL, put it plainly: “AI is everywhere, but efficiency is not. Most companies have deployed AI in some form, yet few achieve real productivity at scale because core workflows remain designed around people, not systems.”

Why language AI is becoming infrastructure

The angle that makes this more than a translation story is where language AI is now being deployed. According to DeepL’s research, global expansion is the top driver of language AI investment at 33%, followed by sales and marketing at 26%, customer support at 23%, and legal and finance at 22%. These are mission-critical business functions, not peripheral content tasks.

DeepL’s broader research from December 2025, surveying 5,000 senior business leaders across the same five markets, found that 54% of global executives say real-time voice translation will be essential in 2026, up from 32% today. As per that research, the UK and France are leading early adoption at 48% and 33% respectively, while Japan sits at 11%, a gap that points to significant variance in enterprise readiness across global markets.

The company now serves over 200,000 business customers across 228 markets, and at the AI & Big Data Expo in London in February 2026, Scott Ivell, vice president of product marketing at DeepL, told SiliconANGLE that the company has 2,000 customers globally deploying AI agents, used for report analysis, sales targeting, and legal document review.

The sovereign AI dimension

What separates DeepL’s positioning from general-purpose AI competitors is where it sits on the enterprise trust spectrum. As enterprises in regulated industries – financial services, healthcare, legal, government – accelerate AI adoption, data sovereignty is increasingly the deciding factor in platform selection.

DeepL is ISO 27001, SOC 2 Type 2, and GDPR certified, and offers Bring Your Own Key encryption for enterprise customers, giving organisations the ability to withdraw data access in seconds, a control level that most large language model providers do not offer. As per DeepL’s own security documentation, this means data can effectively be placed beyond anyone’s reach, including DeepL itself, at the customer’s discretion.
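
The revocation mechanic behind Bring Your Own Key is worth unpacking: data is stored only in encrypted form, and because the customer controls the key, withdrawing it makes the stored data unreadable to the provider. The toy sketch below illustrates that concept only; the XOR "cipher" is a stand-in for real encryption, and the key-store structure is invented for the example.

```python
# Conceptual BYOK sketch: ciphertext is useless without the
# customer-held key, so deleting the key from the provider's
# reach revokes access instantly. XOR is NOT real encryption --
# it merely illustrates the key dependency.
def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key_store = {"customer-1": b"secret-key"}
ciphertext = xor_bytes(b"quarterly filings", key_store["customer-1"])

# Customer revokes access: the provider no longer holds a usable key,
# so the ciphertext it still stores cannot be decrypted.
del key_store["customer-1"]
print("customer-1" in key_store)  # → False
```

In a production system the key would live in the customer's own key management service rather than a provider-side dictionary, which is what makes the "withdraw access in seconds" claim possible.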

Sebastian Enderlein, CTO at DeepL, has framed 2026 as a year of execution rather than experimentation: “I believe 2026 will be the year AI stops experimenting and starts executing, at a scale we haven’t yet seen. After a cycle of pilots and proofs of concept, businesses are now ready to scale, and they’re betting big on agentic AI to do it.”

DeepL Agent and the broader pivot

DeepL’s product direction in 2026 reflects the same shift visible across enterprise AI broadly, from single-function tools to autonomous workflow execution. DeepL Agent, launched in general availability in November 2025, is designed to navigate business systems, execute multi-step workflows, and operate across CRM, email, calendars, and project management tools without requiring complex integrations.

According to DeepL’s announcement, the agent operates with enterprise-grade security and data sovereignty built in by default, a deliberate positioning choice that targets the segment of enterprises that cannot send sensitive documents to OpenAI or Microsoft’s public cloud endpoints.

DeepL’s chief scientist, Stefan Miedzianowski, has described the current moment as a transition on the technology adoption curve: “2026 will undoubtedly be the year of the agent. 2025 was the year when public awareness caught up with the science showing what agents can do, but enterprise adoption at scale will happen now. We are moving from the innovators to the early majority.”

As per the Borderless Business report, 71% of business leaders say transforming workflows with AI is a priority for 2026, with expected returns across customer experience, employee productivity, and time to market. The gap between that ambition and the 17% who have actually modernised their language operations is the market DeepL is squarely targeting.

DeepL is a Platinum Sponsor at TechEx Global, appearing at the AI & Big Data Expo and co-located events at Olympia London, February 3 & 4, 2027.

See also: Automating complex finance workflows with multimodal AI



The post DeepL’s Borderless Business report reveals 83% of enterprises are still behind on language AI appeared first on AI News.

]]>
SS&C Blue Prism: On the journey from RPA to agentic automation https://www.artificialintelligence-news.com/news/ssc-blue-prism-on-the-journey-from-rpa-to-agentic-automation/ Tue, 17 Feb 2026 15:27:34 +0000 https://www.artificialintelligence-news.com/?p=112272 For organizations who are still wedded to the rules and structures of robotic process automation (RPA), then considering agentic AI as the next step for automation may be faintly terrifying. SS&C Blue Prism, however, is here to help, taking customers on the journey from RPA to agentic automation at a pace with which they’re comfortable. […]

The post SS&C Blue Prism: On the journey from RPA to agentic automation appeared first on AI News.

]]>
For organizations still wedded to the rules and structures of robotic process automation (RPA), considering agentic AI as the next step for automation may be faintly terrifying. SS&C Blue Prism, however, is here to help, taking customers on the journey from RPA to agentic automation at a pace with which they’re comfortable.

Big as it may be, this move is a necessary one. Modern workflows have reached a level of complexity that outstrips what traditional RPA was designed to do, according to Steven Colquitt, VP Software Engineering, SS&C Blue Prism. Unstructured data arrives from various sources, reflecting non-deterministic real-world interactions. “Inputs can vary, outcomes can shift and decisions depend on context in real-time,” notes Colquitt.

Brian Halpin, Managing Director, Automation, SS&C Blue Prism, gives the example of a credit agreement where you might need to get 30 or 40 answers from it. He uses the word “answers” deliberately as opposed to data points to account for the level of reasoning that a large language model (LLM) performs.

The element of this being a journey continues to resonate, however. “We’re now saying we’re giving an AI agent the outcome that we want, but we’re not giving it the instructions on how to complete,” says Halpin. “We’re not saying, ‘follow step one, two, three, four, five.’ We’re saying, ‘I want this loan reviewed’ or ‘I want this customer onboarded.’

“Ultimately, I think that’s where the market will go,” adds Halpin. “Is it ready for that? No. Why? Because there’s trust, there’s regulations, there’s auditability […] stability, security. We know LLMs are prone to hallucinations, we know they drift, and [if] you change the underlying model, things change and responses get different.

“There’s an awful lot of learning to happen before I think companies go fully autonomous and real agentic workflows [are] driven from that sort of non-deterministic perspective,” says Halpin. “But then, there will be something else, right? There will be another model. So really, it is all a journey right now.”

SS&C Blue Prism has thousands of customers with automated processes in place, from centers of excellence (CoEs) to digital workers running in their operations, and it hopes to bring them into the “world of AI”, as Halpin puts it. Sometimes it’s about connecting two separate areas.

“It’s been interesting,” Halpin notes. “As I talk to [our] customers, I see a common thread among companies right now where, in a lot of cases, AI has been established as a separate unit in a company. You go over to the process automation team, and they’re maybe not even allowed to use the AI.

“So, it’s about, ‘How do you help them get that capability and blend it into their process efficiency and allow them to get to the next 20%, 30% of automation, in terms of the end-to-end process?’”

As part of this, SS&C Blue Prism is soon to launch new technology that helps organizations build and embed AI agents within workflows, as well as assist with orchestration. Those who attended TechEx Global on February 4-5, where SS&C Blue Prism participated in the Intelligent Automation conference, got the full story, along with an understanding of the company’s ongoing path.

“[SS&C Technologies] are one of the biggest users of RPA in the world,” adds Halpin. “We have over three and a half thousand digital workers deployed [across the SS&C estate]. We’re saving hundreds of millions in run-rate benefit. We’ve about 35 AI agents in production attached to those digital workers doing […] complex tasks, and really, we just want to share that journey.”

Watch the full interview with Brian Halpin below:

Photo by Patrick Tomasso on Unsplash

The post SS&C Blue Prism: On the journey from RPA to agentic automation appeared first on AI News.

]]>
AI Expo 2026 Day 2: Moving experimental pilots to AI production https://www.artificialintelligence-news.com/news/ai-expo-2026-day-2-moving-experimental-pilots-ai-production/ Thu, 05 Feb 2026 16:08:36 +0000 https://www.artificialintelligence-news.com/?p=112021 The second day of the co-located AI & Big Data Expo and Digital Transformation Week in London showed a market in a clear transition. Early excitement over generative models is fading. Enterprise leaders now face the friction of fitting these tools into current stacks. Day two sessions focused less on large language models and more […]

The post AI Expo 2026 Day 2: Moving experimental pilots to AI production appeared first on AI News.

]]>

The second day of the co-located AI & Big Data Expo and Digital Transformation Week in London showed a market in a clear transition.

Early excitement over generative models is fading. Enterprise leaders now face the friction of fitting these tools into current stacks. Day two sessions focused less on large language models and more on the infrastructure needed to run them: data lineage, observability, and compliance.

Data maturity determines deployment success

AI reliability depends on data quality. DP Indetkar from Northern Trust warned against allowing AI to become a “B-movie robot.” This scenario occurs when algorithms fail because of poor inputs. Indetkar noted that analytics maturity must come before AI adoption. Automated decision-making amplifies errors rather than reducing them if the data strategy is unverified.

Eric Bobek of Just Eat supported this view. He explained how data and machine learning guide decisions at the global enterprise level. Investments in AI layers are wasted if the data foundation remains fragmented.

Mohsen Ghasempour from Kingfisher also noted the need to turn raw data into real-time actionable intelligence. Retail and logistics firms must cut the latency between data collection and insight generation to see a return.

Scaling in regulated environments

The finance, healthcare, and legal sectors have near-zero tolerance for error. Pascal Hetzscholdt from Wiley addressed these sectors directly.

Hetzscholdt stated that responsible AI in science, finance, and law relies on accuracy, attribution, and integrity. Enterprise systems in these fields need audit trails. Reputational damage or regulatory fines make “black box” implementations impossible.

Konstantina Kapetanidi of Visa outlined the difficulties in building multilingual, tool-using, scalable generative AI applications. Models are becoming active agents that execute tasks rather than just generating text. Allowing a model to use tools – like querying a database – creates security vectors that need serious testing.
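
One common mitigation for those security vectors is an allow-listed tool registry: the agent can only invoke tools that were explicitly registered, and arguments are validated before anything executes. The sketch below shows the general pattern under invented names; it is not a description of Visa's architecture.

```python
# Hypothetical tool registry for an agentic system. The model never
# calls functions directly: it requests a tool by name, and the
# registry rejects unknown tools and unexpected arguments.
class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name, func, allowed_args):
        self._tools[name] = (func, set(allowed_args))

    def call(self, name, **kwargs):
        if name not in self._tools:
            raise KeyError(f"tool '{name}' is not on the allow-list")
        func, allowed = self._tools[name]
        unexpected = set(kwargs) - allowed
        if unexpected:
            raise ValueError(f"unexpected arguments: {unexpected}")
        return func(**kwargs)

registry = ToolRegistry()
registry.register(
    "lookup_order",
    lambda order_id: {"order_id": order_id, "status": "shipped"},
    ["order_id"],
)
print(registry.call("lookup_order", order_id="A123")["status"])  # → shipped
```

Real deployments would add per-tool permissions, rate limits, and argument schemas, but the principle is the same: the attack surface is whatever the registry exposes, nothing more.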

Parinita Kothari from Lloyds Banking Group detailed the requirements for deploying, scaling, monitoring, and maintaining AI systems. Kothari challenged the “deploy-and-forget” mentality. AI models need continuous oversight, similar to traditional software infrastructure.

The change in developer workflows

Of course, AI is fundamentally changing how code is written. A panel with speakers from Valae, Charles River Labs, and Knight Frank examined how AI copilots reshape software creation. While these tools speed up code generation, they also force developers to focus more on review and architecture.

This change requires new skills. A panel with representatives from Microsoft, Lloyds, and Mastercard discussed the tools and mindsets needed for future AI developers. A gap exists between current workforce capabilities and the needs of an AI-augmented environment. Executives must plan training programmes that ensure developers sufficiently validate AI-generated code.

Dr Gurpinder Dhillon from Senzing and Alexis Ego from Retool presented low-code and no-code strategies. Ego described using AI with low-code platforms to make production-ready internal apps. This method aims to cut the backlog of internal tooling requests.

Dhillon argued that these strategies speed up development without dropping quality. For the C-suite, this suggests cheaper internal software delivery if governance protocols stay in place.

Workforce capability and specific utility

The broader workforce is starting to work with “digital colleagues.” Austin Braham from EverWorker explained how agents reshape workforce models. This terminology implies a move from passive software to active participants. Business leaders must re-evaluate human-machine interaction protocols.

Paul Airey from Anthony Nolan gave an example of AI delivering literally life-changing value. He detailed how automation improves donor matching and transplant timelines for stem cell transplants. The utility of these technologies extends to life-saving logistics.

A recurring theme throughout the event is that effective applications often solve very specific and high-friction problems rather than attempting to be general-purpose solutions.

Managing the transition

The day two sessions from the co-located events show that enterprise focus has now moved to integration. The initial novelty is gone and has been replaced by demands for uptime, security, and compliance. Innovation heads should assess which projects have the data infrastructure to survive contact with the real world.

Organisations must prioritise the basic aspects of AI: cleaning data warehouses, establishing legal guardrails, and training staff to supervise automated agents. The difference between a successful deployment and a stalled pilot lies in these details.

Executives, for their part, should direct resources toward data engineering and governance frameworks. Without them, advanced models will fail to deliver value.

See also: AI Expo 2026 Day 1: Governance and data readiness enable the agentic enterprise


The post AI Expo 2026 Day 2: Moving experimental pilots to AI production appeared first on AI News.

]]>
AI Expo 2026 Day 1: Governance and data readiness enable the agentic enterprise https://www.artificialintelligence-news.com/news/ai-expo-2026-day-1-governance-data-readiness-enable-agentic-enterprise/ Wed, 04 Feb 2026 16:33:34 +0000 https://www.artificialintelligence-news.com/?p=112005 While the prospect of AI acting as a digital co-worker dominated the day one agenda at the co-located AI & Big Data Expo and Intelligent Automation Conference, the technical sessions focused on the infrastructure to make it work. A primary topic on the exhibition floor was the progression from passive automation to “agentic” systems. These […]

The post AI Expo 2026 Day 1: Governance and data readiness enable the agentic enterprise appeared first on AI News.

]]>
While the prospect of AI acting as a digital co-worker dominated the day one agenda at the co-located AI & Big Data Expo and Intelligent Automation Conference, the technical sessions focused on the infrastructure to make it work.

A primary topic on the exhibition floor was the progression from passive automation to “agentic” systems. These tools reason, plan, and execute tasks rather than following rigid scripts. Amal Makwana from Citi detailed how these systems act across enterprise workflows. This capability separates them from earlier robotic process automation (RPA).

Scott Ivell and Ire Adewolu of DeepL described this development as closing the “automation gap”. They argued that agentic AI functions as a digital co-worker rather than a simple tool. Real value is unlocked by reducing the distance between intent and execution. Brian Halpin from SS&C Blue Prism noted that organisations typically must master standard automation before they can deploy agentic AI.

This change requires governance frameworks capable of handling non-deterministic outcomes. Steve Holyer of Informatica, alongside speakers from MuleSoft and Salesforce, argued that architecting these systems requires strict oversight. A governance layer must control how agents access and utilise data to prevent operational failure.

Data quality blocks deployment

The output of an autonomous system relies on the quality of its input. Andreas Krause from SAP stated that AI fails without trusted, connected enterprise data. For GenAI to function in a corporate context, it must access data that is both accurate and contextually relevant.

Meni Meller of Gigaspaces addressed the technical challenge of “hallucinations” in LLMs. He advocated for the use of eRAG (retrieval-augmented generation) combined with semantic layers to fix data access issues. This approach allows models to retrieve factual enterprise data in real-time.
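
The core of any retrieval-augmented approach is that the model answers from retrieved enterprise records rather than from its own parameters. The minimal sketch below uses naive keyword overlap in place of a real vector search or semantic layer, purely to show the grounding step; the documents and ranking are illustrative.

```python
# Minimal RAG sketch: retrieve the most relevant records, then
# constrain the model to answer only from that retrieved context.
def retrieve(query: str, documents: list, top_k: int = 2) -> list:
    """Rank documents by naive keyword overlap with the query
    (a stand-in for vector similarity search)."""
    words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, documents: list) -> str:
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using ONLY the context below. If the answer is not "
        f"in the context, say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Q3 revenue for the EMEA region was 4.2m EUR.",
    "The office cafeteria reopens on Monday.",
]
prompt = build_prompt("What was Q3 revenue in EMEA?", docs)
print("4.2m EUR" in prompt)  # → True
```

Because the factual claim arrives inside the prompt rather than being generated from model weights, the answer can be traced back to a specific enterprise record, which is what makes the approach auditable.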

Storage and analysis also present challenges. A panel featuring representatives from Equifax, British Gas, and Centrica discussed the necessity of cloud-native, real-time analytics. For these organisations, competitive advantage comes from the ability to execute analytics strategies that are scalable and immediate.

Physical safety and observability

The integration of AI extends into physical environments, introducing safety risks that differ from software failures. A panel including Edith-Clare Hall from ARIA and Matthew Howard from IEEE RAS examined how embodied AI is deployed in factories, offices, and public spaces. Safety protocols must be established before robots interact with humans.

Perla Maiolino from the Oxford Robotics Institute provided a technical perspective on this challenge. Her research into Time-of-Flight (ToF) sensors and electronic skin aims to give robots both self-awareness and environmental awareness. For industries such as manufacturing and logistics, these integrated perception systems prevent accidents.

In software development, observability remains a parallel concern. Yulia Samoylova from Datadog highlighted how AI changes the way teams build and troubleshoot software. As systems become more autonomous, the ability to observe their internal state and reasoning processes becomes necessary for reliability.

Infrastructure and adoption barriers

Implementation demands reliable infrastructure and a receptive culture. Julian Skeels from Expereo argued that networks must be designed specifically for AI workloads. This involves building sovereign, secure, and “always-on” network fabrics capable of handling high throughput.

Of course, the human element remains unpredictable. Paul Fermor from IBM Automation warned that traditional automation thinking often underestimates the complexity of AI adoption. He termed this the “illusion of AI readiness”. Jena Miller reinforced this point, noting that strategies must be human-centred to ensure adoption. If the workforce does not trust the tools, the technology yields no return.

Ravi Jay from Sanofi suggested that leaders need to ask operational and ethical questions early on in the process. Success depends on deciding where to build proprietary solutions versus where to buy established platforms.

The sessions from day one of the co-located events indicate that, while technology is moving toward autonomous agents, deployment requires a solid data foundation.

CIOs should focus on establishing data governance frameworks that support retrieval-augmented generation. Network infrastructure must be evaluated to ensure it supports the latency requirements of agentic workloads. Finally, cultural adoption strategies must run parallel to technical implementation.



The post AI Expo 2026 Day 1: Governance and data readiness enable the agentic enterprise appeared first on AI News.

]]>
From blogosphere to the AI & Big Data Expo: Rackspace and operational AI https://www.artificialintelligence-news.com/news/combing-the-rackspace-blogfiles-for-operational-ai-pointers/ Wed, 04 Feb 2026 10:01:00 +0000 https://www.artificialintelligence-news.com/?p=111961 In a recent blog output, Rackspace refers to the bottlenecks familiar to many readers: messy data, unclear ownership, governance gaps, and the cost of running models once they become part of production. The company frames them through the lens of service delivery, security operations, and cloud modernisation, which tells you where it is putting its […]

The post From blogosphere to the AI & Big Data Expo: Rackspace and operational AI appeared first on AI News.

]]>
In a recent blog output, Rackspace refers to the bottlenecks familiar to many readers: messy data, unclear ownership, governance gaps, and the cost of running models once they become part of production. The company frames them through the lens of service delivery, security operations, and cloud modernisation, which tells you where it is putting its own effort.

One of the clearest examples of operational AI inside Rackspace sits in its security business. In late January, the company described RAIDER (Rackspace Advanced Intelligence, Detection and Event Research) as a custom back-end platform built for its internal cyber defence centre. With security teams working through large volumes of alerts and logs, standard detection engineering doesn’t scale when it depends on manually written security rules. Rackspace says RAIDER unifies threat intelligence with detection engineering workflows and uses its AI Security Engine (RAISE) and LLMs to automate detection rule creation, generating detection criteria it describes as “platform-ready” in line with known frameworks such as MITRE ATT&CK. The company claims it has cut detection development time by more than half and reduced mean time to detect and respond. This is just the kind of internal process change that matters.
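
Rackspace has not published RAIDER's internals, but a pipeline like this typically gates LLM output behind a validation step: a generated rule is only accepted as "platform-ready" if it carries the fields a SIEM needs, including a MITRE ATT&CK technique ID. The sketch below shows that validation step only; the field names and rule shape are assumptions for illustration.

```python
import re

# Hypothetical validation gate for LLM-generated detection rules:
# accept a candidate only if it has the required fields and a
# well-formed MITRE ATT&CK technique ID (e.g. T1059 or T1059.001).
REQUIRED_FIELDS = {"title", "detection", "technique"}
TECHNIQUE_ID = re.compile(r"^T\d{4}(\.\d{3})?$")

def is_platform_ready(rule: dict) -> bool:
    if not REQUIRED_FIELDS <= rule.keys():
        return False
    return bool(TECHNIQUE_ID.match(rule["technique"]))

candidate = {
    "title": "Suspicious PowerShell download cradle",
    "detection": "process == 'powershell.exe' and cmdline contains 'DownloadString'",
    "technique": "T1059.001",
}
print(is_platform_ready(candidate))  # → True
```

A gate like this is what lets generated rules flow into production tooling without a human rewriting each one, while malformed output from the model is rejected automatically.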

The company also positions agentic AI as a way of taking the friction out of complex engineering programmes. A January post on modernising VMware environments on AWS describes a model in which AI agents handle data-intensive analysis and many repetitive tasks, while “architectural judgement, governance and business decisions” remain in the human domain. Rackspace presents this workflow as preventing senior engineers from being sidelined into migration projects. The article states the target is to keep day-two operations in scope – where many migration plans fail as teams discover they have modernised infrastructure but not operating practices.

Elsewhere the company sets out a picture of AI-supported operations where monitoring becomes more predictive, routine incidents are handled by bots and automation scripts, and telemetry (plus historical data) is used to spot patterns and, in turn, recommend fixes. This is conventional AIOps language, but Rackspace is tying it to managed services delivery, suggesting the company uses AI to reduce the cost of labour in operational pipelines in addition to the more familiar use of AI in customer-facing environments.

In a post describing AI-enabled operations, the company stresses the importance of a focused strategy, governance and operating models. It specifies the machinery needed to industrialise AI, such as choosing infrastructure based on whether workloads involve training, fine-tuning or inference. Many tasks are relatively lightweight and can run inference locally on existing hardware.
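
That infrastructure choice can be expressed as a simple routing rule: heavy training jobs go to dedicated clusters, while small-model inference stays on existing local hardware. The thresholds and destination names below are invented for illustration and do not reflect Rackspace's actual sizing guidance.

```python
# Toy workload-placement rule: route by workload type and model size
# (in billions of parameters). Cutoffs here are illustrative only.
def place_workload(kind: str, model_params_b: float) -> str:
    if kind == "training":
        return "gpu-cluster"
    if kind == "fine-tuning":
        return "gpu-cluster" if model_params_b > 13 else "on-prem-gpu"
    if kind == "inference":
        return "local-cpu" if model_params_b <= 8 else "private-cloud-gpu"
    raise ValueError(f"unknown workload kind: {kind}")

print(place_workload("inference", 7))  # → local-cpu
print(place_workload("training", 70))  # → gpu-cluster
```

The point of encoding the rule, however crude, is that placement decisions become repeatable and auditable rather than ad hoc per project.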

The company has noted four recurring barriers to AI adoption, most notably fragmented and inconsistent data, and it recommends investment in integration and data management so models have consistent foundations. This is not an opinion unique to Rackspace, of course, but hearing it from a technology-first player of this size illustrates the issues faced by many enterprise-scale AI deployments.

A company of even greater size, Microsoft, is working to coordinate autonomous agents’ work across systems. Copilot has evolved into an orchestration layer, and in Microsoft’s ecosystem, multi-step task execution and broader model choice do exist. However, Rackspace makes the point that productivity gains only arrive when identity, data access, and oversight are firmly embedded in operations.

Rackspace’s near-term AI plan comprises AI-assisted security engineering, agent-supported modernisation, and AI-augmented service management. Its future plans can perhaps be discerned in a January article published on the company’s blog concerning private cloud AI trends. In it, the author argues inference economics and governance will drive architecture decisions well into 2026. It anticipates ‘bursty’ exploration in public clouds, while inference tasks move into private clouds on the grounds of cost stability and compliance. That’s a roadmap for operational AI grounded in budget and audit requirements, not novelty.

For decision-makers trying to accelerate their own deployments, the useful takeaway is that Rackspace treats AI as an operational discipline. The concrete, published examples it gives are those that reduce cycle time in repeatable work. Readers may accept the company’s direction and still be wary of its claimed metrics. The steps to take inside a growing business are to identify repeating processes, examine where data governance makes strict oversight necessary, and assess where inference costs might be reduced by bringing some processing in-house.

(Image source: Pixabay)

 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post From blogosphere to the AI & Big Data Expo: Rackspace and operational AI appeared first on AI News.

]]>
Ronnie Sheth, CEO, SENEN Group: Why now is the time for enterprise AI to ‘get practical’ https://www.artificialintelligence-news.com/news/ronnie-sheth-ceo-senen-group-why-now-is-the-time-for-enterprise-ai-to-get-practical/ Tue, 03 Feb 2026 11:47:14 +0000 https://www.artificialintelligence-news.com/?p=111981 Before you set sail on your AI journey, always check the state of your data – because if there is one thing likely to sink your ship, it is data quality. Gartner estimates that poor data quality costs organisations an average of $12.9 million each year in wasted resources and lost opportunities. That’s the bad […]

The post Ronnie Sheth, CEO, SENEN Group: Why now is the time for enterprise AI to ‘get practical’ appeared first on AI News.

]]>
Before you set sail on your AI journey, always check the state of your data – because if there is one thing likely to sink your ship, it is data quality.

Gartner estimates that poor data quality costs organisations an average of $12.9 million each year in wasted resources and lost opportunities. That’s the bad news. The good news is that organisations are increasingly understanding the importance of their data quality – and less likely to fall into this trap.

That’s the view of Ronnie Sheth, CEO of AI strategy, execution and governance firm SENEN Group. The company focuses on data and AI advisory, operationalisation and literacy, and Sheth notes she has been in the data and AI space ‘ever since [she] was a corporate baby’, so there is plenty of real-world experience behind the viewpoint. There is also plenty of success; Sheth notes that her company has a 99.99% client repeat rate.

“If I were to be very practical, the one thing I’ve noticed is companies jump into adopting AI before they’re ready,” says Sheth. Companies, she notes, will have an executive direction insisting they adopt AI, but without a blueprint or roadmap to accompany it. The result may be impressive user numbers, but with no measurable outcome to back anything up.

Even as recently as 2024, Sheth saw many organisations struggling because their data was ‘nowhere where it needed to be.’ “Not even close,” she adds. Now, the conversation has turned more practical and strategic. Companies are realising this, and coming to SENEN Group initially to get help with their data, rather than wanting to adopt AI immediately.

“When companies like that come to us, the first course of order is really fixing their data,” says Sheth. “The next course of order is getting to their AI model. They are building a strong foundation for any AI initiative that comes after that.

“Once they fix their data, they can build as many AI models as they want, and they can have as many AI solutions as they want, and they will get accurate outputs because now they have a strong foundation,” Sheth adds.

With breadth and depth in expertise, SENEN Group allows organisations to right their course. Sheth notes the example of one customer who came to them wanting a data governance initiative. Ultimately, it was the data strategy which was needed – the why and how, the outcomes of what they were trying to do with their data – before adding in governance and providing a roadmap for an operating model. “They’ve moved from raw data to descriptive analytics, moving into predictive analytics, and now we’re actually setting up an AI strategy for them,” says Sheth.

It is this attitude and requirement for practical initiatives which will be the cornerstone of Sheth’s discussion at AI & Big Data Expo Global in London this week. “Now would be the time to get practical with AI, especially enterprise AI adoption, and not think about ‘look, we’re going to innovate, we’re going to do pilots, we’re going to experiment,’” says Sheth. “Now is not the time to do that. Now is the time to get practical, to get AI to value. This is the year to do that in the enterprise.”

Watch the full video conversation with Ronnie Sheth below:

The post Ronnie Sheth, CEO, SENEN Group: Why now is the time for enterprise AI to ‘get practical’ appeared first on AI News.

]]>
Apptio: Why scaling intelligent automation requires financial rigour https://www.artificialintelligence-news.com/news/apptio-why-scaling-intelligent-automation-requires-financial-rigour/ Tue, 03 Feb 2026 10:52:22 +0000 https://www.artificialintelligence-news.com/?p=111972 Greg Holmes, Field CTO for EMEA at Apptio, an IBM company, argues that successfully scaling intelligent automation requires financial rigour. The “build it and they will come” model of technology adoption often leaves a hole in the budget when applied to automation. Executives frequently find that successful pilot programmes do not translate into sustainable enterprise-wide […]

The post Apptio: Why scaling intelligent automation requires financial rigour appeared first on AI News.

]]>
Greg Holmes, Field CTO for EMEA at Apptio, an IBM company, argues that successfully scaling intelligent automation requires financial rigour.

The “build it and they will come” model of technology adoption often leaves a hole in the budget when applied to automation. Executives frequently find that successful pilot programmes do not translate into sustainable enterprise-wide deployments because initial financial modelling ignored the realities of production scaling.


“When we integrate FinOps capabilities with automation, we’re looking at a change from being very reactive on cost management to being very proactive around value engineering,” says Holmes.

This shifts the assessment criteria for technical leaders. Rather than waiting “months or years to assess whether things are getting value,” engineering teams can track resource consumption – such as cost per transaction or API call – “straight from the beginning.”

The unit economics of scaling intelligent automation

Innovation projects face a high mortality rate. Holmes notes that around 80 percent of new innovation projects fail, often because financial opacity during the pilot phase masks future liabilities.

“If a pilot demonstrates that automating a process saves, say, 100 hours a month, leadership thinks that’s really successful,” says Holmes. “But what it fails to track is that the pilot sometimes is running on over-provisioned infrastructure, so it looks like it performs really well. But you wouldn’t over-provision to that degree during a real production rollout.”

Moving that workload to production changes the calculus. The requirements for compute, storage, and data transfer increase. “API calls can multiply, exceptions and edge cases appear at volume that might have been out of scope for the pilot phase, and then support overheads just grow as well,” he adds.

To prevent this, organisations must track the marginal cost at scale. This involves monitoring unit economics, such as the cost per customer served or cost per transaction. If the cost per customer increases as the customer base grows, the business model is flawed.
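The unit-economics test described above can be made concrete with a short sketch. The figures here are invented purely for illustration:

```python
# Illustrative unit-economics check with invented figures. A healthy
# rollout sees cost per customer fall as the customer base grows.

def unit_cost(total_cost: float, customers: int) -> float:
    """Cost per customer served over a given period."""
    return total_cost / customers

pilot = unit_cost(total_cost=50_000, customers=1_000)       # 50.0
at_scale = unit_cost(total_cost=300_000, customers=10_000)  # 30.0

# Unit cost falling at scale suggests the business model holds up;
# a rising unit cost would signal a flawed model.
print(f"pilot: {pilot:.2f}, at scale: {at_scale:.2f}, "
      f"healthy: {at_scale < pilot}")
```

Tracking this one ratio over time is what distinguishes the marginal-cost view from simply totalling hours saved.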

Conversely, effective scaling should see these unit costs decrease. Holmes cites a case study from Liberty Mutual where the insurer was able to find around $2.5 million of savings by bringing in consumption metrics and “not just looking at labour hours that they were saving.”

However, financial accountability cannot sit solely with the finance department. Holmes advocates for putting governance “back in the hands of the developers into their development tools and workloads.”

Integration with infrastructure-as-code tools like HashiCorp Terraform and GitHub allows organisations to enforce policies during deployment. Teams can spin up resources programmatically with immediate cost estimates.

“Rather than deploying things and then fixing them up, which gets into the whole whack-a-mole kind of problem,” Holmes explains, companies can verify they are “deploying the right things at the right time.”
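A deploy-time guardrail of the kind described here might look like the sketch below. The budget figure and function name are hypothetical, and a real pipeline would read the estimate from infrastructure-as-code tooling such as a Terraform plan rather than a hard-coded value:

```python
# Hypothetical pre-deployment cost guardrail. In practice the estimate
# would come from infrastructure-as-code tooling, not a literal argument.

MONTHLY_BUDGET_LIMIT = 5_000.0  # assumed team policy, in currency units

def check_deployment(estimated_monthly_cost: float) -> bool:
    """Allow a deployment only if its cost estimate fits the policy limit."""
    return estimated_monthly_cost <= MONTHLY_BUDGET_LIMIT

print(check_deployment(1_200.0))  # within budget: allowed
print(check_deployment(9_500.0))  # over budget: blocked before deploy
```

The design choice is that the check runs before resources exist, which is what avoids the deploy-then-fix "whack-a-mole" pattern Holmes mentions.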

When scaling intelligent automation, tension often simmers between the CFO, who focuses on return on investment, and the Head of Automation, who tracks operational metrics like hours saved.

“This translation challenge is precisely what TBM (Technology Business Management) and Apptio are designed to solve,” says Holmes. “It’s having a common language between technology and finance and with the business.”

The TBM taxonomy provides a standardised framework to reconcile these views. It maps technical resources (such as compute, storage, and labour) into IT towers and further up to business capabilities. This structure translates technical inputs into business outputs.

“I don’t necessarily know what goes into all the IT layers underneath it,” Holmes says, describing the business user’s perspective. “But because we’ve got this taxonomy, I can get a detailed bill that tells me about my service consumption and precisely which costs are driving it to be more expensive as I consume more.”

Addressing legacy debt and budgeting for the long-term

Organisations burdened by legacy ERP systems face a binary choice: automation as a patch, or as a bridge to modernisation. Holmes warns that if a company is “just trying to mask inefficient processes and not redesign them,” they are merely “building up more technical debt.”

A total cost of ownership (TCO) approach helps determine the correct strategy. The Commonwealth Bank of Australia utilised a TCO model across 2,000 different applications – of various maturity stages – to assess their full lifecycle costs. This analysis included hidden costs such as infrastructure, labour, and the engineering time required to keep automation running.

“Just because something’s legacy doesn’t mean you have to retire it,” says Holmes. “Some of those legacy systems are worth maintaining just because the value is so good.”

In other cases, calculating the cost of the automation wrappers required to keep an old system functional reveals a different reality. “Sometimes when you add up the TCO approach, and you’re including all these automation layers around it, you suddenly realise, the real cost of keeping that old system alive is not just the old system, it’s those extra layers,” Holmes argues.
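The point about wrapper layers reduces to simple arithmetic. The costs below are invented for illustration only:

```python
# Hypothetical TCO comparison: the 'real' cost of a legacy system is its
# base run cost plus every automation wrapper keeping it alive.

def legacy_tco(base_run_cost: float, wrapper_costs: list[float]) -> float:
    """Full annual cost of the legacy system and the layers around it."""
    return base_run_cost + sum(wrapper_costs)

legacy = legacy_tco(base_run_cost=200_000,
                    wrapper_costs=[40_000, 25_000, 35_000])  # 300,000 total
modernised_annualised = 260_000  # assumed annual cost of a replacement

print("modernise" if legacy > modernised_annualised else "keep legacy")
```

On the base run cost alone the legacy system looks cheaper; only once the wrappers are added does modernisation win, which is exactly the reversal Holmes describes.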

Avoiding sticker shock requires a budgeting strategy that balances variable costs with long-term commitments. While variable costs (OPEX) offer flexibility, they can fluctuate wildly based on demand and engineering efficiency.

Holmes advises that longer-term visibility enables better investment decisions. Committing to specific technologies or platforms over a multi-year horizon allows organisations to negotiate economies of scale and standardise architecture.

“Because you’ve made those longer term commitments and you’ve standardised on different platforms and things like that, it makes it easier to build the right thing out for the long term,” Holmes says.

Combining tight management of variable costs with strategic commitments supports enterprises in scaling intelligent automation without the volatility that often derails transformation.

IBM is a key sponsor of this year’s Intelligent Automation Conference Global in London on 4-5 February 2026. Greg Holmes and other experts will be sharing their insights during the event. Be sure to check out the day one panel session, Scaling Intelligent Automation Successfully: Frameworks, Risks, and Real-World Lessons, to hear more from Holmes and swing by IBM’s booth at stand #362.

See also: Klarna backs Google UCP to power AI agent payments


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.


The post Apptio: Why scaling intelligent automation requires financial rigour appeared first on AI News.

]]>