Computer Vision - AI News
https://www.artificialintelligence-news.com/categories/how-it-works/computer-vision/

SAP and ANYbotics drive industrial adoption of physical AI
https://www.artificialintelligence-news.com/news/sap-and-anybotics-drive-industrial-adoption-physical-ai/
Tue, 31 Mar 2026 15:20:53 +0000

The post SAP and ANYbotics drive industrial adoption of physical AI appeared first on AI News.

Heavy industry relies on people to inspect hazardous, dirty facilities. It’s expensive, and putting humans in these zones carries obvious safety risks. Swiss robot maker ANYbotics and software company SAP are trying to change that.

ANYbotics’ four-legged autonomous robots will be connected straight into SAP’s backend enterprise resource planning software. Instead of treating a robot as a standalone asset, this turns it into a mobile data-gathering node within an industrial IoT network.

This initiative shows that hardware innovation can now effectively connect with established business workflows. Underscoring that broader trend, SAP is sponsoring this year’s AI & Big Data Expo North America at the San Jose McEnery Convention Center, CA, an event that is fittingly co-located with the IoT Tech Expo and Intelligent Automation & Physical AI Summit.

When equipment breaks at a chemical plant or offshore rig, it costs a fortune. People do routine inspections to catch these issues early, but humans get tired and plants are massive. Robots, on the other hand, can walk the floor constantly, carrying thermal, acoustic, and visual sensors. Hook those sensors into SAP, and a hot pump instantly generates a maintenance request without waiting for a human to report it.

Cutting out the reporting lag

Usually, finding a problem and logging a work order are two disconnected steps. A worker might hear a weird noise in a compressor, write it down, and type it into a computer hours later. By the time the replacement part gets approved, the machine might be wrecked.

Connecting ANYbotics to SAP eliminates that delay. The robot’s onboard AI processes what it sees and hears instantly. If it hears an irregular motor frequency, it doesn’t just flash a warning on a separate screen, it uses APIs to tell the SAP asset management module directly. The system immediately checks for spare parts, figures out the cost of potential downtime, and schedules an engineer.
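The article doesn't detail the actual interface, so here is a minimal, hypothetical sketch of the pattern it describes: the robot's onboard detector raising a structured maintenance notification over HTTP. The endpoint, field names, and fault codes below are all invented for illustration and are not SAP's real schema.

```python
import json
from urllib import request

# Hypothetical endpoint -- a real deployment would target the customer's
# own SAP asset-management API, with proper authentication.
ERP_ENDPOINT = "https://erp.example.com/api/maintenance-notifications"

def build_fault_report(asset_id, fault_code, severity, location):
    """Build the JSON body a robot might POST when its onboard AI flags a fault."""
    payload = {
        "assetId": asset_id,      # equipment record in the ERP asset registry
        "faultCode": fault_code,  # e.g. "MOTOR_FREQ_IRREGULAR"
        "severity": severity,     # drives spare-part checks and scheduling priority
        "location": location,     # lets the system dispatch the nearest engineer
        "source": "inspection-robot",
    }
    return json.dumps(payload).encode("utf-8")

body = build_fault_report("PUMP-0042", "MOTOR_FREQ_IRREGULAR", "high", "hall-3/bay-7")
req = request.Request(ERP_ENDPOINT, data=body,
                      headers={"Content-Type": "application/json"}, method="POST")
# request.urlopen(req)  # not executed here: requires a live endpoint
```

In practice the notification would also trigger the downstream steps the article mentions – spare-part availability checks and engineer scheduling – inside the ERP itself.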

This automates the flow of information from the floor to management. It also means machinery gets judged on hard, consistent numbers instead of a human inspector’s subjective opinion.

Putting robots in heavy industry isn’t like installing software in an office—companies have to deal with unreliable infrastructure. Factories usually have awful internet connectivity due to thick concrete, metal scaffolding, and electromagnetic interference.

To make this work, the setup relies on edge computing. It takes too much bandwidth to constantly stream high-def thermal video and lidar data to the cloud. So, the robots crunch most of that data locally. Onboard processors figure out the difference between a machine running normally and one that’s dangerously overheating. They only send the crucial details (i.e. the specific fault and its location) back to SAP.
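A minimal sketch of that edge-side triage, assuming invented thresholds and field names (real limits would come from the equipment's operating specification):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative thresholds -- real limits come from the equipment's
# operating specification, not from this sketch.
TEMP_WARN_C = 80.0
TEMP_FAULT_C = 95.0

@dataclass
class Reading:
    asset_id: str
    temperature_c: float
    location: str

def summarise(reading: Reading) -> Optional[dict]:
    """Return a compact fault summary to uplink, or None to stay silent.

    The bandwidth-heavy raw data (thermal video, lidar) never leaves the
    robot; only this small dict would travel over the network to SAP.
    """
    if reading.temperature_c < TEMP_WARN_C:
        return None  # normal operation: nothing is transmitted
    level = "fault" if reading.temperature_c >= TEMP_FAULT_C else "warning"
    return {"asset": reading.asset_id, "level": level,
            "temp_c": reading.temperature_c, "location": reading.location}

assert summarise(Reading("PUMP-0042", 72.0, "bay-7")) is None
assert summarise(Reading("PUMP-0042", 97.5, "bay-7"))["level"] == "fault"
```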

To handle the network issues, many early adopters build private 5G networks. This gives them the coverage they need across huge facilities where regular Wi-Fi fails. It also locks down access, keeping the robot’s data safe from interception.

Of course, security is a major issue. A walking robot packed with cameras is effectively a roaming vulnerability. Companies must use zero-trust network protocols to constantly verify the robot’s identity and limit what SAP modules it can touch. If the robot gets hacked, the system has to cut its connection instantly to stop the attackers from moving laterally into the corporate network.
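A toy sketch of those zero-trust checks – per-request identity verification, a module allow-list, and an immediate kill switch. Real deployments would use mTLS or OAuth rather than this shared-secret HMAC, and every name here is invented:

```python
import hmac, hashlib, time

SECRET = b"rotate-me-regularly"           # illustrative shared secret
ALLOWED_MODULES = {"asset-management"}    # the only ERP module the robot may touch
revoked: set = set()                      # robot identities cut off after compromise

def sign(robot_id, module, ts):
    """Sign a request the way the robot's agent might."""
    msg = f"{robot_id}|{module}|{ts}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def authorise(robot_id, module, ts, sig):
    """Verify identity and scope on every single request (zero trust)."""
    if robot_id in revoked:
        return False                      # kill switch: hacked robot is cut off
    if module not in ALLOWED_MODULES:
        return False                      # block lateral movement into other modules
    if abs(time.time() - ts) > 30:
        return False                      # stale timestamp: possible replay attack
    return hmac.compare_digest(sig, sign(robot_id, module, ts))

now = int(time.time())
good = sign("anymal-07", "asset-management", now)
assert authorise("anymal-07", "asset-management", now, good)
assert not authorise("anymal-07", "finance", now, good)   # out-of-scope module
revoked.add("anymal-07")
assert not authorise("anymal-07", "asset-management", now, good)  # revoked
```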

These robots generate a massive amount of unstructured data as they walk around. Turning raw audio and thermal images into the neat tables SAP requires is difficult.

If companies don’t manage this right, maintenance teams will drown in alerts. A robot that is too sensitive might spit out hundreds of useless warnings a day, making the SAP dashboard completely ignored. IT teams have to set strict rules before turning the system on. They need exact thresholds for what triggers a real maintenance ticket and what just needs to be watched.

The setup usually uses middleware to translate the robot’s telemetry into SAP’s language. This software acts as a filter, throwing out the noise so only actual problems reach the ERP system. The data lake storing all this information also needs to be organised for future machine learning projects. Fixing broken machines is the short-term goal; the long-term payoff is using years of robot data to predict failures before they happen.
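A minimal sketch of that middleware filter, with invented field names rather than a real SAP schema: it promotes only genuine faults to tickets and suppresses duplicate alerts so the dashboard isn't flooded.

```python
# Map severity levels to an action: below-threshold events are only watched.
SEVERITY_MAP = {"warning": "watch", "fault": "ticket"}

class AlertFilter:
    """Translate raw robot telemetry into ERP-style tickets, de-duplicated."""

    def __init__(self):
        self._seen = set()  # (asset, fault_code) pairs already ticketed

    def process(self, event):
        key = (event["asset"], event["fault_code"])
        if key in self._seen:
            return None  # already raised: don't flood the maintenance queue
        if SEVERITY_MAP.get(event["level"]) != "ticket":
            return None  # below threshold: log-and-watch only
        self._seen.add(key)
        return {"WorkOrderType": "corrective",          # illustrative field names
                "Equipment": event["asset"],
                "Description": f"Robot-detected {event['fault_code']}"}

f = AlertFilter()
first = f.process({"asset": "PUMP-0042", "fault_code": "OVERHEAT", "level": "fault"})
repeat = f.process({"asset": "PUMP-0042", "fault_code": "OVERHEAT", "level": "fault"})
assert first is not None and repeat is None
```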

Ensuring a successful physical AI deployment

Dropping robots into a factory naturally makes people nervous. The project’s success often comes down to how human resources handles it. Workers usually look at the robots and assume layoffs are next.

Management has to be clear about why the robots are there. The goal is to get people out of dangerous areas like high-voltage zones or toxic chemical sectors to reduce injuries. The robot collects the data, and the human engineer shifts to analysing that data and doing the actual repairs.

This requires retraining. Workers who used to walk the perimeter now have to read SAP dashboards, manage automated tickets, and work with the robots. They have to trust the sensors, and management has to make sure operators know they can take manual control if something unexpected happens.

Companies need to take the rollout slowly. Because syncing physical robots with enterprise software is complicated, large-scale rollouts should start as small, targeted pilots.

The first test should be in one specific area with known hazards but rock-solid internet. This lets IT watch the data flow between the hardware and SAP in a controlled space. At this stage, the main job is making sure the data matches reality. If the robot sees one thing and SAP records another, it has to be audited and fixed daily.

Once the data pipeline actually works, the company can add more robots and connect other systems, like automated parts ordering. IT chiefs have to keep checking if their private networks can handle more robots, while security teams update their defenses against new threats.

If companies treat these autonomous inspectors as an extension of their corporate data architecture, they get a massive amount of information about their physical assets. But pulling it off means getting the network infrastructure, the data rules, and the human element exactly right.

See also: The rise of invisible IoT in enterprise operations


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

Ai2: Building physical AI with virtual simulation data
https://www.artificialintelligence-news.com/news/ai2-building-physical-ai-with-virtual-simulation-data/
Wed, 11 Mar 2026 16:50:56 +0000

The post Ai2: Building physical AI with virtual simulation data appeared first on AI News.

Virtual simulation data is driving the development of physical AI across corporate environments, led by initiatives like Ai2’s MolmoBot.

Instructing hardware to interact with the real world has historically relied on highly expensive and manually-collected demonstrations. Technology providers building generalist manipulation agents typically frame extensive real-world training as the basis for these systems.

For some context, projects like DROID include 76,000 teleoperated trajectories gathered across 13 institutions, representing roughly 350 hours of human effort. Google DeepMind’s RT-1 required 130,000 episodes collected over 17 months by human operators. This reliance on proprietary, manual data collection inflates research budgets and concentrates capabilities within a small group of well-resourced industrial laboratories.

“Our mission is to build AI that advances science and expands what humanity can discover,” said Ali Farhadi, CEO of Ai2. “Robotics can become a foundational scientific instrument, helping researchers move faster and explore new questions. To get there, we need systems that generalise in the real world and tools the global research community can build on together. Demonstrating transfer from simulation to reality is a meaningful step in that direction.”

Researchers from the Allen Institute for AI (Ai2) offer a different economic model with MolmoBot, an open robotic manipulation model suite trained entirely on synthetic information. By generating trajectories procedurally within a system called MolmoSpaces, the team bypasses the need for human teleoperation.

The accompanying dataset, MolmoBot-Data, contains 1.8 million expert manipulation trajectories. This collection was produced by combining the MuJoCo physics engine with aggressive domain randomisation, varying objects, viewpoints, lighting, and dynamics.
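The released pipeline itself is the authoritative reference; as a toy illustration of domain randomisation, each simulated episode can draw its own lighting, camera, and object parameters so the policy never trains on the same scene twice. The parameter ranges below are invented, and the real MuJoCo pipeline randomises far more.

```python
import random

def sample_scene(rng):
    """Draw one randomised scene configuration for a simulated episode."""
    return {
        "light_intensity": rng.uniform(0.2, 1.5),   # lighting randomisation
        "camera_yaw_deg": rng.uniform(-30.0, 30.0), # viewpoint randomisation
        "object_mass_kg": rng.uniform(0.05, 2.0),   # dynamics randomisation
        "friction": rng.uniform(0.4, 1.2),
        "table_height_m": rng.uniform(0.70, 0.80),
    }

rng = random.Random(0)  # seeded for reproducibility
scenes = [sample_scene(rng) for _ in range(3)]
# Every draw differs, which is the point: the policy must generalise
# instead of memorising one environment.
assert scenes[0] != scenes[1] != scenes[2]
```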

“Most approaches try to close the sim-to-real gap by adding more real-world data,” said Ranjay Krishna, Director of the PRIOR team at Ai2. “We took the opposite bet: that the gap shrinks when you dramatically expand the diversity of simulated environments, objects, and camera conditions. Our latest advancement shifts the constraint in robotics from collecting manual demonstrations to designing better virtual worlds, and that’s a problem we can solve.”

Generating virtual simulation data for physical AI

Using 100 Nvidia A100 GPUs, the pipeline created roughly 1,024 episodes per GPU-hour, equating to over 130 hours of robot experience for every hour of wall-clock time.

Compared to real-world data collection, this represents nearly four times the data throughput, directly impacting project return on investment by accelerating deployment cycles.
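Those figures can be sanity-checked with back-of-the-envelope arithmetic; note the average episode length is back-solved here as an assumption, not stated in the article:

```python
# Quoted figures: 100 GPUs, ~1,024 episodes per GPU-hour,
# "over 130 hours of robot experience per hour of wall-clock time".
gpus = 100
episodes_per_gpu_hour = 1024

episodes_per_wall_hour = gpus * episodes_per_gpu_hour
assert episodes_per_wall_hour == 102_400

# 130 experience-hours per wall-clock hour implies an average episode of
# roughly 4.6 simulated seconds (back-solved, not a published number):
experience_hours = 130
avg_episode_s = experience_hours * 3600 / episodes_per_wall_hour
assert 4.5 < avg_episode_s < 4.7
```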

The MolmoBot suite includes three distinct policy classes evaluated on two platforms: the Rainbow Robotics RB-Y1 mobile manipulator, and the Franka FR3 tabletop arm.  The primary model, built on a Molmo2 vision-language backbone, processes multiple timesteps of RGB observations and language instructions to dictate actions.

Hardware flexibility with Ai2’s MolmoBot

For edge computing environments where resources are constrained, the researchers provide MolmoBot-SPOC, a lightweight transformer policy with fewer parameters. MolmoBot-Pi0 uses a PaliGemma backbone to match the architecture of Physical Intelligence’s π0 model, permitting direct performance comparisons.

During physical testing, these policies demonstrated zero-shot transfer to real-world tasks involving unseen objects and environments without any fine-tuning.

In tabletop pick-and-place evaluations, the primary MolmoBot model achieved a success rate of 79.2 percent. This outperformed π0.5, a model trained on extensive real-world demonstration data, which achieved a 39.2 percent success rate. For mobile manipulation, the policies successfully executed tasks such as approaching, grasping, and pulling doors through their full range of motion.

Providing these varied architectures allows organisations to integrate capable physical AI systems without being locked into a single proprietary vendor ecosystem or extensive data collection infrastructure.

The open release of the entire MolmoBot stack – including the training data, generation pipelines, and model architectures – permits internal auditing and adaptation. Anyone exploring physical AI can leverage these open tools for the simulation and building of capable systems while controlling costs.

“For AI to truly advance science, progress cannot depend on closed data or isolated systems,” continued Farhadi. “It requires shared infrastructure that researchers everywhere can build on, test, and improve together. This is how we believe physical AI will move forward.”

See also: New partnership to offer smart robots for dangerous environments


How physical AI integration accelerates vehicle innovation
https://www.artificialintelligence-news.com/news/how-physical-ai-integration-accelerates-vehicle-innovation/
Wed, 11 Mar 2026 09:52:15 +0000

The post How physical AI integration accelerates vehicle innovation appeared first on AI News.

The integration of physical AI into vehicles remains a primary objective for automakers looking to accelerate innovation.

A technical collaboration between Qualcomm and Wayve offers a framework for how hardware and software providers can consolidate their efforts to supply production-ready advanced driver assistance systems to manufacturers worldwide.

The partnership combines Wayve’s AI driving layer with Qualcomm’s Snapdragon Ride system-on-chips and active safety software. This aims to simplify implementation while meeting baseline requirements around reliability, safety, and time-to-market.

Simplifying physical AI integration for modern vehicles

Building an autonomous driving stack often involves piecing together fragmented components from various vendors. This fragmented approach increases development costs, complexity, and project risk.

Pre-integrating the core processor, safety protocols, and the neural intelligence layer allows vehicle manufacturers to implement reliable capabilities faster while demanding less engineering effort. The unified system is engineered to support global deployment and long-term platform strategies over the lifespan of a vehicle.

Unlike traditional rule-based autonomy that relies heavily on detailed mapping, Wayve utilises a unified foundation model trained on diverse global data. This data-driven software learns driving behaviour directly from real-world exposure. This allows the system to adapt across different regions and road types without requiring location-specific engineering.

When embedded within a commercial vehicle, this form of physical AI needs massive yet energy-efficient processing power. Qualcomm provides that compute infrastructure through a safety-certified architecture featuring redundancy, real-time monitoring, and secure system isolation.

By establishing an open architecture that scales from mainstream models to premium systems, automotive brands can ensure consistent high performance. The design helps provide flexibility, supporting software portability and reuse across various platforms and model years.

Anshuman Saxena, VP and GM of ADAS and Robotics at Qualcomm, said: “ADAS is where scale, safety, and real‑world impact matter most for automakers today. Snapdragon Ride is built to support the widest range of long‑term platform strategies, enabling automakers to standardise across programs and regions while retaining flexibility.

“Together with Wayve, we’re empowering automakers with more choice for how advanced driving systems are developed, deployed, and scaled, while also helping them reduce development cycles, effort and risk.”

The alliance also secures future optionality for enterprise investments. Both companies plan to explore applying these system-on-chips in future Level 4 robotaxi deployments.

Balancing standardisation with brand identity

A common concern among leaders adopting pre-integrated vendor platforms, especially in a brand-conscious industry like automotive, is the potential loss of differentiation. Building on an open physical AI framework allows vehicle manufacturers to standardise underlying hardware and software across regions while retaining the ability to differentiate brand experiences and model tiers.

Alex Kendall, Co-founder and CEO of Wayve, commented: “Wayve AI Driver is designed as a flexible, vehicle-agnostic software that serves as the intelligence layer for autonomy for any vehicle, anywhere. Our collaboration with Qualcomm Technologies provides global automakers building on Snapdragon Ride with a streamlined path to deploy market-leading, end-to-end AI automated driving capability alongside Qualcomm’s Active Safety stack.

“By combining our embodied AI driving intelligence with Qualcomm Technologies’ compute performance, platform maturity, and global scale, we are expanding choice and delivering immediate value to automakers across ADAS and automated driving systems, with natural progression from hands-off to eyes-off operation.”

As autonomous technology matures, leaders must evaluate vendor alignments that lower implementation hurdles. Pre-integrated systems offer a practical route to delivering complex physical AI, controlling operational costs, and securing a competitive edge in the global vehicle landscape.

See also: ABB: Physical AI simulation boosts ROI for factory automation


From cloud to factory – humanoid robots coming to workplaces
https://www.artificialintelligence-news.com/news/from-cloud-to-factory-humanoid-robots-coming-to-workplaces/
Fri, 09 Jan 2026 13:06:00 +0000

The Microsoft-Hexagon partnership may mark a turning point in the acceptance of humanoid robots in the workplace, as prototypes become operational realities.

The post From cloud to factory – humanoid robots coming to workplaces appeared first on AI News.

The partnership announced this week between Microsoft and Hexagon Robotics marks an inflection point in the commercialisation of humanoid, AI-powered robots for industrial environments. The two companies will combine Microsoft’s cloud and AI infrastructure with Hexagon’s expertise in robotics, sensors, and spatial intelligence to advance the deployment of physical AI systems in real-world settings.

At the centre of the collaboration is AEON, Hexagon’s industrial humanoid robot, a device designed to operate autonomously in environments like factories, logistics hubs, engineering plants, and inspection sites.

The partnership will focus on multimodal AI training, imitation learning, real-time data management, and integration with existing industrial systems. Initial target sectors include automotive, aerospace, manufacturing, and logistics, the companies say – industries where labour shortages and operational complexity are already constraining growth.

The announcement signals a maturing ecosystem: the convergence of cloud platforms, physical AI, and robotics engineering is making humanoid automation commercially viable.

Humanoid robots out of the research lab

Humanoid robots have long been the subject of research at institutions and proud demonstrations at technology events, but the last five years have seen a move to practical deployment in real-world working environments. The main change has been the combination of improved perception, advances in reinforcement and imitation learning, and the availability of scalable cloud infrastructure.

One of the most visible examples is Agility Robotics’ Digit, a bipedal humanoid robot designed for logistics and warehouse operations. Digit has been piloted in live environments by companies like Amazon, where it performs material-handling tasks including tote movement and last-metre logistics. Such deployments tend to focus on augmenting human workers rather than replacing them, with Digit handling more physically demanding tasks.

Similarly, Tesla’s Optimus programme has moved beyond concept videos and is now undergoing factory trials. Optimus robots are being tested on structured tasks like part handling and equipment transport inside Tesla’s automotive manufacturing facilities. While still limited in scope, these pilots illustrate why humanoid form factors are being chosen over less anthropomorphic designs: they can operate in spaces built for, and populated by, humans.

Inspection, maintenance, and hazardous environments

Industrial inspection is emerging as one of the earliest commercially viable use cases for humanoid and quasi-humanoid robots. Boston Dynamics’ Atlas, while not yet a general-purpose commercial product, has been used in live industrial trials for inspection and disaster-response environments. It can navigate uneven terrain, climb stairs, and manipulate tools in places considered unsafe for humans.

Toyota Research Institute has deployed humanoid robotics platforms for remote inspection and manipulation tasks in similar settings. Toyota’s systems rely on multimodal perception and human-in-the-loop control, the latter reinforcing an industry trend: early deployments prioritise reliability and traceability, and therefore retain human oversight.

Hexagon’s AEON aligns closely with this trend. Its emphasis on sensor fusion and spatial intelligence is relevant for inspection and quality assurance tasks, where precise understanding of physical environments is more valuable than the conversational abilities most associated with everyday use of AIs.

Cloud platforms central to robotics strategy

A defining feature of the Microsoft-Hexagon partnership is the use of cloud infrastructure to scale humanoid robots. Training, updating, and monitoring physical AI systems generate large quantities of data, including video, force feedback from on-device sensors, spatial mapping (such as that derived from LIDAR), and operational telemetry. Managing this data locally has historically been a bottleneck, due to storage and processing constraints.

By using platforms like Azure and Azure IoT Operations, plus real-time intelligence services in the cloud, humanoid robots can be trained fleet-wide rather than as isolated units. This enables shared learning, iterative improvement, and greater consistency. For board-level buyers, these IT architecture shifts mean humanoid robots become viable entities that can be treated – in terms of IT requirements – more like enterprise software than machinery.

Labour shortages drive adoption

The demographic trends in manufacturing, logistics, and asset-intensive industries are increasingly unfavourable. Ageing workforces, declining interest in manual roles, and persistent skills shortages create gaps that conventional automation cannot fully address – at least, not without rebuilding entire facilities to suit a robotic workforce. Fixed robotic systems excel at repetitive, predictable tasks but struggle in dynamic, human environments.

Humanoid robots occupy a middle ground. Rather than replacing entire workflows, they can stabilise operations where human availability is uncertain. Case studies show early value in night shifts, periods of peak demand, and tasks deemed too hazardous for humans.

What boards should evaluate before investing

For decision-makers considering investment in next-generation workplace robots, several issues to note have emerged from existing, real-world deployments:

Task specificity matters more than general intelligence, with the more successful pilots focusing on well-defined activities. Data governance and security must remain front and centre when robots are deployed, especially when they need to connect to cloud platforms.

At a human level, workforce integration can be more challenging than sourcing, installing, and running the technology itself. Yet human oversight remains essential at this stage in AI maturity, for safety and regulatory acceptance.

A measured but irreversible shift

Humanoid robots won’t replace the human workforce, but a growing body of evidence from live deployments and prototyping shows such devices are moving into the workplace. Humanoid, AI-powered robots can already perform economically valuable tasks, and integration with existing industrial systems is increasingly practical. For boards with the appetite to invest, the question is less whether the technology works than when competitors will deploy it responsibly and at scale.

(Image source: Hexagon Robotics)

 


ClinCheck Live brings AI planning to Invisalign dental treatments
https://www.artificialintelligence-news.com/news/clincheck-live-brings-ai-planning-to-invisalign-dental-treatments/
Tue, 04 Nov 2025 11:37:13 +0000

The post ClinCheck Live brings AI planning to Invisalign dental treatments appeared first on AI News.

Align Technology, a medical device company that designs, manufactures, and sells the Invisalign system of clear aligners, exocad CAD/CAM software, and iTero intra-oral scanners, has unveiled ClinCheck Live Plan, a new feature in its Invisalign digital dental treatment planning.

ClinCheck Live Plan is designed to automate the creation of an initial Invisalign treatment plan that’s ready for a practitioner to review and approve, cutting treatment planning cycles from days down to just 15 minutes. The goal is to help patients get the treatment they need faster.

ClinCheck Live Plan follows a series of treatment planning tools and automation features Align has launched in recent years, like the cloud-based ClinCheck Pro 6.0 software, the automated Invisalign Personalised Plan templates, and the one-page Flex Rx prescription form for simplified workflows. Each has been designed to improve consistency, dentist control, and speed.

Built on Align’s data and algorithms, ClinCheck Live Plan has been in development for decades, with insights from dentists and orthodontists who have treated over 21 million Invisalign patients globally.

Dentists will be able to create and adjust treatment plans and, once an eligible case has been submitted using the Flex Rx system, receive a personalised ClinCheck treatment plan in approximately 15 minutes.

Invisalign specialists can review their patients’ teeth and how they plan to adjust them, helping improve service while the patient is present. Once an Invisalign clinician submits a new case with an iTero intra-oral scan and a completed Flex Rx prescription, the ClinCheck Live Plan system makes a 3D plan. Ultimately, a faster process should help clinics operate more efficiently and enhance their patients’ experiences.

Invisalign-trained specialists who currently use the ClinCheck preferences template and Flex Rx form will gain access to ClinCheck Live Plan when it becomes available in their region. A worldwide rollout is set to start in the first quarter of 2026.

(Image source: “Visiting the dentist in SL” by Daniel Voyager is licensed under CC BY 2.0.)

 


AI Redaction That Puts Privacy First: CaseGuard Studio Leading The Way
https://www.artificialintelligence-news.com/news/ai-redaction-that-puts-privacy-first-caseguard-studio-leading-the-way/
Wed, 08 Oct 2025 09:07:44 +0000

The post AI Redaction That Puts Privacy First: CaseGuard Studio Leading The Way appeared first on AI News.

Law enforcement, law firms, hospitals, and financial institutions are asked every day to release records, which can contain highly sensitive details – including addresses, social security numbers, medical diagnoses, evidence footage, and children’s identities.

To meet compliance and security requirements, staff spend hundreds of hours manually redacting sensitive information, yet when that process goes wrong, there can be costly consequences. Last year, healthcare company Advanced was fined £6 million for losing patient records that, among other details, contained information about how to gain entry to the homes of 890 care receivers. Even the smallest oversights can create unpleasant headlines and catastrophic fines.

This is the reality of modern data handling: leaks can be catastrophic, and compliance frameworks like GDPR, HIPAA, and FERPA, plus FOIA requests, require more vigilance than manual redaction can provide. What organizations need is not more staff to ensure proper redaction, but tools that achieve it quickly, reliably, and securely.

CaseGuard Studio, a US-based AI redaction and investigation platform, has built software that automates this manual work with 98% accuracy. It can process thousands of files in minutes, handling any file type, including video, audio, documents, and images, with all data kept securely on-premises.

Why Manual Redaction No Longer Works

Redaction is not new, but the tools most people reach for were not built for the complexity of today’s compliance requirements. Adobe Acrobat, for example, offers text redaction but requires manual work on each document. Adobe Premiere’s video editing software demands frame-by-frame subject tracking for video redaction, which is slow and impractical. These solutions provide only limited capability and were never designed for departments that process large volumes of redactions every week.

CaseGuard Studio, by contrast, was purpose-built for just this challenge. It can detect 12 categories of PII (personally-identifiable information) in video and images, such as faces, license plates, notepads, and more. It tracks and redacts all PII without needing manual frame-by-frame intervention.

For audio and documents, CaseGuard Studio supports over 30 PII types, like names, phone numbers, and addresses. Custom keywords, phrases, or sentences can be auto-detected and redacted directly from thousands of documents and transcripts, streamlining compliance in ways manual tools can’t match. It transcribes recordings with high accuracy and can translate to and from 100+ languages, so it can redact sensitive terms in multilingual content.
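As a rough illustration of the kind of bulk keyword redaction described above — a toy sketch using only the Python standard library, not CaseGuard’s actual implementation — a pattern-based pass over text might look like this:

```python
# Toy keyword/phrase redaction sketch (illustrative only; CaseGuard's
# real pipeline uses AI models, not a simple regex pass).
import re

def redact(text: str, terms: list[str], mask: str = "[REDACTED]") -> str:
    """Replace every case-insensitive occurrence of each term with a mask."""
    if not terms:
        return text
    # Sort longest-first so "John Smith" is matched before "John".
    pattern = re.compile(
        "|".join(re.escape(t) for t in sorted(terms, key=len, reverse=True)),
        re.IGNORECASE,
    )
    return pattern.sub(mask, text)

record = "Contact John Smith at 555-0123 regarding case #41."
print(redact(record, ["John Smith", "555-0123"]))
# Contact [REDACTED] at [REDACTED] regarding case #41.
```

The same function can be mapped over thousands of documents; sorting terms longest-first prevents a short term from clipping a longer phrase that contains it.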

What once took days of human labor can now happen in minutes. CaseGuard Studio automates redaction work with 98% accuracy, up to 30 times faster than manual methods, and because it runs fully on-premise, data never leaves the device.

What to Ask When Choosing Redaction Software

For organizations evaluating redaction software, the decision often comes down to a handful of critical questions that determine whether a platform can deliver on both compliance and efficiency. The following questions are central to making the right choice.

  • Can the software handle every file type we work with? From scanned forms and handwritten notes to video, audio, and still images, organizations in sensitive sectors deal with more than PDFs.
  • Is the platform fully automated? If redaction still means blacking out text with a Sharpie or scrubbing video frame by frame, the process is slow and prone to error. Full automation ensures accuracy and frees staff for higher-impact work.
  • Does the software ensure data never leaves your environment? On-premise deployment means sensitive files are processed locally, so nothing is exposed to third-party servers or cloud risks.
  • Does the pricing stay predictable as you scale? Per-file or per-minute pricing quickly becomes unsustainable as workloads grow. Look for a flat subscription with unlimited redaction, so costs stay predictable no matter how much data you process.

Evaluating CaseGuard Studio Against the Four Redaction Essentials

When assessed against these requirements, CaseGuard Studio was the only platform in our evaluation that consistently delivered across all four redaction essentials.

1. Auto-redact files from any source

From text documents and scanned forms to video, audio, images, and even handwriting, redaction has to cover every format where sensitive information might appear. Missing a single identifiable feature, such as a face in a crowd or an unredacted license plate, can be the difference between full compliance and a lawsuit. CaseGuard Studio automatically detects and redacts sensitive information across all these file types within a single platform.

2. Automated bulk redaction at speed and scale

Thousands of files can be redacted in bulk, turning weeks of manual effort into minutes of processing. CaseGuard Studio handles workloads up to 32x faster than manual methods, with 98% accuracy, giving organizations the speed and scalability to meet growing compliance demands.

3. Your data, your control

CaseGuard Studio runs fully on-premise, within your secure environment, including air-gapped systems that are completely isolated from external networks. This ensures organizations retain full control of their data, with nothing exposed to third-party servers or cloud risks.

4. Unlimited redaction, no pay-per-file fees

Pay-per-file pricing quickly adds up, making every additional redaction more expensive. CaseGuard Studio offers predictable pricing under a flat subscription with unlimited redaction, so costs remain the same no matter how heavy the redaction load is.

Final Thoughts

Over the course of our evaluation, we compared methods and platforms ranging from manual redaction and legacy PDF editors to newer AI-driven tools that have appeared in the last few years. Most delivered partial solutions: some handled written documents well but failed on audio, while others blurred faces in video but weren’t practical to use at scale. Cloud-only options raised sovereignty and compliance concerns that, for many users, would count them out of the running entirely.

CaseGuard Studio was the only platform that consistently met all four requirements detailed above. It supports the widest range of file types, from body-cam video to scanned or handwritten forms.

Audio and video are probably the most difficult formats to redact, especially at scale. Here, CaseGuard wins our vote with its AI-powered smarts. It runs fully on-premise, keeps sensitive files under organizational control, and its local AI models are refined with each version release.

At a time when many cloud redaction software licensing models drive up costs as workloads grow, CaseGuard’s flat pricing offers a refreshing change — predictable, transparent, and sustainable.

For any organization facing rising compliance demands and ever-larger volumes of sensitive data, CaseGuard Studio is well worth a closer look. Click here to book a consultation.


The post AI Redaction That Puts Privacy First: CaseGuard Studio Leading The Way appeared first on AI News.

]]>
UK deploys AI to boost Arctic security amid growing threats https://www.artificialintelligence-news.com/news/uk-deploys-ai-to-boost-arctic-security-amid-growing-threats/ Tue, 27 May 2025 14:39:13 +0000 https://www.artificialintelligence-news.com/?p=106587 The UK is deploying AI to keep a watchful eye on Arctic security threats from hostile states amid growing geopolitical tensions. This will be underscored by Foreign Secretary David Lammy during his visit to the region, which kicks off today. The deployment is seen as a signal of the UK’s commitment to leveraging technology to […]

The post UK deploys AI to boost Arctic security amid growing threats appeared first on AI News.

]]>
The UK is deploying AI to keep a watchful eye on Arctic security threats from hostile states amid growing geopolitical tensions. This will be underscored by Foreign Secretary David Lammy during his visit to the region, which kicks off today.

The deployment is seen as a signal of the UK’s commitment to leveraging technology to navigate an increasingly complex global security landscape. For Britain, what unfolds in the territories of two of its closest Arctic neighbours – Norway and Iceland – has direct and profound implications.

The national security of the UK is linked to stability in the High North. The once remote and frozen expanse is changing, and with it, the security calculus for the UK.

Foreign Secretary David Lammy said: “The Arctic is becoming an increasingly important frontier for geopolitical competition and trade, and a key flank for European and UK security. 

“We cannot bolster the UK’s defence and deliver the Plan for Change without greater security in the Arctic. This is a region where Russia’s shadowfleet operates, threatening critical infrastructure like undersea cables to the UK and Europe, and helping fund Russia’s aggressive activity.”

British and Norwegian naval vessels conduct vital joint patrols in the Arctic. These missions are at the sharp end of efforts to detect, deter, and manage the increasing subsea threats that loom over vital energy supplies, national infrastructure, and broader regional security.

Russia’s Northern Fleet, in particular, presents a persistent challenge in these icy waters. This high-level engagement follows closely on the heels of the Prime Minister’s visit to Norway earlier this month for a Joint Expeditionary Force meeting, where further support for Ukraine was a key talking point with allies from the Baltic and Scandinavian states.

During the Icelandic stop of his tour, Lammy will unveil a UK-Iceland tech partnership to boost Arctic security. This new scheme is designed to harness AI technologies for monitoring hostile activity across this vast and challenging region. It’s a forward-looking strategy, acknowledging that as the Arctic opens up, so too do the opportunities for those who might seek to exploit its vulnerabilities.

As global temperatures climb and the ancient ice caps continue their retreat, previously impassable shipping routes are emerging. This is not just a matter for climate scientists; it’s redrawing geopolitical maps. The Arctic is fast becoming an arena of increased competition, with nations eyeing newly accessible reserves of gas, oil, and precious minerals. Unsurprisingly, this scramble for resources is cranking up security concerns.

Adding another layer of complexity, areas near the Arctic are being actively used by Russia’s fleet of nuclear-powered icebreakers. Putin’s vessels are crucial to his “High North” strategy, carving paths for tankers that, in turn, help to bankroll his illegal war in Ukraine.

Such operations cast a long shadow, threatening not only maritime security but also the delicate Arctic environment. Reports suggest Putin has been forced to rely on “dodgy and decaying vessels,” which frequently suffer breakdowns and increase the risk of devastating oil spills.

The UK’s defence partnership with Norway is deeply rooted, with British troops undertaking vital Arctic training in the country for over half a century. This enduring collaboration is now being elevated through an agreement to fortify the security of both nations.

“It’s more important than ever that we work with our allies in the High North, like Norway and Iceland, to enhance our ability to patrol and protect these waters,” added Lammy.

“That’s why we have today announced new UK funding to work more closely with Iceland, using AI to bolster our ability to monitor and detect hostile state activity in the Arctic.”

Throughout his Arctic tour, the Foreign Secretary will be emphasising the UK’s role in securing NATO’s northern flank. This includes the often unseen but hugely significant task of protecting the region’s critical undersea infrastructure – the cables and pipelines that are the lifelines for stable energy supplies and telecoms for the UK and much of Europe.

These targeted Arctic security initiatives are part and parcel of a broader, robust enhancement of the UK’s overall defence posture. Earlier this year, the Prime Minister announced the most significant sustained increase in defence spending since the Cold War. This will see UK defence expenditure climb to 2.5% of GDP by April 2027, with a clear ambition to reach 3% in the next Parliament, contingent on economic and fiscal conditions.

The significance of maritime security and the Arctic is also recognised in the UK’s ambitious new Security and Defence Partnership with the EU, agreed last week. This pact commits both sides to closer collaboration to make Europe a safer place.

In today’s interconnected world, security, climate action, and international collaboration are inextricably linked. The turn to AI isn’t just a tech upgrade; it’s a strategic necessity.

(Photo by Annie Spratt)

See also: Thales: AI and quantum threats top security agendas

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK deploys AI to boost Arctic security amid growing threats appeared first on AI News.

]]>
Congress pushes GPS tracking for every exported semiconductor https://www.artificialintelligence-news.com/news/congress-pushes-gps-tracking-for-every-exported-semiconductor/ Fri, 16 May 2025 12:17:30 +0000 https://www.artificialintelligence-news.com/?p=106457 America’s quest to protect its semiconductor technology from China has taken increasingly dramatic turns over the past few years—from export bans to global restrictions—but the latest proposal from Congress ventures into unprecedented territory.  Lawmakers are now pushing for mandatory GPS-style tracking embedded in every AI chip exported from the United States, essentially turning advanced semiconductors […]

The post Congress pushes GPS tracking for every exported semiconductor appeared first on AI News.

]]>
America’s quest to protect its semiconductor technology from China has taken increasingly dramatic turns over the past few years—from export bans to global restrictions—but the latest proposal from Congress ventures into unprecedented territory. 

Lawmakers are now pushing for mandatory GPS-style tracking embedded in every AI chip exported from the United States, essentially turning advanced semiconductors into devices that report their location back to Washington.

On May 15, 2025, a bipartisan group of eight House representatives introduced the Chip Security Act, which would require companies like Nvidia to embed location verification mechanisms in their processors before export. 

This represents perhaps the most invasive approach yet in America’s technological competition with China, moving far beyond restricting where chips can go to actively monitoring where they end up.

The mechanics of AI chip surveillance

Under the proposed Chip Security Act, AI chip surveillance would become mandatory for all “covered integrated circuit products”—including those classified under Export Control Classification Numbers 3A090, 3A001.z, 4A090, and 4A003.z. Companies like Nvidia would be required to embed location verification mechanisms in their AI chips before export, reexport, or in-country transfer to foreign nations.

Representative Bill Huizenga, the Michigan Republican who introduced the House bill, stated that “we must employ safeguards to help ensure export controls are not being circumvented, allowing these advanced AI chips to fall into the hands of nefarious actors.” 

His co-lead, Representative Bill Foster—an Illinois Democrat and former physicist who designed chips during his scientific career—added, “I know that we have the technical tools to prevent powerful AI technology from getting into the wrong hands.”

The legislation goes far beyond simple location tracking. Companies would face ongoing surveillance obligations, required to report any credible information about chip diversion, including location changes, unauthorized users, or tampering attempts. 

This creates a continuous monitoring system that extends indefinitely beyond the point of sale, fundamentally altering the relationship between manufacturers and their products.

Cross-party support for technology control

Perhaps most striking about this AI chip surveillance initiative is its bipartisan nature. The bill enjoys broad support across party lines, co-led by House Select Committee on China Chairman John Moolenaar and Ranking Member Raja Krishnamoorthi. Other cosponsors include Representatives Ted Lieu, Rick Crawford, Josh Gottheimer, and Darin LaHood.

Moolenaar said that “the Chinese Communist Party has exploited weaknesses in our export control enforcement system—using shell companies and smuggling networks to divert sensitive US technology.” 

The bipartisan consensus on AI chip surveillance reflects how deeply the China challenge has penetrated American political thinking, transcending traditional partisan divisions.

The Senate has already introduced similar legislation through Senator Tom Cotton, suggesting that semiconductor surveillance has broad congressional support. Coordination between chambers indicates that some form of AI chip surveillance may become law regardless of which party controls Congress.

Technical challenges and implementation questions

The technical requirements for implementing AI chip surveillance raise significant questions about feasibility, security, and performance. The bill mandates that chips implement “location verification using techniques that are feasible and appropriate” within 180 days of enactment, but provides little detail on how such mechanisms would work without compromising chip performance or introducing new vulnerabilities.

For industry leaders like Nvidia, implementing mandatory surveillance technology could fundamentally alter product design and manufacturing processes. Each chip would need embedded capabilities to verify its location, potentially requiring additional components, increased power consumption, and processing overhead that could impact performance—precisely what customers in AI applications cannot afford.

The bill also grants the Secretary of Commerce broad enforcement authority to “verify, in a manner the Secretary determines appropriate, the ownership and location” of exported chips. This creates a real-time surveillance system where the US government could potentially track every advanced semiconductor worldwide, raising questions about data sovereignty and privacy.

Commercial surveillance meets national security

The AI chip surveillance proposal represents an unprecedented fusion of national security imperatives with commercial technology products. Unlike traditional export controls that simply restrict destinations, this approach creates ongoing monitoring obligations that blur the lines between private commerce and state surveillance.

Representative Foster’s background as a physicist lends technical credibility to the initiative, but it also highlights how scientific expertise can be enlisted in geopolitical competition. The legislation reflects a belief that technical solutions can solve political problems—that embedding surveillance capabilities in semiconductors can prevent their misuse.

Yet the proposed law raises fundamental questions about the nature of technology export in a globalized world. Should every advanced semiconductor become a potential surveillance device? 

How will mandatory AI chip surveillance affect innovation in countries that rely on US technology? What precedent does this set for other nations seeking to monitor their technology exports?

Accelerating technological decoupling

The mandatory AI chip surveillance requirement could inadvertently accelerate the development of alternative semiconductor ecosystems. If US chips come with built-in tracking mechanisms, countries may intensify efforts to develop domestic alternatives or source from suppliers without such requirements.

China, already investing heavily in semiconductor self-sufficiency following years of US restrictions, may view these surveillance requirements as further justification for technological decoupling. The irony is striking: efforts to track Chinese use of US chips may ultimately reduce their appeal and market share in global markets.

Meanwhile, allied nations may question whether they want their critical infrastructure dependent on chips that can be monitored by the US government. The legislation’s broad language suggests that AI chip surveillance would apply to all foreign countries, not just adversaries, potentially straining relationships with partners who value technological sovereignty.

The future of semiconductor governance

As the Trump administration continues to formulate its replacement for Biden’s AI Diffusion Rule, Congress appears unwilling to wait. The Chip Security Act represents a more aggressive approach than traditional export controls, moving from restriction to active surveillance in ways that could reshape the global semiconductor industry.

This evolution reflects deeper changes in how nations view technology exports in an era of great power competition. The semiconductor industry, once governed primarily by market forces and technical standards, increasingly operates under geopolitical imperatives that prioritize control over commerce.

Whether AI chip surveillance becomes law depends on congressional action and industry response. But the bipartisan support suggests that some form of semiconductor monitoring may be inevitable, marking a new chapter in the relationship between technology, commerce, and national security.

Conclusion: The end of anonymous semiconductors from America?

The question facing the industry is no longer whether the US will control technology exports, but how extensively it will monitor them after they leave American shores. In this emerging paradigm, every chip becomes a potential intelligence asset, and every export a data point in a global surveillance network.

The semiconductor industry now faces a critical choice: adapt to a future where products carry their own tracking systems, or risk being excluded from the US market entirely. 

As Congress pushes for mandatory AI chip surveillance, we may be witnessing the end of anonymous semiconductors and the beginning of an era where every processor knows exactly where it belongs—and reports back accordingly.

See also: US-China tech war escalates with new AI chips export controls


The post Congress pushes GPS tracking for every exported semiconductor appeared first on AI News.

]]>
Dame Wendy Hall, AI Council: Shaping AI with ethics, diversity and innovation https://www.artificialintelligence-news.com/news/dame-wendy-hall-ai-council-shaping-ai-with-ethics-diversity-and-innovation/ Mon, 31 Mar 2025 10:54:40 +0000 https://www.artificialintelligence-news.com/?p=105089 Dame Wendy Hall is a pioneering force in AI and computer science. As a renowned ethical AI speaker and one of the leading voices in technology, she has dedicated her career to shaping the ethical, technical and societal dimensions of emerging technologies. She is the co-founder of the Web Science Research Initiative, an AI Council […]

The post Dame Wendy Hall, AI Council: Shaping AI with ethics, diversity and innovation appeared first on AI News.

]]>
Dame Wendy Hall is a pioneering force in AI and computer science. As a renowned ethical AI speaker and one of the leading voices in technology, she has dedicated her career to shaping the ethical, technical and societal dimensions of emerging technologies. She is the co-founder of the Web Science Research Initiative, an AI Council Member and was named as one of the 100 Most Powerful Women in the UK by Woman’s Hour on BBC Radio 4.

A key advocate for responsible AI governance and diversity in tech, Wendy has played a crucial role in global discussions on the future of AI.

In our Q&A, we spoke to her about the gender imbalance in the AI industry, the ethical implications of emerging technologies, and how businesses can harness AI while ensuring it remains an asset to humanity.

The AI sector remains heavily male-dominated. Can you share your experience of breaking into the industry and the challenges women face in achieving greater representation in AI and technology?

It’s incredibly frustrating because I wrote my first paper about the lack of women in computing back in 1987, when we were just beginning to teach computer science degree courses at Southampton. That October, we arrived at the university and realised we had no women registered on the course — none at all.

So, those of us working in computing started discussing why that was the case. There were several reasons. One significant factor was the rise of the personal computer, which was marketed as a toy for boys, fundamentally changing the culture. Since then, in the West — though not as much in countries like India or Malaysia — computing has been seen as something nerdy, something that only ‘geeks’ do. Many young girls simply do not want to be associated with that stereotype. By the time they reach their GCSE choices, they often don’t see computing as an option, and that’s where the problem begins.

Despite many efforts, we haven’t managed to change this culture. Nearly 40 years later, the industry is still overwhelmingly male-dominated, even though women make up more than half of the global population. Women are largely absent from the design and development of computers and software. We apply them, we use them, but we are not part of the fundamental conversations shaping future technologies.

AI is even worse in this regard. If you want to work in machine learning, you need a degree in mathematics or computer science, which means we are funnelling an already male-dominated sector into an even more male-dominated pipeline.

But AI is about more than just machine learning and programming. It’s about application, ethics, values, opportunities, and mitigating potential risks. This requires a broad diversity of voices — not just in terms of gender, but also in age, ethnicity, culture, and accessibility. People with disabilities should be part of these discussions, ensuring technology is developed for everyone.

AI’s development needs input from many disciplines — law, philosophy, psychology, business, and history, to name just a few. We need all these different voices. That’s why I believe we must see AI as a socio-technical system to truly understand its impact. We need diversity in every sense of the word.

As businesses increasingly integrate AI into their operations, what steps should they take to ensure emerging technologies are developed and deployed ethically?

Take, for example, facial recognition. We still haven’t fully established the rules and regulations for when and how this technology should be applied. Did anyone ask you whether you wanted facial recognition on your phone? It was simply offered as a system update, and you could either enable it or not.

We know facial recognition is used extensively for surveillance in China, but it is creeping into use across Europe and the US as well. Security forces are adopting it, which raises concerns about privacy. At the same time, I appreciate the presence of CCTV cameras in car parks at night — they make me feel safer.

This duality applies to all emerging technologies, including AI tools we haven’t even developed yet. Every new technology has a good and a bad side — the yin and the yang, if you will. There are always benefits and risks.

The challenge is learning how to maximise the benefits for humanity, society and business while mitigating the risks. That’s what we must focus on — ensuring AI works in service of people rather than against them.

The rapid advancement of AI is transforming everyday life. How do you envision the future of AI, and what significant changes will it bring to society and the way we work?

I see a future where AI becomes part of the decision-making process, whether in legal cases, medical diagnoses, or education.

AI is already deeply embedded in our daily lives. If you use Google on your phone, you’re using AI. If you unlock your phone with facial recognition, that’s AI. Google Translate? AI. Speech processing, video analysis, image recognition, text generation, and natural language processing — these are all AI-driven technologies.

Right now, the buzz is around generative AI, particularly ChatGPT. It’s like how ‘Hoover’ became synonymous with vacuum cleaners — ChatGPT has become shorthand for AI. In reality, it’s just a clever interface created by OpenAI to allow public access to its generative AI model.

It feels like you’re having a conversation with the system, asking questions and receiving natural language responses. It works with images and videos too, making it seem incredibly advanced. But the truth is, it’s not actually intelligent. It’s not sentient. It’s simply predicting the next word in a sequence based on training data. That’s a crucial distinction.
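That next-word mechanism can be illustrated with a deliberately tiny example — a toy bigram model, nothing like the scale or sophistication of a real large language model, but the same underlying idea of predicting the next token from what came before:

```python
# Toy bigram "language model": count which word most often follows each
# word in the training text, then predict accordingly. Real LLMs use
# neural networks trained on vast corpora, but the principle of
# next-token prediction is the same.
from collections import Counter, defaultdict

def train_bigrams(text):
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

model = train_bigrams("the cat sat on the mat and the cat ate the fish")
print(predict_next(model, "the"))  # "cat": the most common word after "the"
```

The model has no understanding of cats or fish; it simply reproduces statistical patterns in its training data, which is Hall’s point about prediction rather than sentience.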

With generative AI becoming a powerful tool for businesses, what strategies should companies adopt to leverage its capabilities while maintaining human authenticity in their processes and decision-making?

Generative AI is nothing to be afraid of, and I believe we will all start using it more and more. Essentially, it’s software that can assist with writing, summarising, and analysing information.

I compare it to when calculators first appeared. People were outraged: ‘How can we allow calculators in schools? Can we trust the answers they provide?’ But over time, we adapted. The finance industry, for example, is now entirely run by computers, yet it employs more people than ever before. I expect we’ll see something similar with generative AI.

People will be relieved not to have to write endless essays. AI will enhance creativity and efficiency, but it must be viewed as a tool to augment human intelligence, not replace it, because it’s simply not advanced enough to take over.

Look at the legal industry. AI can summarise vast amounts of data, assess the viability of legal cases, and provide predictive analysis. In the medical field, AI could support diagnoses. In education, it could help assess struggling students.

I envision AI being integrated into decision-making teams. We will consult AI, ask it questions, and use its responses as a guide — but it’s crucial to remember that AI is not infallible.

Right now, AI models are trained on biased data. If they rely on information from the internet, much of that data is inaccurate. AI systems also ‘hallucinate’ by generating false information when they don’t have a definitive answer. That’s why we can’t fully trust AI yet.

Instead, we must treat it as a collaborative partner — one that helps us be more productive and creative while ensuring that humans remain in control. Perhaps AI will even pave the way for shorter workweeks, giving us more time for other pursuits.

Photo by Igor Omilaev on Unsplash and AI Speakers Agency.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Dame Wendy Hall, AI Council: Shaping AI with ethics, diversity and innovation appeared first on AI News.

Lighthouse AI for Review enhances document eDiscovery https://www.artificialintelligence-news.com/news/lighthouse-ai-for-review-enhances-document-ediscovery/ Wed, 26 Mar 2025 12:02:32 +0000 https://www.artificialintelligence-news.com/?p=105013 In an increasing number of industries, eDiscovery of regulation and compliance documents can make trading (across state borders in the US, for example) less complex. In an industry like pharmaceutical, and its often complex supply chains, companies have to be aware of the mass of changing rules and regulations emanating from different legislatures at local […]

In an increasing number of industries, eDiscovery of regulation and compliance documents can make trading (across state borders in the US, for example) less complex.

In an industry like pharmaceuticals, with its often complex supply chains, companies have to be aware of the mass of changing rules and regulations emanating from different legislatures at local and federal levels. It’s no surprise, therefore, that regulated supply chain compliance is an area where AI can be hugely beneficial. Given that AIs excel at reading and parsing documentation and images, service providers like Lighthouse AI use the technology in its different forms to comb through the existing and new documentation that governs the industry.

The company’s latest suite, Lighthouse AI for Review, combines several strands of machine learning (predictive and generative AI, image recognition and OCR, plus linguistic modelling) to handle use cases in large-volume, time-sensitive settings.

Predictive AI is used for classification of documents and generative AI helps with the review process for better, more defensible, downstream results. The company claims that the linguistic modelling element of the suite refines the platform’s accuracy to levels normally “beyond AI’s capabilities.”
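One way to picture a two-stage workflow of this kind is shown below. This is a hypothetical sketch, not Lighthouse’s actual architecture: the `classify` and `summarise` functions are trivial stand-ins for a trained predictive classifier and a generative (LLM) step.

```python
# Hypothetical two-stage review pipeline: a predictive classifier triages
# documents, and a generative step drafts review notes only for the
# documents the classifier flags. Both models here are trivial stand-ins.

def classify(doc: str) -> bool:
    """Predictive stage stand-in: flag documents mentioning regulated terms."""
    return any(term in doc.lower() for term in ("recall", "adverse event"))

def summarise(doc: str) -> str:
    """Generative stage stand-in: in practice this would be an LLM call."""
    return doc[:40] + "..."

docs = [
    "Quarterly shipping schedule for EU distributors.",
    "Adverse event report filed for batch 7741.",
]

# Only flagged documents reach the (more expensive) generative step
for doc in docs:
    if classify(doc):
        print(summarise(doc))
```

The point of the design is cost and defensibility: the cheap predictive pass narrows the document set before the generative pass touches it.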

eDiscovery – the broad term

Lighthouse AI is currently six years old, and has analysed billions of documents since 2019, but predictive AI remains important to the software, despite – it might be said – generative AI grabbing most of the headlines in the last 18 months. Fernando Delgado, Director of AI and Analytics at Lighthouse, said, “While much attention has been rightly paid to the impact of GenAI recently, the power and relevancy of predictive AI cannot be overlooked. They do different things, and there is often real value in combining them to handle different elements in the same workflow.”

Given that the blanket term ‘the pharmaceutical industry’ includes concerns as disparate as medical technology, drug research, and production, right through to dispensing stores, the compliance requirements for an individual company in the sector can be wildly varied. “Rather than a one-size-fits-all approach, we’ve been able to shape the technology to fit our unique needs – turning our ideas into real, impactful solutions,” says Christian Mahoney, Counsel at Cleary Gottlieb Steen & Hamilton.

Lighthouse AI for Review covers use cases including AI for Responsive Review, AI for Privilege Review, AI for Privilege Analysis, and AI for PII/PHI/PCI Identification. Lighthouse claims that users see up to a 40% reduction in the volume of classification and summary documents with the AI for Responsive Review feature, with less training required by the LLM before it begins to create ROI.
AI for Privilege Review is also “60% more accurate than keyword-based models,” Lighthouse says.

AI’s acuity with visual data is handled by AI for Image Analysis, which uses GenAI to analyse images and, for example, produce text descriptions of media, presenting results in the same interface users rely on for other tasks.

Lighthouse’s AI for PII/PHI/PCI Identification automates the mapping of relationships between entities, and can reduce the need for manual reviews. “The new offerings are highly differentiated and designed to provide the most impact for the volume, velocity, and complexity of eDiscovery,” said Lighthouse CEO, Ron Markezich.

(Image source: “Basel – Roche Building 1” by corno.fulgur75 is licensed under CC BY 2.0.)

See also: Hugging Face calls for open-source focus in the AI Action Plan

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Lighthouse AI for Review enhances document eDiscovery appeared first on AI News.

LG EXAONE Deep is a maths, science, and coding buff https://www.artificialintelligence-news.com/news/lg-exaone-deep-maths-science-and-coding-buff/ Tue, 18 Mar 2025 12:49:26 +0000 https://www.artificialintelligence-news.com/?p=104905 LG AI Research has unveiled EXAONE Deep, a reasoning model that excels in complex problem-solving across maths, science, and coding. The company highlighted the global challenge in creating advanced reasoning models, noting that currently, only a handful of organisations with foundational models are actively pursuing this complex area. EXAONE Deep aims to compete directly with […]

LG AI Research has unveiled EXAONE Deep, a reasoning model that excels in complex problem-solving across maths, science, and coding.

The company highlighted the global challenge in creating advanced reasoning models, noting that currently, only a handful of organisations with foundational models are actively pursuing this complex area. EXAONE Deep aims to compete directly with these leading models, showcasing a competitive level of reasoning ability.

LG AI Research has focused its efforts on dramatically improving EXAONE Deep’s reasoning capabilities in core domains. The model also demonstrates a strong ability to understand and apply knowledge across a broader range of subjects.

The performance benchmarks released by LG AI Research are impressive:

  • Maths: The EXAONE Deep 32B model outperformed a competing model, despite being only 5% of its size, in a demanding mathematics benchmark. Furthermore, the 7.8B and 2.4B versions achieved first place in all major mathematics benchmarks for their respective model sizes.
  • Science and coding: In these areas, the EXAONE Deep models (7.8B and 2.4B) have secured the top spot across all major benchmarks.
  • MMLU (Massive Multitask Language Understanding): The 32B model achieved a score of 83.0 on the MMLU benchmark, which LG AI Research claims is the best performance among domestic Korean models.

The capabilities of the EXAONE Deep 32B model have already garnered international recognition.

Shortly after its release, it was included in the ‘Notable AI Models’ list by US-based non-profit research organisation Epoch AI. This listing places EXAONE Deep alongside its predecessor, EXAONE 3.5, making LG the only Korean entity with models featured on this prestigious list in the past two years.

Maths prowess

EXAONE Deep has demonstrated exceptional mathematical reasoning skills across its various model sizes (32B, 7.8B, and 2.4B). In assessments based on the 2025 academic year’s mathematics curriculum, all three models outperformed global reasoning models of comparable size.

The 32B model achieved a score of 94.5 in a general mathematics competency test and 90.0 in the American Invitational Mathematics Examination (AIME) 2024, a qualifying exam for the US Mathematical Olympiad.

In the AIME 2025, the 32B model matched the performance of DeepSeek-R1—a significantly larger 671B model. This result showcases EXAONE Deep’s efficient learning and strong logical reasoning abilities, particularly when tackling challenging mathematical problems.

The smaller 7.8B and 2.4B models also achieved top rankings in major benchmarks for lightweight and on-device models, respectively. The 7.8B model scored 94.8 on the MATH-500 benchmark and 59.6 on AIME 2025, while the 2.4B model achieved scores of 92.3 and 47.9 in the same evaluations.

Science and coding excellence

EXAONE Deep has also showcased remarkable capabilities in professional science reasoning and software coding.

The 32B model scored 66.1 on the GPQA Diamond test, which assesses problem-solving skills in doctoral-level physics, chemistry, and biology. In the LiveCodeBench evaluation, which measures coding proficiency, the model achieved a score of 59.5, indicating its potential for high-level applications in these expert domains.

The 7.8B and 2.4B models continued this trend of strong performance, both securing first place in the GPQA Diamond and LiveCodeBench benchmarks within their respective size categories. This achievement builds upon the success of the EXAONE 3.5 2.4B model, which previously topped Hugging Face’s LLM Leaderboard in the edge division.

Enhanced general knowledge

Beyond its specialised reasoning capabilities, EXAONE Deep has also demonstrated improved performance in general knowledge understanding.

The 32B model achieved an impressive score of 83.0 on the MMLU benchmark, positioning it as the top-performing domestic model in this comprehensive evaluation. This indicates that EXAONE Deep’s reasoning enhancements extend beyond specific domains and contribute to a broader understanding of various subjects.

LG AI Research believes that EXAONE Deep’s reasoning advancements represent a leap towards a future where AI can tackle increasingly complex problems and contribute to enriching and simplifying human lives through continuous research and innovation.

See also: Baidu undercuts rival AI models with ERNIE 4.5 and ERNIE X1

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post LG EXAONE Deep is a maths, science, and coding buff appeared first on AI News.

From punch cards to mind control: Human-computer interactions https://www.artificialintelligence-news.com/news/from-punch-cards-to-mind-control-human-computer-interactions/ Wed, 05 Mar 2025 15:22:07 +0000 https://www.artificialintelligence-news.com/?p=104721 The way we interact with our computers and smart devices is very different from previous years. Over the decades, human-computer interfaces have transformed, progressing from simple cardboard punch cards to keyboards and mice, and now extended reality-based AI agents that can converse with us in the same way as we do with friends. With each […]

The way we interact with our computers and smart devices is very different from years past. Over the decades, human-computer interfaces have transformed, progressing from simple cardboard punch cards to keyboards and mice, and now to extended reality-based AI agents that can converse with us much as we do with friends.

With each advance in human-computer interfaces, we’re getting closer to the goal of seamless, natural interaction with machines, making computers more accessible and better integrated into our lives.

Where did it all begin?

Modern computers emerged in the first half of the 20th century and relied on punch cards to feed data into the system and enable binary computations. The cards had a series of punched holes, and light was shone at them. If the light passed through a hole and was detected by the machine, it represented a “one”. Otherwise, it was a “zero”. As you can imagine, it was extremely cumbersome, time-consuming, and error-prone.
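The hole-or-no-hole scheme maps directly onto binary digits, as the short sketch below illustrates. It is purely illustrative: real punch-card formats, such as IBM’s 80-column cards, used more elaborate column encodings.

```python
# Decode a punch-card row: a hole ('O') lets light through and reads as 1;
# solid card ('.') blocks the light and reads as 0.
def decode_row(row: str) -> int:
    bits = ["1" if ch == "O" else "0" for ch in row]
    return int("".join(bits), 2)

# A row punched as O.O. reads as binary 1010, i.e. decimal 10
print(decode_row("O.O."))  # 10
```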

That changed with the arrival of ENIAC, or Electronic Numerical Integrator and Computer, widely considered to be the first “Turing-complete” device that could solve a variety of numerical problems. Instead of punch cards, operating ENIAC involved manually setting a series of switches and plugging patch cords into a board to configure the computer for specific calculations, while data was inputted via a further series of switches and buttons. It was an improvement over punch cards, but not nearly as dramatic as the arrival of the modern QWERTY electronic keyboard in the early 1950s.

Keyboards, adapted from typewriters, were a game-changer, allowing users to input text-based commands more intuitively. But while they made programming faster, accessibility was still limited to those with knowledge of the highly technical programming commands required to operate computers.

GUIs and touch

The most important development in terms of computer accessibility was the graphical user interface or GUI, which finally opened computing to the masses. The first GUIs appeared in the late 1960s and were later refined by companies like IBM, Apple, and Microsoft, replacing text-based commands with a visual display made up of icons, menus, and windows.

Alongside the GUI came the iconic “mouse“, which enabled users to “point-and-click” to interact with computers. Suddenly, these machines became easily navigable, allowing almost anyone to operate one. With the arrival of the internet a few years later, the GUI and the mouse helped pave the way for the computing revolution, with computers becoming commonplace in every home and office.

The next major milestone in human-computer interfaces was the touchscreen, which reached mainstream consumer devices in the late 1990s and did away with the need for a mouse or a separate keyboard. Users could now interact with their computers by tapping icons on the screen directly, pinching to zoom, and swiping left and right. Touchscreens eventually paved the way for the smartphone revolution that started with the arrival of the Apple iPhone in 2007 and, later, Android devices.

With the rise of mobile computing, the variety of computing devices evolved further, and in the late 2000s and early 2010s, we witnessed the emergence of wearable devices like fitness trackers and smartwatches. Such devices are designed to integrate computers into our everyday lives, and it’s possible to interact with them in newer ways, like subtle gestures and biometric signals. Fitness trackers, for instance, use sensors to keep track of how many steps we take or how far we run, and can monitor a user’s pulse to measure heart rate.
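Step counting of the kind described is commonly implemented as peak detection on accelerometer magnitude. The sketch below is a simplified illustration rather than any vendor’s actual algorithm, and the threshold value is an assumption chosen for the example.

```python
import math

def count_steps(samples, threshold=11.0):
    """Count upward threshold crossings in accelerometer magnitude (m/s^2).

    `samples` is a list of (x, y, z) acceleration tuples. A step is counted
    when the magnitude rises above the threshold after having been below it,
    so a single sustained spike counts once rather than once per sample.
    """
    steps = 0
    below = True
    for x, y, z in samples:
        mag = math.sqrt(x * x + y * y + z * z)
        if below and mag > threshold:
            steps += 1
            below = False
        elif mag <= threshold:
            below = True
    return steps

# Resting near gravity (~9.8 m/s^2) with two distinct impact spikes
data = [(0, 0, 9.8), (0, 0, 12.5), (0, 0, 9.8), (0, 0, 13.0), (0, 0, 9.8)]
print(count_steps(data))  # 2
```

Production pedometers add filtering and cadence checks on top of this basic idea to reject non-walking motion.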

Extended reality & AI avatars

In the last decade, we also saw the first artificial intelligence systems, with early examples being Apple’s Siri and Amazon’s Alexa. AI chatbots use voice recognition technology to enable users to communicate with their devices using their voice.

As AI has advanced, these systems have become increasingly sophisticated and better able to understand complex instructions or questions, and can respond based on the context of the situation. With more advanced chatbots like ChatGPT, it’s possible to engage in lifelike conversations with machines, eliminating the need for any kind of physical input device.

AI is now being combined with emerging augmented reality and virtual reality technologies to further refine human-computer interactions. With AR, we can insert digital information into our surroundings by overlaying it on top of our physical environment. This is enabled by headsets like the Oculus Rift, HoloLens, and Apple Vision Pro, and further pushes the boundaries of what’s possible.

So-called extended reality, or XR, is the latest take on the technology, replacing traditional input methods with eye-tracking and gestures and providing haptic feedback, enabling users to interact with digital objects in physical environments. Instead of being restricted to flat, two-dimensional screens, our entire world becomes a computer through a blend of virtual and physical reality.

The convergence of XR and AI opens the doors to more possibilities. Mawari Network is bringing AI agents and chatbots into the real world through the use of XR technology. It’s creating more meaningful, lifelike interactions by streaming AI avatars directly into our physical environments. The possibilities are endless – imagine an AI-powered virtual assistant standing in your home or a digital concierge that meets you in the hotel lobby, or even an AI passenger that sits next to you in your car, directing you on how to avoid the worst traffic jams. Through its decentralised DePIN infrastructure, Mawari is enabling AI agents to drop into our lives in real time.

The technology is nascent but it’s not fantasy. In Germany, tourists can call on an avatar called Emma to guide them to the best spots and eateries in dozens of German cities. Other examples include digital popstars like Naevis, which is pioneering the concept of virtual concerts that can be attended from anywhere.

In the coming years, we can expect to see this XR-based spatial computing combined with brain-computer interfaces (BCIs), which promise to let users control computers with their thoughts. BCIs use electrodes placed on the scalp to pick up the electrical signals generated by our brains. Although the technology is still in its infancy, it promises to deliver the most effective human-computer interactions possible.
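The electrical signals scalp electrodes pick up are typically analysed in the frequency domain. The toy sketch below extracts alpha-band (8-12 Hz) power from a synthetic EEG trace with NumPy; it is illustrative only (real BCI pipelines add filtering, artefact rejection, and trained decoders).

```python
import numpy as np

def band_power(signal, sample_rate, low_hz, high_hz):
    """Mean spectral power of `signal` within [low_hz, high_hz]."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return spectrum[mask].mean()

# Synthetic one-second trace: a 10 Hz (alpha-band) oscillation plus noise
rate = 256
t = np.arange(rate) / rate
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(rate)

alpha = band_power(eeg, rate, 8, 12)   # 8-12 Hz band
beta = band_power(eeg, rate, 13, 30)   # 13-30 Hz band
print(alpha > beta)  # True: the 10 Hz component dominates
```

Comparing band powers like this is one of the simplest features a BCI can act on, for example detecting relaxed (alpha-dominant) versus attentive states.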

The future will be seamless

The story of the human-computer interface is still under way, and as our technological capabilities advance, the distinction between digital and physical reality will become increasingly blurred.

Perhaps one day soon, we’ll be living in a world where computers are omnipresent, integrated into every aspect of our lives, similar to Star Trek’s famed holodeck. Our physical realities will be merged with the digital world, and we’ll be able to communicate, find information, and perform actions using only our thoughts. This vision would have been considered fanciful only a few years ago, but the rapid pace of innovation suggests it’s not nearly so far-fetched. Rather, it’s something that the majority of us will live to see.

(Image source: Unsplash)

The post From punch cards to mind control: Human-computer interactions appeared first on AI News.
