The hallucination that cost a top job

Down to business

🚓 Police discover the limits of “copilot” decision-making

West Midlands Police admitted that incorrect intelligence used to justify banning Maccabi Tel Aviv fans was generated with Microsoft Copilot, including a fictitious football match that never happened. The error was presented to decision-makers and later repeated to MPs before being corrected, prompting a formal apology from the chief constable. The incident underscores a growing risk for public-sector and enterprise organizations: AI-generated outputs can slip into high-stakes workflows, be treated as fact, and shape real decisions unless verification, accountability, and human review are explicitly built into how these tools are used.

💳 Mastercard moves to set the rules for AI shopping

As AI agents begin shopping and paying on people’s behalf, Mastercard is positioning itself as the trust layer behind the scenes. The company is working with Google and Microsoft to define standards for identity, intent verification, and secure checkout and is integrating its Agent Pay system into Microsoft Copilot and OpenAI-powered shopping flows. The bet isn’t on smarter agents but on who controls trust, payments, and rules when machines start spending real money.

🏦 Citi makes AI adoption peer-led

Instead of centralizing AI in a single tech team, Citi has built a 4,000-person internal network of volunteer “AI accelerators” and champions embedded across business units. These employees spend a few hours a week helping peers apply AI tools to real, job-specific work, from audit to operations, with adoption now above 70% across 182,000 staff in 84 countries. The model relies on peer-to-peer learning rather than top-down rollout, and Citi is using the network as a feedback loop to refine its internal tools.

🏦 Davos signals a more disciplined era for AI in finance

At Davos, the tone around AI in banking and fintech shifted from experimentation to control. Executives emphasized governance, auditability, and trust as AI moves deeper into payments, risk, and compliance. Rather than flashy demos, the focus was on where AI can safely automate decisions and where human oversight still matters as regulators, banks, and platforms align on guardrails for real-world deployment.

🤖 Agentic AI moves from advice to continuous, governed action

Instead of stopping at dashboards and recommendations, Ramsey Theory Capital says its agentic AI systems are now making and executing decisions in real time across enterprise operations. In early deployments, AI agents are coordinating hospital scheduling, rerouting logistics during disruptions, and adjusting retail pricing and inventory as demand shifts, without waiting for human approval at every step. Companies report 25–45% faster decision cycles, 30–50% gains in cross-system automation, and 20–35% less manual intervention, with the systems operating inside defined business rules, risk limits, and compliance controls.

Hot model news

🤖 Humanoid robots are starting to learn from what they see

Robotics startup 1X says its Neo humanoid robots can now pick up new skills by watching video, thanks to a new “world model” that helps them understand how objects behave in the real world. In practice, that means Neo can learn simple household actions it wasn’t explicitly trained on, like pulling out an air fryer basket, putting toast in a toaster, or giving a high five, and then attempt them on its own. It’s an early step, but one aimed at making home robots less scripted and more adaptable as 1X prepares to ship Neo into real homes.

📣 ChatGPT prepares to sell ads inside conversations

OpenAI has begun testing ads inside ChatGPT with a small group of advertisers, marking a shift away from relying solely on subscriptions. According to reports, brands are being charged based on how often ads are viewed, with placements expected to roll out to some U.S. users soon. The move reflects the growing pressure to turn conversational AI into a sustainable business, as the cost of running large models and data centers continues to climb, and signals that chatbots are starting to look less like neutral tools and more like new media channels.

💡 A greener way to generate AI images

A UCLA research team is testing a radically lower-energy approach to generative AI by swapping heavy computing for light. Instead of GPUs crunching numbers in data centers, the system uses lasers and optical patterns to turn random noise into images, a bit like scanning a barcode with light. The idea: let physics do part of the work. It won’t replace today’s big image models, but it shows how future AI could be faster, cheaper, and far less energy-hungry for specific visual tasks.

🧒 ChatGPT starts estimating users’ ages to protect minors

OpenAI is rolling out an age-prediction system for ChatGPT that aims to identify users under 18 without requiring them to self-report. The model looks at account details and usage patterns, like when an account was created and how it’s typically used, to automatically apply extra safeguards, including limits on sensitive content.

Plot twist

🤖 Robots are speeding up construction

As demand for AI infrastructure surges, automation is moving into the construction layer itself. DEWALT and August Robotics unveiled a fleet-ready drilling robot that can autonomously drill thousands of precision holes for data centers, up to 10× faster than manual methods and with near-perfect accuracy. Already piloted with hyperscalers, the system cuts weeks off build timelines, showing how robotics is quietly becoming as critical to AI scale as the chips inside the racks.

🗣️ AI helps a councillor keep his own voice after MND diagnosis

A UK councillor with motor neurone disease is using AI to continue speaking publicly in a voice that sounds like his own. After his diagnosis, Nick Varley recorded samples of his speech, which were used to train an AI voice model that recreates how he used to sound. He recently used it for the first time at a council meeting, allowing him to ask questions and participate despite losing the ability to speak naturally. The technology, developed with support from the MND Association, is giving him a way to stay active in public life as the disease progresses.

🧠 Your brain may process language more like AI than we thought

New research suggests the human brain builds meaning from speech in much the same step-by-step way as modern AI language models. By tracking brain activity while people listened to a long podcast, scientists found that early neural signals handle basic word information, while later responses combine context and meaning, closely mirroring how systems like GPT process language across layers. The findings challenge older rule-based theories of language and point to understanding as something that gradually unfolds through context, offering AI researchers and neuroscientists a shared lens on how meaning actually forms.

🌱 How AI is bringing nature into the boardroom

Companies are using AI to turn once-overwhelming environmental data into decisions executives can actually act on. Systems now analyze satellite imagery, soil data, bioacoustics, and supply-chain signals to assess biodiversity, water risk, and land health in near real time, cutting environmental impact assessments from weeks to under an hour. Examples include AI that temporarily shuts down wind turbines to prevent bird strikes and robotic beehives that monitor pollinator health in real time, helping protect crops while reducing operational risk.

📬 Get the next edition in your LinkedIn feed.

🤔 Exploring AI but stuck between pilots and progress? Let’s talk.