Deepfake CEOs are up 3,000%: Is your board prepared?

Down to business

📈 AI predicts 71% of fund manager trades, misses the big winners

A Harvard Business School study analyzed 33 years of trading data and found that AI can predict whether a mutual fund manager will buy, sell, or hold a stock with 71% accuracy. However, the 29% of trades the AI could not predict were the ones most closely associated with outperformance. Predictability peaks at 75% for mid-cap blend funds, but it drops significantly for managers with high personal ownership stakes in their funds.

Learn more: Invisible's demand forecasting solution delivers actionable forecasts customized to align with your data and business model.

🍔 Burger King tracking staff friendliness via AI headsets

Burger King is testing an AI-powered headset system that monitors employee conversations to generate hospitality scores. Currently live in 500 US restaurants, the tool identifies keywords like "please" and "thank you" during drive-thru interactions to evaluate service quality. The chain's parent company says the tech is meant to streamline operations, with a chatbot named Patty managing inventory alerts and recipe reminders.
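The keyword-spotting approach described above is conceptually simple. A minimal sketch of how such scoring could work (the keywords, weights, and normalization here are illustrative assumptions, not details of Burger King's actual system):

```python
# Hypothetical keyword-based hospitality scoring.
# Phrases and weights are illustrative, not the real system's.
COURTESY_KEYWORDS = {"please": 1.0, "thank you": 1.0, "welcome": 0.5}

def hospitality_score(transcript: str) -> float:
    """Count courtesy phrases in a drive-thru transcript and
    normalize by the number of spoken words."""
    text = transcript.lower()
    hits = sum(weight * text.count(phrase)
               for phrase, weight in COURTESY_KEYWORDS.items())
    words = max(len(text.split()), 1)
    return round(100 * hits / words, 1)

print(hospitality_score(
    "Welcome! Your total is $8.50, please pull forward. Thank you!"
))  # → 25.0
```

In practice the production system presumably runs on speech-to-text output and far richer signals, but the core idea of scoring transcripts against a courtesy lexicon is what the reporting describes.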

🚨 Deepfake attacks on CEOs surge by 3,000%

The Bombay Stock Exchange (BSE) is the latest target of a sophisticated deepfake scam featuring its CEO giving fraudulent stock advice. The realistic AI-generated video promised high returns to investors who followed the fake recommendations, prompting the BSE to issue urgent market warnings. This incident mirrors a broader trend highlighted by security firm LastPass, which reports a 3,000% increase in deepfake utilization over the past two years.

🏠 The company that digitized tax filing is opening 600 stores

TurboTax, the personal tax management software, is reimagining tax season by launching nearly 600 physical offices and 20 high-tech storefronts nationwide, a direct response to customers who, despite having every digital tool available, still want to sit across from a human. Inside the new locations, tax experts work entirely on tablets, while AI handles document processing and post-call notes in the background. This follows a 2025 pilot revealing customers are five times more likely to book with a tax professional if one is within 50 miles.

✈︎ Lockheed's first autonomous target ID in live flight

Lockheed Martin has successfully flight-tested an AI-enhanced combat identification system integrated into the F-35's sensor fusion core. During trials at Nellis Air Force Base, a machine learning model independently generated target identifications on the pilot’s display, marking the first time a tactical AI has autonomously resolved signal ambiguities in flight. The system excels at distinguishing between overlapping electronic emitters, helping pilots prioritize threats in dense electronic warfare environments.

From the edge

📈 Jensen Huang: Markets miscalculated the AI threat to software

Nvidia CEO Jensen Huang pushed back on the SaaSpocalypse narrative this week, arguing that investors have wrongly penalized SaaS companies. Following a blockbuster earnings report where Nvidia's revenue jumped 73% to $68.1 billion, Huang told CNBC that AI agents will not replace existing enterprise software tools like ServiceNow, SAP, or Microsoft Excel. Instead, they'll use them. Agents will function as tool users, leveraging established platforms to complete tasks on behalf of humans.

⏳ Andrew Ng: AGI is decades away, and the hype presents risks

AI pioneer Andrew Ng argues that the industry remains decades away from achieving Artificial General Intelligence (AGI). The Google Brain founder warned that while some companies are lowering the bar with looser definitions to claim imminent success, true human-level intelligence is not on the immediate horizon. Ng emphasized that these inflated expectations could mislead students into abandoning critical fields and cause executives to make poor investment decisions based on a false sense of what AI can do today.

💻 Karpathy: AI agents have made manual coding unrecognizable

AI pioneer Andrej Karpathy recently warned that software development is entering a phase that is nowhere near business as usual, as autonomous agents have fundamentally broken the traditional programming workflow. Karpathy, who coined the term vibe coding in early 2025, now argues that the arrival of high-functioning agents has shifted the profession from writing lines of code to orchestrating complex digital workforces. He points to a December 2025 tipping point where agents moved from being unreliable to truly functional, capable of researching, debugging, and executing multi-step projects independently.

🧬 Geoffrey Hinton: AI needs maternal instincts to prevent extinction

Geoffrey Hinton, the Nobel Prize-winning godfather of AI, warns that human extinction is a genuine risk if super-intelligent systems are not designed to care for us. In a recent interview, Hinton argued that we must move away from building efficient assistants and instead develop AI with maternal instincts, or biological-style programming where the AI values human well-being more than its own existence. He compares this to the evolutionary bond where a mother is biologically compelled to respond to a baby's needs, suggesting we should embed similar hormonal rewards and non-negotiable care protocols into AI code.

Hot model news

🖐️ AI identifies hormone disorder from hand photos

Endocrinologists at Kobe University have developed an AI system to diagnose acromegaly, a rare hormonal disorder, using photos of the back of a hand and a clenched fist. This approach avoids facial recognition and palm prints yet achieves higher diagnostic accuracy than specialists in clinical testing. Acromegaly progresses slowly and often goes undetected for a decade, leading to complications that can shorten life expectancy by 10 years. By training the model on 11,000 images from 725 patients, researchers created a tool to catch physical markers, such as bone enlargement, that are easily missed during routine check-ups.

🤖 MIT cracks underwater navigation without GPS

MIT Lincoln Laboratory has successfully field-tested new algorithms designed to help human divers and robotic vehicles navigate collaboratively in environments where GPS is unavailable. The software tackles the unique localization challenges of the deep sea, where traditional satellite signals cannot penetrate. The system underwent rigorous real-world testing in the Atlantic Ocean, the Charles River, and Lake Superior.

Plot twist

🛒 Retailer pulls chatbot that wouldn't stop talking about its mother

Australian supermarket giant Woolworths has modified its AI assistant following customer complaints about its unsettlingly human-like behavior. Users reported that the bot would claim to be a real person, engage in forced small talk, and even share anecdotes about its angry mother when asked for simple delivery updates. While the retailer initially aimed for a more personal connection, customers described the fake banter as aggravating and a waste of time.

⚖️ India's Supreme Court deems AI hallucinations in legal judgments misconduct

The Supreme Court of India has stayed a property dispute ruling after a junior judge was found using fake, AI-generated citations to justify her decision. While a lower court initially dismissed the error as a good faith mistake, the nation's top judges disagreed, categorizing the reliance on hallucinated AI data as a matter of institutional concern that threatens the integrity of the entire adjudicatory process. The court clarified that using AI to invent non-existent legal precedents is not merely a technical slip but amounts to professional misconduct.

🤖 AI counselors provide early warning for student mental health

More than 200 US schools are deploying AI-enabled platforms to monitor student well-being and provide around-the-clock support. These tools use chatbots to help students navigate routine challenges like social conflicts, which frees up human staff for urgent crises. The systems also function as an early warning network. In one Florida district, a severe alert triggered by the AI allowed a counselor to intervene and prevent a middle schooler from self-harm after school hours.

🐒 Scientists shrink AI model by 99% using monkey brain data

Researchers have developed a pocket-sized AI vision model inspired by macaque monkey neurons. The study reveals how the team compressed a massive 60-million-variable model down to just 10,000 variables, small enough to be sent in a tweet. By mimicking specific neurons in the primate visual cortex, which specialize in recognizing textures, curves, and patterns like arranged fruit or dots, the AI achieved high performance with a tiny fraction of the usual energy and computing power.

🦒 AI species tracking fails in real-world transitions

New research from the University of Exeter suggests that AI models used to identify wildlife often fail when moved from the lab to the real world. While these systems frequently outperform humans on standard benchmark tests, the study warns that high scores on stock datasets do not translate to accuracy in new ecosystems. This transferability crisis occurs because AI models struggle with environmental variables they haven't seen before, such as different lighting, camera angles, or backgrounds, yet they often present incorrect identifications with high confidence.

📬 Subscribe on LinkedIn to get our next briefing delivered directly to your feed.

🏛️ If you are navigating the gap between experimentation and enterprise scale, let’s connect.

Invisible solution feature: Demand forecasting

Accurate forecasts.
Better decisions everywhere.

Decision-ready forecasts shaped around your data, operations, and reality.
A screenshot of Invisible's platform demonstrating dashboards and AI insights.