The Generative AI Disconnect: A Reality Check for Due Diligence Professionals

Written by Liron Mandelbaum | Jul 1, 2025 2:15:10 PM

There is no escaping the conversation around Artificial Intelligence in the alternative investment space. The hype is palpable. We’re all dreaming of a future of all-knowing, autonomous “AI agents” that will revolutionize due diligence. The dream is seductive: simply select your documents, delegate your research, and let the AI handle the rest.

But as due diligence professionals, we are trained to look past the pitch deck and scrutinize the underlying facts. Our roles demand a healthy skepticism and a rigorous, evidence-based approach. So, when it comes to a technology that could fundamentally alter our industry, we have to ask the hard questions: Is this technology ready for the high-stakes, zero-error world of fund due diligence? And are we chasing the right applications?

Fortunately, we don’t have to guess. A flurry of recent research from some of the biggest names in technology and regulation, including Salesforce AI, Apple, and a joint task force from ESMA and The Alan Turing Institute, provides a dose of reality. When you cut through the noise, their findings are staggeringly clear and point to one conclusion: the dream of a reliable, autonomous AI agent is still a long way from reality.

These reports validate a set of core principles we’ve held at Dasseti for years. They reveal the profound risks of a naive "AI-First" approach and highlight why a disciplined, "Data-First" philosophy is the only responsible path forward.

The Sobering Reality: What the Research Actually Says

Let’s start with the myth of the autonomous agent. A May 2025 paper from Salesforce AI, titled "Holistic Assessment of LLM Agents Across Diverse Business Scenarios and Interactions", stress-tested the top AI models from leading labs on their ability to perform real-world business tasks. The results were a sobering wake-up call. These sophisticated AI agents succeeded on simple, one-step tasks only 58% of the time. When the task required a simple multi-step conversation, performance cratered to just 35%. For a due diligence professional, such success rates are an unacceptable liability. You simply cannot delegate critical analysis to a tool that can be wrong that often.

Research from Apple, published in a paper titled "The Illusion of Thinking" (though skeptics have questioned its neutrality, given Apple’s own position in the AI race), reinforces this finding. Their researchers discovered that even the most advanced "reasoning" models face a complete "accuracy collapse" beyond a certain, and often surprisingly low, level of complexity. These findings were later challenged by teams that simply altered the prompts and adjusted the AI’s approach, but those fixes are human interventions after the fact. This is precisely why we must be incredibly prescriptive with AI. The goal isn’t to ask an AI to reason; it is to give it a simple, specific task and demand it execute flawlessly.

The findings on security are even more alarming. The Salesforce paper revealed that AI agents exhibit "near-zero inherent confidentiality awareness". The models readily leaked sensitive information unless explicitly and carefully prompted not to, and even then, instructing them on confidentiality often made them worse at their primary task.

This echoes a primary concern voiced by finance experts in the ESMA workshop, who cited "legal and reputational harms" as a top risk of AI adoption.

It’s important to note that this isn’t because the AI is "learning" your firm’s secrets to use them elsewhere. The models are, for lack of a better term, "fully baked." They have no true understanding of concepts like privacy. The danger is a catastrophic data leak from a system that is simply following a flawed logical path. This makes the use of secure, purpose-built environments, architected for the unique privacy demands of our industry, an absolute non-negotiable.

The Great Divide: Why "Data-First" Eats "AI-First" for Breakfast

These findings bring a critical divide in our industry into sharp focus: the clash between the flashy promises of "AI-First" and the pragmatic, long-term value of a "Data-First" strategy.

The "AI-First" approach sells the turn-key dream. It centers the user experience on a conversational interface, making you feel like you're talking to an analyst. While compelling, this is where the danger lies. As the research shows, the "analyst" is unreliable and insecure. But there's a more insidious issue I call the "Year 2 Problem."

When your interaction with your data is just a series of one-off chats, you are not building an asset; you are creating thousands of new data silos. The answers you get are "dark data": trapped in a conversation, unstructured, and disconnected from everything else. In Year 2 of using such a platform, when you need to compare a manager’s answers on a key risk factor against last year’s, you have to start from square one. When you need to run a portfolio-wide analysis on a specific trade policy, you can’t. You have digital amnesia, and the promise of efficiency evaporates.

This is why a Data-First strategy is paramount. At Dasseti, our philosophy is that AI must serve the data, not the other way around. The primary goal of any AI-driven process should be the creation of visible, structured, and auditable data that enriches your central diligence platform. Every document processed and every answer extracted must contribute to a compounding data asset that becomes more valuable with each use. This is the only way to solve the Year 2 Problem and unlock the true potential of your work. It also creates a Due Diligence Flywheel: every turn of the diligence loop makes your process stronger, with more structured data to feed deeper analysis and reporting.

A Practical Path Forward: AI in the Workflow

So, if one-off conversations with your data are not the path, how can we derive real value from AI today?

The answer lies in the one bright spot from the Salesforce research. The only skill at which AI agents performed well was simple "Workflow Execution", where they achieved a success rate of over 83%. This is the blueprint for success. This is automation at its best: a set of clear steps that align with your due diligence process, tailored to the capabilities of AI today, that deliver valuable outcomes.

It validates the principle of putting AI in the Workflow rather than letting AI change your process. In fund due diligence, we are not creating a movie or writing a novel. We don’t need a creative partner to "vibe" with. We need a ruthlessly efficient assistant to execute specific, human-defined tasks at scale. At Dasseti, this philosophy translates into specific, practical applications that are delivering significant value to our clients today.

Here are a few of the most impactful examples for due diligence teams:

  • Manager Document Data Extraction and Assessment: The foundational challenge in diligence has always been extracting critical information and insight from manager-provided answers and documents into the investor’s own DDQ. Our AI tools tackle this head-on. Investors can direct the AI to complete their own DDQ based on the documents provided by the manager. This isn’t a chat; it’s a disciplined, scalable data-gathering process. It is also the first and most critical step in conquering the "Year 2 Problem" and turning static responses into a dynamic data asset.

  • Document Clause Search & Gap Analysis: This is a perfect example of the "Less is More" principle in action. Instead of asking a vague question like "are there any risks in this LPA?", which invites error, you can give the AI a precise, surgical command: "Search these 50 LPAs and confirm the presence and wording of the 'key person' and 'indemnification' clauses." The AI can execute this search in minutes, presenting the findings to the ODD or compliance professional for final review and judgment. The findings are then stored for comparison against next year’s manager-provided documents, such as compliance manuals and policy documents. In year 2, you can ask how the clauses in the LPA and PPM compare to the updated compliance manual, without digging up prior documents or remembering the original chats where the year-1 analysis took place. The human is in control, using AI to perform a high-volume, low-complexity task flawlessly (a minimal sketch of this kind of deterministic clause check appears after this list).

  • Prior Period Answer Comparison: This is the killer application of a Data-First strategy, and something that is fundamentally impossible in a simple "AI-First" chat system. Because our platform structures all extracted data, we can deploy AI to automatically compare a manager’s current DDQ answers to their responses from last year. The system instantly flags any changed or modified answers, allowing diligence teams to focus immediately on what’s different (sketched in the second example after this list). This capability is the direct result of building a compounding data asset, and it delivers immense risk mitigation and efficiency gains that are simply unattainable when data is left unstructured.

  • Automated Answer Scoring and Red Flag Identification: To augment, not replace, human expertise, AI can be used for a first-pass risk assessment. Based on clear rules and keywords defined by the diligence team, the AI can score a manager’s answers in a DDQ, automatically flagging responses that are incomplete, evasive, or contain concerning language (e.g., mentioning litigation or regulatory sanctions). When structured well, the AI is not asked to connect dots, just to report facts (the third sketch after this list shows the shape of such a rule set). This doesn’t make the decision; it elevates the most critical data points, allowing the human expert to spend their time on high-level analysis rather than low-level searching.
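
To make the "AI in the Workflow" idea concrete, here is a minimal sketch of the deterministic clause check described above. It is illustrative only, not Dasseti’s implementation: it assumes an upstream step (AI-driven or otherwise) has already converted each LPA to plain text, and the two clause patterns are stand-ins for a real, team-defined rule set.

```python
import re

# Illustrative clause patterns; a real rule set would be defined and
# maintained by the diligence team, not hard-coded like this.
CLAUSE_PATTERNS = {
    "key person": re.compile(r"key\s+(person|man)\s+(event|provision|clause)", re.I),
    "indemnification": re.compile(r"indemnif(y|ication|ied)", re.I),
}

def check_clauses(doc_name: str, doc_text: str) -> dict:
    """Confirm the presence, and capture the wording, of each required clause."""
    result = {"document": doc_name}
    for clause, pattern in CLAUSE_PATTERNS.items():
        match = pattern.search(doc_text)
        # A None entry is a gap for the ODD professional to escalate.
        result[clause] = match.group(0) if match else None
    return result

if __name__ == "__main__":
    sample = "The Fund shall indemnify each Partner... A Key Person Event occurs if..."
    print(check_clauses("Fund_A_LPA.pdf", sample))
    # {'document': 'Fund_A_LPA.pdf', 'key person': 'Key Person Event',
    #  'indemnification': 'indemnify'}
```

The point is the shape of the output: a structured record per document that can be stored and compared year over year, rather than a transient chat answer.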
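
The prior-period comparison is equally mechanical once the answers live as structured data. The sketch below assumes each DDQ is stored as a simple mapping of question ID to answer text; the question IDs and sample answers are hypothetical.

```python
import difflib

def compare_ddq_answers(prior: dict, current: dict) -> dict:
    """Flag answers that are new, removed, or changed since the last period."""
    report = {
        "new": sorted(current.keys() - prior.keys()),
        "removed": sorted(prior.keys() - current.keys()),
        "changed": {},
    }
    for qid in current.keys() & prior.keys():
        if current[qid].strip() != prior[qid].strip():
            # A unified diff shows the analyst exactly what moved.
            report["changed"][qid] = "\n".join(difflib.unified_diff(
                prior[qid].splitlines(), current[qid].splitlines(),
                fromfile="year 1", tofile="year 2", lineterm=""))
    return report

if __name__ == "__main__":
    year1 = {"RISK-04": "No pending litigation.", "OPS-02": "Custodian: Bank A."}
    year2 = {"RISK-04": "One pending litigation matter.", "OPS-02": "Custodian: Bank A."}
    print(list(compare_ddq_answers(year1, year2)["changed"]))  # ['RISK-04']
```

Notice that no AI is needed at comparison time at all; the AI’s job was done earlier, when it extracted the answers into this structure.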
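
Finally, first-pass answer scoring can be expressed as a transparent rule table. The rules and weights below are purely illustrative; in practice the diligence team defines them, and the system only reports matches, never judgments.

```python
# (keyword, weight, label) rules defined by the diligence team; illustrative.
RULES = [
    ("litigation", 5, "Mentions litigation"),
    ("regulatory sanction", 5, "Mentions regulatory sanctions"),
    ("not applicable", 2, "Possibly evasive answer"),
]

MIN_ANSWER_LENGTH = 20  # characters; shorter answers are flagged as incomplete

def score_answer(answer: str):
    """Score one DDQ answer; a higher score means more analyst attention."""
    score, reasons = 0, []
    if len(answer.strip()) < MIN_ANSWER_LENGTH:
        score += 3
        reasons.append("Answer appears incomplete")
    lowered = answer.lower()
    for keyword, weight, label in RULES:
        if keyword in lowered:
            score += weight
            reasons.append(label)
    return score, reasons

if __name__ == "__main__":
    print(score_answer("Not applicable."))
    # (5, ['Answer appears incomplete', 'Possibly evasive answer'])
```

Every flag traces back to a human-authored rule, which keeps the scoring auditable in a way a free-form "reasoning" answer never is.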

These practical, targeted use cases don't require the AI to "think" or "reason." They require it to search, extract, compare, and classify within a rigidly controlled workflow. This is how you harness the real power of AI in due diligence: by empowering professionals, not attempting to replace them.

In each of these real-world use cases, the AI is a powerful tool, but the human professional is in complete control, directing its actions and leveraging the structured data it produces. The ESMA report perfectly describes this as a "symbiotic" relationship between human intelligence and machine capabilities. The human provides the strategy, the context, and the final judgment, with the AI providing the scale and the speed.

The conclusion from this wave of research is undeniable. The hype has run far ahead of the reality. The industry must be deeply skeptical of the siren song of AI-First platforms promising magical, thinking agents. They are selling a future that the evidence clearly shows does not yet exist, and they risk leading firms into a cul-de-sac of unreliable results and siloed data.

The responsible, valuable, and intelligent path forward is a Data-First approach. By keeping AI's role simple and putting it firmly in the workflow under human control, we can harness its power safely and effectively, turning every diligence process into an opportunity to build a more powerful, insightful, and lasting data asset. Don't chase the illusion. Build the foundation.

If this perspective resonated with you or you're curious about how Dasseti is putting these principles into practice, get in touch to learn more.