Engineering Efficiency: How Dasseti’s Sidekick is Transforming Due Diligence with Integrated AI
Explore the risks of Quiet AI in investment management and how Dasseti AI solutions ensure transparency, auditability, and user control in regulated workflows.
Reprinted from the AIMA Journal Edition 142
AI has become a powerful enabler of productivity in alternative investment management, automating routine tasks, surfacing insights, and accelerating decision-making. A new kind of AI, sometimes referred to as 'quiet AI' or 'background AI', is now entering the workflow. It operates invisibly, automating or influencing processes without explicit user instruction, visibility, or consent.
Quiet AI is often marketed as frictionless efficiency. It aims to reduce cognitive load, remove decision fatigue, and deliver a seamless user experience. Think of Outlook's email filtering, which quietly sorts your inbox to surface what matters most: no prompts, no configuration, just subtle automation. But the very features that make quiet AI appealing (invisibility, automation, and deep integration) also pose significant challenges in high-stakes, regulated sectors such as investment management.
Following the noise around quiet AI, we have evaluated its benefits and risks, and argue for a middle path: an approach to AI that prioritizes transparency, accountability, and human agency.
Quiet AI should not be confused with agentic AI. Both aim to enhance productivity through automation, but they operate on fundamentally different principles and have markedly different implications for trust, transparency, and user control.
Quiet AI refers to background automation: systems embedded into tools and workflows that act autonomously, often without user awareness or consent. Their interventions are subtle, designed to minimize friction, and typically not announced. A user might notice that a data point has been filled in, a sentence reworded, or a recommendation surfaced, but may not know that AI was involved at all.
Agentic AI, by contrast, is explicit, intentional, and goal-oriented. It refers to AI systems that can perform actions autonomously but operate as discernible agents with defined tasks. These systems are typically prompted or instructed by users, and their outputs are clearly demarcated as AI-generated. Agentic AI may initiate follow-up actions, iterate on responses, or proactively identify next steps, but its role is visible, bounded, and subject to user approval.
From a workflow perspective, quiet AI operates by assumption, replacing decisions the system predicts you might make. Agentic AI, on the other hand, operates by instruction, supporting decisions the user explicitly wants help with.
This distinction matters deeply in sectors like investment management. Quiet AI may inadvertently alter key content in client documents without a clear audit trail. Agentic AI, while also automated, provides visibility and choice, which are essential for compliance, stakeholder confidence, and operational reliability.
There are legitimate reasons why quiet AI has gained traction in complex, document-intensive environments: it reduces repetitive work, eases cognitive load, and delivers a seamless user experience.
While quiet AI may streamline processes, several research-backed concerns have emerged regarding its uncritical adoption:
In environments where documentation trails, data lineage, and auditability are essential, such as operational due diligence or investor reporting, quiet AI introduces uncertainty. If a DDQ response was drafted based on AI input, but the source of that data (e.g., an outdated document or internal system) is unclear, confidence in the response is undermined. Inaccurate or unverifiable statements can compromise not only client relationships but also regulatory compliance.
Studies have shown that quiet AI can interfere with users' workflows by restructuring task sequences or inserting suggestions that interrupt concentration. This is particularly acute in complex decision-making tasks such as risk assessment, manager research, or compliance review, where precision and context matter deeply.
We have come some way since 2023, when an EY survey reported that 71% of employees familiar with AI expressed concern about its workplace impact, with 65% citing anxiety over a lack of transparency. The issue persists: McKinsey's 2025 workplace report notes that while AI is becoming less risky, it still lacks sufficient transparency and explainability, both of which are critical for safety, bias reduction, and user trust.
Trust is central to institutional investment. If users suspect their tools are silently altering outputs or surfacing content based on unknown algorithms, trust in both the tools and their own work erodes.
Inadvertent AI interference with sensitive or privileged data, particularly when the AI is operating in the background, raises concerns over data governance, client confidentiality, and ethical boundaries.
The investment industry has always demanded accountability, traceability, and discretion. These principles should extend to AI deployment. Several mitigation strategies have emerged from both industry guidance and academic research:
Human-in-the-loop models: Ensure humans can review, approve, or override AI outputs.
Clear disclosure: Notify users when AI is operating and clarify the source of AI-generated content.
Provenance tracing: Log the exact origin of AI inputs and outputs for audit and review.
Customizability: Allow firms to configure when and how AI is triggered, and whether to enable or disable automation features.
These recommendations align with operational due diligence standards and investor expectations around accountability. In essence, AI should be a sidekick, not the main character.
Consider an operational due diligence team reviewing a manager’s risk controls. A quiet AI system might silently prioritize certain risk factors based on historical data. However, emerging risks, those not represented in past models, could be underweighted or ignored. In contrast, a transparent or agentic AI approach would clearly indicate its rationale, allowing the ODD professional to evaluate the reasoning, adjust inputs, and apply domain expertise to ensure nuanced oversight.
There is a middle ground. Platforms that embed AI within existing workflows, but make its presence optional and transparent, offer the best of both worlds. Users benefit from automation, but maintain oversight and control.
For example, in RFP and DDQ processes, Dasseti's AI capabilities can intelligently search internal content libraries and previous responses to surface the most relevant answers, while users review, edit, and approve each suggestion before it is used.
This 'assisted intelligence' model reduces user burden without compromising trust or compliance. It also helps drive adoption by empowering users rather than replacing them.
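To make the idea concrete, here is a minimal sketch of searching a response library by term overlap. It is deliberately simplified (real systems typically use semantic search) and the library entries are invented; the point is that each surfaced answer keeps its source, so the user can verify provenance before reusing it:

```python
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase word counts, used as a crude relevance signal."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def rank_answers(question: str, library: list[dict]) -> list[dict]:
    """Rank prior responses by term overlap with the new question."""
    q = tokenize(question)
    def score(entry: dict) -> int:
        return sum((q & tokenize(entry["question"])).values())
    return sorted(library, key=score, reverse=True)

# Hypothetical response library; every entry carries its source document
library = [
    {"question": "Describe your cyber security controls.",
     "answer": "...", "source": "DDQ_2024_Q1.docx"},
    {"question": "What is your valuation policy?",
     "answer": "...", "source": "RFP_clientA.docx"},
]

best = rank_answers("Summarise the firm's cyber security controls.", library)[0]
# The user sees both the suggested answer and where it came from
```

Because the ranking is transparent and the source travels with the answer, the user, not the algorithm, makes the final call on whether a past response still applies.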
At Dasseti, we are working towards a shift from today's tool-based AI implementations toward more integrated experiences. The key differentiator we see between successful and problematic implementations is not the power of the AI itself, but how thoughtfully it is integrated into existing workflows and governance structures.
The firms that are thriving are those that view AI not as a replacement for human judgment but as an enhancement tool that respects the unique value of human expertise while eliminating low-value tasks. This approach has required Dasseti to make intentional design choices that prioritize transparency and user agency from the beginning, rather than attempting to retrofit these critical elements after implementation.
Firms considering AI for their investment workflows should be guided by principles familiar to this industry: clarity, accountability, and informed decision-making. Platforms that embed optional, transparent AI, enhancing rather than obscuring human expertise, will ultimately deliver the greatest value.
Dasseti has embraced this philosophy in the design of our ENGAGE platform, where AI assistance is available, helpful, and optional, allowing investment professionals to benefit from advanced search capabilities, intelligent response suggestions, and comprehensive data extraction while maintaining complete visibility and control over the process. By prioritizing transparency alongside efficiency, we're helping firms balance innovation and integrity.
For investment professionals interested in exploring how transparent AI can enhance rather than complicate their workflows, the conversation begins not with technology capabilities, but with thoughtful consideration of where human judgment adds the most value, and how AI can be designed to respect and enhance that value rather than obscure it.