Artificial Intelligence (AI) is changing the way impact investors are approaching impact management (IM). Nick O’Donohoe, Director of BlueMark’s Governing Board and Former CEO of British International Investment, remarked in a recent conversation at BlueMark HQ that he expects the integration of AI into IM to be one of the key trends in impact investing in 2026. We recently spoke with six BlueMark clients to understand more about how they are deploying AI to support their IM practice and the challenges they are experiencing.
The impact investors we spoke to are, for the most part, in the early stages of adopting AI. Here’s what we found about their adoption efforts to date.
1. Common Uses of AI in IM
The most widespread application of AI is to automate aspects of data collection and analysis.
- Due Diligence Support: AI is being actively tested as a tool to support pre-investment impact assessments.
- Quona Capital is currently piloting a custom GPT that incorporates their impact framework, key performance indicators (KPIs), and impact pathways. Investment teams can plug in memos and receive a draft high-level impact summary during the due diligence process. The output from the custom GPT includes:
- An analysis of alignment with Quona’s impact framework
- Suggested KPIs to measure based on the impact model
- Core impact risks and challenges
The custom GPT was developed to support investment team capacity by automating aspects of the initial assessment work. A key goal is to augment the team’s capabilities, allowing AI to provide a high-level impact summary that serves as a useful starting point for deeper human analysis and more nuanced assessments.
- Data Extraction, Analysis and Visualization: Several of the firms we spoke with are using AI to simplify data collection – including by extracting data from impact reports using LLM tools and by using AI to build out complex formulas in an impact data spreadsheet.
- Elevar Equity is a prime example. They are moving away from “old-fashioned, once-a-year reports” in favor of an Impact Dashboard providing near-real-time results. By using AI to unify data from disparate sources and aggregate metrics across portfolio companies, the firm creates a single source of truth. The goal is clear: empower anyone at the firm to visualize impact, spot trends, and identify the best ways to support portfolio companies through intuitive, up-to-the-minute visualizations.
- Performance Assessments: AI is also being used as a tool to support ongoing monitoring and impact performance assessments.
- Better Society Capital (BSC) used Perplexity Deep Research (which was being piloted at the time) to test AI-powered impact assessments during its annual impact performance reviews. AI was used to score entities in its venture portfolio across three of Impact Frontiers' five dimensions (who, depth, and contribution) using publicly available data. The outputs were compared to scores determined by human deal leads and then reviewed by the Venture Impact Lead to create a final, verified set of scores. The team's primary motivation was to test whether AI could deliver rigorous impact assessments. Although the pilot raised significant concerns about hallucinations, the team remained optimistic that AI could eventually reduce the time spent on administrative tasks around data collection and aggregation, freeing the team to spend more time on analysis and "asking the right questions."
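The data-extraction workflows described above can be sketched in a few lines. This is a minimal illustration, not any firm's actual implementation: the KPI names, prompt wording, and JSON response format are all hypothetical, and the actual LLM call is omitted so the validation logic can run offline.

```python
import json

# Hypothetical KPI names for illustration only; real KPI lists and
# report formats will differ by firm and impact framework.
KPIS = ["customers_reached", "pct_female_customers", "jobs_supported"]

def build_extraction_prompt(report_text: str, kpis: list[str]) -> str:
    """Assemble a structured prompt asking an LLM to pull named KPIs
    out of a free-text impact report and reply with strict JSON."""
    return (
        "Extract the following KPIs from the report below. "
        "Respond with JSON only, using null for any KPI not stated.\n"
        f"KPIs: {', '.join(kpis)}\n"
        f"Report:\n{report_text}"
    )

def parse_kpi_response(raw: str, kpis: list[str]) -> dict:
    """Validate the model's reply: it must be valid JSON containing
    every requested KPI key; otherwise flag it for human review."""
    data = json.loads(raw)
    missing = [k for k in kpis if k not in data]
    if missing:
        raise ValueError(f"LLM response missing KPIs: {missing}")
    return {k: data[k] for k in kpis}

# Simulated model reply, standing in for a real LLM call.
reply = ('{"customers_reached": 120000, "pct_female_customers": 0.62, '
         '"jobs_supported": null}')
parsed = parse_kpi_response(reply, KPIS)
```

The strict-JSON contract plus post-hoc validation is one way to make LLM extraction auditable: anything that fails the schema check goes straight to a human rather than silently into a dashboard.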
2. Best Practices for Implementation
Use of AI to support impact management at impact investing firms is happening alongside broader AI adoption efforts at those firms. As such, the firms we spoke to highlighted a number of implementation considerations that apply both to impact management and to other use cases. Implementation tips included:
Human Oversight is Non-Negotiable: Most firms said that keeping a "human in the loop," with every user responsible for verifying the accuracy and appropriateness of AI-generated content, is a must. Going further, some impact managers are developing "responsible AI" frameworks, including Quona, which is engaging industry peers and interested portfolio companies in the effort.
Policies and Tool Controls from the Top: All firms we spoke to have either established internal AI policies and compliance controls that set stringent guardrails on usage, or plan to do so in the coming year, with an emphasis on responsible use of AI. To ensure the confidentiality and security of sensitive information, firms are also mandating use of enterprise versions of approved AI platforms, such as paid Gemini or ChatGPT accounts.
Plan for Change Management: Firms say change management is needed and have put in place training initiatives and shared learning forums, such as town hall meetings and internal knowledge-sharing groups, where colleagues exchange tips and flag risks. This is especially important as AI adoption and proficiency is often heterogeneous across a firm.
Treat Prompt Engineering as a Product: Better Society Capital (BSC) recommends treating the prompt like a product: build a prototype, test, assess, iterate. Through this process, the prompt BSC used for impact assessments grew from 10 lines to over 200.
Prioritize Based on Impact and Feasibility: Successful AI integration starts with process and workflow analyses to identify high-impact, feasible use cases. S2G Investments noted that its AI adoption process started with mapping potential AI-enabled IMM use cases to its investment lifecycle and workflows, ensuring it prioritized low-risk, high-utility applications first.
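The "prompt as product" tip above amounts to regression-testing prompts the way one tests code. Below is a minimal sketch, not BSC's actual process: the prompt text, golden cases, and stubbed model are all invented for illustration, with the stub standing in for a call to an approved enterprise LLM.

```python
# Hypothetical versioned prompt; in practice each revision would be
# stored and rerun against the same golden cases before being adopted.
PROMPT_V2 = (
    "You are an impact analyst. Score the company's contribution "
    "as LOW or HIGH and answer with one word."
)

def stub_model(prompt: str, case: str) -> str:
    # Stand-in for a real LLM call, keyed on an obvious signal in the
    # test case so the harness itself can be exercised offline.
    return "HIGH" if "first-ever" in case else "LOW"

# Golden cases: inputs with human-agreed expected outputs.
GOLDEN_CASES = [
    ("Provides the first-ever credit product in the region.", "HIGH"),
    ("One of many similar lenders in a crowded market.", "LOW"),
]

def evaluate_prompt(prompt: str) -> float:
    """Return the fraction of golden cases this prompt version passes."""
    hits = sum(stub_model(prompt, case) == want
               for case, want in GOLDEN_CASES)
    return hits / len(GOLDEN_CASES)
```

A harness like this makes each prompt revision measurable: if a 200-line prompt scores no better than the 10-line version on the golden cases, the added complexity has not earned its keep.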
3. Common Concerns
Despite the clear potential of AI, the firms we spoke to reported facing challenges related to trust, data governance, and data readiness when rolling out AI. These are exacerbated by the lack of established practices for ensuring its responsible use. For example, the team at Open Road noted that “the tech capability is evolving faster than best practice surrounding it, so it’s up to us to set our own example for how we use AI responsibly.”
- Trust and Hallucination: A primary concern is limited trust in GenAI outputs, which can include incorrect facts and statements even when structured prompts are used. Verification is especially difficult when the user has no reason to suspect a mistake has been made.
- BSC estimated that between 10% and 20% of the impact assessments it conducted using AI included hallucinations. In many instances, it took 2–3 further prompts for the AI to acknowledge it had made a mistake. The team further noted that AI tends to make more mistakes where there is less publicly available data, such as in the case of an early-stage company.
- Data Security and Privacy: Ensuring data privacy when processing confidential or sensitive information with AI systems is a constant consideration. Firms are addressing this by:
- Limiting the tools that employees can utilize with sensitive information
- Limiting the information that employees can input into the public tools (e.g. non-public material information)
- Developing formal compliance policies
- Data Readiness: Before AI-powered predictive or advanced analytics can be leveraged, firms must first focus on the challenge of getting base-level data in shape. For example, Open Road has made it a 2026 goal to build out a more robust data set for impact data in order to leverage AI for predictive analysis.
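One lightweight guard against the hallucination problem described above is to cross-check every figure in an AI-generated summary against the source document it was supposedly drawn from. This is a generic sketch, not a technique attributed to any of the firms interviewed; a real pipeline would also handle units, percentages, and paraphrased figures.

```python
import re

def numbers_in(text: str) -> set[str]:
    """Pull all numeric tokens out of a text, ignoring thousands commas."""
    return set(re.findall(r"\d+\.?\d*", text.replace(",", "")))

def flag_unsupported_figures(ai_summary: str, source_text: str) -> list[str]:
    """Return figures in the AI summary that never appear in the source,
    as candidates for the human reviewer to double-check first."""
    return sorted(numbers_in(ai_summary) - numbers_in(source_text))

# Illustrative inputs only.
src = "The fund reached 120000 customers and supported 350 jobs."
summary = ("Reached 120000 customers, supported 350 jobs, "
           "and trained 5000 staff.")
flags = flag_unsupported_figures(summary, src)
```

A check like this cannot prove a summary is correct, but it cheaply directs the "human in the loop" to the numbers most likely to be invented.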
BlueMark is optimistic that the dissemination and uptake of AI will help reduce the cost and time associated with collecting and analyzing impact data, allowing firms to more effectively leverage this data to optimize the impact of their portfolios. We hope to continue to learn with our clients about their use of AI and share best practices about both how to best utilize AI and mitigate its risks.
Like our clients, we are experimenting with the best ways to utilize AI to drive both our work and the sector forward. We have integrated AI into our recently launched platform, BlueMark IQ, an impact intelligence platform that helps asset allocators identify, monitor, and report on the impact performance of their investment funds. The platform uses AI to generate impact assessments combining BlueMark's Fund ID rating criteria and allocators' custom due diligence criteria. In the near future, we will introduce features to extract data from documents such as impact reports to speed up the data collection process.
A big thank you to Elevar Equity, Lightrock, Open Road Impact, Quona Capital Management, S2G Investments, and Verge HealthTech Fund for their contributions to this article. We have also referenced Better Society Capital’s stellar work in experimenting with the use of AI recently documented in the Stanford Social Innovation Review.
Sarah Gelfand is President of BlueMark
Beth Richardson is Managing Director of BlueMark