COA Accreditation
New AI Accreditation Standards Create Necessary Guardrails for AI Adoption
Artificial intelligence (AI) is increasingly used in the day-to-day work of the social sector; however, surveys of AI adoption find many nonprofits on the sidelines, with a substantial portion of organizations reporting they do not use AI at all.
This uneven adoption is often driven by gaps in knowledge and capacity, with most nonprofit organizations reporting they are interested in AI adoption but unsure how to implement it in practical and ethical ways.
Social Current recently released its annual COA Accreditation standards update, which includes new standards for responsible and ethical AI use. These standards provide practical guidance for AI evaluation and implementation, helping organizations set organization-wide guardrails that promote transparency, human oversight and accountability, stakeholder engagement, data stewardship, and mission-aligned decision making.
Guiding Principles
The following principles serve as the foundation for Social Current's AI standards and are grounded in best practice literature on AI adoption and governance. They reflect a commitment to responsible, human-centered AI use and prioritize human judgment, organizational accountability, and protection from unintended harms.
- Stakeholder Engagement. People impacted by AI should have meaningful opportunities to shape how it is used.
- Transparency. Organizations should communicate clearly about where AI is used, why, what safeguards exist, and how to opt out.
- Human Oversight and Accountability. AI should supplement human judgment, not replace it, especially in high-stakes contexts.
- Risk and Liability Exposure. AI introduces new compliance obligations and potential harm that should be continuously monitored and managed.
Translating Principles into Practice
Using these guiding principles and insights from literature and subject matter experts, Social Current has revised its COA Accreditation standards in the following ways.
Start with a Risk-Benefit Analysis
Before adopting an AI strategy, organizations should conduct a risk-benefit analysis that considers the potential impacts of AI use on the organization and its stakeholders.
See standards RPM 8.01/CA-RPM 8.01/PA-RPM 6.01
Engage Stakeholders Early and Often
Responsible AI adoption is inclusive. Staff and impacted communities should have ongoing, meaningful opportunities to influence AI use. Literature on successful AI deployment in the workplace points to the early engagement of staff as central to successful adoption. Organizations can promote engagement and buy-in by asking staff what problems they need addressed, providing opportunities for staff to experiment with AI applications, and expanding successful solutions across functions and use cases.
See standards RPM 8.07/CA-RPM 8.07/PA-RPM 6.07
Develop AI Guidance, Keep It Up to Date, and Train Staff on It
Organizations must develop an AI acceptable use policy that clearly states whether AI use is permitted. If permitted, the policy should specify approved applications, their intended purpose, and how to use them responsibly.
The acceptable use policy should align with the organization’s existing confidentiality and data security requirements; reflect its mission, vision, values, and strategy; and be reviewed regularly as the technology develops and the needs and risks of the organization evolve.
Continual training ensures that staff understand whether AI use is permitted, which tools are approved and how to use them responsibly, and where to go with questions.
See standards RPM 4.04/PA-RPM 4.04/CA-RPM 4.04, TS 2.02/CA-TS 2.02/PA-PDS 2.02
Assess AI Risk Even if You Are Not Currently Using It
AI use should be part of the organization's overall technology assessment, regardless of whether the organization is currently using it. This practice prepares organizations for thoughtful AI adoption if the time comes and mitigates risk associated with staff using AI tools informally without appropriate guardrails.
See standard RPM 4.01/CA-RPM 4.01/PA-RPM 4.01
Modernize Procurement and Contracting for AI Vendors
Because organizations are exposed to additional risk through their vendor relationships, the standards now offer specific guidance for contracting with AI companies. For AI-embedded tools, contract terms should address issues like bias auditing, data use and security, and handling data breaches. Additionally, a newly added standard guides organizations on what to evaluate when choosing AI vendors and what tradeoffs to consider when comparing options.
See standards RPM 6/CA-RPM 6
Maintain Transparency to Enable Informed Consent and Clear Feedback Pathways
The AI acceptable use policy should be publicly available, and organizations should make efforts to ensure stakeholders understand how AI is used; risks, safeguards, and how to provide feedback; and how to opt out of data use where feasible.
See standards RPM 8.03/CA-RPM 8.03/PA-RPM 6.03, RPM 8.04/CA-RPM 8.04/PA-RPM 6.04
Establish Human Oversight and Accountability Mechanisms
One critical oversight mechanism for AI-assisted processes is human review of AI outputs for quality, accuracy, compliance, respect, and fairness before they are included in the case record, disseminated, or used in decision making.
Additionally, any high-risk, AI-assisted decisions (e.g., diagnosis, risk of harm assessments, triaging cases, and protective actions for children) should be documented to support transparency, accountability, and effective monitoring of these decisions over time.
Finally, every organization using AI should have a cross-functional AI oversight group that is responsible for keeping the acceptable use policy current; receiving and responding to feedback and complaints; investigating harmful outcomes; approving and reviewing AI technologies and use cases; and monitoring AI’s overall impact on the organization and its stakeholders.
See standards RPM 8.05/CA-RPM 8.05/PA-RPM 6.05, RPM 8.06/CA-RPM 8.06/PA-RPM 6.06, PRG 1.04/CA-PRG 1.04/PA-PRG 1.04
Include AI in Compliance and Risk Reviews
AI does not exist outside the law, and best practice starts with compliance. Organizations should consider AI use as part of their routine review of legal requirements including applicable privacy, confidentiality, discrimination, accessibility, copyright, consumer protection, and documentation requirements.
The revised standards also expand the focus of incident reviews to include client rights incidents such as confidentiality violations, data breaches, and other technology-related harms, ensuring that technology-related incidents are reported and investigated.
See standards RPM 1/CA-RPM 1/PA-RPM 1, RPM 2/CA-RPM 2/PA-RPM 2, CR 2/CA-CR 2/PA-CR 2, ASE 3/CA-ASE 3/PA-ASE 3
Conclusion
As interest in AI-driven tools continues to grow across the human and social services sector, Social Current’s COA Accreditation standards provide a framework to assist organizations in effectively integrating AI governance into existing risk, quality, and ethics practices; maintaining human oversight and accountability; and prioritizing the well-being of people and communities in every deployment.
For more information on the AI updates, please visit 2026 COA Accreditation Standards Updates as well as the AI Reference List.
If you are interested in seeking accreditation for the first time, join a free informational webinar or request more information.