Accreditation leaders face a common paradox: they have a mountain of documentation yet struggle to find the “right” proof to support their claims. When a writing team begins assembling a self-study, the natural instinct is to include everything, hoping that sheer volume will convince reviewers of the program’s quality.

However, effective accreditation reporting is not about the quantity of files; it is about the preponderance of evidence—ensuring the artifacts you select make your claims “more likely true than not”. To achieve this, your team must move from being mere collectors of data to being curators of a narrative.

The Strategic Lens: Is This Supporting Evidence or Just a “Different Story”?

One of the most challenging moments for a writing team occurs when a colleague presents an impressive piece of evidence that simply doesn’t fit the narrative. You might have a groundbreaking community outreach initiative or a unique faculty research project that is excellent in its own right, but if it does not directly support the specific intent of the standard you are addressing, it becomes “mystery evidence” that confuses reviewers and sends them off track.

Identifying the “Why” Behind the Standard

Before selecting an artifact, the team must understand the intent behind the criterion. Standards are often written to allow for institutional mission alignment, and your first task is to ask: Why was this standard written?

If your team is working on a standard regarding “Student Support Services” and a member suggests including a 50-page report on a new faculty hiring initiative, you must evaluate if that supports the “story” of student support or if it is a “different story” better suited for a standard on “Human Resources”.

This iterative approach is a highly effective curation strategy: start with the standard’s intent, identify the most compelling piece of impact evidence (the “closer”), and then reverse-engineer the necessary contextual proofs. Instead of simply gathering every relevant document, the team first identifies the strongest artifact that proves the standard is met (e.g., an annual outcomes report). They then work backward, asking: What policy authorized this? What process generated this data? What foundational documents must an external reviewer see to fully grasp the validity and context of this final, powerful piece of evidence? This technique ensures that every artifact selected directly supports the most critical claim.

Establishing a Success Theme early in the process (based on your review of existing evidence) acts as a filter for your evidence. If your theme is “Student-Centered Innovation,” scrutinize every artifact: Does it demonstrate how we innovated to help students succeed? If an artifact tells a different story, even a good one, it should be archived to preserve the report’s theme and voice.

Moving Beyond “Proof of Activity”

The most frequent error in accreditation submissions is relying on Proof of Activity—showing what you did (e.g., “We held a meeting” or “We have a committee”). While such artifacts are necessary for context, they sit lower on the hierarchy of evidence.

Instead, strive for Proof of Impact. This requires a variety of evidence types that lead to a final, “culminating” proof. Consider this progression for a single criterion:

  1. The Foundation (Intent): A copy of the official policy or handbook that establishes the intent to meet the standard.
  2. The Process (Action): Meeting minutes or curriculum committee notes showing the activity taking place over a multi-year period.
  3. The Culmination (Impact): An aggregated data report or a rubric-based analysis of student work that proves the activity achieved its goal.

By presenting this progression, you lead the reviewer down a path where the final artifact acts as the “closer” for your argument.

The “Totality of Evidence” and Triangulation

Reviewers are trained to “trust but verify”. They look for triangulation—finding multiple independent sources that verify the same point.

When your writing team selects a variety of evidence—such as a narrative claim supported by an official policy, which is then backed by a data table, and finally confirmed by a site-visit interview—you are performing the triangulation for the reviewer. This systematic approach builds immense confidence and minimizes the risk of follow-up requests.

The 51% Rule

Remember that you are working toward a preponderance of evidence. You do not need to prove your case “beyond a reasonable doubt”. You simply need to tip the scales so that the reviewer concludes your compliance is the consistent norm, not an occasional occurrence.

Conclusion: Quality Over Volume

The goal of your self-study is to make the reviewers’ job as easy as possible. A report with a massive number of appended files but a weak, unsupported narrative suggests that the institution expected the reviewers to do the work of the self-study for them.

By focusing on a single, cohesive story for each standard and selecting artifacts that lead to a culminating proof of impact, you transform the process from a “box-checking” burden into a powerful demonstration of excellence. You don’t have the bandwidth to tell every story of your program well, and your accreditation organization doesn’t expect you to. Accreditation is not an episodic event; it is the ongoing practice of documenting the meaningful work you do for your students every day, building a story of quality you can present comprehensively to your site reviewers.