Here's something most multi-state alternative providers don't say out loud: the evaluator problem isn't really about evaluators. It's a coordination problem that involves evaluators.
When you're operating in 5, 10, or 17 states, you're not dealing with one evaluation process. You're dealing with 5, 10, or 17 versions of it — each with different qualification requirements, different observation cadences, and different documentation expectations. Non-IHE alternative certification programs exist in at least 33 states and Washington, D.C. as of the most recent comprehensive national count, and every one of those states sets its own rules. The people doing the actual observations? They're not your employees. They're district mentor teachers, cooperating classroom partners, and contracted observers who have full-time jobs that don't include logging into your program portal.
That's the real problem. And a shared Google Drive doesn't fix it.
The actual complexity — broken into its parts
Let's be specific about what "managing evaluators across multiple states" looks like in practice.
State-level variation means you can't build one process. Each state independently sets its own certification and licensure requirements for educators — and those requirements change without coordinated notice. What counts as a qualified evaluator in Texas isn't what counts in Florida or Ohio. Observation frequency differs. The forms differ. The feedback cycles differ. In Texas alone, current EPP rules under 19 TAC §228.109 require between three and five formal observations per intern, depending on certificate type — and that's just one state's rulebook. Every state program you run is its own operational reality.
Evaluator pool fragmentation means you're relying on people who didn't sign up to be power users of your software. Research consistently shows that cooperating teachers are often selected because they are strong teachers of children, but are commonly underdeveloped as mentors of novice teachers — and they're doing this on top of full-time classroom responsibilities. If the submission process takes more than a few minutes to figure out, it won't get done on time. Or at all.
Visibility gaps mean your program administrators are often flying blind. NCTQ found that enrollment in non-IHE alternative programs jumped 131% while program completions actually fell 17% — a gap that's hard to close when no one has a real-time view of where candidates are in the evaluation cycle.
Where things break down
At most multi-state providers we've seen, the operational reality looks like this: observation data comes in via email, PDFs, or a shared spreadsheet, and it's reconciled manually by program staff at the end of each term. Evaluators send updates when they remember to. Candidates wait — sometimes for days — to hear whether their submission was received, let alone reviewed.
The administrative staff isn't doing it wrong. They're doing the best they can with tools that were never designed for this. The problem is structural.
What a scalable workflow actually requires
Here's the thing: you need to solve for three different people with three different needs — at the same time.
Evaluators need simplicity. Their interface should surface only their assigned candidates, their pending tasks, and a clear path to log an observation or approve a submission. Nothing more. Every additional field or navigation step increases the risk of dropout.
Program administrators need visibility. Not reports they have to pull manually, but a live view of evaluator activity across every state program. Who's submitted? Who's overdue? Where is candidate feedback stalled? The information should come to them, not the other way around.
Candidates need timely feedback. Clinical supervision research shows the process typically follows a cyclical model: pre-conference, observation, analysis, and post-conference. When that loop takes two weeks to close because data is sitting in someone's inbox, the developmental moment is gone. Structured, logged feedback — even brief — changes that.
Role-based access is the design principle that lets you meet all three needs at once. Each user type gets a view scoped to their role. Evaluators don't navigate a full program management system. Administrators don't lose state-level detail in an undifferentiated feed. Candidates don't wait for an email.
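To make the principle concrete, here is a minimal sketch of what role-scoped views could look like in code. The types, field names, and filtering rules are assumptions invented for this illustration; they are not any particular platform's schema.

```typescript
// Illustrative sketch only: role names, fields, and filtering logic are
// assumptions for this example, not a specific product's data model.

type Role = "evaluator" | "administrator" | "candidate";

interface Observation {
  id: string;
  candidateId: string;
  evaluatorId: string;
  stateProgram: string;          // e.g. "TX", "FL", "OH"
  status: "pending" | "submitted" | "reviewed";
  dueDate: Date;
  feedback?: string;
}

// Each role sees only the slice of data its job requires.
function scopeView(role: Role, userId: string, all: Observation[]): Observation[] {
  switch (role) {
    case "evaluator":
      // Only this evaluator's assigned candidates and open tasks.
      return all.filter(o => o.evaluatorId === userId && o.status === "pending");
    case "administrator":
      // Cross-state view of everything not yet closed out, so overdue work
      // surfaces without manual check-ins.
      return all.filter(o => o.status !== "reviewed");
    case "candidate":
      // Only the candidate's own records, including any logged feedback.
      return all.filter(o => o.candidateId === userId);
  }
}
```

The specific fields don't matter. The point is that a single underlying record can feed three different scoped views, so no one sees more than their role requires.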
What program-level oversight actually looks like
Platforms built for structured evaluation workflows — like Craft — support exactly this kind of role-based setup. Evaluators receive automatic alerts when a candidate submission is waiting, log observation details in a structured form, and submit without needing to understand the broader program architecture. Administrators get a dashboard that surfaces evaluator activity across programs, showing completion status without requiring manual check-ins.
The result: compliance documentation becomes a byproduct of the workflow rather than a separate task at the end of the term.
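One way to picture "a byproduct of the workflow": the structured records evaluators already submit can be rolled up into a compliance summary on demand, rather than reassembled by hand at term's end. The sketch below is a generic illustration under assumed field names and an assumed observation minimum; it is not Craft's implementation or any state's actual requirement.

```typescript
// Illustrative sketch: field names and the three-observation minimum are
// assumptions for the example, not a statement of any state's actual rule.

interface ObservationRecord {
  candidateId: string;
  stateProgram: string;
  submittedAt: Date;
}

// Roll the records the workflow already captures into a per-candidate
// compliance summary, instead of reconstructing it manually at term's end.
function complianceSummary(records: ObservationRecord[], requiredCount: number) {
  const byCandidate = new Map<string, number>();
  for (const r of records) {
    byCandidate.set(r.candidateId, (byCandidate.get(r.candidateId) ?? 0) + 1);
  }
  return Array.from(byCandidate, ([candidateId, completed]) => ({
    candidateId,
    completed,
    meetsRequirement: completed >= requiredCount,
  }));
}

// Example: check every candidate against a hypothetical three-observation minimum.
// complianceSummary(allRecords, 3);
```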
What changes when you get this right
When your evaluator workflow is structured — not informal — a few things stop being problems.
Nothing gets lost in an email chain. Observation records are in the candidate's file the moment they're submitted. Adding a new state program doesn't proportionally increase your team's administrative load. And evaluators — even the ones with limited time — can do their part without a training session.
Multi-state alternative programs have enough genuine complexity. With 62% of alternative programs already fast-tracking candidates without full-time clinical placements, the programs that get evaluator coordination right are the ones that will stand out on both quality and compliance. Evaluator coordination doesn't have to be one of the hard parts.
