State requirements for clinical practice hours are not uniform. California mandates a minimum of 600 hours for traditional and integrated pathways. Washington sets the requirement for alternative-route candidates at 540 hours. Other states count weeks of full-time student teaching rather than a fixed number of hours. What is consistent across states is that your EPP is accountable for the documentation, and that accountability doesn't care which system you use to track it.
University-based alternative programs often sit outside the institution's primary EPP platform. The university invested in a system for its traditional undergraduate teacher preparation track. Your alternative program — whether it's a post-baccalaureate certification path, a district partnership, or an accelerated licensure route — runs in the margins of that infrastructure. The result is usually a patchwork: Google Forms for hour submission, email threads for evaluator sign-off, and a spreadsheet that someone reconciles at the end of each semester.
This post walks through what a structured workflow looks like — from the moment a candidate logs a placement hour to the moment that data shows up in accreditation evidence.
Stage 1: How candidates log time
A functional logging workflow has three requirements: structure, timing, and documentation.
Structure means candidates aren't submitting free text. Every entry should capture the placement site, the cooperating teacher or mentor's name, the date, and the hours logged. Without required fields, the data is inconsistent and unverifiable.
Timing means ongoing entry, not bulk submission at the end of a term. Candidates who log hours as they happen produce a timestamped record. Candidates who reconstruct weeks of hours from memory produce an approximation. CAEP's clinical practice standards require evidence of clinical experience that demonstrates sufficient depth, breadth, diversity, coherence, and duration — a bulk submission made on the last day of the semester is a summary, not evidence.
Documentation means supporting records are attached at the point of submission: a supervisor confirmation, a placement site letter, a sign-off form. They are part of the entry itself, not collected separately afterward.
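To make those three requirements concrete, here is a minimal sketch of what a structured entry could look like. This is not Craft's data model; the names (`HourEntry`, `placement_site`) and the one-week timing threshold are illustrative assumptions. The point is that structure, timing, and documentation are enforced at submission, not reconciled later.

```python
from dataclasses import dataclass, field
from datetime import date, datetime


@dataclass
class HourEntry:
    """One clinical-practice log entry; every field is required at submission."""
    candidate_id: str
    placement_site: str
    mentor_name: str          # cooperating teacher or university supervisor
    entry_date: date          # the day the hours were completed
    hours: float
    attachments: list[str]    # supervisor confirmation, sign-off form, etc.
    submitted_at: datetime = field(default_factory=datetime.now)
    verified: bool = False    # set by the evaluator in Stage 2

    def validate(self) -> list[str]:
        """Return a list of problems; an empty list means the entry is accepted."""
        problems = []
        if not (self.candidate_id and self.placement_site and self.mentor_name):
            problems.append("missing required field")
        if not 0 < self.hours <= 12:
            problems.append("implausible single-day hour total")
        if not self.attachments:
            problems.append("documentation must be attached at submission")
        # Timing check: a timestamped record, not a reconstruction weeks later.
        if (self.submitted_at.date() - self.entry_date).days > 7:
            problems.append("entry logged more than a week after the fact")
        return problems
```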
Stage 2: How evaluators verify
Cooperating teachers and university supervisors are not employees of your EPP. They're professionals with their own schedules who volunteer to mentor a candidate. Any verification workflow that asks them to navigate a complex system, remember a login, or dig through an admin dashboard is going to break down.
Evaluator verification needs to be scoped. The evaluator sees only their assigned candidates and their pending actions. They confirm hours, flag discrepancies, and submit, in minutes rather than through a multi-step process.
Role-based access isn't a nice-to-have here. It's the design requirement that makes evaluator participation realistic. The AACTE Clinical Practice Commission report calls for reconceptualizing how school- and university-based educators share responsibility for supervision — a shift that only works if the tools each party uses are designed for their actual role, not adapted from an administrator's interface.
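As a sketch of what that scoping means in code, building on the hypothetical `HourEntry` above (the `assignments` mapping is our assumption, not Craft's API):

```python
def pending_for_evaluator(evaluator_id: str,
                          assignments: dict[str, set[str]],
                          entries: list[HourEntry]) -> list[HourEntry]:
    """Entries awaiting sign-off, limited to this evaluator's assigned candidates.

    `assignments` maps evaluator_id -> candidate_ids. The evaluator's queue is
    derived from that mapping, so there is no path to anyone else's data.
    """
    assigned = assignments.get(evaluator_id, set())
    return [e for e in entries if e.candidate_id in assigned and not e.verified]


def confirm(entry: HourEntry) -> None:
    """Evaluator sign-off is a single action, not a navigation exercise."""
    entry.verified = True
```

Because the queue is computed from the assignment mapping, there is no admin dashboard to navigate and no login context beyond the evaluator's own identity.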
Stage 3: How administrators pull reports
Real-time visibility means administrators aren't waiting until the end of a semester to find out which candidates are behind on hours. A functional reporting view shows cohort status — on track, at risk, complete — filtered by candidate, placement site, date range, or completion status.
The practical value of live data is early intervention. If a candidate is behind at midterm, a coordinator can act. If that gap is discovered during semester-end reconciliation, the options are much narrower.
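Here is a hedged sketch of that status computation. The pace-based "at risk" rule and the thresholds are illustrative assumptions; a real program would tune them.

```python
def cohort_status(roster: list[str],
                  entries: list[HourEntry],
                  required_hours: float,
                  term_fraction_elapsed: float) -> dict[str, str]:
    """Classify each candidate as 'complete', 'on track', or 'at risk'."""
    totals = {c: 0.0 for c in roster}           # zero-entry candidates still appear
    for e in entries:
        if e.verified and e.candidate_id in totals:
            totals[e.candidate_id] += e.hours   # only verified hours count

    expected = required_hours * term_fraction_elapsed
    return {
        c: "complete" if h >= required_hours
           else "on track" if h >= expected
           else "at risk"
        for c, h in totals.items()
    }
```

Filtering by placement site or date range is the same pattern: restrict `entries` before totaling. At midterm (`term_fraction_elapsed=0.5`), a candidate with 200 of 600 required hours comes back "at risk" while there is still time to act.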
Reports also serve a downstream audience. Faculty advisors, department chairs, and accreditation coordinators all consume this data differently. A system that requires a manual export and reformatting for each audience adds administrative work that could be eliminated at the infrastructure level.
Stage 4: How this becomes accreditation evidence
CAEP Standard 2 requires EPPs to document clinical partnerships and candidate clinical practice in ways that demonstrate quality and impact. A timestamped, supervisor-verified, continuously maintained record is far more defensible in a site visit than data reconstructed at the end of a review cycle.
CAEP's Quality Assurance System (Standard 5) requires a sustained, evidence-based process for documenting operational effectiveness — programs with continuous data systems are better positioned for accreditation review because the evidence exists when it's needed, not built when it's requested.
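Under the same assumed structures, producing that evidence reduces to an export of the verified, timestamped record. A minimal sketch; the columns are illustrative, not a CAEP-prescribed format:

```python
import csv


def export_evidence(entries: list[HourEntry], path: str) -> None:
    """Write verified, timestamped entries to CSV as accreditation evidence."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["candidate", "site", "mentor", "date",
                         "hours", "logged_at", "attachments"])
        for e in sorted(entries, key=lambda e: (e.candidate_id, e.entry_date)):
            if e.verified:
                writer.writerow([e.candidate_id, e.placement_site, e.mentor_name,
                                 e.entry_date.isoformat(), e.hours,
                                 e.submitted_at.isoformat(),
                                 "; ".join(e.attachments)])
```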
Where Craft helps
Craft's program management platform tracks this workflow end-to-end: candidate self-reporting with required fields, role-based evaluator verification, administrator dashboards with real-time cohort status, and compliance-ready exports built for accreditation documentation. It's built for programs that operate outside the institution's primary platform and need structured infrastructure without building or maintaining custom code.
The hours-tracking problem in alternative programs is fundamentally a credibility issue with documentation. The question isn't whether your candidates are completing their clinical hours — it's whether you can prove it in a format that holds up to accreditation scrutiny.
The workflow for producing that evidence exists. The question is whether your program is using it.
