What Was Taught vs. What Was Planned: The Syllabus Gap in Higher Education (1/3)

By Dr. Perry J. Samson

Prof. Margaret Elliot Tracy, lecturing at the University of Michigan School of Business (1935)

An Associate Vice Provost for Curriculum recently confided:

"Sometimes in course descriptions, what they claim to teach and what they actually teach are not the same."

She named something that nearly everyone in higher education already knows but rarely says aloud. Syllabi describe intentions. Courses, as actually delivered, describe execution. The two are related but not identical, and the gap between them is the unexamined foundation on which most institutional decisions about curriculum are made.

This gap matters more than we have admitted. Closing it was technically infeasible until very recently, and the case for closing it now rests on the hypothesis that:

knowing what was actually taught in a course is more valuable, for nearly every institutional purpose that touches curriculum, than knowing what the syllabus said would be taught.

That is a claim, not a self-evident truth. It deserves an argument.

How the Syllabus Became Evidence

The modern syllabus has at least three overlapping identities. It is, first, a contract: a document that establishes mutual expectations between instructor and student about workload, assessment, and conduct. This is its oldest function and the one most carefully protected by faculty governance (Eberly et al., 2001). It is, second, a planning document: an instrument by which faculty think through the arc of a course before it begins, sequencing topics and assignments toward intended outcomes. This function is pedagogical and reflective; the syllabus is a tool for the instructor as much as for the student.

It is, third, and most recently, a record. Accreditors, administrators, transfer offices, and program reviewers treat the syllabus as evidence of what a course contains. Curriculum maps are built from syllabi. Articulation agreements depend on them. Accreditation binders are stuffed with them. This evidentiary role is comparatively new, dating roughly to the rise of outcomes-based accreditation in the 1990s and 2000s, and it is the use the syllabus was never designed to support (Maki, 2023).

The mismatch is structural. A document that is written before a course begins, by a single author, under the time constraints of summer preparation, and that often inherits boilerplate from previous versions, cannot reliably represent fifteen weeks of dynamic instruction. Yet that is the role we have asked it to play.

Why the Gap Is Rational

It would be easy to read the gap between syllabus and course as evidence of faculty failure: drift, indiscipline, or careless planning. This reading is wrong, and it has done real damage to the conversation about curriculum data.

The gap exists because responsive teaching produces it. I taught a course on extreme weather and climate change; some weeks, the syllabus literally changed with the weather. A faculty member teaching macroeconomics in February 2020 did not teach, and should not have taught, the version of the course laid out in a syllabus written in December 2019. A literature instructor whose students arrive less prepared than expected adjusts the reading load. A computer science professor whose field has shifted three times since the last syllabus revision teaches what is now true rather than what was true. These adjustments are not failures of fidelity; they are evidence that teaching is alive.

Educational researchers have long recognized this pattern. The distinction between the intended curriculum, the enacted curriculum, and the experienced curriculum is foundational in K–12 curriculum theory and has been documented at the post-secondary level for decades (Goodlad, 1984). What students actually encounter in a course is shaped by the instructor's real-time judgment, the composition of the cohort, and the broader context in which the course is taught, none of which the syllabus can anticipate.

The syllabus, then, is best understood as a roadmap of intentions. It records what a course was meant to be at a single moment in the past. It does not, and cannot, record what the course became.

What Is Lost When We Measure Only Intentions?

If institutions used syllabi only for their original purposes (student orientation, faculty planning, and course approval), the gap would be unremarkable. Problems arise because we use syllabi as the primary data source for decisions that depend on knowing what students were actually exposed to. The downstream costs are larger than they appear.

Coverage gaps go undetected. A program may believe, on the basis of curriculum mapping done from syllabi, that ethical reasoning is taught in five courses across the major. If three of those courses dropped the ethics module due to time pressure, no one knows. The map says the coverage exists; the experience says it does not. Students graduate with a credential that asserts a competency they did not develop.

Accreditation evidence becomes performative. Accreditors increasingly ask for evidence that learning outcomes are taught, assessed, and demonstrated (AAMC & LCME, 2003; AACSB International, 2020). When the only available evidence is the syllabus, the institution is reduced to asserting that outcomes were addressed because the document said they would be. This is a logically weak position, and accreditation reviewers have begun to notice.

Curricular interventions are based on plans rather than reality. Program revision conversations frequently begin with a curriculum map that reflects what was supposed to happen. Faculty then debate adjustments to a fictional baseline. The result is curricular change that addresses imagined problems while leaving real ones in place.

Institutional memory is fragile. When the syllabus is the record, institutional knowledge of what was taught leaves when the instructor does. A retiring professor takes with them the actual content of twenty years of courses. The syllabi remain; the curriculum does not.

These costs are not hypothetical. They are routinely visible in accreditation site visit reports, where reviewers note discrepancies between mapped outcomes and observed assessments (Gaston, 2023). They are visible in transfer credit disputes, where receiving institutions discover that a course they accepted on the basis of the syllabus did not actually cover the expected material. They are visible in employer feedback, where graduates lack competencies their transcripts assert. And for many faculty, they are visible in the unevenness of students' prior exposure to concepts presumed to be prerequisites.

What Becomes Possible When We Measure Execution

Suppose, for the sake of argument, that an institution could observe what was actually taught in a course, not by surveilling the classroom, but by analyzing the artifacts the course already produces: lecture recordings, slide decks, assignments, discussion transcripts, assessments. What changes?

The first thing that changes is the basic unit of curriculum data. Instead of "this syllabus claims to address outcome X," the institution can say "outcome X was addressed in week 7's lecture, in the case study assigned in week 9, and in two of the four short essays graded across the term." The evidence becomes specific, located in time, and tied to artifacts a reviewer can actually inspect. This is the difference between assertion and citation.
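To make the shift from assertion to citation concrete, here is a minimal sketch of what such an evidence record could look like as data. Every class and field name below is a hypothetical illustration, not a reference to any particular system's schema.

```python
# A hypothetical sketch of an execution-based evidence record; names are
# invented for illustration. The point is the shape of the data: each
# coverage claim is tied to a specific artifact at a specific time.
from dataclasses import dataclass, field


@dataclass
class Citation:
    """One artifact-grounded instance of an outcome being addressed."""
    artifact_type: str  # e.g., "lecture", "case study", "short essay"
    artifact_id: str    # identifier of the recording, slide deck, etc.
    week: int           # where in the term the evidence occurs


@dataclass
class OutcomeEvidence:
    """All execution evidence for one learning outcome in one course."""
    outcome: str
    citations: list[Citation] = field(default_factory=list)


# The example from the paragraph above, expressed as inspectable data
# (the essay weeks are illustrative; the paragraph does not specify them):
outcome_x = OutcomeEvidence(
    outcome="Outcome X",
    citations=[
        Citation("lecture", "lec-07", week=7),
        Citation("case study", "case-09", week=9),
        Citation("short essay", "essay-2", week=12),
        Citation("short essay", "essay-4", week=14),
    ],
)
print(f"{outcome_x.outcome}: {len(outcome_x.citations)} artifact citations")
```

Nothing in a record like this constrains how the course is taught; it only locates where the evidence sits, which is precisely what lets a reviewer inspect it.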

The second thing that changes is the temporal grain of curriculum review. Coverage analysis no longer requires faculty surveys six months before an accreditation visit; it can be done in any week of any term, on whatever portion of the curriculum has been delivered so far. Program review becomes something a department can do continuously, for its own purposes, without waiting for an external occasion.

The third thing that changes is the relationship between faculty autonomy and institutional knowledge. The fear that has historically shaped this conversation, that measuring what is taught will reduce teaching to compliance with a script, is grounded in a particular technological assumption: that measurement requires standardization. If every course must produce data in the same form to be measurable, then yes, measurement disciplines teaching. But if measurement can derive from artifacts faculty already produce in their own voices and styles, the trade-off changes. Faculty teach as they always have; the institution learns from what was taught without dictating how it was taught.

This is not a small distinction. It is the entire ethical hinge on which the case for measuring execution rests.

The Honest Objections

A serious version of this argument has to take seriously the objections that any faculty member, dean, or AAUP officer would raise.

Surveillance. "If you measure what I teach, you control what I teach." The objection is real, and the historical track record of administrative monitoring of faculty work is not reassuring (Slaughter & Leslie, 1997). The response cannot be a promise that nothing will be misused; it must be a design commitment about what the data is used for and who has access to it. A system that exposes individual instructor data to administrators for evaluation purposes is a different system from one that aggregates content evidence at the program level for accreditation purposes. The technical capability is the same; the institutional choice about its use is not.

Academic freedom. "My judgment about what to teach in a given week is protected." It is, and it should be. Measuring what was taught is not a vote on whether it should have been taught. The argument here is for visibility, not for prescription. An institution that knows what was taught is in a better position to defend academic freedom, by showing accreditors and external critics what actually happens in classrooms, than one that does not.

Reductionism. "You cannot capture what happened in my classroom by analyzing my slides." This is true, and the case for measuring execution does not depend on capturing everything. It depends only on capturing more than the syllabus does, which is a low bar. A measurement that is partial but grounded in actual artifacts is still more accurate than a measurement that is comprehensive but grounded in pre-course intentions.

Faculty workload. "If this requires me to enter more data, I am not interested." Reasonable. The case for execution-based measurement only holds if it imposes no additional faculty work. Any system that derives its evidence from materials faculty already produce — slides, recordings, assignments — passes this test. Any system that requires new tagging, surveys, or metadata entry fails it.

These objections are not defeating, but they are shaping. They define what an honest version of execution-based curriculum measurement has to look like.

What This Hypothesis Is Worth

This essay opened with a hypothesis: that knowing what was actually taught is more valuable than knowing what the syllabus said would be taught. It is worth being clear about what that hypothesis claims and what it does not.

It does not claim that syllabi should be abandoned. They remain essential as contracts, as planning tools, and as student-facing documents. It does not claim that all curriculum data should be derived from execution. Some institutional purposes, such as course catalog accuracy, prerequisite enforcement, and faculty workload accounting, are well served by intention-based data and have no need to know what actually happened.

It claims, more narrowly, that the institutional purposes that depend on knowing what students were exposed to, such as accreditation evidence, coverage analysis, program review, and learning outcomes assessment, are currently served by the wrong data, and that better data is now technically achievable for the first time.

That last clause matters. The argument for measuring execution rather than intention is not new. Educational researchers have made it for decades (Goodlad, 1984; Shepard, 2000). What is new is that the cost of capturing execution data has collapsed. Course content is increasingly digital. Lectures are recorded. Assignments live in learning management systems. Large language models can read and structure that content with citation grounding. The capture problem that made this argument theoretical for thirty years has, fairly suddenly, become tractable.
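As a rough illustration of why the capture problem has become tractable, here is a sketch of the kind of extraction pass that last point describes. The `ask_llm` function is a placeholder for whatever model API an institution actually uses, and the prompt and record format are assumptions made for illustration; the load-bearing idea is the grounding check, which keeps only claims the artifact itself supports.

```python
# A sketch, under stated assumptions, of citation-grounded extraction:
# ask a model which outcomes a transcript addresses, then keep only the
# claims that are backed by a verbatim quote from the transcript itself.
import json


def ask_llm(prompt: str) -> str:
    """Placeholder for a real model call; expected to return JSON text."""
    raise NotImplementedError("wire this to your institution's model API")


def extract_coverage(transcript: str, outcomes: list[str]) -> list[dict]:
    prompt = (
        "From the lecture transcript below, list which of these learning "
        f"outcomes it addresses: {outcomes}. Respond with a JSON array of "
        'objects with keys "outcome" and "supporting_quote", where each '
        "quote is copied verbatim from the transcript.\n\n" + transcript
    )
    records = json.loads(ask_llm(prompt))
    # Grounding check: drop any claim whose quote does not actually
    # appear in the transcript. This guards against model fabrication
    # and is what makes the resulting evidence inspectable.
    return [
        r for r in records
        if r.get("supporting_quote") and r["supporting_quote"] in transcript
    ]
```

A pass like this runs over artifacts the course already produced, which is also why it can satisfy the no-additional-faculty-work constraint raised in the objections above.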

When the cost of measuring something accurately falls below the cost of measuring it inaccurately, the institutional case for the better measurement becomes hard to refuse. We are at that point, or very near it, with curriculum.

The next essay in this series asks where this kind of evidence should live and why universities have systems of record for almost everything except learning.

References

AACSB International. (2020). Guiding principles and standards for business accreditation. AACSB International.

Association of American Medical Colleges, & Liaison Committee on Medical Education. (2003). Functions and structure of a medical school: Standards for accreditation of medical education programs leading to the MD degree. Liaison Committee on Medical Education.

Eberly, M. B., Newton, S. E., & Wiggins, R. A. (2001). The syllabus as a tool for student-centered learning. The Journal of General Education, 50(1), 56–74.

Gaston, P. L. (2023). Higher education accreditation: How it's changing, why it must. Taylor & Francis.

Goodlad, J. I. (1984). A place called school: Prospects for the future. McGraw-Hill.

Maki, P. L. (2023). Assessing for learning: Building a sustainable commitment across the institution. Routledge.

Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4–14.

Slaughter, S., & Leslie, L. L. (1997). Academic capitalism: Politics, policies, and the entrepreneurial university. Johns Hopkins University Press.
