Formative Assessment for Writing: Just the Facts

What the Research Shows and Why It Matters for Your Writing Instruction
In most classrooms, writing is assigned, collected, and graded with little attention to what happens during the writing process. Students turn in their work, receive a score or a few comments, and then move on to the next task. The problem is that this cycle, on its own, rarely leads to real improvement. Students often repeat the same mistakes, struggle with the same parts of writing, and don’t always understand what to do differently next time. Teachers, meanwhile, spend significant time responding to writing but don’t always see that effort translate into stronger outcomes.
This is where formative assessment changes the equation. In writing, formative assessment acts as a diagnostic tool, focusing on observations made during the process rather than grading finished work. It is about using information during the writing process to guide what students do next. It includes targeted feedback, structured peer discussions, and support for students to reflect on their own writing. When done well, formative assessment turns writing into a process of continuous improvement rather than a series of completed assignments.
This blog breaks down what the research actually says about formative assessment in writing and what that means for classroom practice. Each section focuses on a specific study or body of research and translates the findings into clear instructional moves. The goal is not just to understand the research, but to make it usable—so teachers can see how formative assessment should shape what happens during writing instruction, not just after it.
One of the most important insights comes from a large-scale meta-analysis conducted by Steve Graham, Michael Hebert, and Karen R. Harris, Formative Assessment and Writing: A Meta-Analysis. Across multiple studies, they found that the type of feedback used within formative assessment makes a significant difference in student writing outcomes. Teacher feedback showed an effect size of 0.87, self-assessment 0.62, and peer feedback 0.58. In education research, effects above 0.40 are considered meaningful, and anything approaching 0.80 is rare and powerful. These findings make one thing clear: when formative assessment is used effectively in writing, it can significantly improve student outcomes. But they also highlight an important contrast: some commonly used practices produce little to no improvement at all.
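For readers unfamiliar with the term, an effect size in this context is a standardized mean difference (commonly Cohen's d): the gap between the treatment and comparison groups measured in pooled standard deviations.

$$ d = \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{control}}}{SD_{\text{pooled}}} $$

In plain terms, an effect size of 0.87 means the average student receiving teacher feedback scored about 0.87 standard deviations higher than the average comparison student, which is a large effect by conventional benchmarks.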
The Graham, Hebert, and Harris Meta-Analysis: Feedback Drives Writing Growth
The meta-analysis conducted by Graham, Hebert, and Harris synthesized findings from multiple experimental and quasi-experimental studies examining formative assessment in writing across grades one through eight. The researchers focused specifically on how different types of feedback influenced student writing performance. What makes this study especially important is that it does not rely on a single intervention or context. Instead, it aggregates evidence across many classrooms, student populations, and instructional approaches. This allows the findings to carry much more weight than an individual study. When a pattern holds across dozens of contexts, it is much more likely to represent a reliable instructional principle.
The findings from this meta-analysis are both clear and challenging, often dispelling common misconceptions about what forms of assessment are most effective. Teacher feedback had the largest impact, with an effect size of 0.87, indicating that when teachers provide direct, actionable feedback, students significantly improve their writing. Peer feedback and self-assessment also had strong effects, but only when structured and intentional. In contrast, approaches that focused on general evaluation, such as scoring writing based on broad traits without targeted feedback, had minimal impact. This distinction is critical because it directly challenges a common practice in schools. Many systems prioritize summative scoring and evaluation, but the research shows that evaluation alone does not improve writing. Improvement comes from feedback that changes what students do next.
For classroom practice, this study shifts the focus from assessment as measurement to assessment as instruction. Teachers must think less about judging writing and more about guiding it. Feedback must be specific, tied to a clear goal, and delivered in a way that students can act on immediately. This aligns directly with strategy-based instruction, where feedback is connected to what students are trying to do as writers. Without that connection, feedback becomes noise rather than guidance. The implication is simple but significant: if feedback does not lead to a change in student behavior, it is not functioning as formative assessment.
Black and Wiliam (1998): Formative Assessment as a Driver of Learning
The foundational work of Paul Black and Dylan Wiliam established formative assessment as one of the most powerful influences on student engagement and learning across disciplines. Their review of over 250 studies, Assessment and Classroom Learning, demonstrated that formative assessment produces consistent and substantial gains in achievement, particularly for lower-performing students. While their work was not limited to writing, its implications for writing instruction are profound. Writing is one of the most cognitively demanding tasks students engage in, making it especially sensitive to the presence or absence of effective feedback.
Black and Wiliam argue that formative assessment works because it reduces the gap between where students are and where they need to be. This happens when teachers continuously gather evidence of students’ understanding and use it to adjust instruction. In writing, this means paying attention not just to final drafts, but to how students are thinking as they plan, draft, and revise. It also means recognizing that students often do not know what quality writing looks like unless it is explicitly taught and reinforced. Without clear criteria and continuous feedback, students cannot close the gap between their current performance and expected outcomes.
For educators, the key takeaway is that formative assessment must be embedded in the instructional process, not added at the end. Teachers need systems for noticing what students are doing, interpreting that information, and responding in real time. This requires a shift in mindset. Instead of viewing assessments as separate activities, they become part of teaching itself. In writing instruction, this often means modeling thinking, guiding practice, and providing feedback during writing. When this happens consistently, students begin to internalize the criteria for quality writing and take greater control over their own progress.
Sadler (1989): Why Students Must Understand Quality to Improve
D. Royce Sadler provides a critical theoretical foundation for understanding formative assessment. His work, Formative Assessment and the Design of Instructional Systems, emphasizes that students cannot improve unless they understand three things: what quality looks like, how their current work compares to that standard, and what actions will close the gap. This framework may seem intuitive, but it has significant implications for writing instruction. Many students receive feedback without fully understanding what the feedback means or how to apply it. As a result, the feedback has little impact on their future writing.
Sadler’s research highlights the importance of making expectations visible and concrete. In writing, this often involves using models, exemplars, and clear criteria. Students need to see what effective writing looks like before they can produce it themselves. They also need opportunities to compare their work to those models and identify differences. This process helps students develop a more accurate understanding of quality. Without it, feedback remains abstract and difficult to apply. The result is a cycle where students receive comments but do not know how to improve.
For classroom practice, Sadler’s work reinforces the importance of explicit instruction and guided practice. Teachers must go beyond telling students what to do and instead show them how to do it. Feedback should be connected to clear criteria and accompanied by opportunities for revision. Students should be taught how to interpret feedback and use it to improve their writing. Over time, this builds their ability to self-assess and regulate their own work. This shift, from teacher-directed feedback to student-driven improvement, is one of the central goals of effective writing instruction.
Hattie and Timperley (2007): The Power and Precision of Feedback
John Hattie and Helen Timperley expanded the understanding of feedback by identifying what makes it effective in their study, The Power of Feedback. Their model emphasizes that feedback must answer three questions: Where am I going? How am I going? Where to next? In writing, these questions translate directly into instructional decisions. Students need to know the purpose of their writing, how well they are meeting that purpose, and what specific steps will improve their work. When feedback addresses all three questions, it becomes much more powerful.
Their research also distinguishes between different levels of feedback. Task-level feedback focuses on the writing itself, process-level feedback focuses on strategies, and self-regulation feedback supports students’ ability to manage their own learning. The most effective feedback often operates at the process and self-regulation levels. In writing, this means helping students understand how to plan, organize, and revise, rather than simply correcting errors. It also means encouraging students to monitor their own progress and make decisions about their writing.
For teachers, this study reinforces the need for precision in feedback. General comments such as “good job” or “needs more detail” do not provide enough information for students to act on. Feedback must be specific, targeted, and connected to a clear instructional goal. It should guide students toward the next step in their writing process. When feedback is aligned with strategy instruction and self-regulation, it not only improves the current piece of writing but also builds skills that transfer to future tasks. This is where formative assessment becomes a long-term investment in student learning.
Automated Feedback and Writing (Fleckenstein et al., 2023): What Technology Can and Cannot Do
Recent research by Fleckenstein and colleagues, Automated Feedback and Writing: A Multi-Level Meta-Analysis of Effects on Students’ Performance, examined the impact of automated writing evaluation systems on student writing. Their meta-analysis found that these systems can produce moderate improvements in writing, particularly when they provide immediate feedback and support revision. This is an important development, especially as schools increasingly integrate technology into instruction. Automated systems can help teachers manage workload and provide students with faster responses to their writing.
However, the study also highlights clear limitations. Automated feedback tends to focus on surface-level features such as grammar, spelling, and basic organization. It struggles to address deeper aspects of writing, such as idea development, argument quality, and coherence. These are the very areas where teacher feedback has the greatest impact. As a result, automated systems are most effective when used as a supplement rather than a replacement for teacher instruction. They can support early drafts and mechanical accuracy, but they cannot replace the instructional expertise required for the development of high-quality writing.
For educators, the implication is that technology should be used strategically and accompanied by regular checks for understanding. It can increase efficiency and provide additional practice opportunities, but it should not drive instruction. Teachers remain central to the formative assessment process because they can interpret student thinking and provide nuanced feedback. The most effective classrooms will combine the strengths of technology with the strengths of human instruction. This balanced approach allows teachers to focus their time and energy on the aspects of writing that matter most.
Bringing It All Together: Why These Findings Point Directly to SRSD
When you step back and look across all of these studies, a clear pattern emerges. Formative assessment improves writing when it is specific, timely, and connected to student thinking. It works best when students understand what quality writing looks like, receive actionable feedback, and have opportunities to revise their work. It becomes even more powerful when students are taught to regulate their own writing through goal setting, monitoring, and reflection. These are not isolated findings. They appear consistently across decades of research and across different instructional contexts.
This is exactly where Self-Regulated Strategy Development (SRSD) stands out. SRSD does not treat formative assessment as a separate component of instruction. It integrates it into every stage of the writing process. Teachers model strategies through think-alouds, which reveal the thinking behind effective writing. Students practice those strategies with guided support, receiving feedback that is directly tied to what they are trying to do. Over time, students take increasing responsibility for their own writing, using self-regulation strategies to guide their work. This creates a continuous cycle of assessment and improvement.
What makes SRSD particularly powerful is that it aligns with major findings from the research. It provides clear criteria for quality writing, connects feedback to specific strategies, and builds students’ ability to assess and improve their own work. It also ensures that feedback leads to action, which is the defining feature of effective formative assessment. In this sense, SRSD is not just compatible with the research on formative assessment. It represents a direct application of that research in real classrooms. When implemented well, it turns formative assessment into a driving force for writing development, rather than an afterthought.
References and Key Studies
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74. https://doi.org/10.1080/0969595980050102
Fleckenstein, J., Liebenow, L. W., & Meyer, J. (2023). Automated feedback and writing: A multi-level meta-analysis of effects on students’ performance. Frontiers in Artificial Intelligence, 6, Article 1162454. https://doi.org/10.3389/frai.2023.1162454
Graham, S., Hebert, M., & Harris, K. R. (2015). Formative assessment and writing: A meta-analysis. The Elementary School Journal, 115(4), 523–547. https://doi.org/10.1086/681947
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144. https://doi.org/10.1007/BF00117714

About the Author
Randy Barth is CEO of SRSD Online, which provides educators with evidence-based writing instruction grounded in the Science of Writing. Randy is dedicated to preserving the legacies of SRSD creator Karen Harris and renowned writing researcher Steve Graham and to making SRSD a standard practice in today’s classrooms. For more information on SRSD, schedule a risk-free consultation with Randy using this link: Schedule a time to talk SRSD.