Rethinking Formative Assessment

We've seen increased significance placed on formative assessment in the legal academy. Standard 314 of the ABA Standards requires that law schools use both formative and summative assessment methods in their curriculum. Its rationale for doing so is "to measure and improve student learning and provide meaningful feedback to students." The ABA defines formative assessment methods as "measurements at different points during a particular course or at different points over the span of a student's education that provide meaningful feedback to improve student learning."

Those of us in the legal research instruction business are no strangers to formative assessment. We are leaders in this area of the law school curriculum; rarely does a class go by in which students do not practice their skills. Lately, though, I've been wondering whether I'm going about formative assessment in the way that will best provide meaningful feedback to students. In the mandatory workshops we put on for our first-year students, we focus intensely on Rombauer's method--research as a process, not a mere gathering skill. More often than not, however, our ungraded formative assessments, while disguised as open problems because we start them off with a client-based fact pattern, are really designed to lead students from source to source--effectively a treasure hunt that takes them through the process. Now, I'm not entirely opposed to treasure hunts as a tool for teaching the mechanical side of research. But if we purport to be teaching our students process and analysis, we need to let them engage in that process with ungraded assessments in which we are not directly telling them which sources to use and in which order. Otherwise, their first opportunities to truly engage in the research process openly are on their graded open memos--which, at least in my students' case, are graded by their legal writing professors, not the legal information experts who taught them the four-step process in the first place. As a result, the feedback is spread across any number of topics--technical writing, style, and more, in addition to which sources they found. Students are not getting meaningful feedback centered primarily on the research process they used.

Students need practice conducting open research problems without the pressure of a looming grade. Otherwise, they fixate on finding the "right" answer or sources rather than engaging in and absorbing the process. When they aren't worried about producing a graded written product, students can take in the research process fully because their cognitive load is lessened. This also helps students view research as more than a rote, mechanical task of gathering authorities, because they can isolate the skills necessary for successful research from those needed for successful writing.

In my case, this means creating assignments that students may not be able to complete in the allotted 50-minute class period--or at least that we don't have time to review in that short window. This may require buy-in from the legal writing professors to allow us to give students homework, perhaps incentivized with participation points, because if an assignment isn't sanctioned by the professors responsible for their grades, students may not take it seriously. We must be willing to have conversations with our legal writing colleagues about creative ways to incorporate ungraded, formative assessments into the curriculum (in those situations where we are not their "grading" professor). We need to be upfront with them about what exactly we are trying to teach our students--process and analysis--and why this particular type of assignment is a necessity.

This also requires that we be willing to review assessments from our entire first-year class, which may be a challenge depending on the number of instructional librarians you have and how much other for-credit and non-credit teaching they are doing. One way to get around collectively grading ~140 1L assessments might be to create a video walking through the research process you'd use for a given problem. But this isn't a perfect solution, as there are often multiple ways to move through a research problem successfully. I always lean toward giving individualized feedback based on students' attempts--even better if that feedback comes in a conference so I know students are absorbing it. Still, the most important point is that students 1) get a chance to practice open problems and 2) receive some kind of meaningful feedback. After all, meaningful feedback is our best way to ensure that our students will be able to conduct research successfully in practice. It's also our best way to demonstrate to our students that research is a process that requires critical thinking.
