
Sunday, March 13, 2016

I learned myself an answer.

This past week, I learned myself something new. And yes, I said "learned myself." I use this term for those moments when I learn something new and feel like I taught it to myself.

Anyway, this post is going to have some serious teacher-speak--I apologize in advance.

Thursday morning, I was talking to my student teacher and I had this realization about how to make our "Data Team" process a whole lot easier.

In our state, student data is a required portion of teacher evaluations beginning next year. We have spent this year as part of the Missouri Professional Learning Community (MO PLC) project learning how to use data systematically gathered in class to guide instruction and document it the way the state wants us to.

I struggle with this because we are being asked to use a really sophisticated spreadsheet called the "MAMA" to analyze this data. This spreadsheet automatically calculates SMART goal percentages, provides lists of students who need interventions, and compares pre- and post-test scores. It is actually pretty amazing.


The hard part is that the premise of this form is that the assessment used accurately measures understanding of a single, specific, concrete learning target. It presupposes an assessment of 8-15 questions with cut scores set to determine proficiency (see, teacher-speak! Sorry).

I have no idea how to do that in English. There are simply too many variables: reading level, text complexity, vocabulary, test length (fatigue), and connected learning targets, to name a few.

Recently, we gave a figurative language assessment we built last year. It has 28 questions (so it is too long), several reading passages (so it's too complicated), and asks three types of questions (so it is testing more than one thing). Based on the criteria of the MAMA form, it is not a good fit.

But it is a good test, a great test even. It can tell us who struggles with simple identification of the four major types of figurative language we focus on: metaphor, simile, personification, hyperbole. It can tell us who has trouble understanding the meaning of figurative language as used in a specific context. It can tell us who finds author's purpose in using figurative language perplexing.

The question is: How can we use it and the MAMA form?

I learned myself an answer.

As I was talking to my student teacher Thursday morning (we've come full circle now), it occurred to me that there is a way to make the two work together.

Basically, all we have to do is assign each of the 28 questions a point value based on the type of question: 1 pt for simple identification, 2 pts for understanding, 3 pts for author's purpose. Then, we can use Flubaroo (an amazing auto-grading extension for Google Forms).

When we set up Flubaroo, we simply enter the varied point values for each question, and Flubaroo will produce a percentage that reflects the difficulty level of the questions. We can then convert that percentage to a score out of 10, enter it in the MAMA form, and set cut scores out of 10.
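To show the arithmetic behind that conversion, here is a minimal sketch in Python. (Flubaroo handles the weighting automatically inside Google Sheets; the answer key, point values, and helper function names below are made-up illustrations, not anything from Flubaroo or the MAMA form.)

```python
def weighted_percentage(answers, key, points):
    """Percent of weighted points earned: each correct answer
    earns that question's point value (1, 2, or 3 pts)."""
    earned = sum(p for a, k, p in zip(answers, key, points) if a == k)
    return 100 * earned / sum(points)

def to_mama_score(percentage):
    """Convert a percentage to a score out of 10 for the MAMA form."""
    return round(percentage / 10, 1)

# Hypothetical 3-question example: identification (1 pt),
# understanding (2 pts), author's purpose (3 pts).
key = ["B", "C", "A"]
student = ["B", "D", "A"]   # missed the 2-point question
points = [1, 2, 3]

pct = weighted_percentage(student, key, points)  # 4 of 6 points -> 66.7%
score = to_mama_score(pct)                       # 6.7 out of 10
```

The point of the weighting is that missing an author's-purpose question costs more than missing a simple identification question, so the final percentage reflects depth of understanding, not just raw count correct.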

Ok, maybe it isn't as simple as I thought. Reading this back, it seems kind of complicated, actually.

But it will work!

The first time we do it, it is going to suck. It will be time-consuming and contentious. It will force us to think critically about the questions we are asking, the answer choices we are providing, and the learning target we are measuring. This will probably lead to better teaching as we develop a common mental model of the learning process and outcomes for the targets we are assessing.

Once it's done, we will have a strong assessment that measures something meaningful to us in our curriculum AND can fit the parameters of our data requirements.

I hate data and numbers. I am not math-minded. I don't want to say I am not good at math. That is inaccurate. I just never liked math enough to work hard at learning it, and I don't have exceptional natural aptitude for it either. So, using quantitative data for instructional purposes makes me buggy, crazy, twitchy. My personal learning journey with the data team process has felt like a bumpy ride down an unpaved country road in a rainstorm. It has not been easy, but I keep trying because I see the value and respect the requirements.

Figuring out a system for generating the numbers we need that will also tell us something we think is important about student learning made me really happy. I have been in a good mood for like three days.

That makes it a good thing.

1 comment:

  1. I love it that you are looking for a "good thing." In every case of your blog, you show the world what a GOOD teacher is all about.


What do you think? Does this good thing remind you of a story of your own? Have a question or comment? Please leave a comment!