Wednesday, March 23, 2011

You Say It Best When You Say Nothing At All (about Hypothesis Testing)

I really like sharing things that work really well.  I don't know if it's a good idea on its own, but this worked great considering we've done a lot of remediation (for students who needed it) and a lot of projects that delve deeper into statistics.  They work with the basics of statistics all the time, so tying it together into making statistical inferences is fairly easy when they have a good foundation.  I'd also like to think it has something to do with the way we've interacted with hypothesis tests in class.

Thumbtacks - Introduction
It begins with this form (Inference for Proportions) and handing out one thumbtack.  In their own heads, students decide what proportion of tosses they think will land point-up and what it would take to convince them they were wrong.  Then they toss the tack so they can compare their observations to the model they've developed.  Sounds a lot like your entire hypothesis testing/inferential statistics unit.
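That compare-observations-to-a-model step can be sketched in a few lines of Python. This is a hypothetical illustration, not anything from the class: the believed proportion and the number of tosses are made-up values.

```python
import random

def simulate_tosses(p_up, n, seed=0):
    """Simulate n thumbtack tosses, each landing point-up with probability p_up."""
    rng = random.Random(seed)
    return sum(rng.random() < p_up for _ in range(n))

believed_p = 0.6   # a student's stated belief (assumed value)
n_tosses = 50      # assumed number of tosses
point_up = simulate_tosses(believed_p, n_tosses)
p_hat = point_up / n_tosses
print(f"Observed proportion point-up: {p_hat:.2f} vs. believed {believed_p}")
```

The gap between `p_hat` and `believed_p` is exactly what the students are eyeballing before any formal test exists.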

The NCAA Basketball Tournament - A Basic Example
Kids then made predictions for the NCAA tournament and we tested just how good they were at doing so by comparing their proportion of correct first round picks to randomly guessing (p = 0.5).  A big question that came up was "Are we just doing this for the first round?" and in true teacher fashion I said, "Yes.  Now how come we're only doing it for the first round?"  Cue a killer discussion about large enough sample sizes and the Success/Failure condition.
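The sample-size discussion boils down to a quick check. Here's a minimal sketch (the function name is mine, and the 32 first-round games assume a 64-team bracket):

```python
def success_failure_ok(p0, n):
    """Success/Failure condition: expect at least 10 successes and
    10 failures under the null proportion p0."""
    return n * p0 >= 10 and n * (1 - p0) >= 10

# First round: 32 games, null p = 0.5 -> 16 expected successes and failures
print(success_failure_ok(0.5, 32))  # True
# A later round alone (say, the Final Four's 4 games) has too few
print(success_failure_ok(0.5, 4))   # False
```

Which is exactly why only the first round gives a large enough sample to justify the normal approximation.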

Back to Thumbtacks - Put it into practice
Fire up the laptops and open up what the rest of your classmates thought (What They Thought).  Immediately they began to think "Why did this kid think they were incorrect when they got a lower proportion than what they thought they needed to be incorrect?"...a not-so-formalized way to think about a standard of proof and a p-value low enough to reject the null hypothesis.  This was one of those points in class where I said nothing and let their brains piece together what they were looking at.  Then I clarified what we were looking at and asked them to pick the case they thought was theirs and test the original hypothesis.

On the board, write your p-value and whether or not you rejected your original hypothesis.  As a class we have a look at everyone's p-values and decisions, then decide who has correctly rejected/not rejected.  They all argue about what p-value is considered "low enough" that you have to reject.  One of those moments where, again, I say nothing and they develop an understanding of alpha-levels.  Not so formal...yet.
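Once an alpha-level is agreed on, the test the students are running is mechanical. A sketch of the two-sided one-proportion z-test using only the standard library; the sample numbers here (35 point-up out of 50 tosses against a believed p of 0.5) are invented for illustration:

```python
import math

def one_prop_z_test(p_hat, p0, n):
    """Two-sided one-proportion z-test via the normal approximation.
    Returns the z statistic and p-value."""
    se = math.sqrt(p0 * (1 - p0) / n)
    z = (p_hat - p0) / se
    # Normal CDF from the error function, so no scipy is needed
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = one_prop_z_test(p_hat=0.70, p0=0.5, n=50)  # hypothetical class numbers
alpha = 0.05
print(f"z = {z:.2f}, p-value = {p:.4f}, reject: {p < alpha}")
```

With these made-up numbers the p-value falls well under 0.05, so this student would reject the null, which is the argument the class was having informally at the board.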

Pick another one of those contexts from the Inference for Proportions form and investigate it.  I think I'm going to add some more situations/contexts.  I'm also not sure that they ever need to fill out that form more than once...

What's Left to Do?
Sit back, relax, and let the 5's on the AP exam roll in.  Dress it up.  Attach all the formal AP Exam terms/vocab/stuff to what they've already understood.  Then....
1.  So what really is the true proportion? (Confidence Intervals)
2.  Is what we got really that different? (two proportions)
3.  Repeat procedure for sample means instead of proportions
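For step 1, the confidence-interval follow-up is a single formula on top of what they already have. A sketch with made-up numbers (35 point-up out of 50 tosses, 95% confidence):

```python
import math

def prop_confidence_interval(p_hat, n, z_star=1.96):
    """Confidence interval for a proportion (normal approximation);
    z_star = 1.96 gives roughly 95% confidence."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z_star * se, p_hat + z_star * se

low, high = prop_confidence_interval(p_hat=0.7, n=50)
print(f"95% CI for the true proportion: ({low:.3f}, {high:.3f})")
```

The interval answers the "so what really is the true proportion" question with a range instead of a single yes/no decision.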

Feel free to go to our class wiki for any supplemental exercises/materials.
