Reflections on maths, learning and the Maths Learning Centre, by David K Butler

Tag: research

  • Why mathematical induction is hard

    Students find mathematical induction hard, and there is a complex interplay of reasons why. Some years ago I wrote an answer on the Maths Education Stack Exchange describing these and it’s still something I come back to regularly. I’ve decided to post it here too.

    You can read the rest of this blog post in PDF form here. 

  • Four levels of listening

    Listening is one of the most important aspects – no, scratch that – the most important aspect of my work in the Maths Learning Centre.

    It is not obvious to people starting out tutoring in the MLC that this should be the case. To a beginning tutor, it seems that it’s their job to explain things to the students, and to show them how to do stuff. But even if the actual goal was to explain, you can be much surer which explanation to give the student if you first listen to their current understanding. More importantly, you can never improve as a teacher unless at some point you listen to the students to see how well your explanation has gone.

    But how do you go about doing the business of listening? This blog post is about my interpretation of a framework that describes different levels of listening for the purposes of teaching, which I read about in two papers:

    1. Davis, B (1997) Listening for Differences: An Evolving Conception of Mathematics Teaching, Journal for Research in Mathematics Education, 28, 355-376
    2. Yackel, E, Stephan, M, Rasmussen C and Underwood, D (2003) Didactising: Continuing the work of Leen Streefland, Educational Studies in Mathematics, 54, 101-126

    I spoke at a conference about this framework some years ago, and I have been meaning to write about it ever since. I am finally actually writing about it now (and you are reading it). My thinking has evolved a little since then, so you get the updated and extended version.

    The papers

    Davis 1997

    In the first paper, Davis tells us about how he and schoolteacher Wendy reflected on the types of listening Wendy did in her classroom, and how they were related to her beliefs about what mathematics is and what the teacher’s role is in helping students learn it. It is a truly fascinating and powerful paper and I recommend everyone read it.

    Davis notes from previous research that “the quality of student articulations seemed to be as closely related to teachers’ modes of attending as to their teaching styles”, which is a very deep observation. Before, I said that at the very least a teacher needs to listen in order to figure out what to do next, but this says even more that the way you listen may change the very things the students say. Davis goes on to give three vignettes from Wendy’s classroom to display three types of listening.

    1. Evaluative listening

    When a teacher is listening evaluatively, their reason for listening is to evaluate the correctness of what the student is saying. Ultimately, they are “listening for something in particular, rather than listening to the speaker”. The vignette describes a whole-class discussion where student responses were dismissed until the exact right one was finally accepted. Even right responses that were perceived to be in the wrong form were dismissed. This reminded me of so many times when I had been a frustrated student in such class discussions (and several times when I had been the teacher leading one).

    I had never quite put my finger on why this felt frustrating until reading this quote from the paper: “No one is attending to the answer in a way that will make a difference to the course of subsequent events”. The teacher in such discussions is waiting for the right response in order to continue on their pre-planned course. As Davis says of Wendy’s vignette, it was “a teaching sequence that seemed impervious to student input”. It sounds harsh, but Davis was more forgiving than that. He noted that Wendy was indeed seeking information from the students. She could see that the students were or were not able to give the responses she hoped for, and she could see that her lesson was more or less successful based on how quickly students could use her explanations to produce the right answers. The listening was doing exactly what she wanted it to do: evaluating.

    2. Interpretive Listening

    When a teacher is listening interpretively, their reason for listening is to interpret what ideas are actually happening inside students’ minds. They are still usually seeking to bring students to the understanding they perceive as the correct one, but now they do it through figuring out how to talk about ideas in shared ways that move students forwards in their thinking.

    Davis notes that teaching sequences with an interpretive listening stance need to have materials that “serve as a commonplace for learners to talk about ideas, enabling the process of re-presentation and revision”. For example, in the vignette, Wendy used two-coloured chips to help her students talk about adding and subtracting negative numbers. In my own teaching in the MLC, drawings or play dough often play this role.

    3. Hermeneutic Listening

    When a teacher is listening hermeneutically, they are listening not only to interpret what their students are thinking, but also to understand how their own thinking relates to that, and how the group as a whole understands. This is my description of it, anyway. Davis has several long paragraphs discussing philosophical and theoretical standpoints, which are a bit heavy (though his style makes it much lighter than I’m sure it could have been). The two main takeaways for me are that understanding isn’t only something that lives in one person, but lives in the shared communication of many, and that teachers listen not just to help students grow their understanding but also to change the teacher’s own understanding. A relevant quote: “Instead of seeking to prod learners toward particular predetermined understandings, Wendy seems to have engaged, along with her students, in the process of revising her own knowledge of mathematics.”

    Davis notes that this type of listening seems to go hand in hand with a teacher’s conception of what mathematics itself is. You have to be prepared to believe that mathematics concepts have multiple valid ways to understand and describe them, and that mathematics is at least in part a construction of a community, all of whom (including novices) have a part to play in the construction. Otherwise you won’t be ready to listen in this way.

    A final comment on the terminology… The word “hermeneutic”, no matter how often I look up definitions, still remains more-or-less meaningless to me. It seems to refer to a type of inquiry that in itself seems difficult to describe and has different meanings in different disciplines, so I can’t borrow meaning from whatever it means elsewhere to make it meaningful in this context, like Davis seems to have done for himself. This makes it hard for me to hold onto the framework.

    Yackel et al 2003

    In the second paper, the authors are thinking about how teachers structure and restructure their instruction, a process they call “didactising” after Leen Streefland. The reason the paper is here in a post about listening is that the way that you get information about what needs reworking in your instruction is to listen to the students.

    I have used the word instruction, as opposed to teaching, because it’s the word the authors used. And they really do seem to be thinking about instruction, in the sense of a sequence of explanations and activities you do with students. The main theme of the paper is about how listening to students helps design these instructional sequences, which I do not question the importance of. It’s just that the overall feeling I get is that students aren’t quite real people but sources of data, and that making good instructional sequences is a good in and of itself, as opposed to being something for the students. I’ve been a bit too dramatic there, and it’s not really as bad as I’ve made it sound, but still my feeling is that it dances a little too far from viewing the students as people.

    Anyway, the most useful thing in the paper for me was a new terminology for what Davis called hermeneutic listening; these authors call it “generative listening”.

    3. Generative listening

    These authors decide to use the word generative rather than hermeneutic because it’s easier to process for their purposes. They say, “Listening in this way can generate or transform one’s own mathematical understandings and it can generate a new space of instructional activities.”  While Davis was more focused on the way that hermeneutic listening changes the listener and the community’s understandings, these authors are more focused on the way generative listening generates new instructional activities. I’m happy to have both in my life. I think it’s important to recognise that teachers still have to decide what to do each day and that listening can help them make those decisions!

    I’ll finish off with three questions the authors list to help people focus on generative listening, which really do bring it back to the students as people at the last moment: “How does student thinking suggest alternative ways of thinking about particular mathematical ideas? How does student thinking suggest what mathematical ideas are experientially real for them? How can the instructional sequence be redesigned to capitalize on the fresh points of view that students offer?”

    Some thoughts

    From these two papers, we have a framework with three levels of listening: evaluative, interpretive, and generative. The authors of those papers focused a lot on the mindset of the teacher, and how this makes a difference to how you attend to what the students are doing and saying. Davis talks a bit about the kinds of questions people ask when they have those mindsets. But it occurred to me that even if you have a particular mindset, if you ask the wrong questions, you still won’t get the information you need. So yes, the kind of question you ask is evidence for the kind of listening you hope to do, but the kind of question you ask can also dictate the kind of listening you have to do, because you will only get certain kinds of responses.

    For example, if you ask a yes-or-no question (eg Is this a subspace or not?) or a direct question about factual information (eg What is the definition of subspace?), you are unlikely to get much information about what a student is thinking, even if that’s what you hoped for. You will have no choice but to simply evaluate their response.

    And there is one question that is famous for giving you no choice for what to listen to: “Does that make sense?” If you ask this of a whole class, students will usually give no response at all. If you ask it of a single student, they’ll say “yeah ok”. So basically it tells you nothing at all: to ask this question is to give no opportunity for you to listen. So actually there is another lower level of listening: not listening.

    And if we’re talking about not listening, then there is something worse than asking “Does that make sense?”, which at least shows you think things ought to make sense, and theoretically has a chance of a student saying “no” and so giving you some information to work with. What’s worse is asking no questions of any kind. It is amazing how often a maths teacher, even one-on-one, will speak continuously for half an hour with no opportunity for the student to say anything. I always feel such a sense of shock and shame when I realise I’ve done this and that I have absolutely no idea how the student is going.

    I think sometimes the impulse to talk continuously comes from a belief that it’s your main job to provide explanations, and sometimes it comes from believing in the power of an explanation you’ve worked hard to perfect. However, even if maths teaching were transmission, that process can’t possibly be perfect, and so you really do need to check in every so often! As Davis says, “Implicit in the act of questioning is a certain lack of faith in the transmission process.” I think everyone needs to have that certain lack of faith.

    So it’s good not to have total faith in the power of a single explanation. But what should you have faith in? I think you need to have faith that students actually are thinking. Implicit in the interpretive listening stance is the assumption that there is something to listen to. You have to believe that students have ideas if you seek to interpret them. If you don’t believe they do have ideas already, then of course you don’t seek to listen to them. For me, this is a huge part of working in the MLC that changes the whole approach. The next level above this is to believe that students have ideas that can change your own, which is where generative listening lives.

    My version of the framework

    So, finally, this is my interpretation of the listening framework of Davis (with the third level renamed by Yackel et al). There are a lot more aspects to this, such as the nature of the teacher’s role, but this version helps me think about what I am doing with students on the fly. You can download a handout PDF version of the framework if you want.

    Level 0: Not listening

    Goal:

    • Tell what the teacher thinks is important
    • Give clear explanations

    Types of questions:

    • Not asking questions
    • “Does that make sense?”

    Beliefs:

    • Faith in the power of the teacher’s explanation
    • Students are waiting for your ideas

    Level 1: Evaluative listening

    Goal:

    • Judge student responses against a standard
    • Get a specific response so you can continue the plan

    Types of questions:

    • Yes/no questions
    • Direct questions about raw information
    • Results of calculations

    Beliefs:

    • The teacher’s explanation is not perfect
    • Students are waiting for your ideas

    Level 2: Interpretive listening

    Goal: 

    • Decipher the sense that students are making
    • Understand student thinking
    • Create a shared language to describe thinking

    Types of questions:

    • Open-ended questions about thinking or process

    Beliefs:

    • Students are reasoning
    • Student ideas are worth listening to

    Level 3: Generative listening

    Goal:

    • Jointly explore ideas
    • Discover new ways to think about or to learn concepts

    Types of questions:

    • Open-ended questions about thinking or process
    • What-if questions and I-wonder questions

    Beliefs:

    • Students are reasoning
    • Student ideas are worth listening to
    • Student ideas can change yours

    Final thoughts

    I have deliberately numbered the types of listening and called them levels, because I wanted to explicitly say to myself that some are higher than others. However, I don’t want to say that you should never seek to provide clear explanations and never listen evaluatively. Of course you should explain things when you need to, and of course there are times when you need to know students can do things in a standard way. And I also don’t want to say you should spend all your time listening generatively. That would be exhausting for everyone. It’s just that the types of listening definitely do progress in how student ideas shape what happens, and it is definitely a good thing for students to feel that what they think and do makes a difference to the outcome.

    What I want is to always be open to the opportunity of finding out how students think and possibly having it change the way I think. I also know that while beliefs definitely guide actions, it also works the other way too. If I spend all my time talking, I may come to believe implicitly that the students have nothing to say. If I spend all my time evaluating against a standard, I may come to believe implicitly that the students have nothing wonderful to say. I need to actively work in opportunities to listen at the higher levels, so that I never go too long without them.

    In daily work, where I spend most of my time one-on-one with students, this is even more important. Because when you’re right there next to the student, what a waste it would be to never hear the wonderful things they have to share, or to never make something wonderful together.

  • The importance of names

    Three years ago, my university’s Student Engagement Community of Practice collectively wrote a series of blog posts about various aspects of student engagement. I thought I would reproduce my blog post here, since it is still as relevant today as then.

    There is a lot that staff can do to engage students in the university community and in their learning, and a lot of these things have to do with the staff being engaged with the students. One way that any staff member can show their own level of engagement with the students is to learn the students’ names.

    Names are important. Your name is a part of your identity, and not just because it is what you call yourself. Your name may tie you to the culture or the land of your ancestors, or it may speak of your special connection to those you love. You may prefer to be called by a different name than your official one because your chosen name is more meaningful to you. What all of these have in common is that your name is an important part of your identity.

    For myself, my name is David, and I don’t like to be called Dave. I grew up in a community with several Davids and other people were called Dave, so being David kept my identity separate to theirs. Yet many people give me no choice and call me Dave without asking for my permission, despite me introducing myself as David. I find it intensely rude that someone would choose to call me by a different name than the one I introduce myself. On top of this, I am a twin, which means as a child I was forever being called by the wrong name entirely. We are not identical twins, and yet this still happened, because we were introduced as PaulandDavid, without an attempt to give us a separate identity. The fact that I was called Paul, or “one of the twins”, meant that I had no identity of my own separate to my brother. Being called David means that I have an identity of my own and this is important to me.

    For many students, these and worse are their daily lives. Imagine a student whose name no-one at university knows. They have no identity at university, can feel very alone and can quickly disengage. Yet according to “The First Year Experience in Australian Universities” by Baik, Naylor and Arkoudis, only 60% of first year students are confident that a member of staff knows their name.

    Not having your name known at all is one thing, but being called by the wrong name can be worse. An international student has to deal constantly with being different to other students, and in the community at large has to deal with a lot of everyday racism. To have your name declared “difficult to pronounce”, or to have it declared as not possible to remember, is just another one of these everyday racist events. The person doing so may not mean to be racist, but it adds up to the students’ feeling of not belonging, to their feeling that they themselves are not worth remembering. Similar to me and my twin brother (only worse), they may have the feeling that others believe all international students are the same, so why remember them separately. In “Teachers, please learn our names!: racial microaggressions and the K-12 classroom” by Rita Kohli and Daniel G Solórzano, there are many examples of the hurt that such treatment of student names can have.

    So what can we do to learn our students’ names? Members of the Community of Practice suggested several strategies.

    One idea is to spend time talking to them and ask them what they would like to be called. You can’t learn their names unless you find out what they are! Be visible in your effort to pronounce it correctly, be adamant that you want to call them by the name they ask to be called. If you get multiple chances to talk to them one-on-one, ask their name again if you can’t remember and try to use it as you talk to them.

    Another idea is to print out photos of your students and to practice remembering their names. If you don’t have access to their photos, then it should not be hard to find someone nearby who can. (Though of course it would be excellent if there were a simple system whereby anyone teaching a class — including sessional staff — could get photos of their students!) Even if you can’t get their photos, simply working your way down the roll and remembering how to pronounce those names, or what the students’ actual preferred names are, is good exercise. The students are likely to appreciate the effort you put in here, even if they can’t know how much time you did put in!

    You may have your own ideas on how we can make sure we know students’ names. I’d encourage you to share them in the comments, along with any stories of how it made a difference to student engagement.

    I would like to work in a university where 100% of the students are confident that someone knows their name. We have hundreds (possibly thousands) of staff in contact with students on a regular basis. If each of us only learns a tutorial-worth of names, then we can surely meet that goal easily!

  • Education research reading: effective feedback

    Months ago I warned that there would be more posts about my research reading, but I didn’t follow through. Finally here is a “Research Reading” post. This one is about how feedback helps students learn. I’ll discuss several papers which list principles/challenges for providing effective feedback.

    Gibbs, G and Simpson, C (2004) The conditions under which assessment supports student learning, Learning and teaching in higher education, 1, 3-31

    In this paper, the authors put together 10 conditions under which assessment helps students to learn, gleaned from the research literature at the time and their own experience with actual students. The point is that assessment does drive learning, in the sense that many students won’t engage with a course unless there is some sort of assessment. However, assessment doesn’t always drive the sort of learning that you want, and sometimes actually prevents people from learning. The nature of the assessments themselves can affect the amount of study, the focus of the study and the quality of the study. Also, and more importantly, the nature of feedback on the assessments makes a huge difference to whether and what students learn. (The conditions below are quoted verbatim from various pages across the paper, with my translations and paraphrases beneath.)

    1. Sufficient assessed tasks are provided for students to capture sufficient study time
      Since students often don’t study unless there are assessed tasks to do, there need to be enough assessed tasks to make them study enough. One big one at the end will usually not be enough, since they’ll only study in the lead-up to it.
    2. These tasks are engaged with by students, orienting them to allocate appropriate amounts of time and effort to the most important aspects of the course.
      Students will glean what is important to learn from your assignments, so make sure the assignments allow them to engage with the most important things in the course.
    3. Tackling the assessed task engages students in productive learning activity of an appropriate kind.
      Many assessed tasks encourage students to do activities that either aren’t productive (like endless searching online) or aren’t appropriate.
    4. Sufficient feedback is provided, both often enough and in enough detail.
      Students need feedback often so they can use it to learn and improve. A numerical grade alone, or a comment like “check solutions”, is not enough detail!
    5. The feedback focuses on students’ performance, on their learning and on actions under the students’ control, rather than on the students themselves and on their characteristics.
      Too often we tell students about whether they are smart or lazy, especially when we do it face to face.
    6. The feedback is timely in that it is received by students while it still matters to them and in time for them to pay attention to further learning or receive further assistance.
      Feedback on Topic 1 after you’ve already moved on to Topic 2 is effectively useless. Not receiving feedback on Assignment 1 before they do Assignment 2 defeats the whole point of feedback!
    7. Feedback is appropriate to the purpose of the assignment and to its criteria for success.
      Too often we give feedback on things not actually listed in the assignment criteria, or which will not actually improve student marks in future.
    8. Feedback is appropriate, in relation to students’ understanding of what they are supposed to be doing.
      Students often don’t know what the assignment is for or what your expectations are. To say “give reasons” is meaningless if they thought they did, or if they didn’t realise that was part of the purpose! So sometimes feedback needs to tell them what the purpose actually is.
    9. Feedback is received and attended to.
      How you do this is tricky, but there is evidence to suggest that students will be more likely to read their feedback if you don’t put a grade on it.
    10. Feedback is acted upon by the student.
      The best-case scenario is if you let them fix up their assignment or do a followup task so they can actually use the feedback straight away.

    One thing I particularly like about this paper is its grounding in the experience of the actual student. Feedback is seen in the light of how the student responds to it and whether this response is producing the learning you and they hope for. This is an important perspective to hold on to when you are planning any teaching! I particularly like the idea that your feedback might be completely invalidated by the student’s own beliefs about what the purpose of the task is, and that therefore sometimes what they need is to be given feedback about what the task is actually for.

    Nicol, DJ and Macfarlane-Dick, D (2006) Formative assessment and self-regulated learning: a model and seven principles of good feedback practice, Studies in Higher Education, 31, 199-218

    Just as the title so clearly states, the authors put forward a model of how students use feedback, and then list seven principles of good feedback practice.

    The big idea is that students already have their own internal feedback process. All external information, including our feedback to them, is processed through their existing understanding, their goals, their motivations and their beliefs, and then produces internal feedback on how to act. The key idea is that our feedback to them is processed in exactly the same way as any other external information — it has to be processed and turned into internal feedback before it produces action. When you think about it, this is pretty obvious, but it still sounds revolutionary!

    Their list of seven principles of effective feedback is very similar to Gibbs and Simpson’s paper above, but it is all presented through the lens of students learning to self-regulate. I’ll quote the list verbatim and put my translations and comments in between.

    1. Good feedback practice helps clarify what good performance is (goals, criteria, expected standards);
      Students already have their own thoughts about this, and need a more accurate picture in order to evaluate their own performance. Moreover, the expectations for a task are usually rich and nuanced and so can’t just be expressed in a rubric or handout. The feedback helps to work through those nuances.
    2. Good feedback practice facilitates the development of self-assessment (reflection) in learning;
      We need to explicitly provide ways for students to reflect on their work, so that they practice the art of assessing their own work.
    3. Good feedback practice delivers high quality information to students about their learning;
      Quality is defined as helping students to take action to close the gap between their current standard and the goal.
    4. Good feedback practice encourages teacher and peer dialogue around learning;
      Like under 1, the dialogue helps to sort out the nuances in the expectations. It can be whole-class dialogue if there are logistical issues with talking to every student.
    5. Good feedback practice encourages positive motivational beliefs and self-esteem;
      In particular, it promotes the growth rather than the fixed model of intelligence and ability, because a fixed model has been shown to demotivate people.
    6. Good feedback practice provides opportunities to close the gap between current and desired performance;
      Tying in with number 3, it’s best if there is actually an opportunity to act on the advice given. For example, resubmitting work or using it for subsequent work.
    7. Good feedback practice provides information to teachers that can be used to help shape teaching.
      It’s best if the opportunity of giving feedback allows staff to change their own practices and learn from the students, so feedback is actually asked of the students too!

    I particularly like the continued focus in all of these on students learning how to manage the feedback process for themselves, which was mentioned as a condition for effective feedback by Gibbs and Simpson.

    Jonsson, A. (2013) Facilitating productive use of feedback in higher education, Active Learning in Higher Education, 14, 63-76

    This article is a review of research since 1990 into how students at university use feedback provided by teachers. About 100 studies were reviewed, mostly concerning student response to teachers’ comments on essays. Across all of them, there are many factors that might influence student use of feedback, but the author identifies five major themes common to most of the studies, which he pitches as challenges. Again, I’ll quote them verbatim, but with comments in between.

    1. Feedback needs to be useful.
      Here, “useful” means “able to be used”, funnily enough. If students are going to get the chance to resubmit the task, then they prefer the feedback to be about how to make this task itself better. If the feedback is on the final version of the task, then they prefer it to be about skills they can apply to future assignments.
    2. Students prefer specific, detailed and individualised feedback.
    3. Authoritative feedback is not productive.
      These two challenges are challenges because they work against each other. Students say they want lots of detailed individualised feedback. However, if there is a lot of detailed feedback, the students will often follow the instructions blindly, only making surface changes to the work in order to get incrementally higher grades. Indeed, feedback attached to grades will usually encourage students to use the feedback to guess how the grading was done, rather than to seek to improve qualitatively.
    4. Students may lack strategies for productive use of feedback.
      Students have many non-productive ways to use feedback: they might use it to tell them about their progress but do nothing to improve, they might simply delete the erroneous bit of their assignment, they might be motivated to “work harder” with no strategy for improvement. Basically, they need explicit guidance on how to use feedback to improve.
    5. Students may lack understanding of academic terminology and jargon.
      Students often don’t understand the terminology used to describe assessment criteria, or indeed the subject matter, which renders feedback meaningless. The author suggests providing model answers with descriptions of why they are good/bad, and providing more opportunities to talk with students.

    The authors make the comment that much of the published research seems contradictory, basically meaning that the specific students, the specific teaching situation, and the specific discipline make a big difference to how feedback is used. They also note that almost all of the studies investigated student perception of feedback rather than asking them how they used it or observing them using it.

    Sadler, D. R. (1989) Formative assessment and the design of instructional systems, Instructional Science, 18, 119-144

    I didn’t actually read this paper, but it too has a list of conditions for feedback to be useful, and it was mentioned in all three of the above papers, so it seems incomplete to leave it out. Sadler lists three things that need to happen for students to close the gap between their current performance and the goal or expectation (this is my paraphrase):

    1. The student must know what standards they are aiming for
    2. The student must be able to assess their current performance in relation to the standards
    3. The student must have strategies to modify their performance

    What I find interesting about this list is that the success of feedback rests squarely on the skills of the student, which means the traditional method of telling students where they went wrong only has a chance of affecting the second point, and even then doesn’t help the student learn how to self-assess!

    Summary

    So, we have lists of 3, 5, 7 and 10 conditions under which feedback is useful for learning, with any number of specific recommendations. What do we make of all of it? Well, it seems there are two main ideas. The first is that the feedback needs to be practically usable – it has to refer to things students can achieve, in a way they can act on, and with opportunities to act on it. The second is that students need support to use feedback – they don’t know what assessment is for, or what we are looking for when we assess them, so we need to help them learn that. Also, interpreting feedback and putting it into action are specific skills that need specific training.

  • Research reading can of worms

    Today’s blog post is about my experience attempting to become better read in the area of education research, and I’m sorry to say I’m not going to be glowingly positive about it. As the title suggests, it just seems to get out of hand so quickly.

    Let me explain.

    The MLC’s job is to support all students in learning and using the maths they need or meet in their coursework. An important part of this job is to support the people teaching the coursework itself to do their teaching in ways that will most help students to learn.

    While I have many good ideas, I wouldn’t be doing my job properly or in a scholarly way if I didn’t check out what people already say about teaching. Moreover, there’s nothing like an academic for refusing to take good advice unless it is backed up by peer-reviewed research!

    So I try to read education research literature about the courses and concepts the students I help are learning.

    And there is the first cause of the can of worms: the students I help come from all sorts of different disciplines, and even within the one discipline they are learning all sorts of different concepts. Every day at least one new concept comes up that makes me wonder how it could be taught better. And so I have an ever-increasing list of things to look up in the education literature.

    Then, when I come to look up the education literature online, there are any number of papers which may or may not actually be about the concept I am interested in today. If they are related, then they usually introduce at least a little new terminology or refer to other people’s work, which I then need to look up. Alternatively, they aren’t related, but they are usually related to some other thing I am also interested in. So the list of papers I am interested in reading gets longer again.

    And then the final problem is that education research is not nice and neat but never fully or adequately answers the question, and usually leaves you with more questions than answers. (As I have discussed before: Frayed Research.) So the can of worms is fully open now and they are wriggling all over the place.

    I’m not sure how to deal with this problem. I may need to figure out a specific area of interest and just ignore everything else. (This is more-or-less what I did during my maths PhD.) It’ll be hard though, because I really am interested in a lot of different things, and I feel like I am letting people down by not looking into things as carefully as I would like.

    For now I’ll try to wrestle with the worms as they come. You’ll see a new category of post called “Education Research Reading”, where I talk about a paper or few that I have read and what I think about it. It may not be systematic or thematic, but I hope you’ll come along for the ride.

    (Don’t worry, though. You’ll still see the standard fare of object lessons, metaphors, teaching ideas and musings about the coolness of maths.)


    These comments were left on the original blog post:

    Sophie Karanicolas 5 December 2014:
    Dear David, don’t despair, there is some good stuff out there, they are just hard to find. We will find a good one for you to read! Have a great weekend.

    David Butler 5 December 2014:
    Thanks Sophie — but that’s part of the point. In some areas there is too much good stuff out there! Better than no good stuff I suppose…

    Maureen Coffey 16 December 2014:
    “… education research is not nice and neat but never fully or adequately answers the question …” Indeed, this is because pedagogy or more specifically didactics still lacks the underpinning of a scientific framework anyone can agree on. If nurtritionists wre split about the idea of whether intestines played a role and if chewing was truly necessary for digestion they’d probably be fired from faculty and rather be treated in mental homes. But if educators fail to accept research on how the brain functions and instead expound lofty theories they seem to still be admired … How would different subjects require different teaching if different foods do not require different “stomachs”?

  • Numbers don’t change the situation

    The coordinator of first year Chemistry had a chat to me the other day about how to support students in solving word problems. The issue is that students have trouble using the words to help them decide what sorts of calculations need to be done in order to solve the problem. This issue is not new – people have been solving word problems for thousands of years, and the maths education literature is littered with papers discussing the issue. No clear consensus has been reached, of course, because there are any number of factors that affect students’ ability to solve problems.

    One of these many factors I only learned about earlier this year when reading the following paper: A. Af Ekenstam and K. Greger (1983), “Some aspects of children’s ability to solve mathematical problems”, Educational Studies in Mathematics, 14, 369–384. It’s easiest to describe using the following two problems (slightly modified from those presented in the paper):

    Problem 1: A block of cheese weighs 3kg. 1kg costs $28. Find the price of this block of cheese.
    Problem 2: A piece of cheese weighs 0.923 kg. 1kg costs $27.50. Find the price of this piece of cheese.

    The paper reported how students aged 12-13 years were asked these problems, and specifically asked what sort of calculation they would choose to do in order to solve them. What would you choose for each one?

    All of the students in this study chose multiplication for Problem 1. However, many of them did not choose multiplication for Problem 2, and some of them did not know what to do at all. To be clear, it wasn’t that the students didn’t know how to actually perform the calculation; it was that they didn’t know what sort of calculation to do. Even when the teacher explicitly pointed out how similar the two problems were, many students still did not know what to do for Problem 2. Upon discussion with the students, the researchers discovered that the students were choosing what calculation to perform based on the numbers they saw, rather than on the situation described.

    This was a big surprise to me. Of course, I experience students not knowing what to do and choosing the wrong thing to do all the time, but it had never occurred to me that they were making the choice based on the numbers they saw. To me the situation itself has always told me what to do, regardless of the numbers themselves — if every kilo is worth THIS dollars, then THAT number of kilos ought to be THIS times THAT dollars, regardless of what THIS and THAT actually are. But clearly not everyone thinks this way!
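    To make the point concrete, here is a minimal sketch in Python (the function name is mine, the prices come from the two problems above) of the idea that the situation, not the numbers, determines the operation:

    ```python
    # The situation "price = unit price x quantity" fixes the operation:
    # it is a multiplication whether the numbers are whole or decimal.
    def cheese_price(weight_kg, price_per_kg):
        return weight_kg * price_per_kg

    print(cheese_price(3, 28.00))      # Problem 1: whole-number weight
    print(cheese_price(0.923, 27.50))  # Problem 2: decimal weight, same operation
    ```

    The function body is identical for both problems; only the inputs differ, which is exactly what the students in the study were not seeing.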

    The authors of the paper have a few theories for why students are confused when the numbers are different.

    One theory is to do with the students’ experience of word problems. For many students, the majority of problems they’ve seen before have involved whole numbers for at least one of the numbers involved, and so seeing decimals in both positions just doesn’t fit with their experience. Moreover, they have succeeded perfectly well on other problems by focussing on the numbers. This says more about the students’ schooling than about the students themselves, really.

    Another theory is that their experience of numbers has led them to believe certain things about multiplication and division. With whole numbers, when you multiply the answer can only get bigger, and when you divide the answer can only get smaller. Other research confirms that these ideas are very strong in children and tend to impede them from developing a fuller picture of what multiplication and division mean for other types of numbers. In this experiment, some students talked about how in the second problem the cheese is less than a whole kilogram and so the answer ought to be smaller than $27.50, which is in fact a perfectly correct and quite sophisticated attack on the problem. But because the answer had to get smaller, they chose to do division, because this is how you make numbers smaller.

    The final theory is that many people view multiplication and division (and most other things in maths) as a procedure, partly because of the focus on procedural fluency in primary school. In this context, the procedure for multiplying decimals by hand actually is different from the procedure for multiplying whole numbers. With decimals there’s all this stuff about shifting decimal places back and forth which makes the procedure much more complicated. And working with fractions is wildly different again! So it’s hardly surprising that students, when faced with a problem involving decimals, will expect that the action to perform should be different.

    Regardless of the reason, one thing is clear: many students are not focussing on the right thing to help them solve the problem! So one way to help those Chemistry students is to help them focus on what the words tell them about the situation, and how the situation tells them what they should be doing, rather than the numbers themselves. Because it’s the situation that tells you what to do, not the numbers, and the numbers don’t change the situation.

  • Why don’t people bring me raw data?

    We often get research students visiting us to get help with analysing their data, even though it is not actually our job to help them and we are not formally qualified to help either. But I still sit with them and listen to their woes and give what advice I can, because I know how little support for statistics there is at this university.

    Anyway, a lot of them come with their data all neatly organised into summary statistics: they have gone to quite some effort to calculate the mean or percentage in each group and present it in a lovely little table. They then ask how they can get their statistical analysis program to compare the means or percentages. Unfortunately, it’s at this stage I need to tell them that those summary statistics are of little use and what their stats program needs is the original big list of data. Indeed, their stats program could have quite simply calculated those summary statistics for them from the raw data.

    The poor students are usually quite crestfallen at this point because they really believed they were being helpful, and because they feel that all their hard work has been in vain. After the sting has worn off they are actually very surprised to learn that it’s the original data that the statistician and stats program needs. I long ago ceased to be surprised by their surprise, but still I wondered why they were surprised.

    I think I’ve come up with two plausible explanations.

    One of the explanations might be the way that results are presented in published research. If you pick up a research article in almost any discipline and flick to the results section, what you will see is a table listing the means or percentages in each group, with a p-value attached to tell you how different they are. If you look at the analysis section, they will say that they used this or that procedure to compare the means or compare the percentages. It’s not that surprising then that they think that it is the means themselves or the percentages themselves that are being directly compared in the statistical procedure, and that these values are the inputs the stats program needs.

    Another reason might be that we are inadvertently sending this message in our own traditional introductory statistics courses. Usually, when we teach hypothesis testing, we focus very strongly on the null hypothesis, making sure the students carefully define the parameters of interest. And many of us teach them that the best way to choose which statistical procedure to do is to look at the null hypothesis they have made.

    For example, in a situation where a numerical outcome might be different on average in two different situations, we make sure they always say “H0: μ1 = μ2” right at the beginning. Upon seeing this null hypothesis they are supposed to respond “Of course! The unpaired t-test.” And right there we have associated the statistical procedure with directly comparing means! And then the connection is strengthened because in this case the test statistic itself actually is calculated using summary statistics — the means and standard deviations of the two groups. So we teach them to think about means and proportions as the basis on which the statistical methods work.

    But the problem is that if you’re going to get a computer to do any part of the analysis then you might as well get it to do all of it. It’s much simpler to get the computer to calculate all of those summary statistics and the test statistics and p-values all in one go for you. In fact, most statistical programs do not even give you the option of starting with aggregate data. And worse than this, some of the most common procedures such as ANOVA most emphatically do not work on the aggregate data directly, but the calculation requires all the data. And let’s not even get into non-parametric procedures where the null hypothesis itself doesn’t even have parameters.

    It seems to me that by focussing so strongly on means and proportions in our research publications and statistics teaching, we are setting up a whole lot of people to waste their time finding aggregate statistics.

    Now we can’t control the way data is presented in published research – it really does make sense to report the means of each group! However, we can control where we put the emphasis when we teach. I think that perhaps we could teach them to decide what stats to do based on the raw data, how it is organised, and the variables it contains. It’s only a theory, but I think that then they might expect that it is the raw data they need to bring to their statistician, and not the aggregates.


    This comment was left on the original blog post:

    Paul Priz 19 November 2013:
    Thanks for thinking and posting about statistics. Having struggled through the first year statistics course (2011) I now find myself more confused not less. The biggest problem that I have is the lack of discussion around the limitations of statistics. There is also something fundamentally wrong with the way in which statistics is being taught. It almost feels like there needs to be an introduction to introductory statistics.

  • Statistics and Insomnia

    Some years ago, I saw a snippet on the ABC science show Catalyst about insomnia – in particular, the flavour of insomnia where a person has trouble falling asleep at all. They reported on a trial study investigating the effectiveness of a tortuous new treatment for chronic insomnia. (You can find the published research here: Click here to go to insomnia article.)

    The usual way to cure insomnia is to retrain your brain and your body to associate the bed with sleep rather than wakefulness. What they recommend is to only go to bed when you’re really really tired, and if you don’t fall asleep within a quarter of an hour, to get up and go to some other room until you feel tired enough to go to sleep again. Eventually, you’ll fall asleep in bed. Then you try again tomorrow night, and the next night, and the next night… Usually it takes a month.

    The big problem with it is that people just don’t have the stamina to put themselves through all this for four weeks. Here’s where the radical treatment comes in: you compress the month of practice into 24 hours. The poor participant is put in a windowless room and practises going to sleep, and when they finally do fall asleep, they only get four minutes to sleep before they are woken up to try and fall asleep again. In this way you fit a month’s worth of falling-to-sleep practice in one day. Imagine how desperate you would have to be to sign up for this sort of thing!

    Recently, it occurred to me that there are a lot of other skills that take a lot of practice to learn and this practice is usually drawn out over such a long period that people just don’t get through it all. One of these is statistics – in particular, the process of deciding which statistical procedures should be used to analyse your data.

    In your standard stats course, the approach to teaching students to make decisions is to get them to do a project. This gives them practice at making decisions a grand total of once. And so students need a whole degree’s worth of projects, and probably years of working as a statistician, to learn how to make decisions. Hence, very few people ever get very good at making them. It’s just like the poor insomniac trying to cure their own insomnia once a night.

    But what if you could, like the new insomnia treatment, compress all that practice into a short amount of time? What if you could pick out just the part where you make the decision and get students to make a lot of decisions all at once? Then they might get the necessary experience rather more quickly than the standard approach.

    I tried it out last year with the med students. I gave them a quick lecture about how you make the decision of which hypothesis test to use. Then, I gave them 30 research questions and got them to make a decision for each one. They seemed to get the idea of how it worked. So much so that they actually had intelligent questions to ask afterwards!

    I’m trying again this year, only this time the Medical School is letting me help design the whole stats teaching program, not just one lecture. Here’s hoping that a little bit of torture for a short time can alleviate months of pain later…


    This comment was left on the original blog post:

    Richard Knowling 27 January 2012:
    This is an awesome idea David! I only wish Mike Roberts had still been alive to hear about it!

  • Frayed research

    Phew! I submitted our article for the MERGA conference last week and now I feel like I’ve come out of hibernation: I’m standing blinking in the sunlight wondering what happened to everything I was doing before I started work on the article. (One of those things was this blog, which is why I’ve been quieter than usual lately.)

    One thing that caused me to descend deeper into research-hibernation was when I stopped to check the word count after getting halfway through what I wanted to say, only to discover that I was already 1500 words over the limit. I had to sacrifice a lot of what I had planned to say, and was left feeling like my research had a lot of loose ends flapping about everywhere.

    This is not the feeling I get from maths research. I’ve published very short articles in maths journals before, but felt no qualms about them at all because they were all tied up. I don’t mean I had finished everything there could be to do. No, I mean that there was a proper result – something about which you could really say, “This is it. This is true. This is why it’s true. And that’s all I need to say.” It’s neat.

    Education research is not neat. I’m always left with the feeling that I haven’t really said anything. It seems more like, “This is sort of it. This is what might possibly be considered reasonable. This is why I think I might be in some way justified in holding the belief that this might possibly be reasonable. A lot more could be said but I have to stop now.” See? Not neat.

    My experience researching maths has left me with the feeling that things ought to be neat, and I stress myself out trying to tie up the loose ends in my education research. What I’m learning to realise is that loose ends are the way things are in education research, and that saying why you think something is possibly reasonable is actually enough.