Friday, January 21, 2011
I recently participated in a meeting sponsored by the US Department of Education for project directors and evaluators awarded i3 (Investing in Innovation) grants. I am the lead evaluator on one of those 49 projects. It is clear that the overarching purpose of the i3 program is to improve educational practice (in terms of learning outcomes and quality of instruction) in America’s schools. It is also clear that the grantees are expected to have and implement very high quality research and evaluation plans to support claims about improved learning and instruction. There were many references to the What Works Clearinghouse and its standards (see http://ies.ed.gov/ncee/wwc/). If one does a search on the word ‘learning’ in the category of ‘evaluation reports,’ one finds only one entry since January 1, 2009. Perhaps this is why some people refer to this site as the “Nothing Works Clearinghouse.” Entries in the Clearinghouse must meet specific standards set by the Institute of Education Sciences (IES; see http://ies.ed.gov/).
It occurs to me that there is some tension between what IES and some educational researchers might regard as clear and convincing evidence that a particular intervention (instructional approach, strategy, technology, teacher training, etc.) works well with certain groups of students and what some educational practitioners would be inclined to accept as clear and convincing evidence. The stakes are different for these two groups. IES and its researchers are spending federal dollars – often quite a lot of money – to make systemic and systematic improvements in learning and instruction. They are accountable to Congress and the nation, who want to see a certain kind of evidence that investments have been used wisely. These people do not have to take the implications of findings back into classrooms.
On the other hand, educational practitioners do have to go into classrooms, and their primary responsibility is doing their best, given many serious constraints and limitations, to improve the learning and instruction that occurs in our schools. Teachers are the ones who will put new instructional approaches, strategies, and technologies into practice. Teachers are not trained in experimental design and advanced statistical analysis; they are trained in implementing curricula appropriate for their students. While the experimental research may show that using an interactive whiteboard rather than a non-interactive whiteboard makes no significant difference in terms of measured student outcomes, a teacher may believe that such use does gain and maintain the attention of students and results in a better organized lesson, or something else that is not so easily measured. One can imagine other cases where the experimental research suggests no significant difference in X compared with Y but teachers believe that there are significant differences of some kind involved.
Should we simply disregard these teachers and only support the very few things that appear to have clear and convincing evidence of effectiveness as determined by IES standards? Should we expect teachers to understand the sophisticated statistical analysis that supports that kind of clear and convincing evidence? If so, then perhaps we ought to expect those making policy and funding decisions to understand the realities of teaching in a classroom for six or more hours every day. It strikes me that what is needed is research that can be practically implemented in classrooms, that has reasonable evidence of effectiveness, and that can be understood by teachers. These teachers must be properly trained and supported in implementing innovations, which means their schools and school districts must understand both the value and likely impact of an innovation and the need to properly support such innovations.
This now comes full circle, since such innovations typically require funds, which means that parents and the community must then be convinced of the value of properly supporting education and of electing officials who will provide the necessary local, state, and national support. What matters in the end is not the quality of educational research findings but the quality of professional teaching practice. I would like to see much less distance between [federally funded] educational research and [locally funded] educational practice. I would like to see research aimed at supporting teachers in achieving their widely held goals rather than research aimed at promoting the careers of researchers and program officials at federal agencies. It is clear that conducting rigorous randomized control trials and quasi-experimental studies in school settings presents serious challenges for those collecting and analyzing data – much more serious than those associated with similar kinds of studies in other sectors, such as medical care or computer science (which are admittedly complex and challenging areas for research). The variations in students, teachers, schools, communities, subject areas, and more make classroom practice a very tough research area. As a result, increasingly sophisticated analytical techniques are emerging which only a few researchers understand and can implement. The worry I am trying to express is that we may be closing the door on evidence that should be considered and might prove quite practical and effective in the classroom. We may be creating a research area that is closed to all except an elite few who do not have to put findings into practice in the classroom. Is this an unfounded worry?
Wednesday, January 12, 2011
It has now been some time since I have made an entry in this blog. Perhaps no one is listening. No matter. I am writing mostly for myself – to try to become more clear in my thinking. Being snowed in for three days in Athens, Georgia has helped. Lately, I have been thinking about false dichotomies and misguided distinctions.
There is a legitimate distinction between teacher-centered and learner-centered approaches to instruction. However, this distinction is widely misunderstood and misrepresented. Teacher-centered approaches tend to emphasize the activities that a teacher will use to promote learning. Learner-centered approaches tend to emphasize the activities that will engage learners and result in desired outcomes. Stated in this way, the two approaches are not mutually exclusive, nor are they necessarily incompatible. Because the goals of most teachers and instructional designers involve actions and activities that will result in improved learning and desired learning outcomes, a teacher-centered approach is likely to take into account those activities that are likely to be engaging and meaningful for learners. Moreover, once learner-centered activities are identified and elaborated, it is quite natural to consider how teachers can best support those activities. Considered this way, one can say that the difference has to do with emphasis and with where one begins analysis and planning to support learning. The optimum end result is likely to include both learner-centered activities and teacher-centered support.
Imagine a Venn diagram (see the figure below) with a circle for teacher-centered approaches and an intersecting circle for learner-centered approaches. This results in four distinct areas: (1) teacher-centered without any learner-centering (quite rare), (2) learner-centered without any teacher-centering (also quite rare), (3) both teacher- and learner-centered (highly desirable), and (4) neither teacher- nor learner-centered (e.g., some museum environments). Associated with these two approaches is a continuum from structured, directed learning environments to unstructured, open-ended learning environments. Evidence suggests that the extreme ends of this continuum are not likely to be especially effective for a great many learners. Rather, some structure and directed learning blended with some open-ended activities are likely to engage many learners and result in desired learning outcomes, including a desire on the part of learners to pursue further study in the subject area.
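The four regions of the Venn diagram amount to a simple classification over two independent yes/no dimensions. As a minimal sketch (the function name and region labels are mine, purely for illustration):

```python
# Hypothetical sketch: the four Venn regions arise from two
# independent yes/no dimensions of an instructional approach.
def classify_approach(teacher_centered: bool, learner_centered: bool) -> str:
    """Map the two dimensions onto the four regions described above."""
    if teacher_centered and learner_centered:
        return "both teacher- and learner-centered (highly desirable)"
    if teacher_centered:
        return "teacher-centered only (quite rare)"
    if learner_centered:
        return "learner-centered only (also quite rare)"
    return "neither (e.g., some museum environments)"

# Enumerate all four combinations to show the regions are exhaustive.
for tc in (True, False):
    for lc in (True, False):
        print(tc, lc, "->", classify_approach(tc, lc))
```

The point of the sketch is only that the two dimensions are independent, so the "both" region is a legitimate and, as argued above, desirable combination rather than a contradiction.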
A challenge for instructional designers is to determine for which learning tasks and learners it is appropriate to place more emphasis on structured learning or on open-ended learning. A challenge for teachers is to realize that their roles and responsibilities differ depending on the nature of the particular learning activity. A challenge for learners is to realize the value of the particular approach and activity in which they are engaged – their roles and responsibilities are also somewhat different in these different kinds of activities.
The question is not which approach to always use. The question is which kind of approach is likely to be successful for the particular goals, tasks, and learners involved. A thoughtful and reflective teacher or instructional designer will see value in both kinds of approaches. A thoughtful and reflective student is likely to succeed if the approach is clear and appropriate for that learner’s particular situation. This is not intended to be a middle-of-the-road response to the debate about teacher-centered and learner-centered approaches. It is intended to be a muddle-elimination response that recognizes the significant evidence in support of both approaches in different situations.
For example, a person who is not familiar with structural equation modeling is likely to desire and benefit from a structured, directed learning approach led by a highly qualified expert, with feedback on representative tasks that gradually builds up competence and confidence. However, a person who is already somewhat familiar with meta-analysis is likely to desire and benefit from a more open-ended approach, with a highly qualified expert on hand to guide and suggest improvements in various learning tasks and activities. In summary, the two approaches are not mutually exclusive, nor are they incompatible. In effective instruction, they are more likely to be blended together, with both directed and open-ended learning activities.