What works in science and math education? Until recently, there had been few solid answers: just guesses and hunches, marketing hype and extrapolations from small pilot studies. But now, a little-known office in the Education Department is starting to get some real data, using a method that has transformed medicine: the randomized clinical trial, in which groups of subjects are randomly assigned to receive an experimental therapy, the standard therapy, a placebo or nothing.

The findings could be transformative, researchers say. For example, one conclusion from the new research is that the choice of instructional materials (textbooks, curriculum guides, homework, quizzes) can affect achievement as profoundly as teachers themselves; a poor choice of materials is at least as bad as a terrible teacher, and a good choice can help offset a bad teacher's deficiencies.

So far, the office, the Institute of Education Sciences, has supported 175 randomized studies. Some have already concluded; among the findings are that one popular math textbook was demonstrably superior to three competitors, and that a highly touted computer-aided math-instruction program had no effect on how much students learned. Other studies are underway. Cognitive psychology researchers, for instance, are assessing an experimental math curriculum in Tampa, Fla.

The institute gives schools the data they need to start using methods that can improve learning. It has a What Works Clearinghouse, something like a mini Food and Drug Administration but without enforcement power, that rates the evidence behind various programs and textbooks, using the same sort of criteria researchers use to assess the effectiveness of medical treatments. Without well-designed trials, such assessments are largely guesswork.
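The random-assignment logic at the heart of such a trial can be sketched in a few lines of Python. Everything below is invented for illustration: the roster size, the score model and the built-in effect of the "new curriculum" are assumptions, not data from any study mentioned here.

```python
import random
import statistics

random.seed(0)

# Hypothetical roster of 200 students (IDs only; purely illustrative).
students = list(range(200))
random.shuffle(students)

# Random assignment: the first half of the shuffled roster gets the
# experimental curriculum, the rest serve as the control group.
treatment = set(students[:100])

def observed_gain(student_id):
    # Stand-in for a measured test-score gain. A real trial would use
    # actual assessments; here we simulate noisy gains and assume the
    # new curriculum adds 2 points on average.
    base = random.gauss(10, 3)
    return base + (2 if student_id in treatment else 0)

gains = {sid: observed_gain(sid) for sid in students}

treat_mean = statistics.mean(gains[s] for s in students if s in treatment)
control_mean = statistics.mean(gains[s] for s in students if s not in treatment)

print(f"treatment mean gain: {treat_mean:.2f}")
print(f"control mean gain:   {control_mean:.2f}")
print(f"estimated effect:    {treat_mean - control_mean:.2f}")
```

Because assignment is random, the two groups differ only by chance and by the curriculum itself, so the gap between the group means estimates the program's effect; with small pilot samples that gap is dominated by noise, which is why the article stresses large, well-designed trials.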
"It's as if the medical profession worried about the administration of hospitals and patient insurance but paid no attention to the treatments that doctors gave their patients," the institute's first director, Grover J. Whitehurst, now of the Brookings Institution, wrote in 2012.

But the "what works" approach has another hurdle to clear: most educators, including principals, superintendents and curriculum supervisors, do not know the data exist, much less what they mean. A survey by the Office of Management and Budget found that just 42 percent of school districts had heard of the clearinghouse. And there is no equivalent of an FDA to approve programs for marketing, or of health insurance companies to refuse to pay for treatments that do not work.

Nor is it clear that data from rigorous studies will translate into the real world. There can be many obstacles, says Anthony Kelly, a professor of educational psychology at George Mason University. Teachers may not follow the program, for example. "By all means, yes, we should do it," he said. But the issue, he added, is not to think that one method can answer all questions about education.

In this regard, other countries are no further along than the United States, researchers say. They report that only Britain has begun to do the sort of randomized trials that are going on here, with the assistance of American researchers. As Peter Tymms, the director of the International Performance Indicators in Primary Schools center at Durham University in England, wrote in an email: "The wake-up call was a national realization, less than a decade ago," that all the money spent on education reform "had almost no impact on basic skills." Suddenly, scholars who had long argued for randomized trials began to be heard.

In the United States, the effort to put some rigor into education research began in 2002, when the Institute of Education Sciences was created and Whitehurst was appointed its director.
"I found on arriving that the status of education research was poor," Whitehurst said. The field was more humanistic and qualitative than quantitative, with little emphasis on measuring impact. "You could pick up an education journal," he went on, "and read pieces that reflected on the human condition and that involved interpretations by the authors on what was going on in schools. It was more like the work a historian might do than what a social scientist might do."

At the time, the Education Department had sponsored exactly one randomized trial. That was a study of Upward Bound, a program that was thought to improve achievement among poor children. The study found it had no effect.

So Whitehurst brought in new people who had been trained in more rigorous fields, and invested in doctoral training programs to nurture a new generation of more scientific education researchers. He faced heated opposition from some people in schools of education, he said, but he prevailed.

The studies are far from easy to do. "It is an order of magnitude more complicated to do clinical trials in education than in medicine," said F. Joseph Merlino, president of the 21st Century Partnership for STEM Education, an independent nonprofit organization. "In education, a lot of what is effective depends on your goal and how you measure it."

Then there is the problem of getting schools to agree to be randomly assigned to use an experimental program or not. "There is an art to doing it," Merlino said. "We don't usually go and say, 'Do you want to be part of an experiment?' We say, 'This is an important study; we have things to offer you.'"

As the Education Department's efforts got going over the past decade, a pattern became clear, said Robert Boruch, a professor of education and statistics at the University of Pennsylvania. Most programs that had been sold as effective had no good evidence behind them.
And when rigorous studies were done, as many as 90 percent of programs that seemed promising in small, unscientific studies had no effect on achievement or actually made achievement scores worse.

New York Times Service, posted September 23, 2013
