The Case Against Block Scheduling

Part 2: The Debate on Academic Harm

by Jeff Lindsay for JeffLindsay.com

This is one of several pages on the problems of block scheduling, a major educational "reform" that is being implemented across the country in spite of serious evidence that it is harmful to education. These pages are the work of Jeff Lindsay. On this page, I assume that you have already seen my main page on block scheduling, Part 1.

Part 1: The Nature of the Problem (main page + overall index)
Part 2 (this page): The Debate on Academic Harm
Part 3: Pros and Cons, Alternatives
Part 4: Comments from Others
Part 5: Tactics and Resources (and summary + links)

Index to This Page

What about the New Attacks on the Canadian Studies?
The Impact on Music
Harm to Advanced Placement Classes
Case Studies Don't Show Real Academic Gains
The North Carolina Report
Other Studies Claiming No Harm from the Block

What about the New Attacks on the Canadian Studies?

Finally, after years of ignoring research results from the Canadian experience with Block Scheduling (not to mention ignoring the failure of Modular Scheduling in the 70s and 80s), the proponents of Block Scheduling have been forced to acknowledge that such research exists. Enough parents, teachers, and students have been asking about the work of Bateson and others to embarrass, and certainly irritate, many block proponents who claim to have studied the issue for years without finding any negative data of any kind. Now the proponents of the block can no longer look their audiences in the eye and say that there are no known harmful effects of block scheduling. It's a sign of hope that they now must spend time dealing with the Canadian work. Of course, their response is predictable, but the analysis used can be downright surprising - and disappointing.

In the past few months there have been a variety of articles, letters, and speeches by prominent block proponents alleging that the Bateson studies are flawed. The attacks, however, fall short and often miss the point. In dealing with them, the first thing to remember is this: the main question is not whether scientific, peer-reviewed, longitudinal studies showing academic harm have defects, but whether there is any solid scientific evidence that block scheduling can cause academic gains. The proponents quibble over how serious the academic harm is, but they are unable to demonstrate that there will be academic gains. Why should parents settle for any program that is likely to cause some loss in academic performance? If it won't clearly help learning, why adopt it?

Now let's consider some of the attacks. Perhaps the most well known attack (based on my e-mail) is a memo written by a certain Mr. David Vawter, a graduate student of Dr. Robert Canady. Dr. Canady is a major proponent of block scheduling, perhaps the nation's leading proponent. Mr. Vawter tries to discredit Dr. Bateson's Canadian studies using arguments that I find intriguing enough to warrant a separate small web page giving My Response to a Rebuttal of Bateson's Work at "https://www.jefflindsay.com/Block_response.shtml". I find the memo to be inaccurate on many counts, even though I'm sure Mr. Vawter acted in good faith (and perhaps without the approval of his advisor). Several of his arguments rest on an apparent misunderstanding of the scope of the test or on a failure to consider the reported data. However, I suspect that some proponents of the block have a different view about academic performance than do most parents. Please don't tell me that any single factor which can reproducibly cause a 5-10% drop in academic performance is not educationally significant! And please don't tell me that a study of nearly all members of a population group (10th grade students in B.C.) is somehow invalid because it has too much statistical power.

Another example of attacks on the Canadian studies was an article by Clarence Edwards, "UBC Timetable Studies Questioned" (archived at Archive.org). (Edwards is also the author of "The Four-Period Day: Restructuring To Improve Student Performance," NASSP Bulletin, 77(553): 77-88 (1993).) An excerpt from Edwards' Web article as of Dec. 30, 1996 follows:

In May 1995, educational researchers at the University of British Columbia (UBC) conducted a second study of high school schedules/timetables and student learning. . . . Bateson began by sorting the math and science scores of 30,000 British Columbia 10th graders according to their schools all-year, semester, and quarter timetables. . . .

A close review of both Bateson timetable studies identifies serious design flaws in the selection of the May test date. The school year in British Columbia begins the first week of September and ends the last week in June. When the UBC team administered the 10th grade tests in May, all-year students had yet to receive three to seven weeks of instruction. For semester students that equates to six to fourteen weeks of instruction and for quarter students twelve to twenty-eight weeks. . . .

The pattern of scores in both Bateson studies is more the result of the May testing date than to schedules/timetables.

Even Bateson had to acknowledge problems with the May testing date when the '95 findings contradicted his '86 study. Findings that "students who took science [mathematics] in the first semester consistently outscored those in the second semester" invalidated the '86 conclusion about retention. Bateson discovered the "opportunity to learn" problem when trying to explain this surprising development. Apparently, second-semester students missed more instruction than the first-semester students had forgotten.

To determine if there is a correlation between achievement and schedule/timetable, factors such as "opportunity to learn" and "retention" must be equalized. . . .

Technically, the 1995 study was NOT a "study of high school schedules/timetables and student learning." It was an overall assessment of the junior secondary program in British Columbia. Many factors were considered, according to my correspondence with Dr. Bateson, of which timetable patterns were just one. However, the relationship between timetable pattern and academic achievement was clearly the strongest relationship found in the study. The study was not designed to evaluate block scheduling, but the impact of block scheduling was unmistakable in the data.

The May test date may complicate the study, but it is not nearly as serious a factor as Edwards implies. The tests did not just cover material learned in the tenth grade, but examined the total junior secondary curriculum in math and science from Grade 7 to Grade 10. Only about 1/3 of the items in the tests were from the Grade 10 curriculum, according to Dr. Bateson.

Edwards' comments do not acknowledge that in the real world, major tests like the SAT, ACT, and AP tests are given well before the end of the school year. The same problems that May testing created for the 1995 Bateson study will exist in test results for U.S. high school students. Evaluating students with a May test arguably just makes the results more predictive of standardized test results that will affect many students' futures. As a parent, I want my children to do well on those tests. Most parents do. If a new program may lower their test scores, it's going to take a lot more than reduced stress in the school to outweigh that harm.

The experimental design of the 1995 study was almost identical to that of a 1991 assessment in B.C. by Dr. Bateson, which won AERA's annual publication award for the best program evaluation report of the year, worldwide. Bateson has a reputation for thorough and careful experimental designs. It is true, however, that the study was not designed specifically to test block scheduling, but the block scheduling effect was evident in the data and has raised an important red flag about block scheduling.

The question of whether first-semester or second-semester block students do relatively worse is much less important than the fact that BOTH do worse than full-year students. The impaired performance of first-semester block students cannot be explained by the May test date.

I have previously mentioned Gordon Gore's analysis of 1996 British Columbia data for Grade 12 (published at Drexel University as archived in 2001 at Archive.org) showing diminished performance in all subjects for students on block scheduling. Previously, Mr. Gore also did a related study of UBC students (still available via Archive.org) that examined scores from the Provincial tests given AT THE END of Grade 12. Again, the data show that students taking year-long courses outperformed block-schedule students. For example, 8.2% of full-year students got scores of A on the exam, while only 5.9% of the semester students did, and only 3.9% of the quarter-system students. Mean scores also reflected a negative impact of block scheduling. Yet administrators in Canada claim block scheduling is a success - based on the higher grade point averages given by block-scheduling schools relative to year-long schools. The block schools give their students higher grades, leading to the claim of "success," while the standardized test scores are lower. Can you spell "grade inflation"? Higher grades (from the school, not on objective tests) are one of the common selling points of the pro-block lobby. Beware!

Empirical data, the obvious loss of total class time per topic in most block systems, and the well-known problems of learning in big chunks versus small chunks (tied to the limitations of attention span) all point to the potential for academic harm under block scheduling. The reasons offered for it are not based on genuine academic gains, but on largely administrative or curricular benefits and opportunities for different, unproven teaching methods. The burden of proof is not on concerned parents to defend the Bateson studies, but on proponents of block scheduling to show that it will help actual learning. That burden of proof has not been met.

In a news bulletin from September 1996, the National Council of Teachers of Mathematics said that "the observational and student achievement studies done there [Canada] so far show mixed results, but a trend is emerging showing lower mathematics achievement in students' ability to retain information when the gap between one mathematics course and the next one could be more than one year under the semester schedule" (as quoted by Charles T. Stamos in the article "The Case Against Intensive Block Scheduling," sent to me by e-mail, 1996, and published in the Fair Oaks Guardian, Fair Oaks, California, Nov. 7, 1996).

A very helpful reality check for those ignoring the Canadian studies is offered in an outstanding paper from Professor Reginald D. Wild of the Department of Curriculum Studies, Faculty of Education, University of British Columbia. The paper, "Science Achievement and Block Schedules," was presented at the Annual Meeting of the National Association for Research in Science Teaching, San Diego, California, April 20, 1998, where it was nominated for best paper. With permission from Dr. Wild, it is now available as a Microsoft Word document at my site (the URL is https://www.JeffLindsay.com/wild.doc). This paper sorts through extensive data and various arguments on both sides and, in my opinion, demonstrates the failure of proponents of the block to support their case in the face of real data. Concerning some common attacks on the Canadian studies, Wild writes:

A typical (Lockwood, 1995), and often repeated (Kramer, 1997a; Eineder et al., 1997) criticism of this research [Raphael et al. (1989) and Bateson (1990)] is as follows:
The studies did not allow for the fact that students on the semester schedule were tested months after completion of the class. (Lockwood, 1995, p. 103)
and
One key limitation of Marshall et al.'s study is the timing of the assessment. All students were tested in May 1995. Students had not yet completed the course, and those taking coursework in the second semester or the fourth quarter had yet to finish, respectively twice or four times as much content as those studying in the all-year format. (Kramer, February, 1997a, p. 30)
A bit of simple arithmetic suggests these arguments are incorrect. Approximately 50% of the semester/block students finish 100% of the course half-way through the year, and if, for example, the other 50% have finished 80% of the course in May, on average semester/block students will have finished 90% of the course material. 100% of all-year students will also have finished 90% of the material at the same point in May. The same argument applies to quarter students (about 75% of quarter students have finished 100% of the course and 25% have finished 60% of the course - the average: 90%!). Similar arguments can be presented on the issue of retention between the time of completion of a course and the writing of the assessment instrument.

References cited in the above quotation:

Bateson, D. (1990). Science Achievement in Semester and All-year Courses. Journal of Research in Science Teaching, 27, 233-240.

Eineder, Dale V., & Bishop, H. L. (1997). Block Scheduling the High School: The Effects on Achievement, Behaviour, and Student-Teacher Relationships. National Association of Secondary School Principals Bulletin, 81, 45-53.

Kramer, S. (1997a). What We Know About Block Scheduling and Its Effects On Math Instruction, Part I. National Association of Secondary School Principals Bulletin, 81, 18-42.

Lockwood, Susan L. (1995). Semesterizing the High School Schedule: The Impact on Student Achievement in Algebra and Geometry. National Association of Secondary School Principals Bulletin, 79, 102-110.

Marshall, M., Taylor, A., Bateson, D., & Brigden, S. (1996). The British Columbia Assessment of Mathematics and Science: Technical Report. British Columbia Ministry of Education, Victoria, B.C.

Raphael, Dennis, Wahlstrom, Merlin, & McLean, L. D. (1986). Debunking the Semestering Myth. Canadian Journal of Education, 11(1), 36-52.
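
Wild's arithmetic in the passage quoted above is easy to check directly. Here is a minimal sketch in Python; the completion fractions are the ones Wild assumes in his example, not measured values:

```python
# Check of Wild's arithmetic: average fraction of the course completed
# by the May test date under each timetable. The completion fractions
# below are the ones Wild assumes in the passage quoted above.

def avg_completed(groups):
    """groups: list of (share_of_students, fraction_of_course_completed)."""
    return sum(share * done for share, done in groups)

# Semester/block: half the students finished the course in January (100%);
# half are partway through the second semester (Wild's example: 80%).
semester = [(0.50, 1.00), (0.50, 0.80)]

# Quarter: three quarters of the students are done; the last quarter is
# partway through (Wild's example: 60%).
quarter = [(0.75, 1.00), (0.25, 0.60)]

# All-year: everyone has covered the same ~90% of the material by May.
all_year = [(1.00, 0.90)]

for name, g in [("semester", semester), ("quarter", quarter), ("all-year", all_year)]:
    print(f"{name:9s} average course completed by May: {avg_completed(g):.0%}")
# All three print 90%, which is Wild's point: the May test date does not
# systematically favor the all-year students.
```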

Aug. 2002 Update: More complete information from Steve Kramer
As discussed above, Steve Kramer has been cited in efforts to dismiss the Canadian studies (see the quote above regarding the Marshall et al. study). Kramer contacted me to provide the proper context for what he has said, offering a more complete quotation:
One key limitation of Marshall et al.'s (1995) study is timing of the assessment. All students were tested in May, 1995. Students had not yet completed the course, and those taking Mathematics 10 in the second semester or in the fourth quarter had yet to finish, respectively, twice or four times as much content as those taking Mathematics 10 in the all-year format. However, the following overall pattern of scoring somewhat mitigates this limitation: according to Bateson (personal communication, November, 1995) top scorers were always the all-year students, followed by first-semester students, then second-semester students, then by third-quarter students (who had recently completed the entire course), then by first-quarter students, then by second-quarter students, and last by fourth-quarter students (who were probably the most strongly impacted by not having completed the class). If timing of the test were the sole explanation for observed differences, then third-quarter students would have been expected to outscore semestered and perhaps all-year students.

A second limitation to Marshall et al.'s (1995) study is a possible volunteer effect. Schools elected which timetable to adopt, and it is possible that variables such as prior student performance could have caused them to make the change to a block schedule and in turn could account for the differences reported. Despite these limitations, Marshall et al.'s (1995) results are strong enough to conclude that as implemented in British Columbia, block scheduling appears to have had a negative effect on mathematics.
(Kramer, 1997a, as cited above, p. 30, emphasis mine)

Kramer notes that there are limitations to Bateson's work, but he nevertheless indicates that the results are still strong enough to show a negative effect of the block on mathematics.

The Impact on Music [index]

I have recently learned of the only known widespread survey on the effect of block scheduling on music programs. The survey has not yet been published in a peer-reviewed journal, but it offers some information on a topic that many people ask about. It was conducted by Kevin Meidl, a music teacher at Appleton West High School in Appleton, Wisconsin, who used a Likert-type survey instrument to poll music teachers in 32 high schools in 13 states that had adopted block scheduling. He found that 69% of the music programs saw a decrease in student enrollment in choir, band, or orchestra after a move to block scheduling; most of the teachers in those programs attributed the decrease to scheduling conflicts caused by block scheduling. 81% of the teachers surveyed believed that students rejoining music classes after a semester's or year's absence were significantly behind others in skill development and were slow to regain their original proficiency (the problem of retention again). 71% found that students had trouble staying focused during the longer classes of the block system. Only 31% believed that they could teach more and achieve a greater level of student skill under the block. 84% found that under the block, it was more difficult for students to enroll in two or more music classes.

Copies of his work can be obtained from Kevin Meidl by writing to him at Appleton West High School, 610 North Badger Avenue, Appleton, WI 54914. His fax number is (414) 832-6239, and his e-mail address is <[email protected]>. (I suggest you offer to reimburse him for the cost of sending a copy.) (Update: This work has now been published as Kevin Meidl, "The Problem with Block Scheduling," Music Educators Journal, v. 84, July 1997, p. 11.)

Another recent publication on this topic is "The Block Scheduling Gimmick: Still Growing, Still Unproven," The Instrumentalist, May 1999, pp. 12-19 (no author listed). This article reports on a survey of band and orchestra directors showing that only 19% think block scheduling has improved or helped music programs, while 65% report adverse effects. The following editorial statement is made:

The strongest and fairest criticism of block schedules is that they are untested and unproved, and this is the latest in a long string of reckless experimentation by school administrators. Common sense suggests that 40-45 minutes per class is as long as most teenagers can concentrate without a break; for some this is too long. Common sense suggests it is better to take math, English, foreign language, and music classes in an unbroken sequence. (p. 12)
Examples of problems reported in the survey: one program with over 100 students dropped to only 4 after block scheduling because of schedule conflicts. And from William Minton of Connecticut: "Every system I know of that has gone to block scheduling in almost any form has a long list of horror stories to tell about lower student participation in music."

Here is an insightful message from a concerned Minnesota parent, received in 2001 and used with permission:

The St. Paul Public Schools are coming close to destroying music education for our children. For example; one junior high school hosts a trimester schedule. Band, Choir and Orchestra are scheduled for only ONE trimester in the 7th and 8th grade. So the music students move into their high school bands with a total of only 6 months of lessons over a two year period. These junior high music students cannot even get close to performing music at the high school level. And as urban students, never be able to compete at the state music contest against any MN suburb high school band.

Highland High School in St. Paul holds a similar story. The school is on the basic block schedule....9 week quarters. Each quarter the high school choir has all new students. Again, with such a design, this high school choral program cannot mature. And again, no urban music student will be able to compete at the state music contest against the suburb choirs.

Here is a note from a parent received Nov. 1999:
Our school system went to BS two and a half years ago. At the time of the initial discussions we the parents were given only the positive "possibilities" of the plan. We were told that our students academic scores would improve. I was told that my gifted (IQ 150) but low GPA child would improve his grades because he would be much more challenged and more interested in learning. We believed.... Actually he is more bored than before and is struggling just to be able to graduate in spring after failing an English course....

Although I have really seen no positive results from the block scheduling, I have seen many negative results. Our award winning jazz band which presents a major jazz concert each year is struggling to even put together a group for the year and for the first time in many years may not put on this celebrated event. The regular band has declined to half its membership and will probably lose more with a scheduling problem that has just hit the sophomore class. We fear that eventually we may lose our band director to another school. A neighboring school has used our band problems as a reason not to implement this in their own school....

Unfortunately our BS program makes it almost impossible for our students to take one [music class] the entire four years of high school. My daughter ... is devastated that she will not be able to take both band and choir.

Thank you for your informative web site. I wish that I and many other parents had not accepted what the school board and others had to say without asking more questions. Fortunately there may be hope yet to discontinue this program before any more damage is done.

And here is a note received in 2000, quoted with permission, discussing problems with music education under an A/B block system before the school returned to a traditional schedule (as many schools have after facing the problems with the block):
I am a middle school band director in Central Texas. This district recently went from the block schedule back to the traditional schedule.

The return to the traditional schedule was rocky on the part of the band directors who are taking the heat for the change... many teachers have accused the seven (yes, seven) MS band directors of talking our superintendent into this major schedule change!

Beginners last school year would take their instruments home only to forget them the next day we had band. In this instance, they could miss as many as five days of instruction. This is not good for any musician, but especially for the beginning student. These need instruction everyday.

These classes also met for about 80 minutes... isn't the average human attention span about 25 minutes? The beginning students can't handle the rigors of playing the full 80 minute time period, and then not have the instruction the next day.

However, the traditional schedule we now use allows 48-49 minutes per period (remember, this is the one we band directors are accused of?) which lets the beginning students receive the benefits of daily instruction. We are halfway through the Essential Elements 2000 after five and a half weeks of playing.

Also, students are not forgetting their instruments because of the A/B block.

Another problem I had was preparing a band for contest. One band last year (and it happened at one other school in the district I know of) ended up having one rehearsal eight days before the contest. How? We met Monday, Wednesday was some type of assembly (involving this band period), Friday was another assembly, Tuesday was contest. This group did poorly because of lack of preparation time... we would have at least had 4 days of rehearsal on the traditional schedule which would have allowed for a better prepared group.

Hope this helps shed light from personal experience.

Finally, a Midwest public school band director studying block scheduling wrote a response to an article claiming that the block could work with music programs. Part of her response is quoted here:
In my readings, I have found that when large urban school districts implement block, it usually seriously downsizes the band and choral programs at those schools. And in some states high school programs have been eliminated completely because of scheduling conflicts. Music teacher turnover in many areas of the country has been obscene. The 8x2 is specifically notorious for creating this event....

[Many of the published] positive comments from music teachers [about the block] come from "beginners", who have little or no experience and are not yet tenured, so they need to be politically correct. These sources to me are not reliable....

Last summer I was calling around the country to better understand how the block schedule could be used to support secondary music education programs. As I spoke directly to the principals at all three middle schools mentioned in Canady's 1995 report, all needed to revise the 8x2 [block system]. They realized that the students could not retain the material with 90-minute classes every other day, and gave the students two reading and two math classes, therefore having 90 minute classes meeting daily. Because this change needed the time slot in the schedule, the band program was more than cut in half, with no vision of rebuilding in sight until the state was satisfied that the reading and math scores were higher. One of these schools didn't even have a qualified instrumental music educator teaching band. (There are many studies around, if you believe them, concerning the relevance of reading, math scores and reading music before the age of 12).

These schools were reported as being successful, but it is not what I would call success. It was easy to concur that any district contemplating block scheduling should ask more in-depth questions to the faculty and administration on site, before making an assumption that block is working effectively anywhere in the US.

Obviously, the block can pose challenges for these programs. Are there solutions or possible advantages? Perhaps. But the impact on music had better be carefully considered before jumping on the bandwagon.

Harm to Advanced Placement Classes [index]

Advanced Placement (AP) tests can be used for college credit by high-school students who pass these tests with sufficiently high scores. They are of great importance for students interested in going to college. Success on AP tests requires in-depth mastery of material. If block scheduling improved learning, one would think that AP courses would benefit, and that students on the block would get higher AP test scores. So how does the block fare in terms of AP success? Poorly.

The College Board, the group that oversees the AP tests, has published an investigation on the impact of block scheduling. One report, "Block Schedules and Student Performance on AP Examinations," was published in Research Notes (RN-03) by the College Board's office of research and development in May 1998. It is available online at https://www.collegeboard.org/research/html/rn03.pdf as a PDF document (requires Adobe reader). This document shows that the block adversely affects AP test performance in 15 of 16 comparisons that were made. Read the study for yourself. The impact was especially bad for calculus and biology. For calculus (the AB test), the 1998 report states that "students completing a year-long calculus course with extended session (61-90 minute sessions a day) obtained a higher score than students completing a year-long course with traditional length sessions (30-60 minute sessions each day). Both groups performed significantly better than students in semester block schedule instructional sessions" (p. 6). The mean score for the year-long, 30-60 minutes-per-day group was 2.92. For the fall and spring block groups, the scores were 2.70 and 2.68, respectively. Over 0.2 points out of a mean of about 2.9 were lost for those on the block - roughly a 7% loss in raw performance. Similar results were found in biology, with 0.2 points again being lost for those on the block. For U.S. history, the difference between a year-long course with 30-60 minutes and a fall block course was 0.47 points - over a 15% difference. For English literature, the difference was small, only 0.1 points (the block did "significantly" worse from a statistical point of view, but not from a practical point of view since the difference was small). This may be because performance on the English test depends less on material acquired from in-class learning - as is the case for courses where specific skills and facts must be acquired - and more on a student's outside reading and background. But again, for each of the four tests considered, the effect of the block shows up as a statistically significant harm to academic performance.
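
The percentage losses cited above follow directly from the reported means. A quick check (the three calculus means are the ones quoted from RN-03; the arithmetic is mine):

```python
# Percent losses implied by the AP calculus AB means reported in RN-03.
year_long = 2.92                 # year-long course, 30-60 minute sessions
fall_block, spring_block = 2.70, 2.68

for label, mean in [("fall block", fall_block), ("spring block", spring_block)]:
    print(f"{label}: {(year_long - mean) / year_long:.1%} below the year-long mean")
# Prints ~7.5% and ~8.2% - the "roughly 7%" loss cited in the text.
```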

The College Board also released a statement entitled "AP and January Examination" on September 19, 1996. This letter addresses requests to offer the AP exam in January to accommodate those taking first semester block courses. In the course of turning down such requests, the College Board makes the following observations about block scheduling:

Program staff are in the process of gathering and formulating strategies used by AP teachers who teach in block scheduling situations. It appears that the most successful by far are those who have convinced their administrations to schedule their course over a full year - apparently the majority of AP teachers. . . .

Students who completed year-long courses offered only in the fall or only in the spring tended to perform poorly on AP examinations in 1995 and 1996. Of the thirteen examinations in which there were 100 or more semester intensive block scheduled students, those who took the course over a full year averaged higher scores in 77% (20 of the 26) of cases. In calculus, history, and the sciences, mean grades for block scheduled students were about 0.6 (about half a standard deviation) lower than the mean for students who took the course over the full year.

In several surveys and meetings, AP teachers, coordinators, readers, and test development committee members overwhelmingly opposed both semester block scheduling and January examinations. The opposition appeared to be strongest among teachers in block scheduling schools.

Under the block, AP exam scores for calculus, history, and the sciences drop significantly. The 1996 results were much more alarming than those found in the 1998 report, perhaps because some schools on the block had heeded the 1996 warning from the College Board and were taking corrective action (more reviews, more homework, outside lectures, extra time, and so forth) to reduce the harmful impact of the block. But whether we consider the 0.6 points lost in 1996 or the 0.2 points lost as reported in 1998, the effect of the block is clear: ACADEMIC HARM. This is nothing to ignore! AP instructors are rightfully concerned about the harm caused by block scheduling. Is there any reason to believe the "less is more" theory of block proponents who claim (without any real data!) that students will actually learn more by covering less? Not when actual academic material needs to be covered and mastered - as is the case for AP courses, perhaps more so than in any other course.

AP classes are unmistakably focused on academic achievement, and block scheduling hurts rather than helps. The potential for harm to AP classes seems to be widely recognized. In some cases, the recognition is implicit, as when block-schedule schools such as Wasson High adjust the schedule for AP classes to give extra class time to cover the material and ensure adequate learning. The College Board appears to have explicitly recognized the risk. Since block scheduling is associated with lower AP scores, we have prima facie evidence that it can HURT rather than help academic achievement. The harm may be due to retention problems for courses taken in a previous semester, or to "opportunity to learn" problems when the AP tests are given in May for a semester that ends in June. In either case, the inherent problems of learning at a rushed pace rather than at regular, spaced intervals can be expected to play a role.

An example of the deleterious impact of block scheduling on AP courses is illustrated in the following e-mail I received from a student in February of 1997:

My school is about to go into block scheduling next year. Right now, we are doing our schedules and all of us have just found out how limited block scheduling is going to be. We are going to have 85 minute classes with only 4 classes each semester (makes up 8 "periods"). Students will pick 4 classes for the first semester and then another 4 classes for the next semester. There is a huge problem with our AP classes, though. Since the AP testing administration refused to change their date of testing (around May), our school has made AP classes run all year! So I can at the most take 3 AP classes and that will be it. Gym and Lunch will be mandatory, too. That means I will be taking 3 AP classes, Gym, and Lunch for the first semester, and then be taking the same 3 AP classes, Gym, and Lunch for the second semester.

Is there some way to get around this? And is there some way the students can have a say in the block scheduling decision?

And here is one from a frustrated parent received February 2001:
AP scores are horrible in our district now. SAT scores have fallen every year since block went into effect. All the teachers but the AP science teachers HATE it. AP science courses are all year long. Still, no one listens to them or the parents....

We were so disappointed in our son's AP scores [after our district went to the block]. A young lady confirmed that she had only heard of one 4 and one 5 on the AP Modern European exam out of roughly sixty! This high school had traditionally turned out young people scoring nothing but 4s and 5s. The teacher is one of the best but given only one semester...repeated school trips (remember a two day trip equals one week of missed school)...athletic events...fine arts competitions...academic competitions...many of the best students missed the equivalent of two to four weeks of school in the spring!

Yet, just this week, another glowing report on block scheduling appeared in the local news. All new administrators have come from Florida school districts where block reigns king. Our schools are terribly overcrowded and many parents are convinced that block is more about crowd control (keeping students out of the halls and restrooms) and less about academics.

Unfortunately, these problems have affected many advanced students who want to take AP courses, especially those who want to take multiple AP courses. In my own community, the Neenah, Wisconsin District has adopted a trimester version of block scheduling for Neenah High. While most courses are handled in two trimesters, the AP math program has convinced the administration that it needs three trimesters (a full year of longer classes) to cover the material. Other classes want a full year, too, and advanced physics may get it next year. Advanced English wants a full year, but there just isn't enough flexibility in the schedule to allow many courses to extend. For students who want to take more than one advanced course, there are now fewer opportunities to take other courses.

The trimester system at Neenah High has created serious problems for many advanced students. One well-educated parent of a college-bound Neenah High student put it this way (personal communication, used with permission, March 1, 1997):

In Neenah, the teachers of advanced courses have tried to cover so much material at a rapid pace that they buried everybody with homework. It's hard for the kids to keep up. In terms of learning, they've been behind from day one. . . . Advanced students are relying on caffeine to get their homework done, commonly staying up until 1:00 a.m. . . .

The real problem with fewer periods and longer classes is that if you extend a course by one trimester, it's a 50% increase in class time that can only come at the expense of something else like English or band.

In terms of the fundamentals like math, science, and English, I haven't seen anything to suggest it's a good deal. . . .

The only class [that my son is taking] without a frantic pace has been calculus [AP math], which is proceeding at an orderly rate - but that's because it's got an extra trimester.

There are students who love block scheduling, but I don't know any of them. [Less ambitious] students may love it, but those planning to go to college hate it.

If advanced students are being hurt in their ability to take and excel in college-prep and AP courses, then block scheduling must be considered detrimental to academic achievement. Reducing options for college-bound students, hindering their achievement, or making their lives miserable is a heavy price to pay for the alleged "benefits" of block scheduling.

The burden of proof is on the proponents of block scheduling to show that it actually improves learning before we adopt it for all students. It may reduce some problems and may make things easier for some students and administrators, but we must not let non-academic benefits lead us to sacrifice any degree of academic achievement for the majority of students.

Case Studies Don't Show Real Academic Gains [index]

Several specific schools or school districts are sometimes cited to show that block scheduling "works." One interesting example is the Dothan, Alabama, school district, which has been studied by Susan L. Lockwood (Semesterizing the High School Schedule: The Impact on Student Achievement in Algebra and Geometry in Dothan City Schools, doctoral dissertation, University of Alabama, Tuscaloosa, 1995). A recently published article by Susan Lockwood, "Semesterizing the High School Schedule: The Impact on Student Achievement in Algebra and Geometry" (NASSP Bulletin, Dec. 1995, vol. 79, no. 575, p. 102), summarizes her work. She compared results on geometry and algebra tests at a semestered school offering 8 courses per year (140 hours per Carnegie unit) to previous results at the same school under a traditional six-period, year-long program (175 hours per Carnegie unit).

The test scores for the block schedule (semestered) students were clearly worse than those for the traditional-timetable students. The mean algebra score for minority block students was 36.99, compared to 42.18 for the six-period students. The scores for white students were 49.88 under the block and 51.86 under the six-period system. But when she ran her statistical analysis (ANOVA), the p-value of the "schedule" factor was 0.055, meaning that if scheduling actually had no effect, there would still be a 5.5% chance of seeing a difference at least this large from natural scatter (random error) alone. Her cut-off for statistical significance was 5%, so, properly speaking, her results fell slightly short of showing a "statistically significant" harm due to block scheduling at the commonly used but arbitrary level of 95% confidence. Now what does this mean? It means that in spite of the lower scores associated with the block, we can't quite be 95% confident that the difference is real rather than an artifact of random scatter - the result just misses the arbitrary cut-off. Failure to establish "statistical significance" does not prove that no effect exists, as anyone familiar with statistics should know. But listen to how Dr. Lockwood interprets the results - and you will hear the disturbing "logic" that I have heard from several other proponents of the block:

"There are no significant differences in the achievement of students in algebra or geometry on the two schedules. Therefore, the semesterized block schedule is a viable option for school districts interested in providing more opportunities for students....[It] should be considered for adoption by high schools. High schools can implement a semesterized block schedule with no overall decline in student achievement."
Excuse me, but it's hard to read that and not be outraged.
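
To see why a p-value of 0.055 is weak grounds for declaring "no difference," consider a quick simulation. All of the numbers here (group sizes, standard deviation, the size of the true deficit) are hypothetical assumptions for illustration; Lockwood's report supplies means, but I am not drawing on her actual sample sizes or variances:

```python
# Illustration: a real deficit can easily fail to reach p < 0.05.
# All numbers below are hypothetical assumptions, chosen only to be in
# the general range of the means discussed above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_gap = 4.0      # assumed real scoring deficit under the block
sd = 15.0           # assumed student-level standard deviation
n = 100             # assumed students per group

misses = 0
trials = 10_000
for _ in range(trials):
    traditional = rng.normal(50.0, sd, n)
    block = rng.normal(50.0 - true_gap, sd, n)
    _, p = stats.ttest_ind(traditional, block)
    if p >= 0.05:
        misses += 1

print(f"With a real {true_gap}-point deficit, the test misses it "
      f"(p >= 0.05) in {misses / trials:.0%} of trials.")
# A sizable fraction of misses: "not statistically significant"
# is not the same as "no harm."
```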

Several other schools are touted as success stories, and some schools may have had good results, but those case studies don't always shine so brightly under scrutiny. The most commonly mentioned success story may be Wasson High School in Colorado. Perhaps the teachers and students enjoy the increased planning time and less hectic schedule, but objective test scores do not show a clear benefit from block scheduling. After Wasson implemented block scheduling, math SAT scores dropped 11 points and verbal SAT scores dropped 17 points, based on the school's before-and-after data as of 1995. ACT scores, however, were up slightly. The apparent excuse offered in Wasson High publications for the declining SAT scores is that the minority population has increased from 22% to 34% over the past 5 years and that there are more at-risk students - but since the SAT is taken only by the college-bound, I'm not convinced this excuse is worth much. Even if it is valid, the Wasson data hardly constitute a case for block scheduling. AP scores may have improved, but the AP courses weren't given the same block treatment as other courses; they were deliberately given 50% more time, in contrast to the standard effect of block scheduling, which reduces net class time per course by about 10%. Why the special treatment for the AP classes? I think it's because you simply can't cover as much material under block scheduling, but AP courses demand thorough coverage and therefore need exceptional treatment. An article about Wasson in NEA Today (April 1993) said, "Courses cover less. Many teachers have found they can't cover as much in one 90 minute session as they could in two 55-minute sessions." However, the school in general feels that the program has been successful - as have many of the teachers. But the hard data are not compelling.

While Wasson is often cited as a possible success, schools with poor results are rarely mentioned in the materials given to school districts by the proponents of block scheduling. As one counterexample, consider Parkland High in North Carolina. It went from a six-period schedule to a four-period day beginning in the fall of 1992. SAT scores had been over 840 the previous two years. After the first year of block scheduling, SAT scores dropped to 772 - a huge decrease. They came back up to 807 the following year. (As of 1995, scores were not back to normal; a decade later they were up, fluctuating between 851 and 904 from 2000 to 2003. Changes in the SAT itself, demographics, etc., may play a role in all this.) Many teachers with block experience have noted that the first year of block scheduling may give very poor results because teachers' past experience is of little value in the new system, and they have to relearn teaching strategies to cope. I would be hesitant to adopt any educational program that could bring a significant drop in academic performance, even if the drop is alleged to last for "only" one or two years.

Some schools recognize the problems with the block after they adopt it and choose to go back to their previous schedule. One example is Allegany High School in Cumberland, Maryland, which went on the 4-block schedule from 1993 to 1995 and then went back to a seven-period day. According to Kathy Mell's article, "Caution Advised on Block Scheduling," "Staff evaluations voiced concern over burn out, less time for labs, and less overall time to cover curriculum." It may have taken some real courage to admit that the system was not working and go back to the old way. Many schools seem unwilling to face that reality, though there are schools that make the block work. No single program will always cause test scores or other metrics to decline, and any change, no matter how unjustified, may result in an apparent boost due to the novelty and extra effort that accompany a new program.

As mentioned above, some schools appear to show gains after block scheduling was implemented. Friendswood High School in Texas has a report comparing results after the block with those from two years before the block ("A True Evaluation of Block Scheduling," Summary Report for 1995-96, Friendswood Independent School District, Friendswood, Texas). Though there are areas showing both gains and losses after the block, some of the gains look impressive. However, in two measures relating to math, the school claims a gain by comparing the data for the year after switching to the block with scores from two years before. What the report excludes is data from the previous year, which, had it been used in the comparison, would have shown that there was a loss instead of a gain after switching to the block. Thus, if post-block results are compared to results from the year before the block instead of results from two years before, some of the alleged gains vanish or become unimpressive. In the SAT, the mathematics score was unchanged and the verbal score dropped by 9 points, while the ACT showed a 5% decrease in math (21 before the block, 20 after), a smaller decrease in science, a slight gain in reading, and no change in English.
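
The baseline problem described above is easy to illustrate with invented numbers (these are hypothetical, not Friendswood's actual scores):

```python
# Hypothetical illustration of the baseline problem described above.
# These scores are invented; they are not Friendswood's actual numbers.
scores = {1994: 70.0, 1995: 76.0, 1996: 73.0}  # block adopted for 1995-96

gain_vs_two_years_prior = scores[1996] - scores[1994]   # +3.0: looks like a gain
gain_vs_prior_year      = scores[1996] - scores[1995]   # -3.0: actually a loss

print(f"vs. two years before the block: {gain_vs_two_years_prior:+.1f}")
print(f"vs. the year before the block:  {gain_vs_prior_year:+.1f}")
# Same data, opposite conclusions - which is why omitting the
# immediately preceding year matters so much.
```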

Perhaps excluding or downplaying the previous-year data was an oversight, or perhaps it was a predictable result of intense political pressure to make block scheduling appear successful. In the Friendswood School District, the decision to implement block scheduling was very unpopular and at least one board member was voted out, allegedly for supporting the block. The school district is said to be under great pressure to prove that the plan they had implemented over popular protest was a winner.

So what's really happening in Friendswood? One parent sent me this e-mail message in 1997 (can someone verify the data given here?):

[You] indicate that Friendswood High School may be doing well on the block schedule. In Texas, we have an assessment test called the TAAS (Texas Assessment of Academic Skills) which measures Reading, Writing, Math and a combination of the three. Schools are rated by their performance on the TAAS, in additional to dropouts and attendance. At Friendswood, the percentage of Grade 10 students passing these tests (referred to as TAAS scores) for the 1994-95 school year (prior to the block) were: Math-79.6, Reading-94, Writing-100 and Passing All-78.2. For the 1995-96 school year, scores were as follows: Math-78.5, Reading 93.9, Writing-98.7 and Passing All-77. [Webmaster's note: if these data are correct, they show a DROP in every area after the block, at least in terms of percentage of students passing.] As you can see, the scores do not indicate block scheduling has had a positive impact on their TAAS tests.

Last fall I sent you some information on Lexington High School, listing their TAAS and SAT scores. I asked at the time that you not publish the school's name on your webpage; however, I have no qualms about doing so at this point. . . . Lexington High School posted scores well above the state average in 1992-93, before they adopted the A/B block schedule. Since then their scores have declined. The unofficial numbers are out for 1996-97 and they record reductions in math and reading. Lexington has been unable to return a single score to the [level] posted before block scheduling. Students show little initiative and want to waste the first 20 minutes of class as well as the last 30. The school has two classes that are 45 minutes each and meet daily. The rest of the day is 95 minute classes which rotate three one day and another three the next. Teachers, even those who continue to support block scheduling, find that the students in the daily 45 minute blocks consistently outperform the students in the 95 minute blocks. The 45 minute classes stay well ahead of the 95 minute ones and more learning is going on. . . .

I haven't yet been able to get a suitable answer from a supporter about the homework issue. At Lexington, there is seldom assigned homework - students do any they have in class. How then, I wonder, will my son perform when he goes to college and has to put out the suggested and expected two or three hours of homework for every hour in class?

Texas educational data show that in 1992-93, prior to adopting the block, Lexington High scored above the state and national averages on the SAT. Within three years, the scores dropped to 105 points below the state average. Lexington has had a net loss of 169 points in SAT scores since switching to the block. As always, a drop or gain by a single school proves nothing, but block scheduling at Lexington High appears to have strongly affected learning in the past few years - in negative ways, based on the reports of an observer - and academic performance dropped substantially after the change.

Two model schools for block scheduling are in Minnesota's Anoka-Hennepin School District (Blaine and Champlin). A report published by that district compares its two block schools in 1993 and 1994 to two schools on a traditional schedule. The report appears to have been written by a proponent of block scheduling (based on the choice of material presented, the tone of the writing, the selective use of quotation marks, etc.). The report claims success based on a District Criterion Reference Test, in which the block schools did slightly better. However, the published ACT results show the traditional-schedule schools did better in every area - math, English, reading, and science - though the two-year averaged difference in each area was only about 2 to 3%, with a composite difference of 2.55% (all calculations are mine - I find district and school reports rarely present data in a useful manner). Whatever differences exist may be due to block scheduling, but there could be many other factors accounting for the slight differences in scores. The data appear to be too limited to lead to solid conclusions. Nevertheless, proponents of block scheduling will boldly claim that the Anoka-Hennepin School District provides strong evidence of the success of block scheduling. More skeptical parents might ask why the highly funded, much-publicized block schools still performed more poorly on the ACT.

Reports from parents and teachers in other districts suggest that block scheduling may dilute education, although the reported results are widely mixed and some schools certainly claim success. A widely recognized challenge, however, is that teachers can't possibly cover twice as much material in a class twice as long, so they often resort to "fun" games, doing homework in class, movies, and other things to maintain interest. Yes, it's fun, less stressful, and leaves time for "innovative" programs - but it may weaken real education. Even proponents of block scheduling note that less material will be covered, but they suggest that "less is more," meaning that students will come away with more from the class in spite of less content being covered. Are there really any hard data to support this contention? Especially for the better students, who are already frustrated with the lack of solid content in many courses, less can only be less.

In addition, a number of reports indicate that music and drama programs may be harmed when each class only lasts half the year. (Your top choir could only perform for Christmas or graduation, not both.) The actual impact, of course, will depend on the details of the program being implemented. Certainly the impact on these programs should be carefully evaluated before adopting a new schedule.

The North Carolina Report [index]

A large study involving several hundred North Carolina schools has been released by the Department of Public Instruction in North Carolina, an institution which has been aggressive in promoting block scheduling. While it does not appear to have been peer reviewed, this publication is quite instructive in understanding the block. The most recent version, including results up to 1999, is now at https://www.ncpublicschools.org/accountability/evaluation/evalbriefs/vol1n1block-.htm.

Before discussing the recent results, let me provide some background about the North Carolina effort. In 1996, the Department of Public Instruction in North Carolina (hereafter NC-DPI) prepared a Web page claiming that its analysis of block schedule results in North Carolina showed benefits to block scheduling. Advocates of block scheduling began using that Web page to suggest that there is hard evidence of success. The May 1996 version of the NC-DPI report compared end-of-course (EOC) test results in schools with and without block scheduling. While the raw scores for both sets of schools were about the same, after statistical adjustments were made to account for differences between the schools in homework time, parental economic level, and the previous year's EOC results for the school ("starting point"), the block scheduling schools appeared significantly better than those on a traditional schedule. Thus, it would appear that block scheduling at least isn't all bad, and some advocates would say that this proves block scheduling is great for academics. However, as we now know, the celebration of pro-block advocates was premature.

In fact, the 1996 data actually contradicted the claims of success that were being made so vocally by B.S. supporters. The May 1997 Evaluation Brief from NC-DPI would conclude that the 1994-1996 data showed "few significant differences on EOC Test scores between students in block and nonblock scheduled schools" (see "Review of the Previous Studies" in the 1999 NC-DPI report). But after a questionable, unreviewed process of "statistical adjustment," NC-DPI would claim that the block showed an improvement in 1994 and 1995, followed by no difference in 1996.

Results from the 1999 version of the report are not as favorable for the block as they appeared in 1996. The general tone of the publication is that there is no significant difference between blocked and unblocked schools. And now there is an interesting admission about the effect of "adjustment" of the data:

"If EOC test scores for schools are compared based only on the type of schedule used, any difference found might actually be due to other factors. Thus, the "unadjusted" score comparisons of blocked and nonblocked schools, which showed that blocked groups scored significantly lower than nonblocked groups in most subjects, are not discussed here."
That's right - without the proper "adjustment," the block results look worse than the non-block results. More on this later. Note, though, that proponents of the block may overlook the fact that the raw data are unfavorable for the block. In fact, one pro-block site, The 4x4 High School, made this errant claim:
[The] North Carolina 4x4 Study of end-of-course exams finds no difference comparing raw scores of semester block and non-block schools.
This error is probably based on the early 1996 results. As of 1999, the raw data are definitely unfavorable for the block.

The methodology of the NC study was not properly documented. There is no information about the content and the design of the end-of-course tests. Were the tests designed to only cover material that could be covered in the "less-is-more" block schools? This may be the case, for the nature of the tests is said to have changed in the same year that North Carolina schools began switching to the block. If so, the tests would not reflect the "opportunity loss" suffered by the students.

The analysis appears to have been done by the NC-DPI, which may not be an objective source for the evaluation of educational programs being implemented in the state. If the NC-DPI has been actively promoting block scheduling, then it is natural that they would want the results of the study to reflect block scheduling in the most positive possible light. Such desires can guide the way statistics are gathered and interpreted, even if the unnamed authors sincerely want to be objective.

At least for the 1996 report, an important problem was that the study included a classic example of the improper use of statistics. The data had been massaged or adjusted to control for three covariates: starting point, parent education level, and homework time. (The 1999 report does not mention homework time, so it may no longer be included - but this is unclear given the incomplete description of the adjustment process.) While the raw data do not show statistically significant benefits of block scheduling (and the 1999 raw data appear to show decreased scores under the block), the adjusted 1996 data showed significant gains for the block. The method of adjusting the data needs to be carefully scrutinized. The 1996 report noted that block schools have less homework time on average. The adjusted scores for block schools were thus increased to "control for" the lower homework time, though the magnitude of this adjustment is unclear. But decreased homework time is a classic result of block scheduling and is one of the reasons block scheduling is unlikely to improve actual learning. Academic performance and homework time are presumably linked, and block scheduling appears to cause homework time to drop. There is no justification for controlling for homework time: it is not an outside factor to be factored out, but part of the very mechanism by which the block affects learning. By eliminating the effect of homework time, much of the effect of block scheduling may have been eliminated, and the "adjusted" performance of block schools is inflated.
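
The statistical trap here - adjusting away a variable that is itself an effect of the treatment - can be illustrated with a small simulation. Everything below is hypothetical; it is a sketch of the general phenomenon, not a reconstruction of the NC-DPI model:

```python
# Illustration: if a treatment lowers scores *through* a mediator
# (here, homework time), "controlling for" the mediator hides the harm.
# All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
block = rng.integers(0, 2, n)                   # 0 = traditional, 1 = block

# Block scheduling reduces homework time; homework time raises scores.
homework = 5.0 - 1.5 * block + rng.normal(0, 1.0, n)
score = 50.0 + 2.0 * homework + rng.normal(0, 5.0, n)

# Raw comparison: the block effect comes through clearly (about -3 points).
raw_gap = score[block == 1].mean() - score[block == 0].mean()

# "Adjusted" comparison: regress score on block AND homework time.
X = np.column_stack([np.ones(n), block, homework])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)

print(f"raw block-vs-traditional gap:        {raw_gap:+.2f}")
print(f"block coefficient after adjustment:  {coef[1]:+.2f}")
# The adjusted coefficient is near zero: the harm flowing through
# reduced homework time has been "controlled" out of existence.
```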

It appears that homework time is no longer part of the adjustment process in the 1999 report, which may be why the block now looks less advantageous, even though the remaining adjustments still work to reverse the unfavorable conclusions one might draw from the raw data.

Inadequate information is given about the schools themselves and about how individual schools fared after switching to the block. Independent analysis of the raw data is needed to see if the NC-DPI's analysis is valid, or if it disguises important trends.

The NC-DPI Web page is not a peer-reviewed publication. The use of statistical adjustments in the data raises serious questions about the claim of academic benefit. The lack of information on the tests themselves is a serious concern. Further, the statistical issue of power is not addressed: if block scheduling really did cause a difference in raw scores, what is the probability that it would be observed given the natural variability in the results?
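The power question can be made concrete with a short simulation. Every number below is hypothetical - a 2-point deficit in school-mean scores, a between-school standard deviation of 8 points, 60 schools per group - so this is a sketch of the method, not a claim about the actual NC data:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

# Hypothetical numbers, not NC data: suppose the block really costs
# 2 points on school-mean EOC scores, with a between-school standard
# deviation of 8 points, and 60 schools are compared per group.
n_schools, true_deficit, sd = 60, 2.0, 8.0

trials, hits = 5000, 0
for _ in range(trials):
    blocked = rng.normal(70.0 - true_deficit, sd, n_schools)
    nonblocked = rng.normal(70.0, sd, n_schools)
    # Count the simulated studies in which the real deficit shows up
    # as "statistically significant" at the usual 0.05 level.
    if ttest_ind(blocked, nonblocked).pvalue < 0.05:
        hits += 1

print("power = %.2f" % (hits / trials))
```

With numbers like these the power comes out well under 50%, so "no significant difference" would be the expected verdict even if the harm were real. A study that reports no difference without reporting its power has told us very little.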

A parent in North Carolina sent me the following note in Dec. 1997:

You should be aware that the North Carolina DPI no longer adjusts for 'lack of Homework' as they did in their MAY 1996 release. A parents' committee in Marshalltown, Iowa has been fighting the Block, apparently successfully. They talked to the head psychologist who is doing the evaluation of North Carolina's massive push to the block (over 50% of high schools in NC switched to block scheduling in the last 4 years). Their unadjusted EOC results now show a slight advantage for non-blocked schools. The MAY 1997 evaluation (which I have a copy of) continues to adjust for parental education levels and previous year's EOC results, but NOT for homework (they must have received too much flak on that), which results in a very slight advantage to block schools. The NC DPI has now changed their official position to 'NEUTRAL' on block scheduling, as a result of the current EOC test results. The EOC tests were changed in 1993, the year they started pushing the Block, though they deny any attempt to modify the tests to fit what can be taught in a blocked school.
I look forward to a more thorough review of the North Carolina school system and the effect of block scheduling. I still would like to see a peer-reviewed study showing clear academic gain caused by block scheduling across a large number of students.

In conclusion, school districts should make sure that new programs have been proven effective in objective, controlled studies before they throw them at our students. Block scheduling might be an interesting experiment for school officials, but students don't have a time machine to go back and try again when the experiment fails. I challenge school districts to show with controlled, longitudinal studies that block scheduling is better than normal scheduling for educating students.

Other Studies Claiming No Harm from the Block [index]

One of the biggest challenges in studies of human response to any factor is the wide variability that can occur due to numerous other factors. In education, as in many other fields, natural variability can make it difficult to fairly assess the effect of a treatment. A large sample size can help, but does not guarantee meaningful conclusions. When the sample size is small, it's entirely possible for the apparent results to have little relationship to reality, as the simulation below illustrates. Small sample sizes and weak experimental designs have plagued a number of recent studies on the block.
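A quick simulation shows how noisy small groups are. The numbers below are purely illustrative - a test with mean 70 and standard deviation 10, and two groups of 20 students drawn from the very same population:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative numbers only: draw two subgroups of 20 students from the
# SAME population (no difference in instruction at all) and see how far
# apart their mean scores land purely by chance.
gaps = []
for _ in range(10000):
    a = rng.normal(70.0, 10.0, 20)   # e.g., 20 "block" students
    b = rng.normal(70.0, 10.0, 20)   # e.g., 20 "traditional" students
    gaps.append(abs(a.mean() - b.mean()))

gaps = np.array(gaps)
print("median gap between identical groups: %.1f points" % np.median(gaps))
print("chance of a gap of 5+ points: %.1f%%" % (100 * (gaps > 5).mean()))
```

Even with no real difference whatsoever, groups of this size routinely land a couple of points apart, and gaps of five or more points turn up in roughly one comparison in ten. Studies built on such small subgroups cannot distinguish a scheduling effect from ordinary noise.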

Education Policy Analysis Archives, an online peer-reviewed education journal, has featured a couple of articles on the block where sample size may be an important issue. For example, Charles W. Lewis et al. published "The Effects of Full and Alternative Day Block Scheduling on Language Arts and Science Achievement in a Junior High School" in Education Policy Analysis Archives, Vol. 11, No. 41, November 11, 2003. This study appears to provide impressive support for block scheduling. The abstract follows:

The effects of a full (4 X 4) block scheduling program and an alternate day (AB) block scheduling program in a junior high school were under investigation in this study through the use of an ex post facto, matched sampling design. Measures investigated were standardized achievement tests in science and language arts. Both forms of block scheduling had been in place for several years, and one teacher in science and one teacher in language arts had taught students under both forms of scheduling. Because the sampling designs and analyses were different for the science and the language arts areas, two studies are reported here--each examining the effects of 4 x 4, AB, and traditional scheduling with attribute variables of gender and student skill levels in each analysis. Results consistently show students in both forms of block scheduling outperforming students in traditional scheduling, and that AB block scheduling has the largest positive impact on low-achieving students. [emphasis mine]
The study sounds like a boon for block scheduling advocates, until you read the details and realize that the authors are comparing block students from a single teacher in a single school with allegedly similar students in another single school in the same town. In some cases, conclusions are drawn by comparing two subgroups of about 20 students each. Have they shown that the block improved education, or just demonstrated that there was a good teacher somewhere in Colorado teaching under the block? Unfortunately, in spite of the heavy statistical language in the publication, the microscopic scope of the work and the limiting assumptions that had to be made give it no more than anecdotal value in addressing the efficacy of the block.

Another example comes from R. Brian Cobb, Stacey Abate and Dennis Baker, who published "Effects on Students of a 4 X 4 Junior High School Block Scheduling Program" in Education Policy Analysis Archives, Vol. 7, No. 3, February 8, 1999, available online. The authors reported that the effect of the block on junior high school students was "generally positive." The study looked at a few hundred students from several schools in one quadrant of one city. Results drawn from such a limited pool of subjects cannot be relied on for general conclusions. The authors note that "Very little that is definitive can be inferred from this study. As mentioned earlier, its most positive contributions would be that it begins to fill a significant void in the middle/junior high school literature on effects of block scheduling."

Neither of the two above-mentioned studies cited the works of Bateson. Unfortunately, a great deal of the literature on the block seems to be unaware of the most significant and largest scale longitudinal, scientific studies that have been attempted on the subject, just as much of the literature on educational methods seems utterly unaware of the world's largest long-term study on the topic, the study known as Project Follow Through. Ignorance of the literature seems pandemic in many aspects of modern educational "studies."



Part 1 (Main Page) | Part 2 (This Page) | Part 3 (Next Page) | Part 4 | Part 5 (with links)



Curator: Jeff Lindsay
Last Updated: April 13, 2013

URL: "https://www.JeffLindsay.com/Block2.shtml"