A Joint Position Statement by the Canadian Psychological Association and the Canadian Association of School Psychologists on the Canadian Press Coverage of the Province-wide Achievement Test Results

Marvin L. Simner, Ph.D.
Chair, Canadian Psychological Association,
Professional Affairs Committee Working Group on Test Misuse

 

Executive Summary | Background | Position Statement | Recommendations | References

Executive Summary

Each spring teachers throughout Canada are required to administer a series of provincially-mandated tests to the students in their classes. Approximately six months later the results are made available to the public. It is now common practice for the press to report the results, to rank the schools according to the results, and to invite the public to engage in a school-by-school comparison of the rankings. It is also common practice for the press, in commenting on the poor performances displayed by certain schools, to place the blame for these performances largely, if not solely, on the schools themselves.

Our concern over this practice is with the failure on the part of the press to acknowledge the many other factors aside from schooling that are known to influence test performance. These include, but are not limited to, family stability, parental involvement and expectations for student success in school, early and ongoing home stimulation, student motivation, student absenteeism, and student capacity for learning. Because students are not randomly assigned to schools, and because schools have little or no control over the majority of these factors, any attempt to place the blame for poor test performance on the schools alone, without giving proper consideration to each of these other factors, is problematic at best and misleading at worst. Hence, we believe that by taking such a narrow position on an issue as complex as the cause of students' test performances, the press is misinforming the public on a matter that is extremely important to the operation of schools and therefore to the education and well-being of children.

Our greatest concern, however, is over the possibility that this singular emphasis on the schools as the cause of students' test performances could generate considerable harm by placing unwarranted pressure on teachers, administrators, and ultimately on the students themselves to increase test scores or risk losing status within the community. Indeed, there is growing evidence that such harm is already occurring in the United States as a result of similar comments in the US press concerning the poor performance of certain schools on state-mandated tests.

With the hope of preventing similar situations from occurring in Canada, and in line with the view expressed by the Ontario Education Quality and Accountability Office, which is responsible for the development and scoring of the Ontario exams, it is our position that it is improper for the press to invite the public to compare schools based solely on the outcome of the mandated test results. We also recommend that, to avoid misleading the public, the press ensure in any future articles dealing with the results that the public is fully informed of the various factors, in addition to schooling, that are likely to account for differences that may exist among schools.

Background

Province-wide achievement testing of elementary and high school students is now the norm throughout Canada (Canadian Teachers' Federation, 1999). Although the nature of the tests and the grade levels tested vary from province to province, every province and territory either administers, or plans to administer, some form of standardized test to all of its students on an annual or biennial basis. While we share many of the concerns being voiced today over the appropriateness of such mandated system-wide testing (see, for example, Froese-Germain, 1999; Haladyna, Nolen, & Haas, 1991; Herman & Golan, 1993; Herman, Abedi, & Golan, 1994; Jones & Whitford, 1997; Lomax, West, Harmon, Viator, & Madaus, 1995; Madaus, 1985; Paris, Lawton, Turner, & Roth, 1991; Rotberg, 1995; Salganik, 1985), our immediate concern is over the harm that might result from the misleading statements that are now appearing in the press regarding the findings.

In recent years it has become common practice for the press to report the test scores for individual schools, to rank the schools according to the scores, and to invite the public to engage in a school-by-school comparison of the rankings. The purpose of this invitation was perhaps best expressed in the following passage from a feature article in The Calgary Herald by Peter Cowley, coauthor of the 1999 Report Card on Alberta’s High Schools compiled by the Fraser Institute.

Just as competition in sports improves the caliber of play, competition in academics has the potential to improve every school’s performance. That’s why the Report Card on Alberta’s High Schools is specifically designed to make school-by-school comparisons easier.

Of course, the results will mean different things to different people, but everyone—parents, students, teachers and counsellors, administrators, and taxpayers—can put it to good use. For parents choosing a school, the Report Card provides information that can help them make an informed decision. Should parents make a decision solely on the basis of this Report Card? Of course not. But, when its contents are used with information from a variety of other sources, parents can get a clearer picture of what each school will deliver. Informed decisions are better decisions.

Because the Report Card identifies successful schools, it serves as a guidebook to best practices … Such public acknowledgement of a school’s success is a powerful incentive for all schools to do better (Cowley, 1999).

A similar message appeared in the Vancouver newspaper The Province on March 24, 1999, as part of a three-day feature story on the rankings of the British Columbia high schools (also prepared by the Fraser Institute). An even more forceful rendition of this message appeared in a lead editorial in The London Free Press on November 27, 1999.

If it takes a public pillorying by way of poor test results to force schools to admit their failings, and to correct them, then let the school day be filled with exams.

The London Free Press is releasing today school-by-school test results for Grades 3 and 6 students in the Thames Valley District school board and London District Catholic school board. They are results many school boards and teachers would prefer the public not see.

Mushy liberal sensitivities about low scores causing hurt feelings and leading to the stereotyping of schools as good or bad were a longstanding impediment to province wide testing. While those bleeding hearts were in the right place, their legacy has been to institutionalize underperformance at some schools. Having no standard against which excellence can be measured has even perpetuated the myth that a school is a school is a school, that all schools are equal. Savvy parents know better—they move to be near better schools.

This no-publicity reasoning serves but one master—school officials who can escape their responsibility to students in underperforming schools.

Thus, according to these statements, it would seem that the schools are largely responsible for their students' test performances and that, unless the schools with the poorest-performing students are forced through competition to provide a quality education, their students will continue to receive only a mediocre learning experience. Our concern over this material is with the failure on the part of the authors to acknowledge the many other factors aside from schooling that are known to influence test performance. These include, but are not limited to, family stability, parental involvement and expectations for student success in school, early and ongoing home stimulation, student motivation, student absenteeism, and student capacity for learning (Christenson, Rounds, & Gorney, 1992; Grolnick & Ryan, 1989; Jeynes, 1999; Keith, Keith, Quirk, Sperduto, Santillo, & Killings, 1998; Lytton & Pyryt, 1998; McLean, 1997; Scarborough & Dobrich, 1994). Because students are not randomly assigned to schools, and because schools have little or no control over the majority of these factors, any attempt to place the blame for poor test performance on the schools alone, without giving proper consideration to each of these other factors, is problematic at best and misleading at worst. Hence, we believe that by taking such a narrow position on an issue as complex as the cause of students' test performances, the press is misinforming the public on a matter that is extremely important to the operation of schools and therefore to the education and well-being of children.

The same can be said of statements appearing in articles that comment on the year-to-year changes in test scores obtained by individual schools. The Globe and Mail, for example, highlighted a single school in the Toronto Catholic School District that reported a 48% gain in reading performance and attributed this gain solely to "a concerted effort by the entire school to improve" (Galt, 1999). Similarly, Reader's Digest, in an article entitled "When teachers sell kids short" (Nikiforuk, 1997), attributed the gains made by one school in the North York district of Ontario to an "action plan [that] included more direct and sequenced teaching of phonics, spelling and grammar." By singling out specific schools that showed considerable improvement from one year to the next, and by attributing this improvement only to efforts undertaken by the schools themselves, such articles imply that other schools that failed to show the same level of improvement are at fault for not engaging in similar efforts. The problem here too is that any gains or losses over time displayed by individual schools are difficult if not impossible to interpret without access to further information. The following example from The London Free Press illustrates the nature of this problem.

In the lead editorial mentioned above, the Free Press claimed that 12 schools in the London and District Catholic School System "leapfrogged ahead" between 1998 and 1999 as a result of a "SWAT team of roving teacher specialists." If the editorial writer had considered the actual data, as well as some demographic findings available from the Catholic Board on these 12 schools, a somewhat different picture would have emerged. Take reading, for instance. Aside from the fact that, according to the data, only seven of the 12 schools made gains large enough to be attributed to something other than chance, close inspection of the demographic findings revealed that these seven successful schools may have differed in important ways from the other five schools. In particular, among the successful schools there was a decrease between 1998 and 1999 in the average class size per school, whereas the opposite was true among the other schools. Thus it could be that, owing to a lower pupil-to-teacher ratio, the teachers in the successful schools had more time to spend with their students. Furthermore, in 1999 there were fewer males in the seven successful schools and more males in the other five schools. Also, between 1998 and 1999 there was an average drop of 12% in the number of children for whom English is a second language (ESL) in five of the seven successful schools, whereas in the five other schools this drop was less than 2%. We mention these last two points because males as well as non-native English speakers typically perform more poorly on reading exams than do females and native English speakers. In essence, this combination of smaller class sizes, fewer males, and fewer ESL children among the seven successful schools may have been sufficient to account for the differences between the two groups of schools, independently of the presence of the "SWAT team."
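To illustrate the first of these points, namely whether a year-over-year gain is larger than what chance variation alone could produce, the following sketch applies one common, if rough, check (a two-proportion z-test) to purely hypothetical cohort numbers. Neither the test nor the figures are drawn from the EQAO data or from the working group's own analysis; the point is only that the same percentage gain can be persuasive in a larger cohort yet inconclusive in a smaller one.

# A minimal sketch, with purely hypothetical numbers, of the kind of check the
# editorial skipped: is a school's year-over-year gain in the percentage of
# students reaching the provincial standard larger than chance variation in
# two small cohorts could produce?
from math import sqrt, erf

def two_proportion_p_value(successes_1, n_1, successes_2, n_2):
    """Two-sided p-value for the difference between two independent proportions."""
    p1, p2 = successes_1 / n_1, successes_2 / n_2
    pooled = (successes_1 + successes_2) / (n_1 + n_2)
    se = sqrt(pooled * (1 - pooled) * (1 / n_1 + 1 / n_2))
    z = (p2 - p1) / se
    # Convert |z| to a two-sided p-value using the standard normal distribution.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical school: 30 of 60 Grade 3 students at the standard in 1998
# versus 40 of 55 in 1999 -- roughly a 23-point "gain" in a sizeable cohort.
print(two_proportion_p_value(30, 60, 40, 55))  # about 0.01: unlikely to be chance alone

# A similar percentage gain in a much smaller cohort is far less conclusive.
print(two_proportion_p_value(10, 20, 14, 19))  # about 0.13: consistent with chance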

As a further illustration of why the Free Press should have gathered more information before reaching any conclusions, consider what happened to the province-wide scoring of the exams between 1998 and 1999. According to a memo dated November 15, 1999, from the Education Quality and Accountability Office (EQAO), which is responsible for the development and scoring of the Ontario exams, the scoring procedures used in 1998 differed from those used in 1999. In 1998 the exam markers employed a four-point scale (Levels 1, 2, 3, 4) to determine each student's achievement level, whereas in 1999 they used a seven-point scale (Levels 1, 1+, 2, 2+, 3, 3+, 4). The reason for this change was that the seven-point scale permitted greater latitude in scoring when there was some uncertainty over whether a student deserved, for instance, a score at Level 2 or 3. However, when EQAO calculated the final mark for each student in 1999, "the seven-point scale was collapsed into a four-point reporting scale (and) the levels assigned to students' work were determined by combining the marks in each of the 'plus' categories with those in the next higher level." In other words, whenever a "plus" appeared on a student's record in 1999, the student automatically received a final score that was rounded up to the next higher level. Because rounding up did not take place in 1998, and because some schools may have profited more than others from this procedure, it is difficult to know whether the gains displayed by certain schools in the London and District Catholic School System between 1998 and 1999 should be attributed to efforts undertaken by the schools alone (as stated in the editorial) or, instead, to an artifact of this change in the scoring procedure.
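The arithmetic behind this concern is easy to illustrate. The sketch below uses purely hypothetical marks for a single class, together with our own assumption (not EQAO's) that a marker restricted to the 1998 four-point scale would have placed borderline students at the lower of the two adjacent levels, to show how the 1999 rounding-up rule by itself could raise the proportion of students reported at Level 3 or above.

# A minimal sketch, using purely hypothetical marks, of how collapsing the
# 1999 seven-point scale by rounding every "plus" mark up to the next level
# could raise the share of students reported at Level 3 or above even if
# underlying achievement were unchanged.

# Hypothetical raw marks for one class under the 1999 seven-point scale.
marks_1999 = ["1", "1+", "2", "2+", "2+", "3", "3+", "4"]

# Rounding rule described in the EQAO memo: each "plus" mark is combined with
# the next higher level when the final four-point result is reported.
round_up = {"1": 1, "1+": 2, "2": 2, "2+": 3, "3": 3, "3+": 4, "4": 4}

# Assumed 1998-style marking: borderline ("plus") students are placed at the
# lower of the two adjacent levels (our assumption, for illustration only).
truncate = {"1": 1, "1+": 1, "2": 2, "2+": 2, "3": 3, "3+": 3, "4": 4}

def share_at_level_3_or_above(levels):
    """Proportion of students reported at Level 3 or 4."""
    return sum(1 for level in levels if level >= 3) / len(levels)

reported_1999 = [round_up[mark] for mark in marks_1999]
reported_1998_style = [truncate[mark] for mark in marks_1999]

print(share_at_level_3_or_above(reported_1999))        # 0.625 under the 1999 rule
print(share_at_level_3_or_above(reported_1998_style))  # 0.375 under the assumed 1998 rule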

In summary, just as it is problematic to determine the cause of differences between schools within a given year without further information, so too is it problematic to determine the cause of differences within a school from year to year. The RAND Institute on Education and Training had the following to say about this matter.

Comparisons of simple, unadjusted test scores from one year to the next or across different schools or districts [within a given year] do not provide a valid indicator of the performance of the teachers, schools, or school districts unless the differences in scores are very large compared to what might be accounted for by changing demographic or family characteristics. This is rarely the case; so, any use of unadjusted test scores to judge or reward teachers or schools will inevitably misjudge which teachers and schools are performing better (as cited in Rotberg, 1995, p. 1447).

Beyond this issue of test interpretation, however, our greatest concern is over the harm that might result from these misleading statements. As professional associations dedicated to the improvement of the mental health of children as well as adults, we are especially troubled by the possibility that this singular emphasis on the schools as the cause of students' test performances could place unwarranted pressure on teachers, administrators, and ultimately on the students themselves to increase test scores or risk losing status within the community. This possibility is clearly illustrated by the situation now unfolding in the United States. Consider Michigan, for example, where rankings similar to those now appearing in Canada have been reported in the press for many years.

In Michigan schools, the MEAP rules. Michigan Educational Assessment Program (MEAP) is the state’s 30-year-old school testing program. After years of media and political hype, MEAP has become a much-feared test of academic standards. Its impact is felt beyond the schools and throughout entire communities.

Despite perennial protests from Michigan educators, the annual list of numbers inevitably produces a horse race mentality that has evolved into a simple way to evaluate a community. MEAP scores from local schools are routinely used by real estate agents as a selling point for homes. "It’s the most-requested piece of information we give out," says Linda Miller of Dearborn, Mich., who prepares community profiles for real estate companies.

Lawyers have used MEAP scores in custody battles to argue children would be better off with a parent near a higher scoring school.

Many Michigan schools routinely hold MEAP pep rallies, weekend tutorials and summer courses, and offer rewards aimed at getting kids geared up to tackle the tests … The school puts MEAP questions on the plates of children at lunch hour and awards prizes for correct answers. The school holds weekend tutorials to teach test skills. During the tests, pupils are plied with treats, but the stress on them is obvious. Even though MEAP results never show up on report cards, bad scores are seen as [a] mark of shame (Daniszewski, 1999).

Consider also the following press reports, which deal with recent situations in New York, Texas, Massachusetts, Maryland, Ohio, and Connecticut. In each instance school personnel were indicted, suspended, or placed under investigation for improper behavior, and in each instance the responsible factor was the increased pressure on the schools that resulted from the rankings.

New York

School investigators charged Tuesday that dozens of teachers and two principals across New York City’s public school system had given students the answers on the standardized reading and mathematics test … The cheating, over five years, involved more students and more educators than any recent cheating case in American public schools, the investigators said. At some schools, teachers and principals let students mark their answers on scrap or notebook paper then told them which answers to correct when they filled in the bubbles in the official test booklets … At others, they directed students to erase incorrect answers, or even changed the answers themselves. The investigators said that in a particularly egregious case, Evelyn Hey, the principal of Public School 234 in the Bronx, not only told third graders to fix wrong answers, but gave them practice tests beforehand with questions from the actual exam. Students’ performance on the tests improved remarkably at many of the schools, and two were even removed from a state wide list of failing schools … At P.S. 234, reading scores jumped 22 percentage points over the years that Ms. Hey and other educators were using the cheating techniques, the report on the investigation said (Goodnough, 1999).

Texas

The county attorney in Austin, Tex., said Wednesday that indictments against the city’s school district and a top administrator for manipulating state wide assessment tests scores were a hard but necessary measure to prevent tampering. Ken Oden, the Travis County Attorney, said the indictments handed up by a grand jury on Tuesday were the first against a Texas school district … "The reality is that as we rely more on standardized testing and standardized ratings for schools there is greater and greater pressure and emphasis on obtaining the best possible rating for your school and school district," Oden said. In Austin, he said, district officials succumbed to that pressure last spring, when they scored the Texas Assessment of Academic Skills tests, which is used to rate the performance of both students and schools. The rankings provide a gauge for how schools are performing, and poor performance can be an embarrassment to a district, and a temptation to manipulate data … While no charges have been brought in other school districts in the state, problems have occurred. In the Houston Independent School District a teacher was fired for allegedly using an answer key to correct student forms … In the neighboring Fort Bend School District, a principal and teacher resigned in the midst of a test-tampering investigation (Whitaker, 1999).

Massachusetts

In the latest example of cheating on high-stakes standardized tests, administrators, teachers and students in at least 19 Massachusetts schools violated the rules when taking state wide exams last spring … Some educators returned students’ test booklets and urged them to revise their answers, others gave pupils two days instead of one to complete their essays, and one teacher circulated copies of test questions via e-mail to her colleagues at other schools, noting that what she was doing was "probably illegal" … [although these improprieties were not noted in the publication of schools’ scores, the scores themselves were still] used by real estate agents and others to promote certain neighborhoods (Wilgoren, 2000).

Maryland

The latest incident arose Wednesday at Montgomery County's Potomac Elementary School, where Principal Karen Karch resigned after angry parents alleged that students were pointed to the correct answers and helped in rewriting answers to essay questions. "We're all shocked and flabbergasted. What the school feels is disbelief," said Krysti Stein, Potomac Elementary's incoming PTA president … At a time when superintendents are under pressure to increase test scores and hold principals and teachers accountable for student achievement, talk of cheating dominates the conversation in education circles. "What we are seeing is what comes from the pressures of these high-stakes tests," said Vincent L. Ferrandino, executive director of the National Association of Elementary School Principals, who saw a similar scandal in Fairfield, Conn. (see below), while he was state education commissioner … Teachers decry the breaking of testing rules but say they understand how colleagues might behave irrationally to ensure good results on tests that could dictate whether and where they will teach … In the past two years alone, schools in New York, Texas, Florida, Ohio, Rhode Island, Kentucky and Maryland have investigated reports of improper or illegal efforts by teachers, principals and administrators to raise test scores. The rise in incidents of coaching has matched the upward surge of standardized testing, as states move to punish schools and students who fail new high-stakes achievement tests (Mathews and Argetsinger, 2000).

Ohio

An elementary school recently visited by President Clinton, held up as an example of the success of his administration's education policies, is now under fire following allegations by several students that staff members and tutors helped the students cheat on fourth-grade state proficiency tests. In addition, after reporting the student confessions to school officials, an award-winning teacher who blew the whistle now says her career may be in jeopardy after she rejected requests by administrators to recant her story … The allegations of cheating were reported in December by Eastgate fifth-grade teacher Barbara McCarroll … In an interview with the Columbus Dispatch, McCarroll said that the cheating was not unknown to school officials … But the state's investigation is unlikely to help McCarroll, who says the pressure from Eastgate administrators and teachers to recant her story prompted her to take disability leave in February. "It got so bad, I started bleeding internally," she said (during the interview). "I couldn't sleep; I couldn't eat. Every day, I went there and people wouldn't speak with me." McCarroll, who won the district's 1995 Good Apple teaching award, now faces the prospect that blowing the whistle on her colleagues might have brought her career to an end (Poole, 2000).

Connecticut

In Fairfield, Conn., an investigation by district officials into results on the Iowa Test of Basic Skills (ITBS) revealed that answers were five times more likely to be erased at Stratfield School than at other schools. Of the thousands of changes made at the school, the investigation found, about 90 percent turned wrong answers into correct ones. A follow-up investigation of older tests by a forensic scientist showed tests had been tampered with for years. When the kids were retested under tight security, the school's scores—usually among the highest in the state—slipped below those of other schools. The school's popular principal said he knew nothing of the cheating but retired soon after the scandal (Bushweller, 1997).

On a more personal note, the following quotation from a recent article in The New York Times by a fifth-grade teacher in California no doubt captures the dilemma that many educators in the United States now face as a result of these rankings.

I hate to admit it, but I no longer see the students the way I once did—certainly not in the same exuberant light as when I first started teaching five years ago. Where once there were "challenging" or "marginal" students, I am now beginning to see liabilities. Where once there was a student of "limited promise," there is now an inescapable deficit that all available efforts will only nominally effect.

The pressure is on. Any number of eyes will examine my test results this spring, not the least of them the public’s since in California, scores are posted on the Internet. No apologies or arguments about extenuating circumstances are going to shield me from the new state edict: Improve, or expect us at your doorstep (Hixson, 2000).

Additional commentary and evidence bearing on the harm that these rankings are causing in the United States can be found in articles by Lomax, West, Harmon, Viator, and Madaus (1995), Paris, Lawton, Turner, and Roth (1991), and Smith (1991). In summarizing much of this evidence, Rotberg (1995) had the following to say about holding educators accountable through test scores in order to bring about change in the educational system.

Testimony before the U.S. House of Representatives put it this way: "[Test-based accountability] has been tried many times over a period of centuries in numerous countries, and its track record is unimpressive … It was the linchpin of the educational reform movement of the 1980s, the failure of which provides much of the impetus for the current wave of reform … Holding people accountable for performance on tests tends to narrow the curriculum. It inflates test scores, leading to phony accountability. It can have pernicious effects on instruction, such as substitution of cramming for teaching … It can adversely affect students already at risk—for example, increasing the dropout rate and producing more egregious cramming for the tests in schools with large minority enrollments" (Rotberg, 1995, pp. 1446-1447).

Position Statement

In light of the situation that is now taking place in the United States, and with the hope of avoiding a similar situation in Canada, we are in full agreement with the following statements by the Ontario Education Quality and Accountability Office. The first statement is from a handbook intended for parents; the second statement is from a handbook intended for educators.

Should test results be used to compare schools?

How students performed in one assessment does not provide enough information to make accurate or responsible comparisons. Comparisons based on how students performed on one assessment, administered at one point in the year, could be misleading.

Moreover, numerical results alone do not necessarily provide the whole picture for any school. A school could have good results but its students may not be achieving their full potential. On the other hand, some schools, in pure numerical terms, may not have done so well as others, but their progress, measured over time and against specific targets, may in fact be substantial.

REMEMBER: Province-wide tests are not about passing or failing students, or about comparing schools. The primary purpose of the tests is to improve students' learning—to identify areas of strength and to address areas where improvement is needed (EQAO, 1998a, p. 19).

 

Reporting The Board Results

The results of these assessments should not be used to rank schools or boards, for that would entail reducing the results to a single score or number. With the comprehensive nature of the data provided to schools and boards, ranking would be misleading. Ranking does not contribute to the well-being of Ontario students and is therefore inconsistent with EQAO’s mandate and core values (EQAO, 1998b, p. 15).

In short, according to EQAO, the major purpose of the province-wide exams is to provide individual schools and boards with an opportunity to gauge their own year-to-year progress in meeting the standards established by the Ministry of Education. The exams are not intended to serve as yardsticks for measuring the performance of one school against another. Hence, in support of EQAO's position, we consider it improper for the press to invite the public to compare the performance of one school with that of another solely on the basis of the exam scores.

It is also our view that by reporting and interpreting the findings from these exams, the press itself becomes a user of educational test results. The press should therefore abide by the same ethical provisions that apply to other users of this material. These provisions appear in such documents as the Principles for Fair Student Assessment Practices for Education in Canada (1993), the Standards for Educational and Psychological Testing (AERA, APA, NCME, 1985, 1999), and the Guidelines for Educational and Psychological Testing (CPA, 1987). Among these provisions are at least two that apply directly to the present situation.

Test users should be alert to probable unintended consequences of test use and should attempt to avoid actions that have unintended negative consequences.

When test results are released … those responsible for releasing the results should provide information to help minimize the possibility of misinterpretation of the test results.

Recommendations

In keeping with these provisions we recommend that in any future articles that report or interpret the findings, the press should ensure that the public is fully informed of the various factors, in addition to schooling, that are likely to account for differences that may exist among schools. As mentioned above these include, but are not necessarily limited to, family stability, parental involvement and expectations for student success in school, early and ongoing home stimulation, student motivation, student absenteeism, and student capacity for learning.

As a further means of avoiding any misunderstanding on the part of the public, we also recommend that a note similar to the following appear in a prominent position alongside any school-by-school breakdown of the results.

Because children are not randomly assigned to schools, it is impossible to determine the cause of any differences in test results that occur between schools without access to further information. Although (name of the paper) publishes the following results as a service to the public, we strongly discourage the public from making unwarranted comparisons that may lead to erroneous conclusions.

In making these recommendations we also wish to make clear that it is not our intention to interfere with the freedom of the press. Nor is it our intention to discourage public debate over how best to educate children. We do feel, however, that the public is not well served when it is exposed to misleading information and unsupported speculation.

Acknowledgement

Appreciation is extended to the following members of the Working Group who offered many helpful comments on an earlier draft of this document: Sampo Paunonen, Nicholas Skinner, P. A. Vernon. Appreciation is also extended to Brian Reiser, Gary Rollman, and Phyllis Simner who, in addition to providing helpful comments, also supplied a number of key references.

References

AERA, APA, NCME. (1985). Standards for educational and psychological testing. Washington, DC: American Psychological Association.

AERA, APA, NCME. (1999). Standards for educational and psychological testing. Washington, DC: American Psychological Association.

Bushweller, K. (1997). Teaching to the test. The American School Board Journal, September, 20.

Canadian Teachers' Federation. (1999). Province-wide assessment programs. Available online at http://www.ctf-fce.ca/e/what/other/assessment/testing-main.htm.

Christenson, S. L., Rounds, T., & Gorney, D. (1992). Family factors and student achievement: An avenue to increase students' success. School Psychology Quarterly, 7, 178-206.

Cowley, P. (1999, June 20). Comparing schools made possible. Calgary Herald, p. A10.

CPA. (1987). Guidelines for educational and psychological testing. Ottawa, ON: Canadian Psychological Association.

Daniszewski, H. (1999, May 4). Upping the score. London Free Press, p. A10.

EQAO. (1998a). Parent handbook 1997-1998. Toronto, ON: Queen's Printer for Ontario.

EQAO. (1998b). Educators handbook. Toronto, ON: Queen's Printer for Ontario.

Froese-Germain, B. (1999). Standardized testing: Undermining equity in education. Ottawa, ON: Canadian Teachers’ Federation.

Galt, V. (1999, November 26). Scarborough school’s grade 3 posts sharp rise in test scores. The Globe and Mail.

Goodnough, A. (1999, December 9). New York city teachers nabbed in school-test cheating scandal. National Post, p. B1.

Grolnick, W. S., & Ryan, R. M. (1989). Parent styles associated with children’s self-regulation and competence in school. Journal of Educational Psychology, 81, 143-154.

Haladyna, T. M., Nolen, S. B., & Haas, N. S. (1991). Raising standardized achievement test scores and the origins of test score pollution. Educational Researcher, 20, 2-7.

Herman, J. L., Abedi, J., & Golan, S. (1994). Assessing the effects of standardized testing on schools. Educational and Psychological Measurement, 54, 471-482.

Herman, J. L., & Golan, S. (1993). The effects of standardized testing on teaching and schools. Educational Measurement: Issues and Practice, Winter, 20-42.

Hixson, B. K. (2000, January 25). How tests change a teacher. The New York Times.

Jeynes, W. H. (1999). The effects of children of divorce living with neither parent on the academic achievement of those children. Journal of Divorce & Remarriage, 30, 103-120.

Jones, K., & Whitford, B. L. (1997). Kentucky's conflicting reform principles: High-stakes school accountability and student performance assessment. Phi Delta Kappan, 79, 276-281.

Keith, T. Z., Keith, P. B., Quirk, K. J., Sperduto, J., Santillo, S., & Killings, S. (1998). Longitudinal effects of parent involvement on high school grades: Similarities and differences across gender and ethnic groups. Journal of School Psychology, 36, 335-363.

Lomax, R. G., West, M. M., Harmon, M. C., Viator, K. A., & Madaus, G. F. (1995). The impact of mandated standardized testing on minority students. Journal of Negro Education, 64, 171-185.

Lytton, H., & Pyryt, M. (1998). Predictors of achievement in basic skills: A Canadian effective schools study. Canadian Journal of Education, 23, 281-301.

Madaus, G. F. (1985). Test scores as administrative mechanisms in educational policy. Phi Delta Kappan, 66, 611-617.

McLean, R. (1997). Selected attitudinal factors related to students’ success in high school. The Alberta Journal of Educational Research, 43, 165-168.

Mathews, J., & Argetsinger, A. (2000, June 2). Teachers say pressure to do well on tests is enormous. Washington Post, pp. B1, A18.

Nikiforuk, A. (1997, April). When teachers sell kids short. Reader’s Digest, 119-120.

Paris, S. G., Lawton, T. A., Turner, J. C., & Roth, J. L. (1991). A developmental perspective on standardized achievement testing. Educational Researcher, 20, 12-20, 40.

Poole, P. (2000, May 31). Brave new schools: Clinton's model school accused of cheating. WorldNetDaily (online). Available: www.worldnetdaily.com/bluesky_exnews/2000531_xex_clintons_mod.shtml.

Principles for fair student assessment practices for education in Canada. (1993). Edmonton, AB: Joint Advisory Committee. (Mailing address: Joint Advisory Committee, Centre for Research in Applied Measurement and Evaluation, 3-104 Education Building North, University of Alberta, Edmonton, AB, T6G 2G5).

Rotberg, I. C. (1995). Myths about test score comparisons. Science, 270, 1446-1448.

Salganik, L. H. (1985). Why testing reforms are so popular and how they are changing education. Phi Delta Kappan, 66, 607-610.

Scarborough, H. S., & Dobrich, W. (1994). On the efficacy of reading to preschoolers. Developmental Review, 14, 245-302.

Smith, M. L. (1991). Put to the test: The effects of external testing on teachers. Educational Researcher, 20, 8-11.

Whitaker, B. (1999, April 8). Prosecutor says indictment of Austin schools will help deter test tampering. The New York Times.

Wilgoren, J. (2000, February 25). Cheating on state wide tests is reported in Massachusetts. The New York Times.


 

Approved by the Board of Directors, Canadian Psychological Association, April, 2000, and by the Executive Committee, Canadian Association of School Psychologists, July, 2000.

Copyright © 2000

Canadian Psychological Association
Société canadienne de psychologie

Permission is granted to copy this document.

Canadian Psychological Association
Société canadienne de psychologie
151 Slater St., Suite 205
Ottawa, Ontario
K1P 5H3

Title: A Joint Position Statement by the Canadian Psychological Association and the Canadian Association of School Psychologists on the Canadian Press Coverage of the Province-wide Achievement Test Results

ISBN # 1896638630
