How should we understand academic disadvantage in PISA assessments in the Latin American region?

3 May 2017 | 18:33


Children playing. Photo: Juan Pedro Diez.

 

The Latin America and Caribbean region is still falling behind the advanced economies in educational outcomes. Ten countries of the region participated in the sixth round of the Programme for International Student Assessment (PISA), which assessed scientific, mathematical and reading literacy: Argentina, Brazil, Chile, Colombia, Costa Rica, the Dominican Republic, Mexico, Peru, Trinidad and Tobago, and Uruguay. Additionally, five cities were included: four from Colombia (Bogotá, Manizales, Medellín and Cali) and the city of Buenos Aires, Argentina, whose national results are not included due to changes reported in the national sample.

 

Globally, the top performers are a diverse group of countries that differ in the institutional settings of their educational systems, ranging from strictly academic selection of students in Singapore and Japan to geography-based selection in Estonia, Finland and Canada. Despite the region's generally lower position in the international benchmark and its higher levels of socioeconomic inequality, some remarkable achievements stand out: Brazil's school system has the highest level of equity in the region, in terms of both social background and gender, and Peru, Colombia, Trinidad and Tobago, and Uruguay all improved their overall performance. The city of Buenos Aires, which gained 50 points (half a standard deviation) between 2012 and 2015, is a special case: reports suggest its results are associated with improvements in teacher training and labour conditions since 2010, in the direction of developing a professional community. However, it is worth asking how we can understand the differences in academic performance between school systems, with regard to the learning standards set by international organizations, the average performance of developed countries and, particularly, the higher-performing countries. To answer this question, we need a policy approach that links results to the local, contextual and cultural boundaries within which each school system operates. I distinguish three different approaches.

 

 

Differences in performance as qualitative differences in skills

One of the key issues in the use of large-scale assessments to inform policy is being able to translate their results into a meaningful interpretation of the learning levels that students within school systems have attained. The PISA assessment, whose scores are scaled to have a mean of 500 and a standard deviation of 100, defines six proficiency levels that divide the score distribution into specific bands of knowledge and abilities for each subject. Level 2 is defined as the baseline level of proficiency in all subjects, which helps to understand the learning levels that the 15-year-old student population has attained in each country.

 

 

In this respect, most Latin American students' science and reading literacy is placed at Level 2, the baseline level, with the exception of Brazil, Peru and the Dominican Republic, which fall below it. In mathematics, only Chile and the city of Buenos Aires (Argentina) have reached the baseline, whereas all the rest remain below it. Although its aim is to inform education policy decision-making at the system level within countries, the actual score only indicates a relative position above, at or below a certain level of skill. Empirically, the proficiency levels are defined in each assessment wave by linking skills to cut-off points that demarcate skill levels, so that they can be easily understood by all actors involved. However, as a policy-oriented assessment, the magnitude of PISA scores is not easily interpreted: policy-makers and related stakeholders in any school system are interested in the practical consequences of a given change in score, in real-life terms, or its associated effects.

 

 

Differences in performance as an arithmetic ratio

For policy audiences, even those relatively familiar with statistical concepts, measures of change in standard deviations or effect sizes may be only partially informative, or rather obscure, as indicators both for the design of actionable policies and, thereby, for orienting school communities in taking action for improvement and innovation.

 

 

One common way of interpreting PISA scores is to model academic performance as a linear function of a number of indicators, most commonly time (i.e., years). The scores' rate of change over a certain period is certainly informative for understanding the main performance trends among countries (OECD 2016a: 85). However, it could be misleading, if not plainly wrong, to extrapolate this indicator in order to assess the region's performance and how far it lies, in years, from the OECD average or from higher-performing countries. For example, Colombia and Trinidad and Tobago would require around 30 years to reach the OECD average, as the IADB reports. This metric harbours a definition of improvement based on a simplistic logic. On the one hand, it represents learning in a linear fashion, ignoring the fact that PISA assesses not only learning acquired in the previous year but cumulative learning since birth. Actual scores therefore also reflect the progression in skills acquisition of those students who fell off track in their academic development over the schooling years. This is also a weakness of the proficiency levels (Willms 2001). On the other hand, it implies that the rate of score change between assessments can be isolated from context, by replicating strategies, in order to reach the level of the OECD or of high-performing countries (Harris et al. 2015).
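The simplistic logic criticized above can be made explicit in a few lines. The sketch below shows the naive linear projection in question; the scores and annual rate are hypothetical round numbers chosen only to reproduce the order of magnitude of the 30-year figure, not values taken from the PISA or IADB reports.

```python
# Illustrative sketch of the naive linear extrapolation of PISA scores.
# All numbers below are hypothetical, for illustration only.

def years_to_reach(current_score: float, target_score: float,
                   points_per_year: float) -> float:
    """Naive linear projection: years needed to close a score gap,
    assuming a constant annual rate of improvement."""
    if points_per_year <= 0:
        raise ValueError("a non-positive rate never closes the gap")
    return (target_score - current_score) / points_per_year

# A system 75 points below a reference average of 490, improving at
# ~2.5 points per year, "needs" 30 years under this linear logic.
gap_years = years_to_reach(current_score=415, target_score=490,
                           points_per_year=2.5)
print(round(gap_years))  # 30
```

The projection assumes the rate of change is constant and context-free, which is precisely the assumption the paragraph above disputes.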

 

 

 

Difference in performance as years of schooling

An alternative and more sophisticated approach, effectively adopted by the OECD and other international organizations (e.g., the IADB), is the so-called "years of schooling" metric, which establishes that around 33 PISA points equal approximately one year of instruction (Willms 2004; OECD 2016a: 37). The definition is appealing because it renders a less abstract measure than the more statistical approach or the rate of score change per wave. For example, Latin American countries sit between two and three years of schooling below the OECD average, with the exception of the cities of Buenos Aires and Bogotá, which are placed one year of schooling below (in this case, equating to 30 points). Thus, besides the differences in the equalisation of the measures (a problem in itself), it could arguably be proposed that extra schooling hours may help close the gap, which is misleading. PISA reports do not present information about how this indicator was constructed. There is a short reference to the relative grade, or grade effect, which indicates the difference in score between 15-year-old students who are above or below the modal grade, net of selection effects and contextual factors, but no explicit link between the two indicators; the grade effect was equal to 41 points for OECD countries in the previous assessment. One problem is that this indicator does not represent academic progress between grades, or academic advantage, because, apart from the fact that different students were assessed, the contents of PISA are not curriculum-based, so scores cannot be interpreted solely as learning acquired in the preceding year.
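The conversion behind this metric is a simple ratio. The sketch below applies the roughly 33-points-per-year equivalence described above; the example score and reference average are hypothetical values chosen for illustration.

```python
# Sketch of the "years of schooling" conversion (~33 PISA points per
# year of instruction, per the equivalence cited in the text).
POINTS_PER_SCHOOL_YEAR = 33.0

def gap_in_school_years(score: float, reference: float) -> float:
    """Express a score gap against a reference (e.g., the OECD
    average) as equivalent years of schooling."""
    return (reference - score) / POINTS_PER_SCHOOL_YEAR

# Hypothetical example: a system scoring 410 against a reference of
# 490 sits about 2.4 "years of schooling" behind.
print(round(gap_in_school_years(410, 490), 1))  # 2.4
```

Note that the division carries all the caveats raised above: it treats a cumulative, non-curricular score difference as if it were produced by a fixed amount of instruction.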

 

 

Hence, we should revisit the source. Willms (2001) proposed an innovative metric to interpret the magnitude of change in PISA scores for Canada. It describes how much schooling is associated with PISA scores, based on the 12 OECD countries for which it was possible to identify students' precise date of birth and thus determine their likely grade at age 15 (that is, in the northern-hemisphere school year), including the Canadian provinces. Data from Switzerland, France and the United Kingdom were excluded due to identification problems. The grade effect was then calculated as the difference in predicted PISA reading score between students who had reached grade 9 or grade 10 on schedule, excluding those who had repeated a grade, yielding an average of 34.3 points. By taking the average school year in hours for PISA students, based on a specific number of hours in the school day, the author calculated that each PISA point equates to nearly five schooling days. Thus, any PISA score difference can be translated using this metric. However, Willms stresses that, once again, these metrics should not be interpreted too literally, as they are based on an assessment of cumulative learning and skills developed over students' lives, and learning occurs outside schools as much as inside them. One can also wonder to what extent these calculations reflect the relative disadvantage, in terms of years of schooling, of countries other than those considered in the calculations, and particularly of countries placed at the bottom of the performance distribution, which include all the countries of the Latin American region.
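Willms's days-per-point arithmetic can be roughly reconstructed from the figures above. The 34.3-point grade effect comes from the text; the 180-day school year is an assumed figure for illustration (Willms actually worked from instructional hours, which vary by country), so the result should be read only as showing how the "nearly five days" order of magnitude arises.

```python
# Rough reconstruction of the days-per-PISA-point calculation.
# The grade effect (34.3 reading points per grade-year) is from the
# text; the 180-day school year is an assumption for illustration.
GRADE_EFFECT_POINTS = 34.3   # one grade-year of schooling, in points
SCHOOL_DAYS_PER_YEAR = 180   # assumed; varies by country

days_per_point = SCHOOL_DAYS_PER_YEAR / GRADE_EFFECT_POINTS
print(round(days_per_point, 1))  # 5.2, close to "nearly five days"
```

Under a shorter assumed school year the figure falls closer to five days per point, which is consistent with the hedged phrasing in the original calculation.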

 

 

Final remarks

Without doubt, PISA provides us with a rich and varied source of information at the national and international levels, alongside its huge influence in shaping education policy. However, the results, and all the knowledge we derive from PISA, depend on the social responses that intend to improve the learning of the student population within school systems, as embodied in the design of specific policies. In this regard, researchers, policy analysts and policy-makers should be cautious about work that may carry simplistic implications, such as replicating strategies from "high-performing school systems" as advice in the form of de-contextualized, empirically fragile packages of solutions (Harris et al. 2015). The development of adjusted indicators of relative improvement for countries and, in our case, the Latin American region requires both theoretically driven and applied research. To shape better educational outcomes, this research should evaluate the interplay between country-specific contextual factors and the characteristics of, and differences between, institutional settings.

 

 

 

Francisco Cerón Acevedo is a Ph.D. candidate at the Amsterdam Institute for Social Science Research (AISSR), University of Amsterdam. His Ph.D. research examines the social processes in the Chilean school system that generate inequality of educational opportunity over time. Previously, he was a researcher and research coordinator at the National Assessment Office of the Chilean Ministry of Education. He has taught undergraduate courses in data analysis and research design/methodology. He holds a master's degree in social policy (research) from the London School of Economics and Political Science and a master's degree in sociology from the Pontifical Catholic University of Chile.
