Trend Reporting in International Education: WES Survey Results Can Mislead
By Anthony O’Donnell and Aleksandar Popovski
In its recent report, “Not All International Students Are the Same: Understanding Segments, Mapping Behavior” (“the WES Report”), World Education Services (WES) proposes a model for segmenting international applicant pools by students’ financial resources and levels of academic preparedness.
The WES Report [covered here on The PIE News] is based on survey responses collected from 1,600 international students during their applications for credential evaluation, one of the services WES offers.
Although 2,500 students started the survey, 36% “dropped out when asked about their experience with agents,” leaving the 1,600 responses used for the report. These details point to three common statistical biases that WES acknowledges only briefly in its “Data Limitations” section, and that render most of the report’s claims statistically invalid.
The first limitation is sampling bias. Ideally, we would like reports such as the WES Report to guide substantive policy decisions. For that, the data must be gathered by random sampling, the basic statistical requirement for generalization. Data collected by random sampling should be representative of the population under study; it should not over-represent, under-represent, or otherwise distort the “real” population in question.
WES does not draw randomly from the population of international students; it draws from its own clients. With these data, we cannot answer questions about prospective international applicants in general, only about prospective applicants who also use WES services. No reliable statistical inference about the overall population of international students can be made from them.
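To make the mechanism concrete, here is a minimal simulation in Python. Every number in it is invented purely for illustration; nothing here is drawn from the WES data. It simply assumes a population in which 40% of students use agents and agent users are less likely to become WES clients.

```python
import random

random.seed(0)

# Hypothetical population: 100,000 prospective students, 40% of whom
# use agents. Both figures are invented purely for illustration.
population = [{"uses_agent": random.random() < 0.40} for _ in range(100_000)]

# Invented assumption: agent users are less likely to become WES clients.
def is_wes_client(student):
    p = 0.10 if student["uses_agent"] else 0.30
    return random.random() < p

wes_clients = [s for s in population if is_wes_client(s)]

def agent_rate(group):
    return sum(s["uses_agent"] for s in group) / len(group)

print(f"True agent-use rate in the population: {agent_rate(population):.1%}")
print(f"Rate estimated from WES clients only:  {agent_rate(wes_clients):.1%}")
# The convenience sample lands near 18%, far below the true 40%,
# even though every individual response was honest.
```

The estimate is wrong not because anyone lied, but because the sampled group differs systematically from the population of interest.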
Another limitation is self-selection bias, which arises when some respondents are more likely to take a survey than others. Because the survey was conducted in English, only students with a working knowledge of English could have taken it. Hence, the group that volunteered for the survey is not representative of all prospective students.
Students with low English proficiency may thus have been excluded from the analysis simply because they could not complete an English-language survey. Yet these are precisely the students who most need help navigating the complicated U.S. university application process, and who are the most likely to use agent services.
The third limitation is missing data. Given the missing data, WES’ finding that one-sixth of respondents used agents cannot support the conclusion that “the use of agents might not be as widespread as previously indicated.” As noted above, 36% of respondents, or 900 students, dropped out of the survey when asked about their experience with agents; WES itself notes that they may have dropped out because they “perceived agent-related questions to be sensitive.”
Missing data would be harmless here only if roughly one-sixth of the 900 students who dropped out had also used an agent. In reality, anywhere from 0 to all 900 of them could have used agents, which means that of the 2,500 original respondents, anywhere from roughly 11% to 47% could have used agents. This, coupled with the possible exclusion of weak English speakers from the survey (self-selection bias), casts serious doubt on WES’ claim that agents are sparingly used by international students.
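The arithmetic behind those bounds is easy to check. The short Python sketch below reconstructs the counts from the fractions stated in the report; the exact figure behind “one-sixth” is not published, so the counts are approximations.

```python
started   = 2_500                 # students who began the survey
completed = 1_600                 # responses used in the WES Report
dropped   = started - completed   # 900 dropped at the agent questions

# "One-sixth" of respondents reported using an agent (approximate count).
agent_users_known = round(completed / 6)   # ~267

low  = agent_users_known / started               # if no dropout used an agent
high = (agent_users_known + dropped) / started   # if every dropout did

print(f"Agent use among all {started:,} students: {low:.1%} to {high:.1%}")
# -> 10.7% to 46.7%
```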
Having addressed the statistical issues in the WES Report, we turn to the claim that 62% of agent users “are not fully prepared to tackle the academic challenges of an (sic) U.S. education.” This claim, too, is valid only under random sampling. Without it, we cannot conclude that academically prepared students are proportionately more common among non-agent users than among agent users. Missing data compounds the problem: the 900 students who abandoned the survey may have included a large proportion of highly qualified agent users. Given these compounding biases, the claim should be discarded.
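To see how fragile the 62% figure is, consider one extreme but admissible scenario, again with counts reconstructed from the report’s stated fractions (our approximation, not WES’ published numbers):

```python
completed = 1_600
dropped   = 900

agent_users = round(completed / 6)        # ~267 respondents reported agent use
unprepared  = round(0.62 * agent_users)   # ~166 judged "not fully prepared"

# The reported rate, computed only from students who answered:
print(f"Reported: {unprepared / agent_users:.0%} of agent users unprepared")

# Extreme scenario: every dropout was a well-prepared agent user.
print(f"If all {dropped} dropouts were prepared agent users: "
      f"{unprepared / (agent_users + dropped):.0%}")
# -> the rate falls from 62% to about 14%
```

Nothing says this scenario is true, but nothing in the survey design rules it out, which is exactly the problem.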
Finally, we should be cautious of any study that treats academic preparedness as an a priori property of students. Whether a student is prepared to tackle the academic challenges of a U.S. education may be more a function of the admissions and academic standards of particular universities and less a function of the academic quality of students: a good student for some universities may not be a good student for others. Given that U.S. HEIs span a wide spectrum of institutions, from community colleges to large research universities, students with widely varying levels of academic preparedness may each find a place somewhere.
WES presents an interesting strategy for international market analysis. Given the statistical deficiencies of the survey, however, the conclusions drawn from it are of limited value for policy making in international recruitment.
Aleksandar Popovski (popovski@binghamton.edu) is Assistant Dean of Admissions and Recruitment for the Graduate School at Binghamton University. Anthony O’Donnell (anthony.odonnell@binghamton.edu) is a graduate assistant and data analyst at Binghamton University.