Worth it? How to evaluate the effectiveness of academic admissions assessment tools
October 15, 2019
Anyone working in academic admissions knows how overwhelming it can be to sift through hundreds (or tens of thousands!) of applications looking for the very best. Academic admission assessment tools can simplify the process by ranking applicants against a defined set of desired characteristics. But these tools vary in quality, and not all are suitable for high-stakes admissions.
At best, poor-quality assessment tools are no better than a lottery: they give the impression of intelligently sorting applicants while potentially overlooking qualified prospects. At worst, they can amplify biases in the admissions process, hurting the very applicants admissions teams most want to help. To ensure that the best people are selected, and that neither applicants’ nor the admissions team’s time and effort is wasted, assessment tools need to be of sufficient quality and properly evaluated.
So how can you decide whether an academic admission assessment tool is of high quality? By considering the following.
Is it an assessment tool or a management tool?
Not all tools are meant to assess candidates; some simply help manage the applicant pool. If a tool doesn’t provide a way to differentiate between applicants (by producing a score, percentile, or ranking, for instance), then it isn’t a true admission assessment tool.
Is it reliable?
Let’s say you give a standardized test to four people. Archie does better than Betty, who does better than Veronica, who does better than Reginald. Then you give the test again. This time, it’s Veronica who does better than Reginald, who does better than Archie, who does better than Betty.
Those inconsistent results indicate that your assessment tool has low reliability — a serious problem. For life-changing decisions, like academic admissions, assessment tools need to have sufficient reliability. Surprisingly, many commonly used admission assessment tools don’t meet this requirement, meaning you’ll get different results from putting the same candidates through the same process.
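To put a number on this, psychometricians often compute a test-retest correlation: administer the tool twice and correlate the two sets of results. Here’s a minimal sketch in Python using a Spearman rank correlation, with entirely hypothetical scores chosen to mirror the rankings above:

```python
# A minimal sketch of test-retest reliability using a rank correlation.
# The scores below are hypothetical, invented purely for illustration.
from scipy.stats import spearmanr

# Scores from two administrations of the same test, same four candidates.
test_1 = {"Archie": 91, "Betty": 84, "Veronica": 77, "Reginald": 70}
test_2 = {"Archie": 74, "Betty": 69, "Veronica": 90, "Reginald": 82}

candidates = list(test_1)
rho, _ = spearmanr([test_1[c] for c in candidates],
                   [test_2[c] for c in candidates])

# A rho near +1 means the tool ranks candidates consistently;
# a value near 0 (or negative, as here) signals low reliability.
print(f"Test-retest rank correlation: {rho:.2f}")
```

In practice you’d want far more than four candidates, and you’d compare the result against a threshold appropriate for high-stakes decisions; reliability coefficients of 0.8 or higher are often cited as a benchmark.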
Does it have predictive power?
If you have a reliable selection tool, the next step is to assess whether it predicts some future performance of interest. A tool should predict outcomes relevant to your program’s and/or institution’s goals. A good assessment tool will show a meaningful correlation between the tool score (the “predictor” variable) and some future “outcome” variable. This is known as “predictive validity.”
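As a concrete illustration, here’s a minimal sketch of how you might check this with a Pearson correlation; the tool scores and first-year GPA figures are hypothetical, and GPA is just one example of an outcome a program might care about:

```python
# A minimal sketch of checking predictive validity: correlate the tool's
# score (the predictor) with a later outcome. All data are hypothetical.
from scipy.stats import pearsonr

tool_scores = [62, 71, 55, 88, 74, 90, 67, 80]              # admission tool scores
first_year_gpa = [2.9, 3.1, 2.6, 3.8, 3.0, 3.7, 2.8, 3.4]  # outcome of interest

r, p_value = pearsonr(tool_scores, first_year_gpa)

# The closer r is to 1, the better the tool predicts the outcome;
# values near 0 mean the tool tells you little about future performance.
print(f"Predictive validity (Pearson r): {r:.2f}, p = {p_value:.3f}")
```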
Is it acceptable to those using it?
You need to consider whether your institution and your applicants will be willing to use the tool. Evaluate whether it could be deemed excessively laborious, counter-intuitive, off-putting, or off-brand.
Is it fair?
The academic admission assessment tool you choose needs to fairly accommodate all people: applicants with disabilities, from far-flung locations, of different racial and cultural identities, from less affluent backgrounds, and so on. Some bias already exists within most selection processes, so it would be difficult to accept a tool that exacerbates the problem. If a tool makes these biases worse, continue your search.
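One way to quantify fairness, borrowed from employment selection, is the “four-fifths rule”: if any group’s selection rate falls below 80% of the highest group’s rate, the tool may be having an adverse impact. A minimal sketch with hypothetical counts:

```python
# A minimal sketch of an adverse-impact check using the "four-fifths rule"
# from employment selection. All counts below are hypothetical.

groups = {
    # group: (applicants, admitted)
    "Group A": (400, 120),
    "Group B": (300, 54),
}

rates = {g: admitted / applicants for g, (applicants, admitted) in groups.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} ({flag})")
```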
Is there a significant coaching effect?
Popular admissions assessment tools tend to spawn a cottage industry of test preparation companies that coach applicants on how to take the test; consider the wealth of SAT prep services, for example. If only affluent applicants have the means to use these services, and if the services are highly effective, then the playing field isn’t level for all candidates. Look for tests with negligible coaching effects, or for ways that the effects can be mitigated.
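If you want to estimate a coaching effect yourself, one simple approach is to compare coached and uncoached test-takers using a standardized effect size such as Cohen’s d. A minimal sketch with hypothetical scores:

```python
# A minimal sketch of estimating a coaching effect: compare mean scores of
# coached vs. uncoached test-takers using Cohen's d. Data are hypothetical.
import statistics

coached = [78, 85, 81, 90, 76, 88, 83]
uncoached = [72, 70, 79, 68, 75, 71, 74]

mean_c, mean_u = statistics.mean(coached), statistics.mean(uncoached)
sd_c, sd_u = statistics.stdev(coached), statistics.stdev(uncoached)
n_c, n_u = len(coached), len(uncoached)

# Pooled standard deviation across the two groups.
pooled_sd = (((n_c - 1) * sd_c**2 + (n_u - 1) * sd_u**2) / (n_c + n_u - 2)) ** 0.5

d = (mean_c - mean_u) / pooled_sd
# Rough convention: d around 0.2 is small, 0.5 medium, 0.8+ large.
print(f"Coaching effect (Cohen's d): {d:.2f}")
```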
Coming up:
In future posts, we’ll dive deeper into terms like reliability and predictive power, and then review how current admissions tools (interviews, personal statements, reference letters, etc.) stack up against these criteria.