The Good, the Bad, and the Ugly of AI and Admissions

Human + AI > AI

By Troy Lowry

I’ve talked in previous posts about AI in admissions, and here I will dive deeper. Many of the AI tools used in admissions today automatically read through applications and sort candidates into “accept” and “reject” piles. This is not unlike the process at many law schools where applicants are put into presumptive “admit” and “deny” piles largely based on their credentials.

This makes processing applications quicker and more efficient, but I fear it loses some of the “holistic” review that many institutions claim to provide and most applicants hope to have. And while it might remove some human bias, the way the AI is trained can also reinforce existing bias and even introduce new types of bias. Finally, the fact that we don’t really understand how AI makes its decisions means close human oversight is essential.

The Good

AI excels at speed, consistency, and finding ways in which things are alike.

For high-volume admission departments, speed is a vital factor in handling applications efficiently. Anything that makes that process faster, while still allowing for a full holistic view of each applicant, is highly valued. In fact, in a new survey reported by Inside Higher Ed, 50 percent of education admission offices say they are already using AI somewhere in their process.

AI can scan transcripts for minimum GPA thresholds or red flags in recommendations and alert reviewers to anything out of the ordinary. AI is also widely used to transcribe interviews with applicants.
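To give a feel for what this kind of flagging looks like, here is a minimal sketch. Real systems use trained language models rather than keyword lists, and the record format, threshold, and phrases below are entirely hypothetical, but the output is the same idea: a short list of reasons for a human reviewer to look closer.

```python
# Hypothetical record format and thresholds -- purely illustrative.
MIN_GPA = 3.0
RED_FLAG_PHRASES = ("do not recommend", "academic dishonesty")

def flag_for_review(record):
    """Return human-readable flags so a reviewer can look closer."""
    flags = []
    if record["gpa"] < MIN_GPA:
        flags.append(f"GPA {record['gpa']} below threshold {MIN_GPA}")
    rec_text = record["recommendation"].lower()
    for phrase in RED_FLAG_PHRASES:
        if phrase in rec_text:
            flags.append(f"recommendation contains: '{phrase}'")
    return flags

applicant = {"gpa": 2.7,
             "recommendation": "A strong student, but I do not recommend..."}
print(flag_for_review(applicant))  # two flags: low GPA and a red-flag phrase
```

Note that nothing here decides anything; it only routes attention, which is exactly the kind of use that keeps a human in the loop.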

Anything that lets admission professionals get right to the parts of the application that tell them the most about the individual, their likelihood of attending if admitted, and their ability to succeed at the institution saves valuable time. That time can be used to give holistic reviews to more applications than would have been possible at the same volumes without such automation.

In addition, 90% of survey respondents who use AI to make decisions (as opposed to just flagging relevant data) believe that AI is “somewhat” (22%) or “very likely” (68%) to reduce bias in the admission process. In my opinion, reducing bias alone would be a major reason to implement AI.1

The Bad

The same survey shows that 66% of admission professionals are “somewhat” (35%) or “very concerned” (31%) about the ethical implications of AI. Although I did not participate in this survey, count me in the “very concerned” category, especially when it comes to AI making the admission decision without close human supervision.

As I’ve written about before, AI is not some magic genie. Rather, AI is statistics. Even neural networks are just sophisticated statistics. To train an AI admission model, we feed it the data for past applicants, telling it which were admitted and which were denied. From this data, it computes advanced statistics to figure out what admitted applicants have in common and what denied applicants have in common. Once it has those similarities, the AI applies them to new applicants: those who resemble past admits are admitted, and those who don’t are denied.

In other words, it uses past results to determine future results using advanced statistics. The idea that an AI taught with biased past results will somehow be unbiased seems untenable to me.2
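To make that concrete, here is a minimal training sketch in Python with entirely made-up data. The “historical” decisions below are generated by a deliberately biased rule that looks only at test scores; the logistic regression is given nothing but applicants and labels, yet it re-learns that rule from the labels alone and applies it to new applicants. Every name and number here is hypothetical.

```python
import math
import random

random.seed(0)

# Hypothetical training data: each past applicant has a GPA and a test
# score, plus the historical admit/deny decision. Assumption for this
# sketch: past reviewers leaned almost entirely on the test score.
def past_applicant():
    gpa = random.uniform(2.0, 4.0)
    test = random.uniform(120, 180)
    admitted = 1 if test > 160 else 0   # the biased historical rule
    return (gpa, test), admitted

history = [past_applicant() for _ in range(500)]

def features(gpa, test):
    # Scale both inputs to roughly [0, 1] so training behaves well.
    return (gpa / 4.0, test / 180.0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression trained by stochastic gradient descent: nothing
# but statistics -- the model can only learn correlations that are
# already present in the historical labels.
w = [0.0, 0.0]
b = 0.0
learning_rate = 0.1
for _ in range(300):
    for (gpa, test), label in history:
        f = features(gpa, test)
        p = sigmoid(w[0] * f[0] + w[1] * f[1] + b)
        err = p - label
        w[0] -= learning_rate * err * f[0]
        w[1] -= learning_rate * err * f[1]
        b -= learning_rate * err

def admit_probability(gpa, test):
    f = features(gpa, test)
    return sigmoid(w[0] * f[0] + w[1] * f[1] + b)

# Two new applicants with identical GPAs: the model ranks them almost
# entirely by test score, because that is all the old decisions encoded.
print(admit_probability(3.0, 175))
print(admit_probability(3.0, 130))
```

The model never saw the biased rule itself, only its outputs, and that was enough to reproduce it. Swap “test score” for any credential that past reviewers over-weighted and the same mechanism applies.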

The Ugly

I wrote previously about how we don’t know how AI makes its decisions. Although AI is adept at identifying that a picture contains a cat, we don’t really know how. Oh, we know that AI identifies features and relationships between features in a picture, compares them to what it knows about features of cats, and then decides if the picture is close enough to those features to be labeled a cat.

The problem is, we don’t know what those features are. Humans can articulate the features we look for: whiskers, pointed ears, fur, etc., but the features AI chooses are far more complex and represented statistically,3 to the point that we don’t really know what features it uses.
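As a toy illustration (not a claim about any real admissions system), here is a tiny neural network trained on the trivial logical-AND task. Even in a network this small we can print every learned weight, yet the numbers correspond to nothing a human would name as a feature. Scale this up to millions of weights and the opacity problem becomes hopeless.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A minimal 2-input, 2-hidden-unit network trained on logical AND.
# (AND is deliberately trivial; the point is the weights, not the task.)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Hidden layer: 2 units x (2 input weights + bias); output: 2 weights + bias.
hw = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
ow = [random.uniform(-0.5, 0.5) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in hw]
    y = sigmoid(ow[0] * h[0] + ow[1] * h[1] + ow[2])
    return h, y

# Backpropagation by stochastic gradient descent.
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)
        for i in range(2):
            dh = dy * ow[i] * h[i] * (1 - h[i])
            hw[i][0] -= lr * dh * x[0]
            hw[i][1] -= lr * dh * x[1]
            hw[i][2] -= lr * dh
        ow[0] -= lr * dy * h[0]
        ow[1] -= lr * dy * h[1]
        ow[2] -= lr * dy

# The network now computes AND, but its learned "features" are just
# these numbers -- nothing here says "whiskers" or "pointed ears":
print("hidden-unit weights:", hw)
print("output weights:", ow)
```

Here we at least know the task, so we could reverse-engineer what each unit does. In a real model we know neither the task’s true structure nor the meaning of any weight.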

Similarly, when AI parses applications and makes admission decisions, we don’t really know what the decision is based on. It’s not entirely out of the realm of possibility4 that the AI is simply admitting everyone who uses the word “akin” in their personal statement. The problem is, despite extensive research, we don’t really know which features of applicants the AI is looking at.

While using AI in admissions can streamline the process and potentially reduce some sorts of existing bias, it is imperative to approach its use with caution. The allure of efficiency should not overshadow the importance of maintaining a thorough and fair review of applicants, a promise of holistic assessment many institutions stand by.

Human oversight remains essential; professionals must scrutinize the outcomes of AI to ensure it isn’t adding new sorts of bias. The balance between human intuition and AI's analytical prowess can lead to a more equitable and efficient admission system. AI's role should be to assist, not replace, the nuanced judgment of human reviewers. This partnership ensures that we preserve the personal touch that honors the diversity and individuality of each applicant, leading to admissions decisions that are as just as they are efficient.

  1. Assuming, of course, that AI actually reduces bias. My in-depth knowledge of AI makes me quite certain that AI is very biased, albeit differently than humans.
  2. Imagine you are training AI to determine if a picture has a cat in it. If you train it using pictures of cats and dogs, and some dogs are labeled as cats, then you could not reasonably expect the AI to tell cats from dogs. The bias in the data the AI is trained with is inevitably reflected in the AI’s results.
  3. An AI “feature” might be something like: the cube root distance between the ears + nostril length divided by the square root of the amount the eye shape deviates from a perfect circle. This “feature” is pure conjecture, but what is clear is that the “features” are incredibly deep and complex to the point that they are virtually unknowable.
  4. Albeit incredibly unlikely.

Troy Lowry

Senior Vice President of Technology Products, Chief Information Officer, and Chief Information Security Officer

Troy Lowry is senior vice president of technology products, chief information officer, and chief information security officer at the Law School Admission Council. He earned his BA from Northeastern University and his MBA from New York University.