The EU AI Act: A Game-Changer for Law School Admission

Human + AI > AI

By Troy Lowry

The European Union has made a groundbreaking move at the intersection of technology and law: agreement on the Artificial Intelligence Act (EU AI Act). This is not just big news; it's a seismic shift in the legal landscape, mirroring the transformative impact of the EU's General Data Protection Regulation (GDPR) on data privacy. Expected to be ratified by next spring, the EU AI Act, while still open to some modifications, paints a clear picture of what the future holds.

For U.S. law schools, particularly those recruiting European applicants, the ripple effects will be significant. Drawing parallels with GDPR, we can anticipate a direct impact on admission processes, with the potential for some U.S. states to adopt similar frameworks. The financial implications are stark: fines of up to €35 million or 7% of annual turnover for violations related to banned AI applications, and up to €15 million or 3% for misuse of high-risk AI applications.

As a technologist, my perspective is to offer insights, not legal advice. The regulations are still in the pipeline, but it’s crucial for admission professionals to understand the potential implications. The goal is to prepare you for what’s coming and to prompt discussions with legal counsel before investing in AI tools that could later pose regulatory challenges.

New Rules on the Horizon

Two likely new regulations should be on every admission office's radar. The first is an outright ban on social scoring, which involves classifying individuals based on their behavior, socio-economic status, or personal traits. The second is a high-risk classification for AI systems used in education, employment, and essential services. Of particular note for law schools are regulations concerning AI assistance in legal interpretation and application of the law.

The high-risk classification explicitly includes AI systems used for educational admissions or recruitment, directly impacting admission decision-making processes.

Understanding Social Scoring and Its Implications

A ban on social scoring stands as a key element of the new EU AI Act. Recital 17 of the proposed version of these regulations elaborates on AI social scoring, defining it as the use of AI to evaluate or classify the trustworthiness of individuals based on a multitude of data points and time occurrences relating to their social behavior across different contexts, or based on known or predicted personal or personality characteristics. This comprehensive definition highlights the intricate and extensive nature of social scoring and underscores its potential implications for individual privacy and autonomy.

From a technologist’s perspective, certain AI-driven techniques presently employed in admission processes could conflict with these imminent regulations. Practices like tracking website visits or email interactions, when augmented with AI, should be approached with care. More complex applications, such as using AI to analyze applicants’ social media behavior or deploying social media monitoring tools, demand even greater scrutiny. A consultation with legal experts is strongly recommended: understanding how the regulation applies to specific AI-based tools in admission is vital to ensure both legal compliance and the protection of applicants’ rights.

“High-Risk” AI in Admission

Specifically called out as “high-risk” AI systems are “systems to determine access to educational institutions or for recruiting people.” This will likely include any system related to admission that uses AI, including peripheral systems such as those that examine transcripts or personal statements.

Exactly what will be required when using “high-risk” AI is still being worked out, but an article from Georgetown indicates, “Under the proposals, developers of high-risk AI systems must meet various requirements demonstrating that their technology and its use does not pose a significant threat to health, safety and fundamental rights.”1

My Perspective on the Proposed Regulation

Regulation, in my view, is a necessary balance between innovation and responsibility. It can prevent misconduct and market inefficiencies, as seen in the largely unregulated cryptocurrency space. However, regulation also tends to favor larger entities capable of absorbing additional costs, potentially stifling smaller competitors.

Clear, early regulations in the burgeoning AI industry should spur responsible use and innovation. While the specifics of these regulations are open to debate, their clarity, timeliness, and enforcement are paramount. The absence of regulation, as seen in the cryptocurrency world, often leads to dominance by unscrupulous players, deterring legitimate businesses.


The impending EU AI Act is a significant development with far-reaching implications for law school admission and educational institutions at large. It underscores the importance of balancing technological advancement with ethical and legal considerations, particularly in the context of AI in admission processes. The act’s focus on prohibiting social scoring and regulating high-risk AI applications signals a paradigm shift, emphasizing the need for a human-centered approach in decision-making processes.

Law schools, therefore, should be proactive in understanding and adapting to these regulations to avoid legal pitfalls and foster a fair, responsible use of AI in admissions. This approach not only aligns with legal compliance but also upholds the values of fairness and equity in educational opportunities. As technology continues to evolve, staying informed and prepared is crucial for educational institutions to navigate this new regulatory landscape effectively.

  1. “Under the proposals, developers of high-risk AI systems must meet various requirements demonstrating that their technology and its use does not pose a significant threat to health, safety and fundamental rights. These include a comprehensive set of risk management, data governance, monitoring and record-keeping practices, detailed documentation alongside transparency and human oversight obligations, and standards for accuracy, robustness and cybersecurity. High-risk AI systems must also be registered in an EU-wide public database.”

Troy Lowry

Senior Vice President of Technology Products, Chief Information Officer, and Chief Information Security Officer

Troy Lowry is senior vice president of technology products, chief information officer, and chief information security officer at the Law School Admission Council. He earned his BA from Northeastern University and his MBA from New York University.