
ChatGPT, Law School Application Personal Statements, and the LSAT Writing Sample

By Troy Lowry

Worries are growing about applicants using ChatGPT to write their law school personal statements. This generative AI program can easily craft a high-quality statement, and some schools are concerned. Should they be?

Law school applicants have often sought help with their personal statements, and an industry has even grown from this need, with some consultants charging thousands of dollars per applicant. Some argue that ChatGPT and other generative AIs might simply level the playing field, aiding those who can’t afford these high-priced consultants.

In the end, efforts to stop applicants from using ChatGPT might prove futile. The same difficulties that make preventing applicants from using consultants nearly impossible — such as the challenge of determining if the writing is truly original — will likely make it extremely tough for schools to stop the use of AI like ChatGPT.

What Does “Using ChatGPT” Mean Anyway?

The question of how to define ChatGPT usage is riddled with complexity and ambiguity. Having ChatGPT write an entire personal statement, including fabricating facts, is clearly unethical. Yet a survey by Best Colleges revealed that 20% of respondents don’t believe that “using AI tools to complete assignments and exams constitutes cheating or plagiarism.”

Are one in five students sincerely convinced that submitting work generated by ChatGPT is permissible? Does this statistic suggest that one in five prospective lawyers is so unscrupulous as to claim someone else’s work as their own?

I believe that the issue is more nuanced, and the question itself is too broad.

ChatGPT’s functionality is vast. It can not only draft an entire personal statement but also do each of the following (one of these uses is sketched in code after the list):

  • Generate a range of ideas suitable for your personal statement
  • Evaluate a list of ideas you’ve created, highlighting the strongest ones
  • Analyze your personal statement and suggest improvements
  • Identify logical inconsistencies in the personal statement
  • Proofread the personal statement, correcting punctuation and grammar errors
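
To make this concrete, here is a minimal sketch of what one of these uses, asking the model to critique a draft, can look like through OpenAI’s Python library. The model name, prompt, and file name are my own illustrative choices, not a recommendation:

    # A minimal sketch: asking ChatGPT to critique a draft personal
    # statement via OpenAI's Python library. Model, prompt, and file
    # name are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    draft = open("personal_statement_draft.txt").read()  # hypothetical file

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are an experienced law school admissions reader."},
            {"role": "user",
             "content": "Review this draft personal statement. Identify its "
                        "strongest ideas, any logical inconsistencies, and "
                        "any grammar or punctuation errors:\n\n" + draft},
        ],
    )

    print(response.choices[0].message.content)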

While preparing this blog post, I inquired of ChatGPT how it might assist me, aside from writing my personal statement. The AI suggested it could aid applicants in understanding their own motivations for pursuing legal studies and identifying long-term goals, forming a robust foundation for the personal statement. Anything that helps people understand their own motivations, even a computer program, counts as a benefit in my book.

To many, these uses of ChatGPT seem entirely reasonable. If friends can review your personal statement and provide feedback, why not employ ChatGPT? While a (hopefully extremely small) segment of the applicant population is unethical, the majority of the one in five respondents who disagreed with the statement about ChatGPT and cheating likely have a more nuanced understanding of the tool. They recognize legitimate uses that most would not consider dishonest.

However, what if a law school decided to ban all use of ChatGPT for personal statements? Could it tell who had used ChatGPT?

Can ChatGPT-Generated Content Be Detected?

Many products in today’s competitive market profess the ability to detect whether text was written by ChatGPT. My own exploration of these tools yielded unreliable results.

On the positive side, they were quite proficient at identifying unaltered texts generated by ChatGPT, flagging them with a high rate of accuracy.

Curiosity led me to task ChatGPT with mimicking particular styles, such as those of Maya Angelou, Ernest Hemingway, and Yosemite Sam. Surprisingly, most of the detection products maintained a high rate of accuracy.

However, a comprehensive test should not only flag texts authored by ChatGPT but also accurately identify those that were NOT. I found myself scandalized when one tool repeatedly and erroneously identified my own work as likely written by ChatGPT. While I admit my writing may not be flawless, I like to believe it’s far from robotic! (Of course, the ultimate judgment lies with you, the reader.)
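
For the technically curious, my scoring boiled down to something like the sketch below. Here looks_ai_written is a hypothetical stand-in for whichever commercial detector is under test, since each product exposes its own interface:

    # Sketch of the detector scoring: run a detector over texts whose
    # true origin is known and tally both kinds of error.
    # `looks_ai_written` is a hypothetical stand-in for a commercial
    # detector's interface (text in, True/False out).
    def score_detector(looks_ai_written, ai_texts, human_texts):
        # Detection rate: AI-written texts the tool correctly flags.
        flagged_ai = sum(looks_ai_written(t) for t in ai_texts)
        # False positives: human-written texts wrongly flagged (the
        # failure mode that snared my own writing).
        flagged_human = sum(looks_ai_written(t) for t in human_texts)
        return {
            "detection_rate": flagged_ai / len(ai_texts),
            "false_positive_rate": flagged_human / len(human_texts),
        }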

Taking the experiment further, I used a process known as fine-tuning, in which I trained ChatGPT on 100 paragraphs of my own writing. After the training, I instructed ChatGPT to “write like Troy” for a law school personal statement. The result? A well-crafted statement that evaded every ChatGPT detector. This process required significant expertise, work, and some expense, so the average applicant is unlikely to endure such an ordeal. But tools are already appearing on the web that offer similar training at lower cost, with no expertise in generative AI needed.
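
For readers wondering what that training step involves, a fine-tuning job through OpenAI’s Python library looks roughly like the sketch below. The file name, example format, and base model are illustrative assumptions on my part; preparing the 100 paragraphs was by far the larger effort:

    # A minimal sketch of a fine-tuning job with OpenAI's Python
    # library. "troy_paragraphs.jsonl" is a hypothetical file: one JSON
    # object per line, each pairing a prompt with a paragraph of my own
    # writing, e.g.
    # {"messages": [{"role": "user", "content": "Write like Troy about ..."},
    #               {"role": "assistant", "content": "<a paragraph I wrote>"}]}
    from openai import OpenAI

    client = OpenAI()

    # Upload the training data.
    training_file = client.files.create(
        file=open("troy_paragraphs.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Start the fine-tuning job against a tunable base model.
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-3.5-turbo",  # illustrative; any tunable base model
    )

    print(job.id, job.status)  # poll until the job completes

Once the job finishes, the resulting fine-tuned model can be called through the chat API just like any other model, which is how I generated the “write like Troy” statement.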

This experiment revealed a crucial insight: the development of AI tools is accelerating at a pace that outstrips the advancement of detection tools.

Ultimately, I halted my testing prior to completing the entire planned examination. While the majority of results were correct, the inconsistency was alarming. I grew concerned that even one false positive could lead to unwarranted consequences.

I pondered ways to render this technology useful for law school decision-making. LSAC prides itself on being the “gold standard,” delivering to schools only applicant-evaluation products that we can verify are of the highest quality. While they have their uses, these detection tools did not appear to meet that high standard.

Then I came across an article detailing how OpenAI had discontinued their product designed to detect AI-written text, citing “low accuracy.” If even the experts behind ChatGPT can’t reliably discern what was written by their creation, then the possibility that anyone else can seems a distant hope. It seems to me that the implications of this realization extend beyond mere curiosity, touching on broader questions about the evolving relationship between technology and authenticity.

LSAC’s Writing Sample: A Proctored Way to See How the Applicant Writes

All is not lost in this quest for authenticity, however! As part of the LSAT, every applicant is required to complete the LSAT Writing sample. This unscored essay, given under timed, proctored conditions, presents an ideal opportunity to assess an applicant’s writing ability.

Recognizing the inherent value of the LSAT Writing sample in assessing an applicant’s writing skill, I became intrigued by how AI might further contribute to this assessment. Could the power of artificial intelligence be harnessed to compare and contrast different writing samples? This sparked an experiment using ChatGPT’s ability to evaluate whether two pieces of writing share an author.

ChatGPT can take two pieces of writing, compare them, and give a confidence level as to whether they were written by the same author. As an interesting bit of research, I used this method to compare personal statements and writing samples from the same author and from different authors, to see whether the AI could accurately identify the pairs written by one person.
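
The comparison itself was nothing exotic: a single prompt against the chat API, roughly like the sketch below, with the prompt wording and model name as my own illustrative choices:

    # A rough sketch of the authorship comparison: hand the model two
    # texts and ask for a same-author judgment plus a confidence level.
    # Prompt wording and model name are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()

    def same_author_verdict(text_a: str, text_b: str) -> str:
        prompt = (
            "Compare the two writing samples below. Say whether you think "
            "they were written by the same author, give a confidence level "
            "from 0 to 100, and explain your reasoning.\n\n"
            f"SAMPLE A:\n{text_a}\n\nSAMPLE B:\n{text_b}"
        )
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content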

The results? Not so hot. One major issue is that these are written under very different conditions. The LSAT Writing sample demands quick thinking within a 35-minute timeframe, requiring the applicant to read a prompt and then rapidly organize and articulate their thoughts. Conversely, the personal statement is typically crafted over weeks or even months and often undergoes numerous revisions.

Moreover, the tone between these two pieces is strikingly different, with the personal statement being intimate and reflective, while the writing sample is a more detached and analytical argument.

Despite these challenges, the AI correctly predicted whether the author was the same better than two-thirds of the time, and it provided reasons to support its predictions.

However, while impressive, this is not good enough when evaluating applicants. Both law schools and applicants depend on our products to be extremely precise. Mere “impressive” falls short when stakes as high as admissions are in play. Consequently, while this was an enlightening experiment and the insights may contribute to the development of future products, the technology is not yet accurate enough to be integrated into LSAC’s current offerings. This experience serves as a sobering reminder that even the most advanced AI tools must be approached with caution and clear understanding of their limitations.

Conclusion: Might the Carrot Work Better Than the Stick?

As we’ve seen, ChatGPT can be utilized in various ways, some more controversial than others. Reliably detecting text penned by ChatGPT, without mistakenly flagging an applicant’s own work as machine-generated, remains an unsolved problem. And even a detection system that is 100% accurate today could soon be rendered obsolete by the rapid advancement of AI.

In short, a complete ban of ChatGPT would be challenging to enforce and justify.

Might schools find more success with a different approach instead of attempting to ban ChatGPT? They could state that while applicants are free to use ChatGPT or other generative AI, personal statements written without such assistance tend to feel more authentic, and this authenticity could influence admissions decisions.

Applicants, always eager for an edge, would then have a compelling reason to use their own voice. This approach not only offers a practical solution but also has the advantage of being true.