A Family Is Suing a School Over a Bad Grade. How Did We Get Here?
Given the pace at which generative A.I. tools have flooded the market and the much slower rate at which school districts and universities develop new policies, it was only a matter of time before a case focused on A.I. and cheating made its way to court.
That day came on Tuesday, when an attorney asked a federal judge to require Hingham High School, in Massachusetts, to raise the AP U.S. History grade of a student who had been penalized for allegedly using A.I. to research and outline a class project. The attorney for the student argued that because the school had no A.I. policy in the student handbook, using A.I. wasn't cheating, and that the low grade the student received in that course would unfairly prevent him from applying to selective colleges. Hingham school officials have countered that the use of A.I. was clearly prohibited, both by rules laid out in class and by existing policies against plagiarism.
The case against the Hingham school system turns on whether what the student did constituted cheating under the school's existing policies: Were students allowed to use A.I. tools in the way this student did, or not? And is it, in fact, plagiarism to use research and an outline generated by a chatbot? But the ruling in this case won't change the tricky truth about A.I. tools, which is that in most cases teachers don't know, or can't prove, that students are using them after being told not to.