Thou Shalt Not Steal, or Shouldst Thou?

Is the use of GAI equivalent to plagiarism? If so, what can/should we do about it?

“Mr. Fitzgerald, I believe that is how he spells his name, seems to believe that plagiarism begins at home.” (Zelda Fitzgerald)

In 1884 the US Supreme Court was called upon to adjudicate a dispute between a photographer, Napoleon Sarony, and a lithographic company, Burrow-Giles. The case involved a photograph of Oscar Wilde taken by Sarony that Burrow-Giles had marketed without authorization. At issue was the copyright status of the particular photograph, Oscar Wilde, No. 18, and, beyond that, of photography in general. The company maintained that since a machine had made the image, there was no author or human artistry involved, and thus the copyright act of 1865, which had extended copyright laws to photographs, was unconstitutional. The justices conceded that the company might well be right about most photographs but insisted that “by posing the said Oscar Wilde in front of the camera, selecting and arranging the costume, draperies, and other various accessories in said photograph, arranging the subject so as to present graceful outlines, arranging and disposing the light and shade, suggesting and evoking the desired expression, and from such disposition, arrangement, or representation, made entirely by the plaintiff,” Sarony was the “author of an original work of art” and thus within the “class” of things for which the Constitution intended Congress to grant him exclusive copyright.1

This essential connection between copyright and human creativity was reinforced in August 2023, when a US federal court refused to grant copyright to Stephen Thaler, the owner of an artificial intelligence machine that had produced an image entirely on its own: “United States copyright law,” the judge declared, “protects only works of human creation.” In a newspaper article discussing the two cases, John Naughton, writing in the Guardian, observed that what is particularly interesting about them is the difference between composing photographs in the 1880s and snapping pictures nowadays. Modern digital cameras and computer editing programs provide so much assistance that, as Naughton remarks, “it’s rather difficult to take a ‘bad’ photograph.”2

That being the case, how does one assess the copyright status of an image whose production has involved a certain amount of human creativity along with a significant contribution from computer technology, including the usual assistance provided by digital cameras as well as the further assistance of artificial intelligence? Is it possible to establish a firm line between technical assistance and human creativity, especially when the gap between them seems to be growing increasingly narrow? And what exactly is human creativity? How do we recognize it?

I’m no expert on the psychology of creativity, but confronted with such questions, I find the following (probably apocryphal) story instructive. Pablo Picasso, it is said, was fond of doodling on small scraps of paper while being interviewed. On one such occasion, when the interview was over, he gathered up the scraps of paper, examined them quickly, and presented one of them to the interviewer as a gift. The journalist looked at the drawing and remarked, “I could have done that.” “Yes,” replied Picasso, “but you wouldn’t have known which one to choose.”

Picasso’s point is worth thinking about: whatever else the artistic process involves, the ability to make a series of inspired choices from a host of different possibilities is an essential part. The source of the choices is irrelevant. What matters is the way in which the creator’s selections shape the final product. To take a very simple example: while walking on the beach, I notice, among all the rocks lying scattered everywhere, one I consider interesting. So I take it home, position it on my mantelpiece, and call it Opus IV. Is that a creative act that entitles me to claim copyright on my rock? I have not made any part of the object, but I have chosen it, placed it in position, and named it. Would it make any difference if I had decorated the rock in some way, for example, by painting one side of it? Surely if Piero Manzoni can, in the name of conceptual art, spoon his own fecal matter into ninety small cans (30 grams per can), name the collection Merda d’Artista, and sell one can to the Tate Gallery in 2000 for the price of an equivalent weight of gold (22,350 pounds sterling), I may claim artistic status and, if necessary, copyright for my rock.3

This notion of choice as an essential part of the artistic process may help to clarify some of the arguments concerning generative artificial intelligence (GAI), that is, the mechanical production of text, images, and sounds by a computer on steroids that has been, like a Strasbourg goose, force-fed millions of samples of whatever the owner/creator of the machine wishes to generate (the fact that the material fed into the machine is, as often as not, under copyright has so far not bothered the designers of GAI, but there are legal cases pending).4 Such a machine is an obvious asset to anyone who wishes to produce an original work, because it can supply an endless stream of possibilities for the creator to play with and enable her to select whatever her imagination prompts. Her choices will make whatever she creates hers, even if it is simply a random sample of what the machine can produce, without any significant editing by the creator.

Or will it? The 2023 legal case mentioned above would seem to challenge that reasoning, as would the fuss created by Boris Eldagsen, a German artist who recently confessed that the photograph he had entered in a prestigious competition had been generated by artificial intelligence. He made this confession after winning first prize and then withdrew his entry from the competition: “AI images and photography should not compete with each other in an award like this,” Eldagsen remarked. “They are different entities. AI is not photography. Therefore I will not accept the award.”5 But why should AI images and photography not compete with each other? Eldagsen, I assume, activated the AI program, gave it suitable instructions, and (crucially) chose the result from the possibilities on offer. Surely that makes it a sample of human creativity and thus eligible for the competition?

Most of us, I think, resist this conclusion, simply because when we see a person’s name on a work of literature, art, photography, music, or science, we insist that everything in that work must have come directly from the author’s imagination, unless she indicates otherwise, and that for a work to be considered original it must bear the imprint of the author’s creativity. When those conditions are not met we dismiss the work or accuse the author of plagiarism, in many quarters a career-ending experience. We place an enormous importance on the person’s originality and can be very unforgiving if that quality is compromised in any way.

That is one reason why we are so unprepared for the sudden appearance of GAI. Six months ago (in March 2023), a few months after ChatGPT first appeared on the internet, the Future of Life Institute published a letter with more than a thousand signatures (including those of Elon Musk and Steve Wozniak) requesting a six-month moratorium on advanced AI, and all summer long there were numerous newspaper and magazine articles by turns welcoming or damning the new technology. Nothing much came of the request to delay its sudden onset. It’s obvious that we have entered a new technological age in which Homo sapiens sapiens is discovering enormous new powers, without acquiring any additional wisdom to apply them justly. And it’s equally clear that we have no clue about where this revolution is going to take us. The past few months have reminded me of the early days of the internet (official birth date January 1, 1983), when debates about that technology were equally polarized.6 I don’t remember anyone back then predicting that this new technology would reveal that, among other things, human beings are creatures hopelessly addicted to cheap consumer goods, gambling, hate speech, and pornography.

When I started writing this article three weeks ago, all of a sudden, without any advance notice, I found a Bing AI link in the toolbar at the bottom of my browser, and a couple of days later, whenever I asked my word processing program to produce a blank page, I was invited to write “@AI-Enter” to access a program that would help me brainstorm, organize, and compose a blogpost, newsletter, essay, or poem. Within minutes my machine had produced a short story in the style of Kafka, a rather inept limerick about my wife, a poster for a production of Hamlet in the style of Gustav Klimt, the outline for a lecture on capital punishment, and a series of very lame jokes. I was genuinely astonished by this new power to generate all sorts of content on demand, especially since GAI is still in its infancy and, by all accounts, improving (and self-correcting) rapidly.

These first trials of GAI did not always produce an original text but offered an amalgam of sentences drawn from different websites. Here, for example, is the answer I received to the query “What is the meaning of Robert Frost’s line ‘And miles to go before I sleep’?”

The phrase “Miles to go before I sleep” is from the poem “Stopping by Woods on a Snowy Evening” by Robert Frost. In the poem, the speaker repeats this phrase twice in the fourth stanza, emphasizing its importance. The full stanza reads as follows:

“The woods are lovely, dark and deep,
But I have promises to keep,
And miles to go before I sleep,
And miles to go before I sleep.”

The phrase has both literal and symbolic meanings. Literally, it suggests that the speaker is on a journey and has a long distance to cover before reaching home. Symbolically, the word “sleep” can be interpreted as death or darkness. Therefore, the line implies that the speaker has many responsibilities to fulfill before resting eternally or simply that they have much work to do before going to bed or dying.

The sentences with footnotes have hyperlinks to the websites from which they are taken (the latter are listed at the end of the machine’s response), so that anyone who uses the material in this response in his own work can easily enough provide specific references to the original sources, without assigning any credit to the machine which cobbled the sentences together. He could also, of course, have tracked down these sources without the help of Bing AI. As one would expect, the style is clear, correct, and logical. The literary interpretation is not inspired, but there are a number of useful suggestions that a writer who used this material might develop.

Here is a second response from Bing AI in answer to the question “Can you write a detailed outline for an episode of Blue Bloods?”

Episode Title: “Cyber Attack”

Synopsis: A cyber-attack on the NYPD forces Danny and Baez to release an evasive suspect. Baez questions her place with the NYPD, while Eddie is torn over a workplace dilemma. Jamie wrestles with a family secret.

Act 1: The episode opens with Danny and Baez interrogating a suspect in a cybercrime case. However, before they can get any useful information, the NYPD’s computer systems are hacked, and the suspect is released due to lack of evidence. Danny and Baez are left frustrated and angry.

Act 2: As the investigation into the cyber-attack continues, it becomes clear that the hacker is targeting the NYPD specifically. Meanwhile, Baez begins to question her place with the NYPD after feeling like she’s not making a difference. Eddie is also struggling with a workplace dilemma that puts her in an awkward position.

Act 3: The hacker releases sensitive information about the NYPD online, causing chaos and confusion throughout the city. Jamie is dealing with his own family issues as he struggles to come to terms with a long-held secret that threatens to tear his family apart.

Act 4: As the situation worsens, Danny and Baez are forced to release their original suspect due to lack of evidence. However, they soon realize that he may have been involved in the cyber-attack all along. Meanwhile, Eddie makes a difficult decision that could have serious consequences for her career.

Act 5: In the final act, Danny and Baez track down the hacker and bring him to justice. Baez decides to stay with the NYPD after realizing how much she cares about making a difference. Eddie’s decision pays off in unexpected ways, and Jamie finally comes clean about his family secret.

I have never written for television, but I imagine an outline like this (which carries the following note: “Please note that this is a fictional outline and not based on any specific episode of Blue Bloods”) would be very useful for a screenwriter in a hurry (although the outline makes no mention of a scene that is de rigueur in every episode of this particular television show, the family Sunday dinner).

To say that accelerating access to GAI is unsettling to working writers, visual artists, designers, and musicians, among others, is a massive understatement, especially given that a machine can work extremely quickly (20,000 words in 10 seconds for the Ai-Da machine programmed to produce poetry in the style of an English translation of Dante’s Divine Comedy).7 And there is little doubt that GAI is already threatening the jobs of a great many professionals, especially those who produce art to routine specifications for public consumption, for example, writers of television scripts for formulaic shows (like Law and Order or The Big Bang Theory). Some crystal-ball gazers are already announcing the imminent demise of certain professions, like architecture.8 And the entertainment industry will be transformed. Netflix, after all, knows that I like noir police procedurals filmed in Scandinavia and will soon be able to produce these on demand on my home page, perhaps even starring actors I particularly enjoy. After all, who owns the copyright to Humphrey Bogart’s or Faye Dunaway’s screen image and voice? No wonder the writers and actors are on strike.

To appreciate more fully some of the problems this new technology poses, one might well consider how much college teachers in almost all subjects have for generations relied on written work (essays, research findings, laboratory reports, seminar notes, examinations, and so on) as the basis for assessing a student’s grade. For many years now, the trend has been to move away from final examinations as the sole or major determinant of that grade (since final examinations tend to place undue emphasis on the student’s ability to write final examinations), towards continuous evaluation, a series of assignments over the course of a semester (often including a final examination), so that the professor has a more accurate measure of a student’s progress and overall competence. This common feature of almost all curriculums rests on the assumption that the work a student submits for evaluation during the semester is produced by her own effort.

As the professoriate wrestles with this problem, there are some nice ironies at work. When computers became generally accessible to all students and we could require students to prepare their assignments in a word processing program and print the results, I was overjoyed, because I would no longer have to deal with one of the least welcome tasks I faced as a college teacher–combing through page after page of appalling student handwriting. Later, when spellcheckers appeared, I urged my students to use them. “I don’t care if you know how to spell or not. But I do want the spelling to be correct. So use the spellchecker.” I retired before grammar checkers became widely accessible, but I would probably have treated them the same way. But now that AI is here, some professors are insisting that all student writing assignments must be handwritten under supervision. Needless to say, a generation of students raised with computers and smartphones has not made legible handwriting an educational priority–their handwriting is even worse than that of their predecessors thirty years ago.

In one sense, of course, the problem posed by GAI in the colleges is not new. For at least three decades professors have been hunting down plagiarized work made available to students by computers and punishing severely those whose work is not their own. In spite of their efforts, plagiarism has become endemic in most colleges and universities: one study suggests that “more than 1 in 10 university students submit assignments written by someone else, with new research suggesting that 95% of students who cheat this way are not caught.” One might also mention, in passing, that such cheating is not confined merely to students–plagiarism is apparently rife among the professoriate, as well. But that is a story for another time.

For the time being, the war against plagiarism in the halls of learning will no doubt continue, with colleges placing a strong emphasis on programs that claim to detect copied or paraphrased GAI text with 99 percent accuracy (for example, Copyleaks), and a growing number of public school districts following the lead of Seattle and Los Angeles by blocking access to ChatGPT (and, one assumes, similar sites) throughout the system. However, I’m not sure that declaring war on GAI is the wisest course of action. After all, given the increasing importance of GAI in all sorts of areas (including many of the businesses where today’s students will be seeking employment), surely students should be encouraged to learn about it and, if possible, educated to use it properly. How one does that I’m not sure; I have no detailed pedagogical solution to offer. But the challenge is to find some way to employ the technology to stimulate the student’s imagination by opening up a range of choices for her own work, bearing in mind that “There’s a fine line between inspiration and plagiarism. A.I only feeds the dominant side of its human operator” (Clyde DeSouza). Amid all our concerns about cheating, it’s worth remembering how many of our greatest artists flourished in ages of rampant plagiarism, when copyright was unheard of and authors freely stole whatever they needed from predecessors and contemporaries alike, following the familiar adage variously attributed to Igor Stravinsky, Steve Jobs, Pablo Picasso, William Faulkner, T. S. Eliot, and others: “Good artists copy; great artists steal.”

In the spirit of this adage I have always permitted those who scour the web for classical Greek drama to use my translations free of charge and to edit the texts to suit their purposes. In the past twenty-five years I have received hundreds of emails thanking me for this content and describing all sorts of different artistic projects: dramas, novels, art exhibitions, audiobooks, musical compositions, and so on. It is a constant source of amazement to me how much stealing (with permission) from someone else’s work can spur all sorts of other artistic endeavours.

 


1. https://en.wikipedia.org/wiki/Burrow-Giles_Lithographic_Co._v._Sarony
2. https://www.theguardian.com/commentisfree/2023/aug/26/ai-generated-art-copyright-law-recent-entrance-paradise-creativity-machine
3. However strange, the Tate purchase has proved a good investment. In 2007 another tin of Merda d’Artista sold for 81,000 pounds. No one, so far as I know, has ever confirmed that the contents are genuine.
4. https://www.theguardian.com/books/2023/aug/22/zadie-smith-stephen-king-and-rachel-cusks-pirated-works-used-to-train-ai
5. https://www.theguardian.com/technology/2023/apr/17/photographer-admits-prize-winning-image-was-ai-generated
6. https://www.usg.edu/galileo/skills/unit07/internet07_02.phtml
7. https://www.theguardian.com/books/2021/nov/26/robot-artist-to-perform-ai-generated-poetry-in-response-to-dante
8. https://www.theguardian.com/artanddesign/2023/aug/07/ai-architects-revolutionising-corbusier-architecture