“…it was deemed desirable to be able to compare on a level plane the performance of every university in the country, and this inevitably led to a ubiquitous quantification of every aspect of teaching, research and service …[and] … the forced crunching of all intellectual activity into a number.”
Neil Smith, “Academic Free Fall” (2010).
“…it is the very existence of the written work that makes it so difficult to understand the written work.”
Michael Curry, The Work in the World (1996, p. x).
If you’re reading these words, then you appreciate the first principle that motivated the creation of the journal Human Geography — “the need to retain control of the value produced by academic labor.” I’m making an effort to write these words, and you’re working to read them — to put my thoughts into a dialogue with your own knowledge, expertise, experience, and values. If I keep working to write, and if you decide to keep reading, then we’re building something together — an achievement of academic labor. The result is a shared meaning that belongs to you just as much as it does to me.
This is what I want to tell you: this principle is under siege, in the context of a widespread dehumanization of thinking as a political process. The shared meanings produced by readers and writers, young students and not-so-young students, are now subjected to an entirely new kind of automation, surveillance, and commodification. It threatens your work and mine, and we must fight it.
The very conditions of possibility for writing, reading, and thinking are being automated and dehumanized at an accelerating pace. The idea of dehumanization is not new, and for more than a quarter century Donna Haraway (1991) has been teaching us to beware of promises that technological advances will finally deliver the long-awaited perfections of Enlightenment Reason, with human-machine ‘cyborg’ innovations transcending the Cartesian mind-body dualism. But those false promises seem to be accelerating, as more aspects of life take place in digital networks on a cloud ruled by Moore’s Law and a “universe of self-replicating code” (Dyson, 2012) that now grows by five trillion bits per second. Flows of digital information not only reflect human actions, but also mediate and reproduce distinctive environments for human decisions — with adaptive algorithms that present behaviorally-responsive choices and recommendations across a growing number of social and institutional sites in a turbo-charged evolutionary ecosystem. Automated yet micro-targeted recommendations and rankings are becoming pervasive in all domains of capitalist society, including academic publishing and education. Several theories help me understand what’s happening — Neil Smith’s diagnosis of the destructive impact-factorization of the neoliberal academy, Haraway’s cyborg, Foucault’s biopower, and what Kim Cunningham (2012) calls “NeuroGovernance.” Now the changes are automating entire traditions of how to create and share knowledge: Big Data is automated epistemology, and our very conditions of possibility of thought are increasingly mediated by a constellation of devices, codes, and algorithms embedded in the transnational operating systems of neoliberal neurogovernance. It was only after I listened carefully to my professors, my students, and my peers that I began to understand what happens when neoliberal neurogovernance is applied to academic labor.
The iParadigm Shift: Turnitin Dot We
Plagiarism has become a pervasive threat to the integrity of the “work in the world.” This is the phrase Michael Curry (1996) uses to frame his panoramic history of geography and the development of writing and cultures of literacy, in a book that “explores the ways in which the written work is associated with particular places and with space.” At first glance, the dictionary definition seems quite simple:
plagiarize … to use and pass off (someone else’s ideas, inventions, writings etc.) as one’s own || … to take another’s writings etc. and pass them off as one’s own [plagiary] (Cayne, 1990, p. 767)
But things quickly get very complicated. Plagiary comes from the Latin plagiarius, “a kidnapper,” hinting at a divergence in Western thought between words as property versus the moral rights of an author who might reasonably claim to feel kidnapped if her words are stolen or intentionally misinterpreted (Curry, 1997). The Work in the World, however, helps us to see the beginning of another transformation, another ‘Gutenberg moment’ in which writing and reading are being redefined through automation and new relations of space and place. “It is by creating a place within which the book may be read,” Curry (1996, p. 8) writes, “that one creates a set of limits on the nature of the book, and the same is true for the matter of authoring.” I suggest that this set of limits is fast disappearing with the quick adoption of global-scale, placeless, web-based technological solutions to the problem of plagiarism. Paradoxically, these “solutions” have fostered a new kind of kidnapping.
Plagiarism, one of the founders of iParadigms declares, is a “real threat to the entire educational system … the capital crime of academia” (Barrie, 2008, p. 19). Unfortunately, it is difficult to obtain systematic, comparable evidence on the prevalence and severity of plagiarism over time. Estimates vary depending on how plagiarism is defined, whether it is measured by self-reporting or external validation, and where it is measured: type of institution, field of study, level of curriculum (Scanlon and Neumann, 2002; Walker, 2010; Ison, 2012). Whether plagiarism has become systematically worse, however, is irrelevant. What matters is that administrative consensus has coalesced with Big Data and venture capital. In the last decade, a variety of software tools and web-based services have been developed and marketed to educational institutions, promising to help educators in “Catching the Cheats” (Barrie, 2008). The “Turnitin.com” system has quickly emerged as the industry leader. The system was developed by John Barrie, who studied rhetoric and neurobiology as an undergraduate, and then completed doctoral work in biophysics, all at the University of California, Berkeley. Barrie’s company, iParadigms, is a privately held corporation with backing by Warburg Pincus (2012), a $40 billion private equity firm that describes itself as a “globally integrated partnership.” Turnitin.com is used by more than 10,000 institutions in 126 countries (iParadigms, 2012a). The system compares student submissions to a constantly growing database, with more than 220 million student papers, at least 90 thousand publications, and more than 20 billion web pages (iParadigms, 2012a, 2012b; Barrie, 2008).
The planetary, web-based scale of this enterprise redefines the relations among space, place, and the nature of the written work. Curry’s history teaches us that writing and reading have always been social practices of place-making, and that “the place wherein a work is read is one in which the reader can by the act of reading engage ‘the mind’ of an author far removed in space or even time” (Curry, 1996, p. 3). But when the minds of human authors are replaced by the surveillance algorithms of artificial intelligence, neoliberal neurogovernance creates an entirely new geography of academic labor. To illustrate how Turnitin.com performs these principles, consider this scenario. A student writes a paper with integrity, with no intent to plagiarize. The student submits the paper to Turnitin.com directly, or perhaps through WriteCheck™, another iParadigms product that implores students to “check your writing against the same database as turnitin.com” (iParadigms, 2012b). At this point, a problem appears: iParadigms’ sophisticated algorithms search through an ever-expanding database, and parts of the paper are highlighted. In WriteCheck™, this will appear in the Plagiarism Checker “Similarity Score,” which will allow students to see how much of their paper “matches content in our database” so they can “correct accidental plagiarism” (iParadigms, LLC, 2012). In Turnitin.com, the offending passages will be detected by Originality Check, which produces an “Originality Report” for the instructor. Depending on the software settings chosen by the instructor, a student may have the opportunity to delete the offending material prior to formal submission; the instructor may also be able to see how many times a student resubmitted to iParadigms’ servers. 
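The mechanics of this scenario can be made concrete. Turnitin's actual algorithms are proprietary, but text-matching services of this kind are commonly understood to compare overlapping word sequences in a submission against a database of documents. The following is a minimal, purely illustrative sketch; the function names, the five-word window, and the percentage score are my assumptions, not iParadigms' method:

```python
def ngrams(text, n=5):
    """Split text into overlapping word n-grams, the unit of comparison
    in this toy model (real systems are far more sophisticated)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_score(submission, database, n=5):
    """Percentage of the submission's n-grams found anywhere in the
    database: a toy analogue of a 'Similarity Score'."""
    sub_grams = ngrams(submission, n)
    if not sub_grams:
        return 0.0
    db_grams = set()
    for doc in database:
        db_grams |= ngrams(doc, n)
    return 100.0 * len(sub_grams & db_grams) / len(sub_grams)
```

Even this crude model shows why honest writing gets flagged: a student who opens with a conventional topic sentence ("the prevalence of childhood obesity continues to rise") shares five-word sequences with thousands of other honest writers, and the score rises accordingly.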
iParadigms’ promotional materials emphasize that the Originality Report is only a supplement to human judgment: the final verdict on whether plagiarism has occurred is left to “an educator” who is presumed to make “a judgment call” (Barrie, 2008, p. 18) after studying a 189-page Instructor User Manual (iParadigms, 2010).
Three aspects of this scenario are crucial. First, the system performs like a smartphone app that could have been designed by Foucault himself, with disciplinary mechanisms operating through perpetual self-surveillance and correction. Each student becomes a quantified and disciplined subject by conforming to digitized, automated scrutiny. In the process of Foucauldian objectification, a digital totalizing equalization proceeds as each student’s writing is homogenized into a surveillance infrastructure. Students’ writing is subjected to comprehensive surveillance, even as educators are disciplined in the process of the “judgment calls” in cases of suspected plagiarism. Students’ academic labor also undergoes a process of individuation to enable digitized comparison, and in the case of proctoring firms serving the for-credit market of MOOCs — massive open online courses — the writing process itself is now surveilled through algorithms to analyze live images, signatures, and even the typing rhythms of keystrokes (Eisenberg, 2013). All indications are that this surveillant infrastructure is set for explosive expansion, with artificial intelligence essay-grading algorithms recently deployed by EdX, the online consortium founded by Harvard and the Massachusetts Institute of Technology (Markoff, 2013).
Foucault (1977, p. 181) writes that “the micro-economy of a perpetual penalty” operates in differentiating individuals — “their nature, their potentialities, their value” — through an intricate (and in this case digitized and automated) architecture built to observe and correct from a distance, never to be seen. Turnitin makes the threat of penalty visible on a computer screen, encouraging and perpetuating self-corrective behavior. In a world where the threat of penalty subsumes students’ work, the act of self-surveillance fosters cautionary, defensive writing instead of creative writing. This is the explicit intent (Barrie, 2008, p. 17):
a real deterrent would require the real threat of being caught …. the only real threat would involve creating a database so massive that, when a student is told that their paper will be compared with documents in that database, a student is then deterred from cheating. That database would have to include all of the sources a student might use…
The database is massive, and it continues to expand. Students now confront a system of deterrence that is truly transnational, powerfully planetary. Barrie (2008, p. 19) reminds us that it took “10 years to perfect the algorithms, aggregate the proper data, internationalize the technology to support more than 30 languages and deploy the service in over 106 countries.” Students are quickly learning to write, edit, and rewrite in Turing-test conversations with an algorithm that was first “designed to detect regularities in large databases of brainwaves” (Barrie, 2008, p. 18).
Second, the system fundamentally alters the relationship between writer (student) and reader (educator). The enterprise is premised on students’ knowledge of the power and universality of the database, creating a shock-and-awe dynamic embedded within the multi-scalar context of student-teacher-institutional power relations. In the God’s-eye plagiarism panopticon, “inspection functions ceaselessly” (Foucault, 1977, p. 195). As they write, students are constantly patrolled by the World Wide Warden of the Web. iParadigms is an academic Robocop, a cognitive Predator drone ready to be triggered by a student’s transgression. The body, and the post-Cartesian mind, are now part of a system of suspicion as accumulation strategy (Smith, 2006), where probabilities and pattern-recognition algorithms implement a new “technology of the self” (Foucault, 1982).
The third issue is the matter of evidence and the burden of proof. False positives cannot be dismissed. In a paired-testing comparative audit methodology, researchers at Texas Tech found that Turnitin.com flagged 2.5 times as many papers for plagiarism as SafeAssign (reported in Jaschik, 2009). One implication is that the iParadigms algorithm is more sensitive, and thus better. Indeed, tautologically, with an expanding database and pattern-recognition algorithms that improve themselves, such an assertion is as close to an axiom as we are likely to get in a post-Kuhnian paradigm-shifting post-positivist world. Never before in human history has it been possible to perform, program, and purchase access to the imagined planetary consciousness of written work, including that written by bots in the growing field of “robo-journalism” (Lohr, 2011).
Planetary cognitive surveillance, however, yields strange results. Much of the unoriginal material identified by Turnitin.com involves commonly-used phrases, like paper topics (“global warming”) or topics along with a few other keywords (“the prevalence of childhood obesity continues to rise”). These results reflect students following directions on paper topics or the craft of writing clear topic sentences. The Texas Tech researchers discovered one professor “who tells his students to write their papers, and then to delete any topic sentences so that their papers won’t be flagged in error” (quoted in Jaschik, 2009). iParadigms’ Chief Operating Officer bestows friendly reassurances that “the value is there” for institutions that spend enough time “to properly prepare teachers and students.” Time spent navigating the structure of scientific revolutions as defined by the algorithms of iParadigms, however, redefines the experience of reading and writing.
In this new technology of the self, students are deleting their topic sentences even before they are charged with the “capital crime” of plagiarism. Faulkner’s reported writing advice (Sante, 2007) was blunt: “In writing, you must kill all your darlings.” How many darlings are dying? What if Mark Twain had it right when he declared, “There is no such thing as a new idea,” that “We simply take a lot of old ideas and put them into a sort of mental kaleidoscope” to get “new and curious combinations”? (cited in Paine [1868-1935], entry CCLI, December 7, 1906). As more textual information goes online and the pattern recognition improves, probability theory ensures an increasing likelihood of false positives resulting from honest behavior.
This is not hypothetical. In her third year at Dalhousie University, Emma Teitel was accused of plagiarism by her creative-writing professor (see Teitel, 2011). Teitel had used the phrase “category mistake” in an essay, assuming that these two words had “entered the academic vernacular, like Kant’s ‘categorical imperative.’” Teitel failed to attribute the term to the philosopher Gilbert Ryle, who first used it in his 1949 book, The Concept of Mind, and for this violation she was convicted under Dalhousie’s dishonesty policy. But Teitel (2011) never thought she “had done anything intentionally dishonest, let alone anything that could be called cheating.” She asks that “university officials stop weighing technical transgressions on the same scale as moral ones.”
I fear that the growth curve for false positives will resemble Moore’s Law, with the slope constrained only by the speed at which human authors kill their darlings. At the asymptote, truly human authors will be the darlings who have been sacrificed in the unforgiving theology of the noosphere. Think back to the scenario I described — the student who writes a paper with integrity, with no intent to plagiarize. If our student has been deeply immersed in good scholarship — engaging “the minds” of authors “far removed in space or even time” (Curry, 1996, p. 3) — the individual student author’s voice will be put into a healthy dialogue with the collective conversation of the scholarly tradition. If our student learns to think and write in ways that are statistically similar to passages written by others amongst all the generations of authors, do we really want to allow a corporate algorithm to kill that darling? Late at night as paper deadlines loom, I fear that too many students will see that Similarity Score in WriteCheck™, and they’ll press delete. Then they’ll write something else to please the algorithm. Engaging the minds of authors far removed in space and time will be penalized by an evolutionary informational ecosystem that is quickly dehumanizing the relations between space, time, reading, and writing. I fear that corporate neurogovernance is kidnapping an entire generation of authors, crushing the muse of students before they have the chance to become the next Julie Graham, the next Neil Smith, the next Donna Haraway, the next Michel Foucault, the next Karl Marx.
Trust: The Hara Dot Way
Let me be clear. I abhor plagiarism. I mean no disrespect to any of my busy educator colleagues with overcrowded classes, who may feel that they have no choice but to use these systems. I acknowledge my student peers who tell me that these systems are necessary to ensure the integrity of credentials that my peers are working so hard to earn. But my deference is reserved for human students and educators teaching and learning in the tradition of trust and free thought in genuine scholarship. The corporations, the algorithms, and the structurally produced societal failures that give rise to this enterprise earn my disdain, my anger, and a few other emotions that I do not wish to confess to the screen-scraping bots that will soon be scouring these words. If you’re still reading, then you and I have produced a shared meaning, an interpretation that is the product of our collective academic labor. It belongs to you as much as it does to me. Please use it, and fight the corporate kidnappers of neoliberal neurogovernance.
Barrie, John M. (2008). “Catching the Cheats: How Original.” The Biochemist 30(6), 16-19.
Bartholomae, David, and Anthony Petrosky, eds. (1993). Ways of Reading: An Anthology for Writers. Boston: St. Martin’s Press.
Cayne, Bernard S., ed. (1990). The New Lexicon Webster’s Encyclopedic Dictionary of the English Language. New York: Lexicon Publications, Inc.
Cunningham, Kim (2012). “Should We Be Triggered? NeuroGovernance in the Future/(Tense).” Social Text Periscope, April 1.
Curry, Michael R. (1996). The Work in the World: Geographical Practice and the Written Word. Minneapolis: University of Minnesota Press.
Curry, Michael R. (1997). “The Digital Individual and the Private Realm.” Annals of the Association of American Geographers 87(4), 681-699.
Dyson, George (2012). “A Universe of Self-Replicating Code.” Edge, http://edge.org, March 26.
Eisenberg, Anne (2013). “Keeping an Eye on Online Test-Takers.” New York Times, March 2.
Foucault, Michel (1982). “Technologies of the Self.” In Paul Rabinow and Nikolas Rose, eds., 2003, The Essential Foucault. New York: New Press, 145-169.
Foucault, Michel (1977). Discipline and Punish. Trans. Alan Sheridan, 1995. New York: Penguin.
Haraway, Donna (1991). Simians, Cyborgs, and Women: The Reinvention of Nature. London: Free Association Books.
iParadigms, LLC (2010). Turnitin Instructor User Manual, Version 2.1.1, Updated November 11. Oakland, CA: iParadigms, LLC.
iParadigms, LLC (2012a). Turnitin Approaches 20 Million Graded Papers in GradeMark. Press release, September 13. Oakland, CA: iParadigms, LLC.
iParadigms, LLC (2012b). WriteCheck™ by Turnitin & ETS® e-rater.® http://www.writecheck.com, last accessed October 7.
Ison, David Carl (2012). “Plagiarism Among Dissertations: Prevalence at Online Institutions.” Journal of Academic Ethics 10(3), 227-236.
Jaschik, Scott (2009). “False Positives on Plagiarism.” Inside Higher Education, March 13.
Lohr, Steve (2011). “In Case You Wondered, a Real Human Being Wrote This Column.” New York Times, September 11, BU3.
Markoff, John (2013). “Essay-Grading Software Offers Professors a Break.” New York Times, April 4.
Paine, Albert Bigelow (1868-1935). Mark Twain: A Biography. Compiled and republished from Overland Monthly / Out West Magazine, redistributed via Project Gutenberg.
Pearson, E.S. (1968). “Studies in the History of Probability and Statistics, XX: Some Early Correspondence Between W.S. Gosset, R.A. Fisher, and Karl Pearson, With Notes and Comments.” Biometrika 55(3), 445-457.
Sante, Luc (2007). Kill All Your Darlings. Portland, OR: Verse Chorus Press.
Scanlon, Patrick M., and David R. Neumann (2002). “Internet Plagiarism Among College Students.” Journal of College Student Development 43(3), 374-385.
Smith, Neil (2010). “Academic Free Fall.” Social Text Periscope, August 21.
Smith, Neil (2006). “Nature as Accumulation Strategy.” Socialist Register, 16-36.
Teitel, Emma (2011). “Accused.” Maclean’s 124(43).
Walker, John (2010). “Measuring Plagiarism: Researching What Students Do, Not What They Say They Do.” Studies in Higher Education 35(1), 41-59.
Warburg Pincus (2012). “About Us.” Last accessed November 16. http://www.warburgpincus.com.
 With apologies to W.S. Gosset, and “to Arthur Guinness Sons and Co., Ltd, acting on behalf of Gosset’s family.” (see Pearson, 1968, p. 445). Student is a limited-liability thought collective. In developing this t-test, Student has been educated by Jatinder Dhillon, Sam Johns, Paige Patchin, Hannah D’souza, Emma Abdjalieva, Rebekah Parker, Wei Hao Loh, John Bul, Montana Yuen, and Elvin Wyly, corresponding author: email@example.com. “In any case I should be glad of your opinion of it,” Student writes, as reported in Pearson (1968, p. 446).