Tanya Clement presents digital tools as assets to the humanities; she suggests: “By identifying quantifiable pieces of a text using word frequencies and locations, these scholars have generated computer-assisted close readings of the structures of texts that correspond to, contradict, or otherwise provide interesting insight into what has been assumed about the texts on an abstract level.” She cites John Burrows’s statistical analysis of idiolects in Jane Austen, Wayne McKenna and Alexis Antonia’s plotting of modals in James Joyce, and Stephen Ramsay’s mapping of structural elements in Shakespeare. All of these projects validate or detail existing hypotheses in the humanities, hypotheses first formed without the help of digital tools. Clement adds that these tools “can help scholars generate hypotheses,” yet she later notes that “the computer’s ability to sort and illustrate quantified data helps identify patterns, but understanding why a pattern occurs and determining whether it is one that offers insight into a text requires technologies of self-reflective inquiry.” Her own project traces unique words, average word frequency, and largest number of words in works comparable to Gertrude Stein’s The Making of Americans. Her data supports the observation that The Making of Americans is experimental—a fact which can be seen on “almost any page.” It seems to me that self-reflective inquiry, in this case, must occur before a computer can sort data or identify patterns; in other words, the human-made hypothesis must come before the computer’s confirmation.
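To make the kind of quantification at stake here concrete, the sketch below computes the surface measures Clement mentions (total words, unique words, average word frequency) for any plain-text file. This is only an illustration of the general technique, not Clement’s actual tooling, and the file names in the usage comment are hypothetical.

```python
# Illustrative sketch of surface quantification: word counts, unique-word
# counts, and average frequency per word for a plain-text file.
import re
from collections import Counter

def word_stats(path):
    """Return total words, unique words, average frequency, and top words."""
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    freqs = Counter(words)
    total = len(words)
    unique = len(freqs)
    return {
        "total_words": total,
        "unique_words": unique,
        "avg_word_frequency": total / unique if unique else 0.0,
        "most_common": freqs.most_common(10),
    }

# Hypothetical usage: compare Stein against another long novel.
# print(word_stats("making_of_americans.txt"))
# print(word_stats("comparison_text.txt"))
```

Comparing these numbers across comparable works is what lets a claim like “this text is experimental” rest on something other than the impression of “almost any page.”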
Intriguingly, Clement asks how digital analysis and visual representations offer “a perspective from which we can begin to ask why we as close readers have found some patterns and yet left others undiscovered.” Now this is a thought-provoking question. If some element of a text (by which I mean any cultural artifact) entirely escapes notice during close reading but becomes dramatically crucial in a distant reading, what should we think? In her textual analysis of the Hodder episode in Stein’s The Making of Americans, Clement discovers that subject and style are not always concurrent, despite what critics have gathered from their close readings. Is this blip in Clement’s graph important? I’ve never read The Making of Americans, nor am I educated on the scholarly debate on exegesis and diegesis or identity construction in Stein’s work. Since I’m clueless, whom do I trust: the lineage of critics or the Spotfire scatter plot? From these results, Clement hypothesizes that “arguments scholars make about The Making of Americans are based on limited knowledge of the text’s underlying structure because the underlying patterns are difficult to discern with close reading.” Moreover, “Data-mining procedures proved to be productive in initially illuminating complex structural patterns that helped me discern those underlying patterns.” Are we to understand that text-mining programs can read better than humanists can? Can digital tools rebut and critique critics?
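For readers wondering what a structural “blip” in a graph might even look like, here is a hedged sketch of the general idea behind such pattern-finding. It is not Clement’s data-mining pipeline or her Spotfire workflow; it simply scores how repetitive the prose is in successive windows of a text, so an anomalous stretch shows up as a spike or dip in the resulting series. The file name is again hypothetical.

```python
# Sketch: a repetition profile over fixed-size windows of a text.
# A structural anomaly ("blip") appears as an outlier in the scores.
import re
from collections import Counter

def repetition_profile(path, window=1000, n=3):
    """For each window of `window` words, return the fraction of n-grams
    that occur more than once within that window."""
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    scores = []
    for start in range(0, len(words), window):
        chunk = words[start:start + window]
        ngrams = [" ".join(chunk[i:i + n]) for i in range(len(chunk) - n + 1)]
        counts = Counter(ngrams)
        repeated = sum(c for c in counts.values() if c > 1)
        scores.append(repeated / max(len(ngrams), 1))
    return scores  # plot these to eyeball spikes or dips across the text

# Hypothetical usage:
# profile = repetition_profile("making_of_americans.txt")
```

Whether such an outlier means anything is exactly the question the paragraph above raises; the script can surface the pattern, but it cannot say whether the pattern matters.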
An analogy might serve us here. Mercedes-Benz equips its cars with automatic braking that reacts faster than a driver’s reflexes. Buyers can rest assured their vehicle will always warn them when they drift into another lane. And worry not, no one will have to parallel park ever again. Mercedes-Benz assumes that humans are fallible and accident-prone in ways that its technology is not. Are we to assume the same for humanist inquiry? Are close readers fallible and unobservant in ways that digital tools are not? Do these tools support claims, or do they make them?