Why “Hack the Academy”?

From the moment it was announced almost a week ago, I was excited about the Hacking the Academy project. As a fan of Oulipo and Oubapo, I find the notion of trying to crowdsource the meat of an edited volume in a single week particularly exciting. Imposing constraints, even arbitrary ones, is an effective technique for fostering creative thought, generating new ideas, and forcing one to reassess convention. Which, of course, is all in keeping with the very spirit of the book.

However, as I began to explain the project to friends outside the digital humanities, even my academic friends who just aren’t plugged into the world of computer-based methodologies in humanistic research and pedagogy, I got a lot of confused looks and cocked heads when I mentioned the title.

“What does that mean, exactly?” was a common reply.

The metaphor of hacking is central to this project. And I think it’s extremely apt. But the term is a subtle one, and frequently misused in public discourse. To avoid preaching to the choir– to make this project more comprehensible and useful to readers who may be coming from a less technical background– I think it’s important to talk, briefly, about what “hacking” means, and what it might mean to “hack the academy.”

Popular Images of Hackers

From news accounts, film, and television, most people have a certain concept of what the term “hacker” means. And it’s not a term with many positive associations. News accounts over the last twenty-six years or so have constructed a notion of hackers as a dangerous element– young men in basements, ruthlessly attempting to subvert any notion of security in the age of networked computers. Hackers endanger national security by cracking into government and military networks. (The net itself, after all, was born of a defense project– DARPA’s ARPANET.) Hackers are trying to steal your personal data. They want to steal your passwords and empty your bank account. They are malevolent, egotistical, and avaricious.

Movies like WarGames and 1995’s Hackers bring a more human face to hackers, portraying them as young men (they are almost always portrayed as men) driven by youthful exuberance, curiosity, and misguided idealism, who nevertheless get involved in a very dangerous game of violating security. And from sources like these we get the imagery that dominates the public imagination about hackers: dark rooms, incessant typing into Unix terminals, and sometimes strange three-dimensional graphical interfaces through which the hacker virtually flies past towers of pure information.

However, all of this focuses simply on crackers, a specific subgroup of hackers who “crack” security systems. The term itself has a far more expansive, impressionistic meaning.

The Meaning of “Hack”

The Jargon File presents a wide variety of meanings for the term “hack,” as well as a short article on the topic. There are many definitions of “hack,” some of them seemingly deeply contradictory. And yet there is, in the final analysis, a unity to the term.

Originally, the term was used to describe code, and there were two opposing senses in which a piece of code could be called a “hack.” One was that it was expertly written, efficient, and did precisely what it was intended to do, with elegance. The other was that the code was hastily written, sloppy, and essentially only just good enough– a workaround, the software equivalent of a hardware kludge.
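For readers without a coding background, a small hypothetical example may make the distinction concrete. (The example and function names are mine, in Python– an anachronism for the earliest hackers, but convenient.) Both functions below solve the same problem, removing duplicates from a list while keeping the original order; one is the elegant hack, the other the quick-and-dirty one.

```python
def dedupe_elegant(items):
    # The "elegant" hack: concise and efficient. Dictionaries in modern
    # Python preserve insertion order, so the keys give us each item's
    # first occurrence with later duplicates dropped.
    return list(dict.fromkeys(items))

def dedupe_kludge(items):
    # The quick-and-dirty hack: verbose and slow (it re-scans the result
    # for every item), but it works– a workaround that is "only just
    # good enough."
    seen = []
    for item in items:
        found = False
        for s in seen:
            if s == item:
                found = True
        if not found:
            seen.append(item)
    return seen

print(dedupe_elegant([3, 1, 3, 2, 1]))  # [3, 1, 2]
print(dedupe_kludge([3, 1, 3, 2, 1]))   # [3, 1, 2]
```

Both produce exactly the same answer; the difference is one of craftsmanship, not function. And that is precisely the point.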

As mutually exclusive as these two connotations of the term may seem, however, both the polished, impressive hack and the quick-and-dirty hack have a fundamental similarity. They are both born of a certain relationship to a certain type of knowledge.

Hackers are autodidacts. From the earliest hackers working at large research universities on the first networks to anyone who deserves the term today, a hacker is a person who looks at systemic knowledge structures and learns about them from making or doing. They teach themselves and one another because they are at the bleeding edge of knowledge about that system.

Through that type of knowledge-seeking and knowledge-creation, you may come to a fork in the road with a particular problem you are working on, and have to decide between an ugly hack and an elegant one. Either way, the product is functional, it does something, and it is innovative. And it is a product of your relationship to that systemic knowledge structure– to the computer languages, networking protocols, and so on.

The culture of the first people to use the term “hack” produced a second-order meaning, as well. A hack is a practical joke, a playful subversion or gaming of a system. The MIT Gallery of Hacks presents a fascinating history of such hacks on the MIT campus, from Caltech’s cannon mysteriously disappearing and re-appearing at MIT to a Campus Police car appearing on the roof of the MIT dome. These “hacks” aren’t really so different, however, from the software hacks discussed above. There is a sense of play in coding, too– it is not apparent to everyone, but it is there. And the fundamental action is the same: the clever gaming of complex systems to produce an unprecedented result.

The Hacker Ethos

Learning about and improving highly complex systems by playful innovation is at the core of what I would call the “hacker ethos.” The fact that this is about a relationship to knowledge systems means that the term has, over the last thirty years or so, come to be applied to an ever-growing assortment of activities.

This was very apparent when a group of us at THATCamp 2010 attempted to put together the beginnings of a syllabus on “Ethical Hacking for the Humanities.” The more we discussed what should be included, the more amorphous the whole endeavor began to feel. Just a sampling of the things that were mentioned:

  1. Life Hacking, the application of scripts, codes, and tricks to make one’s life more productive and efficient.
  2. Case Modding, the modification of computer cases to create functional artwork.
  3. Phone Phreaking, the gaming of telecommunications systems to get free calls.
  4. iPhone Jailbreaking, the act of hacking an iPhone to run non-Apple-approved software.
  5. Ikea Hacking, the use of modular Ikea furniture components to create other types of furniture not offered by Ikea.
  6. Game Modding, the revision of computer games (and creation of new games) by the alteration and cannibalization of existing game code.

…And that’s just a partial list.

But in each of these activities, you can see the kernel of the same hacker ethos. Each of these activities is based on the use of playful creation to enrich knowledge of complex systems, whether you’re making furniture from the complex system that is the Ikea catalog, or learning how to game Ma Bell for free calls to Bangalore.

This sort of playful creation should not be unfamiliar to academics. It’s not dissimilar to the Situationist International’s concept of détournement, or Dick Hebdige’s notion of subcultural style systems. It’s Lévi-Strauss’s bricolage re-imagined for a time when computers have replaced magic.

A different approach to this hacker ethos can be found in what Eric Steven Raymond has described as “The Hacker Attitude.” Raymond discusses five elements that he feels are central to the hacker attitude, an attitude born of what I’d describe as the general hacker ethos:

  1. The world is full of fascinating problems waiting to be solved.
  2. No problem should ever have to be solved twice.
  3. Boredom and drudgery are evil.
  4. Freedom is good.
  5. Attitude is no substitute for competence.

…I’d argue that a great number of academics would agree with most if not all of those statements, though they might not want to admit to it.

Why Hack the Academy?

Many of the entries in this project offer answers to this question, and I don’t have the space or time here to even begin fully answering it. Let me sketch out a few thoughts, however.

The academy is approaching a new integration with revolutionary new technology. We’ve quickly gone from computers in the classroom to classrooms inside computers, and to the integration of new media into the very fabric of classroom interaction. Computer-based research in the age of ubiquitous, fast, and cheap computing is fundamentally changing our approaches to research, collegiality, and collaboration. Pure information is getting cheaper and more easily accessible, while the mental and coding chops to process the glut of information are becoming more and more valuable in the new knowledge economy.

We can see two highly complex systems– computer technology and the academy, one complex by nature and one deeply complex by force of history– colliding and hybridizing. And as this happens, we are faced with a situation where even the very clever people on the cutting edge, those with working knowledge of both systems, cannot fully synthesize them and predict outcomes. We don’t know what this hybridization will amount to. So all we can do is steer it, by getting out there and learning more through playful creation. We have to make the tools that steer the future of academia, or that future will be steered by whoever has the best sales pitch to the administrators. We have to create tools and efficiencies that improve the way we do things, because only by doing so can we fully understand the new world we inhabit.

In other words, we have to embrace the hacker ethos.

There’s a lot to be bleak about when you look to the future of higher education. The academic job market is grim. The publishing system seems on the verge of economic collapse. Universities are quickly becoming prohibitively expensive for the vast majority of students, who are in turn forced into an exploitative system of student loans. The system, to some of us, appears to be broken.

But when a system fails, you hack around it. Some hacks may be elegant and subtle– almost poetic. Others are nasty hacks that only really work for a single case– but either way, you’ve routed around the problem. You’ve fixed something. You’ve improved functionality. And likely, you’ve learned a little something about the functioning of the system you’re working with, and will be better prepared the next time you find a bug.

The hacker ethos, in the end, might save us– or at least prolong the life of the academy as we know it.

And finally, there is that sense of play. It’s something that “serious” academics don’t get to explore as often as they should. Play is good for the soul– it reinvigorates, brings joy, renews commitments. It makes things fun. And it is also good for the intellect. Play leads to types of problem-solving and synthesis that would otherwise be impossible. There’s a reason that “clever” means both funny and smart. And reading through the submissions to this project, I think that’s one theme that comes back again and again. We’re working on fixing things, yes, and we’re trying to figure things out. But we’re also having a heck of a lot of fun.

The academy, ultimately, can only be invigorated and improved by an infusion of the hacker ethos that goes beyond the Computer Science departments and infects all the disciplines. It has the potential to help fix problems in the system, deepen our understanding, and make our lives a little more fun.

So yes, we should go out and hack the academy.

Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 Unported License.


Are EdTechers Ahead of the Curve?

My copy of Wired came in the mail last week. As is my habit with magazines, I put it in a pile in a corner of my room, planning to look at it at some point when I just couldn’t stand to read something more “productive.” (Read: when I can’t stand to look at another History book.)

Gary Wolf’s article on Craigslist is an interesting read, and provides (at least in my reading) an interesting insight into how maintaining a purity in organizational culture and delivering a product that just plain works can be even more important than keeping current and offering the latest bells and whistles.

But then I read Robert Capps’s “The Good Enough Revolution: When Cheap and Simple Is Just Fine,” a really good piece about how sometimes low-def, cheap, and simple is actually better than high-fidelity, premium, and blinged-out. Capps gives a lot of great examples.

Just one thing, though: I already wrote about the same thing. Months ago.

Back in August, without being clever enough to coin the phrase “good enough tech,” I talked about exactly this sort of approach to educational technology. I argued that it’s precisely this DIY, kludgey, corners-cutting mentality that is what’s so “punk” about EDUPUNK. Quick and dirty, cheap and simple is just better for certain things, and these qualities better match the needs, budgets, and time constraints of digital educators.

So yeah, I know not many people read this, but I like to think that– to a certain extent– I scooped Wired.

Similarly, I was listening yesterday to the most recent episode of This Week in Google– my new favorite podcast– and the subject of Brown announcing that they’re testing the idea of switching their university email to Gmail came up. The discussion– while I usually find the show quite thought-provoking– was brief and somewhat superficial.

And I couldn’t help but notice that it was much less nuanced and thought-out than a similar discussion that Tom Scheinfeldt, Dan Cohen, and Mills Kelly had last November on the Digital Campus podcast.

These two things, in the last two days, have gotten me thinking– does EdTech have a visibility problem? Are the people working in or on Educational Technology actually somewhat ahead of the curve, and just not being heard in the greater tech community? Does throwing “Educational” in front of Tech somehow take you out of the tech discussion? And is this a positive or a negative?

I just watched the Marx Brothers’ 1932 movie Horse Feathers, and was struck at how little public perception of Academia has really changed in the years between 1932– the year of Hitler’s unsuccessful run for president against Hindenburg– and now. And by how, compared to the rest of the world, postsecondary education really is relatively unchanged since then.

Then as now, academics were seen as dull, myopic, entrenched in a culture that was deeply out of touch with everyday life. And there’s some truth behind those perceptions. It’s part of what makes pushing forward an agenda of technologically progressive, student-oriented education so tough.

But are the EdTechers being taken out of the dialog in the larger tech world because they’re being lumped with precisely the people they have to constantly struggle to try to convert to the new realities of education in the 21st century?

When can we get someone from EdTech as a guest host on This Week in Tech or This Week in Google?

How can we start getting Education coverage in Wired?

Why can’t TechCrunch get Blackboard’s internal documents, like it did Twitter’s?

Or am I asking the wrong questions? Should we revel in our relative obscurity? Can you do more damage when you’re off the radar?


Digital Scholarship and Peer Review– The Question of Where…

I was writing a reply to Mills Kelly’s most recent post, and realized that my reply was long enough to constitute its own post. I suppose this is exactly what trackbacks are for.

The whole pre-press peer review process is based on a different model of the economy of publishing. Post-publication review is better suited to online work, where we have the ability to keep everything in perpetual beta. (And I’d argue that there’s a difference between the feedback of blog comments– which one commenter aptly likened to responses at a conference panel– and an actual critical review, like one finds at the end of most scholarly journals.)

But this brings one to the question of where post-publication review could best be disseminated. More scholarly, critical reviews of online scholarship are definitely a must, but where would they best be published? Putting them in traditional print journals gives some name-brand credibility and authority, which online scholarship could definitely use. But publishing reviews in such journals closes off the dialogical potential of digital scholarship.

Blogs published by individual scholars would seem a good vehicle, but many scholars capable of producing great critical reviews don’t have the time or the inclination to maintain a blog, or to foster the audience that grants individual blogs their status.

And then there’s the option of online journals, which might avoid some of the problems of the previously mentioned formats, but bring plenty of their own. Many (most?) are too new to have built up sufficient academic cachet, especially among those resistant to digital scholarship. Many online journals aren’t indexed in subscription-based journal databases like JSTOR, rendering them invisible to less net-savvy scholars. Moreover, the ability of an online journal to be responsive, dynamic, and dialogical– the very advantage it possesses over print journals– poses a further question: when would these reviews ever really be done? Some of the advantages of review articles– that they’re relatively quick and easy to write, for example, and thus good CV-fodder for newer scholars building their publication lists– would be lost if one had to perpetually update, constantly adjusting a review to the most recent revisions of a site’s content or design.

No answer is ideal. Perhaps the best answer would be a new model, some format not yet in existence. Barring that, maybe we should think about how best to use all three in tandem. The AHA’s Perspectives has both an online and a print presence. Magazines and journals like that could serve as a good bridge, offering the prestige of print with the capacity for online revision.


Another call for Open Access

Just a couple days after I talked about Open Access book publishing, the incomparable danah boyd, in her blog, calls for the elimination of locked-down academic journals and databases.

I’m all for her idea– after all, we’re all accessing journals electronically at this point anyway, and server space is far cheaper and more flexible than the current publication/database model. I know that some academic libraries– including one of my former institutions– are so cash-strapped that all they can afford to do is maintain their database subscriptions, and have had to put book purchases on hold.

One question I would pose, however: the open-access model works much better going forward, as we look to our next publication. What kind of model could replace or lower barriers to the “backwards-facing” (for lack of a better term) aspect of academic journal databases? Is there any way we can open up access to the vast array of old journal articles that are, to most of us, accessible only via databases like JSTOR, EBSCO, or Project MUSE?


Open Access Academic Publishing

Dan Cohen’s blog has brought to my attention an interesting article by Charles Bazerman, David Blakesley, Mike Palmquist, and David Russell about the positive response to their book, Writing Selves/Writing Societies.

If the data they collected from their experience in electronic, open access academic publishing is generalizable, it presents a strong argument that this is something we should all be looking into. Their book has been widely cited, and when one compares its number of full downloads to the sales figures of many university press titles, the authors are clearly doing a good job of disseminating the information. (Especially when you factor in that libraries are likely not part of the equation, as they don’t need to download an electronic version of a book that’s already freely available.)

Academics aren’t exactly using publications to pay for summer homes in the Hamptons. So, other than providing peer review (which both the book in question and the journal the article appears in seem to manage without a publisher’s aid) and lending a patina of academic respectability for tenure review and hiring committees (which really only goes so far unless other things happen that are out of the publisher’s hands, like the book being reviewed, and reviewed well), one has to ask what academics are really getting out of the deal. What is the added value of an academic publisher?

Well, for one thing, you’re getting a book. A real, solid book. One that will never have to be reformatted, that archives itself, that can survive a coffee spill or a trip to the beach when you’re at that summer place in the Hamptons. Academia is lousy with bibliophiles, print fetishists, technophobes, and Luddites who still resent or refuse to use their email accounts.

Sure, Sony and Amazon have both premiered their first takes on the portable ebook reader, but since you can’t back up those systems and no standards have been set, who knows whether a book you purchase for either of them will be accessible ten years from now. As long as you have shelf space, your printed book will be accessible. Servers go down and websites disappear with a lot more regularity than entire libraries do. Having a physical book is an insurance policy. And sure, you could download a book and print it yourself, but it’s a pain to do all that, ink costs money, and binders take up a lot more shelf space.

That doesn’t mean that there’s no place for open access publishing, or that we still need the current academic press setup. While print-on-demand services like Lulu have thus far been relegated, in terms of publishing stature, to the ghetto of the vanity press, there’s no reason that a peer-reviewed scholarly book couldn’t be published by the same means.

Assuming that the author is willing to forgo any profit (often the case for scholarly publication anyway) and to sell the book “at cost” (that is, at the price at which Lulu recoups its operating costs and makes its profit), it’s still competitive with most university presses. I used their pricing calculator, and a 250-page, black-and-white, 6″x9″ hardcover with jacket should run about twenty bucks per book. That’s less than most of the books for my classes this semester at the campus bookstore. Lulu offers bulk pricing reductions, so if a buying consortium of libraries could be formed, the book could be sold to university and public libraries at an even lower cost.

Heck, if they ended up with a significant enough market share, it might be possible to convince the folks at Lulu that it’d be worthwhile (and possibly tax-deductible) to set up a nonprofit “academic imprint” of Lulu, defraying costs further.

The Espresso Book Machine offers the possibility of a similar model, with the exciting addition of on-site production and pickup.

There’s something to be said for the notion of gatekeepers, but the gatekeepers seem to be more and more of a hindrance. It may be time, as publishing is already an industry in the midst of upheaval, to think about establishing a new paradigm– finding a new way to integrate this gatekeeping function. Peer review, like academic publishing itself, is already close to a volunteer service that academics perform for the community– there’s just not enough money in academic publishing to make it affordable to put out all the good work being produced. Print runs are small. Editorial supervision is shrinking. Academics talk about their work being for the public good, about enlightenment and progress. Taking the money out of the equation seems in keeping with all this.

There’s more to be gained for authors than just the high-minded ideals I’ve been discussing. It puts the reins of design in our hands. It gives academics control over how their books look. There are some books I love– and I’ll avoid naming names to protect the innocent– that have suffered from being (apparently) completely neglected by the design staff at the university press. And as much as we want people to notice the content of our writing, we have to acknowledge that design matters. Ugly books are actually harder to read. Attractive things work better. Putting the author at the helm, giving him or her the reins, is the best way to ensure that the formatting of the book will be handled carefully and with love.

By extension, making sure that students in programs that emphasize digital and new media scholarship can produce well-designed print layouts should be something we encourage– or even demand– in such programs. There’s more to digital scholarship than databases and Dreamweaver. Let’s not forget that these new technologies have, in addition to providing new methods of analysis and dissemination of knowledge, expanded access to “old media” as well.


The Cold War and the Militarization of the Academy

It is a widely discussed problem within higher education that the current job market is, to say the least, a difficult one. Universities are creating fewer and fewer tenure-track positions, relying on adjuncts, graduate students, and limited-term visiting professors for a growing share of the teaching load. Many if not most disciplines produce more PhDs than there are academic jobs to be filled. Public universities in most states face the constant threat of reduced funding. One of the primary reasons for this state of affairs can be traced directly to the first fifteen years or so of the Cold War. In the years between World War II and 1960, the United States government began a massive and unparalleled investment in higher education, through grants, endowments, and the GI Bill, in order to promote its anti-Soviet agenda. The beginning of Perestroika and the eventual collapse of the Soviet Union then created a problem for American academics—the US university system had grown, over nearly half a century of federal investment for Cold War aims, to a point that was unsustainable without continued levels of funding. When the specter of Communism was no longer enough to justify previous levels of spending, disinvestment began, and, as is the case with most large, bureaucratic systems, the American university system was slow to react and adapt.

Three books that look at different disciplines in the years between 1945 and 1960 (Jessica Wang’s American Science in an Age of Anxiety, Paul Edwards’s The Closed World, and Peter Novick’s That Noble Dream) all point to the unprecedented investment made by the federal government at that time. Looking at the level to which the academy was, in effect, militarized in order to advance the Cold War cause, it quickly becomes apparent why this level of subsidy for academe would become unsustainable after the end of the Cold War.

Jessica Wang’s excellent American Science in an Age of Anxiety is a history of atomic scientists in the latter half of the 1940s. Wang’s account shows that there was considerable initial trepidation in at least certain circles of atomic scientists about the increasing role of the federal government in funding atomic research, fearing “that military patronage would adversely affect the content and character of physics research.” (38) A particularly clear expression of this fear can be found in Eugene Rabinowitch’s editorial “Science, a Branch of the Military?” in a 1946 issue of the Bulletin of the Atomic Scientists, which worried about the subordination of scientific research to military goals, the suppression of traditional scientific exchange in the interest of national security, and the impression such a relationship could give the rest of the world that the US was determined to continue military buildup and pursue aggressive foreign policies. (39-40) Further, some scientists initially argued that the control and regulation of atomic technologies was too dangerous to be left in the hands of military-governmental bodies at all, and should instead be handled by an international body of dispassionate scientists committed to civilian applications of nuclear technologies.

Such dissenting voices on the direction of US atomic policy were functionally silenced within the scientific community in under five years, however, due in large part to scrutiny, and sometimes persecution, in the name of domestic anticommunism by federal organizations, most notably the FBI and the House Committee on Un-American Activities. The post-World War II Red Scare had a profound chilling effect on dissident atomic scientists, Wang argues, by splitting the left into two camps—left anticommunists, who put practicality and spectacles of patriotism ahead of ideals in order to maintain their position amid anticommunist paranoia, and the progressive left, who were most often silenced by anticommunist persecution, and who often faced tragic consequences when they refused to be silenced.

Beyond the “stick” of blacklisting and persecution, Wang also notes the powerful role of the “carrot” of scientific research funding. While many of the top atomic research facilities were governmentally founded organizations or semiautonomous institutes dependent on military and federal funding, many others were associated in some way with America’s top universities. Looking at a 1947 FBI memo on the loyalties of various chapters of the Federation of American Scientists, one quickly notices that along with military-industrial centers like Los Alamos and Northern California, many other chapters were located in cities that were home to some of the US’s top research universities, including Cambridge, Ithaca, and Rochester. (65) Beyond simply funding research institutions, the government also invested in individual students of promise: the Atomic Energy Commission’s fellowship program, for example, “awarded grants to almost five hundred young physicists, biologists, and medical researchers in 1948 and 1949. At the time, it was the largest program for advanced science education in the nation’s history…” (220) By 1950, the prerequisites for the fellowship included a loyalty oath and an affidavit denying ties to the Communist Party. In this way, the federal government attempted to ensure that the top researchers of the next generation would be properly ideologically vetted and screened while still in training.

While focusing more on the “carrot” of funding than the “stick” of anticommunism, Paul Edwards’s The Closed World illustrates the profound impact of the militarization of the academy on scientific research far beyond Wang’s atomic scientists. Edwards argues that Eisenhower’s notion of a “military-industrial complex” overlooks the important role of the academy in the military, technological, and scientific buildup of the Cold War; he prefers an “‘iron triangle’ of self-perpetuating academic, industrial, and military collaboration.” (47)

Two aspects of this collaboration to which he pays particular attention are research in computers and in psychology. While early research into computers was certainly driven by military funding and researchers, as well as corporate players like IBM and Bell Labs, universities like MIT and Harvard were integral to early developments in computer science. Many researchers moved back and forth between private, governmental, and academic research positions. Like atomic energy, the military uses of computers are immediately obvious, and it is perhaps not surprising that the federal government would invest large amounts of money in such projects. However, Edwards’s discussion of the funding of psychological research yields somewhat surprising numbers. In the years between 1941 and 1960, the American Psychological Association’s membership grew from 2,600 to 12,000. During World War II, the majority of its membership worked on war-related research, and half of all professional psychologists were employees of the federal government. (177) The government was interested in the uses of psychology for everything from propaganda and public opinion to finding better ways to regulate and control military personnel. Moreover, as Edwards makes quite clear, there was a lot of overlap between some of the earliest developers of computers and information theory and some of the most influential psychologists of the time.

The militarization of the academy was not limited to the sciences, however. While his classic That Noble Dream chronicles roughly a hundred years of the historical profession, Peter Novick’s discussion of historians at the onset of the Cold War bears mention here. Novick notes a sharp rise in diplomatic history in the years immediately following World War II. (305) Moreover, while “private philanthropic organizations… provided initial funding for most of these ventures… as the academic cold war became institutionalized, a program of official government grants became established, mostly under one or another ‘national defense’ rubric.” (310) This new influx of money and interest led to the creation of integrated area studies programs throughout America: twenty-nine had been created by 1950, and by 1965 that number had grown to 153. (310) While the research that came out of these diplomatic histories and area studies programs tended to reflect a strong interest in the narrative of the US as the Free World, diametrically opposed to the Totalitarianism of the Soviet Union, the perception of the field from within was one of absolute objectivity, of history as an account of Truths about the past.

Beyond its investment in the institutions and members of the academy, the federal government also made a massive investment in the tuition of the many who came back from World War II, and later Korea, under the GI Bill. According to the Department of Veterans Affairs, GI Bill veterans made up almost half of all college students in 1947, and ultimately 7.8 million out of 16 million World War II veterans took part in education or training programs under the bill. Such a large infusion of students helped create demand for a large number of positions throughout higher education, meaning that even academics working outside areas of direct military interest benefited from this federal investment.

Jessica Wang has noted that many scientists, in reaction to the post-Cold War drawdown in research funding, are more likely to rail against postmodernist critics of science and general public ignorance than to look back at the historical trends that created this situation. (290) What she does not address, however, is whether this federal and military disinvestment in higher education after the end of the Cold War has put the American academy in an untenable situation. Did the infusion of so much money over half a century artificially inflate the size and number of postsecondary institutions to the point where they cannot structurally adjust to this lessening of funding? How can we maintain, or even responsibly reduce while preserving the primary benefits of, the previous levels of research in this new age? America has become a leading exporter of the educated, with more foreign students coming to the States for an education than ever before; but if the academic primacy this is built on was the result of Cold War funding, will this disinvestment bring a decrease in the prestige of American higher education, or even of America itself? Even with a war going on, the GI Bill (now the Montgomery GI Bill) is nowhere near what it was in the middle of the twentieth century. Veterans coming back from the Middle East will not be flooding into our institutions of higher learning on the government’s tab, meaning that not only will the money the GI Bill represented be absent, but so will the talent it provided.

It is perhaps possible that, in framing the “War on Terror” as a generational conflict, the current administration has created a discourse that could once again justify channeling military and federal funding into higher education, but that is uncertain at best, and one cannot in good conscience hope for an outcome that relies on so much potential for human suffering. All one can say for sure is that this discussion needs to be continued and expanded, and that the history of this phenomenon and its ties to Cold War military spending should be a larger part of the broader public discourse on the topic, as it is so intrinsic to what is at stake.