Live-Tweeting the War of 1812

I’m fascinated by Andrew Smith’s just-beginning @Warof1812Live Twitter project. If you’re not familiar, check out the “About” page for the project’s accompanying blog. Real-Time WWII was a compelling project, and I think that this stream’s focus on primary sources means that it will be a great tool for both teachers and more casual learners.

As someone who studies communications history, I was immediately struck by the incongruity of live-tweeting events from an era when it still took over a week to get mail from Washington, D.C. to Nashville. There’s a disconnect between our near-instantaneous communication and the slowness with which information moved in the Early Republic. For this reason, I was elated to see that one of the project’s guiding principles is “To show how the slow communications systems of the era affected the diplomatic and military history of this conflict.”

Which brings me to the image at left, found on the project’s About page: a 1910 painting of the Battle of New Orleans by Edward Percy Moran, from the collections of the Library of Congress.

Allow me to engage in a bit of counterfactual history: if communications technologies had been roughly a hundred years more advanced, or even less, the Battle of New Orleans could have been avoided. Twice.

Next week, followers of @Warof1812Live will “witness” the British government’s repeal of the Orders in Council. The Orders in Council were issued in response to the Napoleonic Wars and barred British, allied, and neutral ships from trading with France. Britain’s practice of stopping neutral American trade vessels bound for France was, in turn, one of the major factors that precipitated the United States’ declaration of war on Britain.

The repeal of the Orders was a good-faith effort on the part of Britain, but it unfortunately came too late. While the Orders were repealed on June 16, Madison signed the declaration of war on June 18. The news of the repeal would not reach America for some time, and neither would Robert Stewart, the British Secretary of State, who had sailed for America ahead of the repeal he had advocated in an attempt to head off hostilities. By the time Madison learned of the repeal, he was unwilling to stop hostilities because he did not yet know how the British would react to the declaration of war.

Later, on Christmas Eve, 1814, the United States and Britain signed the Treaty of Ghent, which ended the War of 1812 and re-established prewar relations between the two nations. This news had not yet reached Generals Andrew Jackson or Edward Pakenham when they fought the Battle of New Orleans on January 8.

Information traveled faster and more efficiently than ever before in the time of the War of 1812, but it did not travel fast enough to avert these decisive moments. That would change very quickly, however. Between 1845 and 1860, commercial telegraphy sprang up across the United States, and information began moving so much faster that the effect can only be described as transformative. By 1866, a little more than fifty years after the war, a functional transatlantic telegraph cable could transmit at a rate that would have carried news of the repeal or the treaty to Washington, D.C. and New Orleans, respectively, in time.

By 1907, less than 100 years after the war, there was a regular transatlantic radio-telegram service connecting America and Europe, and by 1926, there was a commercial service providing reliable shortwave radio contact between the continents.

Of course, like all counterfactual history, this is speculative: no one can say for certain that faster communications alone would have decisively prevented the Battle of New Orleans. While the British blocking of trade between the US and France was one of the major factors that encouraged the move to war, it was not the only one. The repeal of the Orders in Council was a strong diplomatic move toward peace between the two nations, but it might not have been enough.

Likewise, when dealing with Andrew Jackson, historians always have to take force of personality into account as well as force of history. Jackson was a deeply bellicose man who used his military exploits strategically to advance himself. The Battle of New Orleans was a follow-up to hostilities that had begun a day before the treaty was signed at Ghent. He might well have moved up his battle plans and conveniently missed the news of the treaty, or pressed on with the understanding that, while the treaty was signed, it was not yet ratified.

Nevertheless, as a thought experiment, I think it’s a good one: it really underscores the importance of communications technology and the speed with which information travels. It is also a powerful illustration of just how much communications technologies advanced in the 19th century.

I’m looking forward to following the @Warof1812Live feed, and seeing how they use primary sources to illustrate the import of communications in that conflict.

Why *I* Tweet

Just because Jim Groom already did it, and did it better, doesn’t mean I can’t jump in with my two cents.

In response to Jeff Swain’s video asking, “Why Do You Tweet?”

The Cloud is a Lie.

Now, I want to start this blog post with a caveat: I am in no way an engineer or a coder. I’m not a computer science guy; I’m a humanist. And this means that I may not know the intricacies of how some computer systems work. But it also means that I can spot a bad metaphor when I see one. And having said that, I’m hereby calling for a moratorium on all discussions of “cloud computing.”

If you want to discuss what the above-linked Wikipedia article describes as “a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet,” there’s a term that’s already existed for quite some time: “remote storage and computing.”

The problem with “cloud computing” is that there’s no “cloud” there.

Clouds are masses of water vapor (and/or ice crystals) in the air. The vapor or ice begins to condense around certain “seeds,” or condensation nuclei, like airborne dust or salt.

In other words, the “stuff” of the cloud coalesces around an element that attracts it and brings it together. But it’s still vapor. It’s still made up of individual particles of water.

So-called “cloud computing” is nothing of the sort. It’s entrusting your data and processes to a remote computer or set of servers. These computers are owned by a company, whether that company is Google or Salesforce.com. Those companies hold the data and the processes. They aren’t “in the cloud”; they’re on particular computers in particular places owned by particular people. This isn’t “cloud computing.” It’s just remote data services and storage.

This isn’t to say that the services we’ve been describing as “cloud computing” are a good or bad thing; I think there are strong arguments for both. Honestly, I like owning my data locally. But I also like knowing that there’s a remote computer somewhere far off with all my important files, in case my house burns down tomorrow. There are privacy issues, definitely, but I have more faith in the ability of some of these companies to keep my data secure than I do in my own ability to keep my networked computer completely safe. I don’t have on-site security experts. They do. I could go back and forth all day. But the main thing is, it’s a misnomer.


Part of the reason that the inaccuracy of the term bothers me is that I think the metaphor has potential. But let’s look at making something truly cloud-like.

What would truly “cloud-like” cloud computing look like? It would take data, disaggregate it, and distribute it across a number of other computers, each holding only a tiny piece of it. Like a droplet of water, none of those tiny pieces of data would be the whole “cloud,” just one of the many small parts that, in toto, constitute the data cloud itself.
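To make that a little more concrete, here is a minimal Python sketch of the disaggregate-and-reassemble idea, under the assumption that we only care about the bookkeeping and not the networking. The node names, chunk size, and function names are all invented for illustration; a real system would obviously move the droplets over a network rather than between in-memory dictionaries.

```python
# A minimal sketch of "disaggregate and distribute": split data into droplets,
# hand them out to nodes, and pull them back together on demand.
# Everything here (node names, chunk size) is made up for illustration.

CHUNK_SIZE = 4  # bytes per "droplet"; tiny, just to keep the example readable

def disaggregate(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[bytes]:
    """Split data into small droplets, none of which is the whole 'cloud'."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def scatter(chunks: list[bytes], nodes: list[str]) -> dict[str, dict[int, bytes]]:
    """Hand each droplet to a node, round-robin style."""
    placement: dict[str, dict[int, bytes]] = {node: {} for node in nodes}
    for index, chunk in enumerate(chunks):
        node = nodes[index % len(nodes)]
        placement[node][index] = chunk
    return placement

def reaggregate(placement: dict[str, dict[int, bytes]]) -> bytes:
    """Collect the droplets back from every node and reassemble them in order."""
    pieces: dict[int, bytes] = {}
    for held in placement.values():
        pieces.update(held)
    return b"".join(pieces[i] for i in sorted(pieces))

if __name__ == "__main__":
    nodes = ["laptop-in-iceland", "desktop-in-ohio", "server-in-seoul"]
    original = b"No single droplet is the whole cloud."
    placement = scatter(disaggregate(original), nodes)
    assert reaggregate(placement) == original
```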

Distributed computing is a powerful tool; it’s what makes BitTorrent such a useful file-sharing protocol. As computers and networks become faster and more powerful, there will be more opportunities to follow this model of disaggregating and reaggregating more (and more complex) data.

This new “true cloud” computing would have some obvious drawbacks. Anyone who uses torrents will tell you that it’s not the quickest way to move data. There would definitely be security concerns as well. But not all of your data needs to be that secure. And there’s at least the possibility that systems of encryption could ensure that the data in any given “droplet” would be essentially useless to anyone who didn’t have access to all the rest of it. The system would also have to incorporate massive redundancies, so that one guy in Iceland shutting his computer off or losing power wouldn’t suddenly result in your own inability to access important data.
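As a rough illustration of the redundancy point, here is a small Python sketch that continues the hypothetical chunk-and-scatter example above: every droplet is copied to more than one node, so a single machine going offline costs you nothing. The node names and replication factor are invented, and per-droplet encryption is left out to keep the sketch short.

```python
# A sketch of redundancy: every droplet lives on several nodes, so one machine
# dropping offline doesn't take your data with it. Node names and the
# replication factor are invented; real systems would also encrypt each droplet.

REPLICATION_FACTOR = 2  # how many nodes hold a copy of each droplet

def replicate(chunks, nodes, copies=REPLICATION_FACTOR):
    """Place each chunk on `copies` different nodes."""
    placement = {node: {} for node in nodes}
    for index, chunk in enumerate(chunks):
        for offset in range(copies):
            node = nodes[(index + offset) % len(nodes)]
            placement[node][index] = chunk
    return placement

def reassemble(placement, offline=frozenset()):
    """Rebuild the data from whichever nodes are still reachable."""
    pieces = {}
    for node, held in placement.items():
        if node in offline:
            continue  # that one guy in Iceland turned his computer off
        pieces.update(held)
    return b"".join(pieces[i] for i in sorted(pieces))

if __name__ == "__main__":
    nodes = ["iceland", "ohio", "seoul", "lagos"]
    chunks = [b"The ", b"clou", b"d is", b" a l", b"ie."]
    placement = replicate(chunks, nodes)
    # Losing a single node still lets us reassemble everything.
    assert reassemble(placement, offline={"iceland"}) == b"The cloud is a lie."
```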

But for some functions, such a system would work far better than the current model. One example: Twitter. Twitter is almost as famous for the fail whale as it is for its sudden and striking ubiquity. The company’s servers go down, and there is no Twitter until they go back up.

A “true cloud” Twitter could adapt to failure, rerouting you through various networks to your and your friends’ tweets, as long as a certain critical mass of users were online at any given time. It could scale quite well. I’ve been saying this to friends for a while now: what Twitter needs is an open-source, distributed alternative. The fact that the service has, from the get-go, worked on an API model means that there could even be a place for Twitter, the company, within the greater cloud of Twitter, the distributed microblogging protocol.
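Purely as a thought experiment, here is a toy Python sketch of that rerouting idea: each peer caches the posts of the people it follows, and reading someone’s timeline means asking whichever of those peers happens to be online. Everything here (the Peer class, the names, the lookup) is hypothetical; a real distributed protocol would need peer discovery, identity, and much more.

```python
# A toy sketch of a "true cloud" microblog: followers cache posts, and a
# timeline is rebuilt from whichever peers are reachable right now.
# All names and structures are invented for illustration only.

class Peer:
    def __init__(self, name):
        self.name = name
        self.online = True
        self.cached_posts = {}  # author -> list of posts this peer has seen

    def follow(self, author, posts):
        """Cache another user's posts locally, as a follower's client would."""
        self.cached_posts.setdefault(author, []).extend(posts)

def read_timeline(author, peers):
    """Gather an author's posts from any online peer that has cached them."""
    timeline = []
    for peer in peers:
        if peer.online and author in peer.cached_posts:
            for post in peer.cached_posts[author]:
                if post not in timeline:
                    timeline.append(post)
    return timeline

if __name__ == "__main__":
    alice, bob, carol = Peer("alice"), Peer("bob"), Peer("carol")
    bob.follow("alice", ["soup for lunch", "off to the archives"])
    carol.follow("alice", ["off to the archives"])
    alice.online = False  # the author's own machine drops off the network
    print(read_timeline("alice", [alice, bob, carol]))
    # -> ['soup for lunch', 'off to the archives'], rebuilt from followers' caches
```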


Again, I’m not a computer scientist or a programmer. I’m just a humanist who’s interested in looking at how evolving digital media shape our lives. But I think there’s the seed of a good idea in the phrase “cloud computing.”

So first things first: let’s stop using the term to describe something that isn’t cloud-like at all.
