Now, I want to start this blog post with a caveat: I am in no way an engineer or a coder. I'm not a computer science guy; I'm a humanist. And this means that I may not know the intricacies of how some computer systems work. But it also means that I can spot a bad metaphor when I see one. And having said that, I'm hereby calling for a moratorium on all discussions of "cloud computing."
If you want to discuss what the above-linked Wikipedia article describes as “a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet,” there’s a term that’s already existed for quite some time: “remote storage and computing.”
The problem with “cloud computing” is that there’s no “cloud” there.
Clouds are masses of water vapor (and/or ice crystals) in the air. The vapor or ice begins to condense around certain “seeds,” or condensation nuclei, like airborne dust or salt.
In other words, the "stuff" of the cloud coalesces around an element that attracts the particles and brings them together. But it's still vapor. It's still made up of individual particles of water.
So-called "cloud computing" is nothing of the sort. It's entrusting your data and processes to a remote computer or set of servers. These computers are owned by a company, whether that company is Google or Salesforce.com. Those companies hold the data and the processes. They aren't "in the cloud"; they're in particular computers in particular places owned by particular people. This isn't "cloud computing." It's just remote data services and storage.
This isn't to say that the services we've been describing as "cloud computing" are a good or bad thing; I think there are strong arguments for both. Honestly, I like owning my data locally. But I also like knowing that there's a remote computer somewhere far off with all my important files, in case my house burns down tomorrow. There are privacy issues, definitely, but I have more faith in the ability of some of these companies to keep my data secure than I do in my own ability to keep my networked computer completely safe. I don't have on-site security experts. They do. I could go back and forth all day. But the main thing is, it's a misnomer.
Part of the reason that the inaccuracy of the term bothers me is that I think the metaphor has potential. But let’s look at making something truly cloud-like.
What would truly “cloud-like” cloud computing look like? It would take data, disaggregate it, and distribute it across a number of other computers that all had a tiny piece of the data. Like a droplet of water, none of those tiny pieces of data would be the whole “cloud,” just one of the many small parts that, in toto, constitute the data cloud itself.
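To make that image concrete, here's a minimal sketch of disaggregation and reaggregation in Python. Everything in it is hypothetical and purely illustrative (the stand-in "peers" are just lists, the chunk size is tiny, and the helper names are my own invention, not any real protocol): data is broken into droplets, scattered across peers, and reassembled from the pieces.

```python
# Illustrative sketch only: split data into small "droplets," scatter
# them across hypothetical peers, then reassemble the whole "cloud."

CHUNK_SIZE = 4  # tiny for demonstration; real systems use far larger chunks


def split_into_droplets(data: bytes, size: int = CHUNK_SIZE):
    """Break data into fixed-size pieces, remembering each piece's position."""
    return [(i, data[i:i + size]) for i in range(0, len(data), size)]


def scatter(droplets, peers):
    """Deal droplets round-robin onto the peers; no peer holds the whole."""
    for n, droplet in enumerate(droplets):
        peers[n % len(peers)].append(droplet)


def gather(peers) -> bytes:
    """Collect every droplet the peers hold and reassemble them in order."""
    pieces = sorted(d for peer in peers for d in peer)
    return b"".join(piece for _, piece in pieces)


peers = [[] for _ in range(3)]  # three stand-in computers
original = b"clouds are made of droplets"
scatter(split_into_droplets(original), peers)
assert gather(peers) == original  # the reassembled data matches
```

The point of the toy is just the shape of the system: each peer ends up with a few positioned fragments, and only the collective reconstitutes the data.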
Distributed computing is a powerful tool; it's what makes BitTorrent such a useful file-sharing protocol. As computers and networks become faster and more powerful, there will be more opportunities to follow this model of disaggregation and reaggregation of more (and more complex) data.
This new "true cloud" computing would have some obvious drawbacks. Anyone who uses torrents will tell you that it's not the quickest way to move data. There would definitely be security concerns as well. But not all of your data needs to be that secure. And there's at least the possibility that systems of encryption could ensure that any given "droplet" of data would be essentially useless to anyone who didn't have access to all the rest of it. And the system would have to incorporate massive redundancies as well, so that one guy in Iceland shutting his computer off or losing power wouldn't suddenly result in your own inability to access important data.
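One textbook way to get that "useless droplet" property is XOR secret sharing, where every share looks like random noise and only the complete set reconstructs the data. This is a toy sketch of the idea, not the mechanism of any real service, and the redundancy note at the end is simple replication, the crudest possible scheme:

```python
import secrets


def make_shares(data: bytes, n: int):
    """Split data into n XOR shares; any subset short of all n reveals nothing."""
    shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    last = data
    for s in shares:  # fold the random shares into the final one
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares


def combine(shares):
    """XOR every share back together to recover the original bytes."""
    out = bytes(len(shares[0]))  # start from all zeros
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out


secret = b"my important files"
shares = make_shares(secret, 5)
assert combine(shares) == secret
# Each individual droplet is statistically indistinguishable from noise;
# for redundancy, each share could simply be replicated on several peers.
```

Dropping even one share leaves the rest worthless, which is exactly why a real system would pair a scheme like this with heavy replication.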
But for some functions, such a system would work far better than the current model. One example: Twitter. Twitter is almost as famous for the fail whale as it is for its sudden and striking ubiquity. The company’s servers go down, and there is no Twitter until they go back up.
A "true cloud" Twitter could adapt to failure, rerouting you through various networks to your and your friends' tweets, as long as a certain critical mass of users were online at any given time. It could scale quite well. I've been saying this to friends for a while now: what Twitter needs is an open-source, distributed alternative. The fact that the service has, from the get-go, worked on an API model means that there could even be a place for Twitter, the company, within the greater cloud of Twitter, the distributed microblogging protocol.
Again, I’m not a computer scientist or a programmer. I’m just a humanist who’s interested in looking at how evolving digital media shape our lives. But I think there’s the seed of a good idea in the phrase “cloud computing.”
So first things first– let’s stop using the term to describe something that isn’t cloud-like at all.