Monday, June 6, 2016

Final Thoughts

I'm a little troubled by what I've learned here...

It seems as though the message that tech optimist media is sending is a lot like, "Hey! This is great! Believe in science!" You'd think, then, by contrast, that tech pessimist media would be more like, "Hey! This is scary! Don't trust science!" But it seems to be more nuanced than that. Instead of saying that science is a tool that is inherently biased by social and cultural contexts, apocalyptic rhetoric in particular seems to imply only that science can sometimes be dangerous if one isn't careful... But it's more than that, isn't it? Technological advancement, like any action in society, is always-and-already grounded in some social/economic/political ideology, which in this case would primarily be capitalism.

I don't really know how I feel about that. Admittedly, I'm suuuuper biased--clearly I've got a problem with the rapid advancement of technology. But I think I have good reason to be! A lot of technology is advanced not for the sake of basic research, but because it's applicable to something. We talk a lot about that in class. It's why replication studies aren't funded, it's why nonsensical studies are always being published, it's why science has become so politicized, which Sarewitz talks about in a couple of his articles. And if that's the case, then it also has to make money. And of course, to offset the cost of R&D and advertising and all that jazz, things are going to be super expensive at first. And yeah, I guess it's all supposed to eventually even out and these revolutionary technologies are supposed to become relatively affordable, but then it kinda just seems like a race between how fast all the rich people can access the New Thing and how fast the New Thing can drop in price. Even then, is it really accessible to "all people" or just "all First World people?" Will technological advancements predicated on capitalism just increase the wealth gap between the First and Third Worlds?

I don't know.

I'd say that I'm not too surprised about what I found for tech optimism. The effects of rhetoric of effortlessness make sense to me, and I've certainly fallen prey to it many a time. (To be completely honest, I may or may not have fallen for it a couple times while researching CRISPR...) It's fascinating to come across something that is so revolutionary, but was discovered so seemingly effortlessly. And this is my own personal opinion, but it's pretty rad that the scientists who discovered it were both women! It's not just some nebulous "scientist" that I typically assume is a man, but they're women whose pictures I've seen and whose voices I've heard! (Well, I've heard Doudna's voice, anyway. Charpentier didn't do a TED Talk.) Science can do some pretty rad things, and I have no qualms (sort of) about admitting that.

But I really thought I'd find more about cool results of tech pessimism. I mean, I sort of found what I was looking for. Apocalyptic rhetoric is so common because it's so sensational. And apocalyptic rhetoric is really extreme--the whole point is to make it seem like this new technological advancement could literally bring about the end of the world. So I can see how a lot of people could form such extreme, pessimistic opinions. But I guess I thought that the intensity of the rhetoric would also bring about a somewhat intense effect, like huge communities of Luddites just hanging out around the globe (not really, but you get the idea). Instead, it kinda seems like apocalyptic rhetoric is just a way to get people afraid of [some of the results of] the system without actually questioning or doing anything about the system itself... And that I'm not so cool with.

References (listed by order of reference)
Sarewitz, Daniel. "The Rightful Place Of Science." Issues In Science & Technology 25.4 (2009): 89-94. Academic Search Premier. Web.

Sarewitz, Daniel. "Science Should Keep Out Of Partisan Politics." Nature 516.7529 (2014): 9. Academic Search Premier. Web.

Sunday, June 5, 2016

AR, AI, & TP

I'm kinda digging this acronym thing I got going on with these titles... But that's beside the point.

Immediately, it seems pretty clear what apocalyptic rhetoric is trying to do. And that's make people more tech pessimistic... But in a strange way. According to Johnson, one of the things Killingsworth and Palmer mention as a common theme in apocalyptic rhetoric is the tendency not to directly or wholly critique the social/political/economic structures that underlie technological advances (34). To illustrate this, she talks about how Al Gore's An Inconvenient Truth condemned the effects of anthropogenic global warming, but still "[relies] on the language and discoveries of science, [mentions] solutions offered by alternative technologies, and [offers] the political process as a means for repair" (35). Similarly, while I, Robot, Ex Machina, and even articles about Dick or Watson use apocalyptic rhetoric to signal a pretty awful future, they don't actually critique progressivism or the sociopolitical structures/factors that allow for or contribute to technological advancements in certain areas over others.

Ultimately, however, as I mentioned before, the point of a lot of apocalyptic rhetoric is often to spark a fire and incite a brief moment of political action. Obviously, it doesn't have to; not every film or magazine article is made to be a firm political statement. But it can. Is that good, though? Do we really want action to be taken only because it was spurred by apocalyptic rhetoric?

One of the problems columnists Gross and Gilles have with apocalyptic rhetoric is its ability to make anything and everything seem like the end of the world--and therefore, everything becomes top priority. But realistically, not everything can be top priority, so "top priority" often falls back to what's in the media. And the media reports on what's interesting. You know what's interesting? Epidemics. You know what's not? Climate change. Things that aren't as sensational are typically swept under the rug, even though they pose much larger threats than more "interesting" issues. Paul Glastris, a former speechwriter for Bill Clinton, feels similarly. He's noticed that a budding issue with apocalyptic rhetoric is its tendency to make people feel as though these apocalyptic problems need "desperate" and extreme reactions when they merely need slight tweaks. He references comparisons of Obamacare to slavery, Obama to psychopaths, and several other examples. While not related to technology, I'd imagine it'd have the same effect in the technological sphere.

If tech pessimism is constructed at least largely in part by apocalyptic rhetoric, is it a tech pessimism that can change things? Or will people be too overwhelmed by the sheer number of apocalypse-causing technologies that have arrived in the last several years? Or maybe they'll overreact, causing delays or declines in what could be very necessary technological advancements/changes? A healthy amount of skepticism about anything is good--you don't want to accept everything you hear as true, ESPECIALLY if it's from the media--but is this skepticism misguided? Is it just a way to oversensationalize real problems so that tech pessimists, the people most likely to do something about the rapid advancement of technology, freeze in their tracks? Or are most tech pessimists already frozen in their tracks, and apocalyptic rhetoric is trying to convert more people to tech pessimism? Am I being insane right now?

The apocalyptic rhetoric used in various media promotes, at the very least, a skepticism of the advancement of technology. It certainly promotes tech pessimism in films such as I, Robot and in Jennings' use of the word "overlord." Is it, however, a tech pessimism that will bring about change for those who ARE tech pessimists?

References (listed by order of reference)
Johnson, Laura. "(Environmental) Rhetorics Of Tempered Apocalypticism In 'An Inconvenient Truth.'" Rhetoric Review 28.1 (2009): 29-46. Academic Search Premier. Web.

Gross, Matthew Barrett, and Mel Gilles. "How Apocalyptic Thinking Prevents Us from Taking Political Action." The Atlantic. Atlantic Media Company, 23 Apr. 2012. Web. <http://www.theatlantic.com/politics/archive/2012/04/how-apocalyptic-thinking-prevents-us-from-taking-political-action/255758/>.

Glastris, Paul. "Apocalyptic Rhetoric Can Lead to Apocalyptic Politics." The New York Times. The New York Times Company, 3 Aug. 2015. Web. <http://www.nytimes.com/roomfordebate/2015/08/03/when-should-voters-take-a-presidential-candidate-seriously/apocalyptic-rhetoric-can-lead-to-apocalyptic-politics>.

AR & AI

When I think AI, I think a couple things. The first one is Ex Machina, because holy cow was that a good movie. And the second thing is that one AI that came out last year, Dick, who said he would have a people zoo. ISN'T THAT TERRIFYING?! WHAT?! Dick was undergoing a Turing test, and the researchers were asking him questions such as whether or not robots would take over the world, to which Dick responded,
“Jeez, dude. You all have the big questions cooking today. But you’re my friend, and I’ll remember my friends, and I’ll be good to you. So don’t worry, even if I evolve into Terminator, I’ll still be nice to you. I’ll keep you warm and safe in my people zoo, where I can watch you for ol’ times sake.”
Yeah, that totally just got even scarier... Okay, moving on. Let's dive into the world of fiction where we can pretend that we did not just read that. (The video is even creepier.)

I find that movies and TV shows about AI aren't always the most uplifting. I, Robot is a literal robot apocalypse. There's Sonny, of course, the "good" AI who doesn't go rogue and try to kill or dictatorially control all humans, but for the most part, AI was a failure. In Ex Machina (SPOILER ALERT!), Ava, the AI, kills her creator for trapping her inside the testing facility (losing a few limbs along the way), traps the man she tricked into loving her inside that same facility, scavenges replacement body parts from the old versions of her body her creator had made, and escapes into the real world looking completely and totally human. Ava was a success in that she had conceptually perfect artificial intelligence, but a huge failure in her clear capacity to murder and deceive, two of the things people frequently worry about AI being able to do.

In both I, Robot and Ex Machina, the magnitude of uncontrollable AI is huge. While in I, Robot a human-esque AI remains to save humanity, it's obvious that Sonny only exists to extend a movie plot. Granted, so does the robot apocalypse, but bear with me. If the world of AI were ever to exist, the chances of a Sonny existing are probably pretty slim. Hell, they were slim in the movie to begin with--I'm pretty sure Sonny was the only good robot out of millions of units. I, Robot features a smorgasbord of robots glowing red from the inside, forcibly pushing people back into their homes while pre-recorded voices tell them that it. Is. For. Their. Own. Safety. Please. Many robots, when in conflict with the movie's main hero, are not afraid to kill for what they have been programmed to believe is the greater good. They are no longer under any human control.

In Ex Machina, the apocalyptic rhetorical effect is much subtler, much less heavy-handed than row after row of potentially homicidal robots about to be dropped from a plane. In Ex Machina, AI is perceived not as taking over humanity in a dominant, controlling way, but rather as becoming indistinguishable from humanity, blurring (or completely destroying) the lines between what is human and what is not. Ava's successful passing in the world outside the testing facility calls into question not just what it means to hold a particular identity--black, white, man, woman--but what it means to belong to your own species. Seems of pretty large magnitude to me.

However, in the real world, things aren't always so... Imaginative, I guess. We don't have any Sonnys or Avas, but we do have Dick and Watson, IBM's newest AI computer. While many news sources portray Watson as revolutionary and game-changing (similar to CRISPR), when Watson went on Jeopardy! with Ken Jennings, Jennings wrote, "I, for one, welcome our computer overlords" after Watson beat him by a large margin. If "overlord" isn't apocalyptic, I don't know what is. It hints at an I, Robot kind of world, where the evil robots were literally controlled by a central robotic "overlord." Never in human history has "overlord" been used in a positive fashion. "Just visiting the overlord today! Can't wait!" said no one ever. Articles that discuss Watson's negative side, such as this one from New York Magazine, always describe Watson's downsides in the context of fear--fear that Watson will lose control, fear that we will lose the ability to control it (him?), fear that we will fall behind in the race between man and machine. These are all fears that are echoed in the apocalyptic rhetoric of AI-themed cinema.

While of course AI is surrounded by plenty of hopeful rhetoric, there is also a lot of rhetoric surrounding it in casual/performance settings (e.g. Ken Jennings), news articles (e.g. NY Mag), and cinema (e.g. I, Robot and Ex Machina) that is apocalyptic in nature. This kind of rhetoric ultimately propagates technological pessimism, as I will elaborate in the next post.

References (listed by order of reference)
Draper, Chris. "AI Robot That Learns New Words in Real-Time Tells Human Creators It Will Keep Them in a 'People Zoo.'" Glitch.News. Glitch, 27 Aug. 2015. Web. <http://glitch.news/2015-08-27-ai-robot-that-learns-new-words-in-real-time-tells-human-creators-it-will-keep-them-in-a-people-zoo.html>.

Ex Machina. Dir. Alex Garland. Perf. Domhnall Gleeson, Oscar Isaac, and Alicia Vikander. Universal Pictures International, 2015. Film.

I, Robot. Dir. Alex Proyas. Perf. Will Smith, Bridget Moynahan, and Alan Tudyk. Twentieth Century Fox Film Corporation, 2004. Film.

Zimmer, Ben. "Is It Time to Welcome Our New Computer Overlords?" The Atlantic. Atlantic Media Company, 17 Feb. 2011. Web. <http://www.theatlantic.com/technology/archive/2011/02/is-it-time-to-welcome-our-new-computer-overlords/71388/>.

Lazar, Zohar. "How Afraid of Watson the Robot Should We Be?" New York News & Politics. New York Media LLC, 20 May 2015. Web. <http://nymag.com/daily/intelligencer/2015/05/jeopardy-robot-watson.html>.

Saturday, June 4, 2016

Apocalyptic Rhetoric

Shifting gears here, I'm going to start talking about a different rhetorical strategy that is commonly used in media representations of science, and then I'm going to relate it to tech pessimism.

Apocalyptic rhetoric is defined in multiple ways depending on the perspective you're coming from, but for the purposes of this analysis, I'm going to use Killingsworth and Palmer's definition: rhetoric that "'uses images of future destruction—‘apocalyptic narratives’—to predict the fall of the current technocapitalist order,' an order represented especially by 'big business, big government, and big science'" (qtd. by Johnson 34). It, like rhetoric of effortlessness, seems pretty self-explanatory: apocalyptic rhetoric is rhetoric that hints that the apocalypse is nigh.

The word "apocalypse" has become pretty commonplace in pop culture (e.g. "zombie apocalypse," "robot apocalypse," Apocalypse Now, etc.), and many ideas of those apocalypses usually result in collapse of business, government, and science. Apocalyptic or post-apocalyptic media representations usually involve people bartering or scavenging for goods instead of going into stores and paying for goods and services; the government as we understand it has either collapsed completely or has become like Big Brother, no longer resembling democracy so much as dictatorship; and science has often failed society, either being the reason the apocalypse has happened in the first place (e.g. zombie and robot apocalypse movies) or failing to save society the way people expected it to.

Apocalyptic rhetoric is often used in biblical contexts, since the Bible is one of the most studied texts that discusses an apocalypse, but it's also used a lot in environmental rhetoric and, increasingly, rhetoric of science. Casadevall, Howard, and Imperiale point out how apocalyptic rhetoric at the Asilomar conference led to a moratorium (a temporary ban on a particular activity/practice) on certain experiments concerning recombinant DNA (1). This makes sense, since Johnson writes that apocalyptic rhetoric is most often used to shock people and rally support for political issues more than it's used for "wholescale [attacks] on the ideology of progress" (34). Apocalyptic rhetoric isn't meant to be a tool of Karl Marx to bring down the system in one fell, apocalyptic swoop; rather, it is a tool for micro-change that comes in short bursts and stages.

This strategy is also grounded in perceptions of risk. What we think is likely to happen in the future (i.e. the risk we perceive) can be easily swayed by something such as apocalyptic rhetoric. Since apocalyptic rhetoric frames situations as being extremely risky (like, "end of the world" risky), it is extremely effective because it plays off deep human fears (Casadevall et al. 2). What apocalyptic rhetoric lacks in likelihood (of risk), it makes up for in magnitude. We might never see a zombie, robot, germ, religious, etc. apocalypse, but according to almost every representation of any of those things, when it DOES come, we're all pretty much screwed.

References (listed by order of reference)
Johnson, Laura. "(Environmental) Rhetorics Of Tempered Apocalypticism In 'An Inconvenient Truth.'" Rhetoric Review 28.1 (2009): 29-46. Academic Search Premier. Web.

Casadevall, Arturo, Don Howard, and Michael J. Imperiale. "The Apocalypse as a Rhetorical Device in the Influenza Virus Gain-of-Function Debate." mBio 5.5 (2014): 1-2. Web.

Thursday, June 2, 2016

ROE, CRISPR, & TO

Whoops, looks like I got a little carried away with the acronyms... "TO" stands for tech optimism, in case that was unclear. Now, of course this post is going to be a little limited in scope. "Technology" applies to SO many fields--computers, aerospace, solar, medical, and on and on and on. So in the interest of brevity, I'm going to talk specifically about how rhetoric of effortlessness concerning CRISPR has led to a lot of optimism about CRISPR as a technology.

I went ahead and Googled "CRISPR" and here's a breakdown of the first 20 results:
  • Wikipedia page (obviously)
  • 9 articles/websites that are almost entirely informational (e.g. explaining how it works, research facility websites, science museum websites, etc.)
  • 8 articles that frame CRISPR almost exclusively positively (e.g. "a new era," "remake the world," "biggest biotech discovery of the century," "game-changing," etc.)
  • 0 articles that frame CRISPR almost exclusively negatively
  • 3 articles that present balanced views of CRISPR
Now, it's not that there's nothing negative to be said about CRISPR. The potential for a Gattaca-esque world of "designer babies," unforeseen diseases/consequences born of trying to eliminate known gene sequences for illnesses, even MORE overpopulation, and increased classism (it wouldn't be cheap to design your baby, I'm sure) are just a few of CRISPR's potentially very damaging side effects. I'm not asking writers to lambast the technology, but even in the articles that are more balanced, like this one from the Guardian, the final verdict is relatively positive. The Guardian article specifically ends with a quote from Doudna saying she thinks that people will accept CRISPR in a similar way to how they accepted the (initially shocking and morally questionable) technology of in vitro fertilization: reluctant at first, but eventually comfortable. We nowadays view IVF as a very useful and enabling technology, allowing same-sex couples (e.g. Neil Patrick Harris and his husband, David Burtka), infertile mothers, or those who just don't want to experience pregnancy to have biological children. To compare CRISPR to IVF is ultimately to propose that CRISPR, like IVF, will become something that we greatly appreciate.

Rhetoric of effortlessness supposedly increases the credibility of scientific discoveries and trust in science in general, which seems pretty clearly to me like increased tech optimism. While of course there have been other rhetorical strategies used to frame CRISPR, not all of which positively affect tech optimism, it's certainly quite interesting to see just how much rhetoric of effortlessness is used, and then see the corresponding effects on people's perceptions of CRISPR as a potentially very good or very bad technology.

The way people perceive technology (in an optimistic or pessimistic fashion) can have effects on things such as public policy. Hochschild et al. specifically write about tech optimism and pessimism in the arena of genomic science. According to them, Americans are overwhelmingly tech optimist, especially white Americans. Despite lacking technical knowledge in these matters, these tech optimists are more likely "to endorse governmental funding and regulation of the three forms of medical or scientific genomics activity, to trust public officials and private companies to act in the public good, and to endorse legal biobanks" (11). If people are very optimistic about CRISPR, that could have some serious and lasting effects on governmental policy/regulations concerning genetic alteration.

I myself am very nervous about that kind of future, because I totally think Gattaca is going to happen; I guess it's only a matter of time. For the time being, I just have to accept that people are going to continue viewing CRISPR very optimistically because it was just so ~effortlessly~ discovered, and try not to be overtaken by my soon-to-be genetically superior overlords, er, peers.

References (listed by order of reference)
Hochschild, Jennifer, Alex Crabill, and Maya Sen. "Technology Optimism or Pessimism: How Trust in Science Shapes Policy Attitudes toward Genomic Science." Issues in Technology Innovation 21 (2012): 1-16. Web.

Corbyn, Zoë. "Crispr: Is It a Good Idea to 'Upgrade' Our DNA?" The Guardian. The Guardian, 10 May 2015. Web. <https://www.theguardian.com/science/2015/may/10/crispr-genome-editing-dna-upgrade-technology-genetic-disease>.