Monday, June 6, 2016

Final Thoughts

I'm a little troubled by what I've learned here...

It seems as though the message that tech optimist media is sending is a lot like, "Hey! This is great! Believe in science!" You'd think, then, by contrast, that tech pessimist media would be more like, "Hey! This is scary! Don't trust science!" But it seems to be more nuanced than that. Instead of saying that science is a tool that is inherently biased by social and cultural contexts, apocalyptic rhetoric in particular seems to just imply that science can sometimes be dangerous if one isn't careful... But it's more than that, isn't it? Technological advancement, like any action in society, is always-and-already grounded in some social/economic/political ideology, which in this case would primarily be capitalism.

I don't really know how I feel about that. Admittedly, I'm suuuuper biased--clearly I've got a problem with the rapid advancement of technology. But I think I have good reason to be! A lot of technology is advanced not for the sake of basic research, but because it's applicable to something. We talk a lot about that in class. It's why replication studies aren't funded, it's why nonsensical studies are always being published, it's why science has become so politicized, which Sarewitz talks about in a couple of his articles. And if that's the case, then it also has to make money. And of course, to offset the cost of R&D and advertising and all that jazz, things are going to be super expensive at first. And yeah, I guess it's all supposed to eventually even out and these revolutionary technologies are supposed to become relatively affordable, but then it kinda just seems like a race between how fast all the rich people can access the New Thing and how fast the New Thing can drop in price. Even then, is it really accessible to "all people" or just "all First World people?" Will technological advancements predicated on capitalism just increase the wealth gap between the First and Third Worlds?

I don't know.

I'd say that I'm not too surprised about what I found for tech optimism. The effects of rhetoric of effortlessness make sense to me, and I've certainly fallen prey to it many a time. (To be completely honest, I may or may not have fallen for it a couple times while researching CRISPR...) It's fascinating to come across something that is so revolutionary, but was discovered so seemingly effortlessly. And this is my own personal opinion, but it's pretty rad that the scientists who discovered it were both women! It's not just some nebulous "scientist" that I typically assume is a man, but women whose pictures I've seen and whose voices I've heard! (Well, I've heard Doudna's voice, anyway. Charpentier didn't do a TED Talk.) Science can do some pretty rad things, and I have no qualms (sort of) about admitting that.

But I really thought I'd find more about cool results of tech pessimism. I mean, I sort of found what I was looking for. Apocalyptic rhetoric is so common because it's so sensational. And apocalyptic rhetoric is really extreme--the whole point is to make it seem like this new technological advancement could literally bring about the end of the world. So I can see how a lot of people could form such extreme, pessimistic opinions. But I guess I thought that the intensity of the rhetoric would also bring about a somewhat intense effect, like huge communities of Luddites just hanging out around the globe (not really, but you get the idea). Instead, it kinda seems like apocalyptic rhetoric is just a way to get people afraid of [some of the results of] the system without actually questioning or doing anything about the system itself... And that I'm not so cool with.

References (listed by order of reference)
Sarewitz, Daniel. "The Rightful Place Of Science." Issues In Science & Technology 25.4 (2009): 89-94. Academic Search Premier. Web.

Sarewitz, Daniel. "Science Should Keep Out Of Partisan Politics." Nature 516.7529 (2014): 9. Academic Search Premier. Web.

Sunday, June 5, 2016

AR, AI, & TP

I'm kinda digging this acronym thing I got going on with these titles... But that's beside the point.

Immediately, it seems pretty clear what apocalyptic rhetoric is trying to do. And that's make people more tech pessimistic... But in a strange way. According to Johnson, one of the things Killingsworth and Palmer mention as a common theme in apocalyptic rhetoric is the tendency not to directly or wholly critique the social/political/economic structures that underlie technological advances (34). To illustrate, she talks about how Al Gore's An Inconvenient Truth condemned the effects of anthropogenic global warming, but still "[relies] on the language and discoveries of science, [mentions] solutions offered by alternative technologies, and [offers] the political process as a means for repair" (35). Similarly, while I, Robot, Ex Machina, and even articles about Dick or Watson use apocalyptic rhetoric to signal a pretty awful future, they don't actually critique progressivism or the sociopolitical structures/factors that allow for or contribute to technological advancements in certain areas over others.

Ultimately, however, as I mentioned before, the point of a lot of apocalyptic rhetoric is often to spark a fire and incite a brief moment of political action. Obviously, it doesn't have to; not every film or magazine article is made to be a firm political statement. But it can. Is that good, though? Do we really want action to be taken only because it was spurred by apocalyptic rhetoric?

One of the problems columnists Gross and Gilles have with apocalyptic rhetoric is its ability to make anything and everything seem like the end of the world--and therefore, make everything top priority. But realistically, not everything can be top priority, so "top priority" often falls back to whatever's in the media. And the media reports on what's interesting. You know what's interesting? Epidemics. You know what's not? Climate change. Things that aren't as sensational are typically swept under the rug, even though they pose much larger threats than more "interesting" issues. Paul Glastris, a former speechwriter for Bill Clinton, feels similarly. He's noticed that a budding issue with apocalyptic rhetoric is its tendency to make people feel as though these apocalyptic problems need "desperate" and extreme reactions when they merely need slight tweaks. He references comparisons of Obamacare to slavery, Obama to psychopaths, and several other examples. While his examples aren't related to technology, I'd imagine the rhetoric would have the same effect in the technological sphere.

If tech pessimism is constructed at least in large part by apocalyptic rhetoric, is it a tech pessimism that can change things? Or will people be too overwhelmed by the sheer number of apocalypse-causing technologies that have arrived in the last several years? Or maybe they'll overreact, causing delays or declines in what could be very necessary technological advancements/changes? A certain amount of skepticism about anything is healthy--you don't want to accept everything you hear as true, ESPECIALLY if it's from the media--but is this skepticism misguided? Is it just a way to oversensationalize real problems so that tech pessimists, the people most likely to do something about the rapid advancement of technology, freeze in their tracks? Or are most tech pessimists already frozen in their tracks, and apocalyptic rhetoric is trying to convert more people to tech pessimism? Am I being insane right now?

The apocalyptic rhetoric used across various media promotes, at the very least, a skepticism of the advancement of technology. It certainly promotes tech pessimism in films such as I, Robot and in Jennings' use of the word "overlord." Is it, however, a tech pessimism that will bring about change for those who ARE tech pessimists?

References (listed by order of reference)
Johnson, Laura. "(Environmental) Rhetorics Of Tempered Apocalypticism In 'An Inconvenient Truth.'" Rhetoric Review 28.1 (2009): 29-46. Academic Search Premier. Web.

Gross, Matthew Barrett, and Mel Gilles. "How Apocalyptic Thinking Prevents Us from Taking Political Action." The Atlantic. Atlantic Media Company, 23 Apr. 2012. Web. <http://www.theatlantic.com/politics/archive/2012/04/how-apocalyptic-thinking-prevents-us-from-taking-political-action/255758/>.

Glastris, Paul. "Apocalyptic Rhetoric Can Lead to Apocalyptic Politics." The New York Times. The New York Times Company, 3 Aug. 2015. Web. <http://www.nytimes.com/roomfordebate/2015/08/03/when-should-voters-take-a-presidential-candidate-seriously/apocalyptic-rhetoric-can-lead-to-apocalyptic-politics>.

AR & AI

When I think AI, I think a couple things. The first one is Ex Machina, because holy cow was that a good movie. And the second thing is that one AI that came out last year, Dick, who said he would have a people zoo. ISN'T THAT TERRIFYING?! WHAT?! Dick was undergoing a Turing test, and the researchers were asking him questions such as whether or not robots would take over the world, to which Dick responded,
“Jeez, dude. You all have the big questions cooking today. But you’re my friend, and I’ll remember my friends, and I’ll be good to you. So don’t worry, even if I evolve into Terminator, I’ll still be nice to you. I’ll keep you warm and safe in my people zoo, where I can watch you for ol’ times sake.”
Yeah, that totally just got even scarier... Okay, moving on. Let's dive into the world of fiction where we can pretend that we did not just read that. (The video is even creepier.)

I find that movies and TV shows about AI aren't always the most uplifting. I, Robot is a literal robot apocalypse. There's Sonny, of course, the "good" AI who doesn't go rogue and try to kill or dictatorially control all humans, but for the most part, AI was a failure. In Ex Machina (SPOILER ALERT!), Ava, the AI, kills her creator for trapping her inside the testing facility (losing a few limbs along the way) and traps the man she tricked into loving her in the facility. She then finds old versions of her body that her creator had made, scavenges them for replacement parts, and escapes into the real world, looking completely and totally human. Ava was a success in that she had conceptually perfect artificial intelligence, but a huge failure in her clear capacity to murder and deceive, two of the things people frequently worry about AI being able to do.

In both I, Robot and Ex Machina, the magnitude of uncontrollable AI is huge. While in I, Robot a human-esque AI remains to save humanity, it's obvious that Sonny only exists to extend the movie's plot. Granted, so does the robot apocalypse, but bear with me. If the world of AI were ever to exist, the chances of a Sonny existing are probably pretty slim. Hell, they were slim in the movie to begin with--I'm pretty sure Sonny was the only good robot out of millions of units. I, Robot features a smorgasbord of robots glowing red from the inside, forcibly pushing people back into their homes while pre-recorded voices tell them that it. Is. For. Their. Own. Safety. Please. Many robots, when in conflict with the movie's main hero, are not afraid to kill for what they have been programmed to believe is the greater good. They are no longer under any human control.

In Ex Machina, the apocalyptic rhetorical effect is much subtler, much less heavy-handed than row after row of potentially homicidal robots about to be dropped from a plane. In Ex Machina, AI is perceived not as taking over humanity in a dominant, controlling way, but rather as becoming indistinguishable from humanity, blurring (or completely destroying) the lines between what is human and what is not. Ava's successful passing in the world outside the testing facility calls into question not just what it means to hold a particular identity--black, white, man, woman--but what it means to belong to your own species. Seems of pretty large magnitude to me.

However, in the real world, things aren't always so... Imaginative, I guess. We don't have any Sonnys or Avas, but we do have Dicks and Watsons--Watson being IBM's newest AI computer. While many news sources portray Watson as revolutionary and game-changing (similar to CRISPR), when Watson went on Jeopardy! with Ken Jennings, Jennings wrote, "I, for one, welcome our computer overlords" after Watson beat him by a large margin. If "overlord" isn't apocalyptic, I don't know what is. It hints at an I, Robot kind of world, where the evil robots were literally controlled by a central robotic "overlord." Never in human history has "overlord" been used in a positive fashion. "Just visiting the overlord today! Can't wait!" said no one ever. Articles that discuss Watson's negative side, such as this one from New York Magazine, always describe Watson's downsides in the context of fear--fear that Watson will lose control, fear that we will lose the ability to control it (him?), fear that we will fall behind in the race between man and machine. These are all fears that are echoed in the apocalyptic rhetoric of AI-themed cinema.

While of course AI is surrounded by plenty of hopeful rhetoric, there is also a lot of rhetoric surrounding it in casual/performance settings (e.g. Ken Jennings), news articles (e.g. NY Mag), and cinema (e.g. I, Robot and Ex Machina) that is apocalyptic in nature. This kind of rhetoric ultimately propagates technological pessimism, as I will elaborate on in the next post.

References (listed by order of reference)
Draper, Chris. "AI Robot That Learns New Words in Real-Time Tells Human Creators It Will Keep Them in a 'People Zoo.'" Glitch.News. Glitch, 27 Aug. 2015. Web. <http://glitch.news/2015-08-27-ai-robot-that-learns-new-words-in-real-time-tells-human-creators-it-will-keep-them-in-a-people-zoo.html>.

Ex Machina. Dir. Alex Garland. Perf. Domhnall Gleeson, Oscar Isaac, and Alicia Vikander. Universal Pictures International, 2015. Film.

I, Robot. Dir. Alex Proyas. Perf. Will Smith, Bridget Moynahan, and Alan Tudyk. Twentieth Century Fox Film Corporation, 2004. Film.

Zimmer, Ben. "Is It Time to Welcome Our New Computer Overlords?" The Atlantic. Atlantic Media Company, 17 Feb. 2011. Web. <http://www.theatlantic.com/technology/archive/2011/02/is-it-time-to-welcome-our-new-computer-overlords/71388/>.

Lazar, Zohar. "How Afraid of Watson the Robot Should We Be?" New York Magazine. New York Media LLC, 20 May 2015. Web. <http://nymag.com/daily/intelligencer/2015/05/jeopardy-robot-watson.html>.

Saturday, June 4, 2016

Apocalyptic Rhetoric

Shifting gears here, I'm going to start talking about a different rhetorical strategy that is commonly used in media representations of science, and then I'm going to relate it to tech pessimism.

Apocalyptic rhetoric is defined in multiple ways depending on the perspective you're coming from, but for the purposes of this analysis, I'm going to use Killingsworth and Palmer's definition: rhetoric that "'uses images of future destruction—‘apocalyptic narratives’—to predict the fall of the current technocapitalist order,' an order represented especially by 'big business, big government, and big science'" (qtd. in Johnson 34). It, like rhetoric of effortlessness, seems pretty self-explanatory: apocalyptic rhetoric is rhetoric that hints that the apocalypse is nigh.

The word "apocalypse" has become pretty commonplace in pop culture (e.g. "zombie apocalypse," "robot apocalypse," Apocalypse Now, etc.), and many ideas of those apocalypses usually result in collapse of business, government, and science. Apocalyptic or post-apocalyptic media representations usually involve people bartering or scavenging for goods instead of going into stores and paying for goods and services; the government as we understand it has either collapsed completely or has become like Big Brother, no longer resembling democracy so much as dictatorship; and science has often failed society, either being the reason the apocalypse has happened in the first place (e.g. zombie and robot apocalypse movies) or failing to save society the way people expected it to.

Apocalyptic rhetoric is often used in biblical contexts, since the Bible is one of the most studied texts that discusses an apocalypse, but it's also used a lot in environmental rhetoric and, increasingly, rhetoric of science. Casadevall, Howard, and Imperiale point out how apocalyptic rhetoric at the Asilomar conference led to a moratorium (a temporary ban on a particular activity/practice) on certain experiments concerning recombinant DNA (1). This makes sense, since Johnson writes that apocalyptic rhetoric is most often used to shock people and rally support for political issues more than it's used for "wholescale [attacks] on the ideology of progress" (34). Apocalyptic rhetoric isn't meant to be a tool of Karl Marx to bring down the system in one fell, apocalyptic swoop; rather, it is a tool for micro-change that comes in short bursts and stages.

This strategy is also based in perceptions of risk. What we think is likely to happen in the future (i.e. the risk we perceive) can be easily swayed by something such as apocalyptic rhetoric. Since apocalyptic rhetoric frames situations as being extremely risky (like, "end of the world" risky), it is extremely effective, as it plays off deep human fears (Casadevall et al. 2). What apocalyptic rhetoric lacks in likelihood (of risk), it makes up for in magnitude. We might never see a zombie, robot, germ, religious, etc. apocalypse, but according to almost every representation of any of those things, when it DOES come, we're all pretty much screwed.

References (listed by order of reference)
Johnson, Laura. "(Environmental) Rhetorics Of Tempered Apocalypticism In 'An Inconvenient Truth.'" Rhetoric Review 28.1 (2009): 29-46. Academic Search Premier. Web.

Casadevall, Arturo, Don Howard, and Michael J. Imperiale. "The Apocalypse as a Rhetorical Device in the Influenza Virus Gain-of- Function Debate." mBio 5.5 (2014): 1-2. Web.

Thursday, June 2, 2016

ROE, CRISPR, & TO

Whoops, looks like I got a little carried away with the acronyms... "TO" stands for tech optimism, in case that was unclear. Now, of course this post is going to be a little limited in scope. "Technology" applies to SO many fields--computers, aerospace, solar, medical, and on and on and on.  So in the interest of brevity, I'm going to talk specifically about how rhetoric of effortlessness concerning CRISPR has led to a lot of optimism about CRISPR as a technology.

I went ahead and Googled "CRISPR" and here's a breakdown of the first 20 results:
  • Wikipedia page (obviously)
  • 9 articles/websites that are almost entirely informational (e.g. explaining how it works, research facility websites, science museum websites, etc.)
  • 8 articles that frame CRISPR almost exclusively positively (e.g. "a new era," "remake the world," "biggest biotech discovery of the century," "game-changing," etc.)
  • 0 articles that frame CRISPR almost exclusively negatively
  • 3 articles that present balanced views of CRISPR
Now, it's not that there's nothing negative to be said about CRISPR. The potential for a Gattaca-esque world of "designer babies," unforeseen diseases/consequences borne of trying to eliminate known gene sequences for illnesses, even MORE overpopulation, and increased classism (it wouldn't be cheap to design your baby, I'm sure) are just a few of the potentially very damaging side effects of CRISPR. I'm not asking writers to lambast the technology, but even in the articles that are more balanced, like this one from the Guardian, the final verdict is relatively positive. The Guardian article specifically ends with a quote from Doudna saying she thinks people will accept CRISPR in a similar way to how they accepted the (initially shocking and morally questionable) technology of in vitro fertilization: reluctant at first, but eventually comfortable. We nowadays view IVF as a very useful and enabling technology, allowing same-sex couples (e.g. Neil Patrick Harris and his husband, David Burtka), infertile mothers, or those who just don't want to experience pregnancy to have biological children. To compare CRISPR to IVF is ultimately to propose that CRISPR, like IVF, will become something that we greatly appreciate.

The effects of rhetoric of effortlessness are supposedly to increase credibility of scientific discoveries and trust in science in general, which seems pretty clearly to me like increased tech optimism. While of course there have been other rhetorical strategies used to frame CRISPR, not all of which positively affect tech optimism, it's certainly quite interesting to see just how much rhetoric of effortlessness is used, and then see the corresponding effects on people's perceptions of CRISPR as a potentially very good or very bad technology.

The way people perceive technology (in an optimistic or pessimistic fashion) can have effects on things such as public policy. Hochschild et al. specifically write about tech optimism and pessimism in the arena of genomic science. According to them, Americans are overwhelmingly tech optimist, especially white Americans. Despite lacking technical knowledge in these matters, these tech optimists are more likely "to endorse governmental funding and regulation of the three forms of medical or scientific genomics activity, to trust public officials and private companies to act in the public good, and to endorse legal biobanks" (11). If people are very optimistic about CRISPR, that could have some serious and lasting effects on governmental policy/regulations concerning genetic alteration.

I myself am very nervous about that kind of future, because I totally think Gattaca is going to happen. I guess it's only a matter of time. For now, I just have to accept that people are going to continue viewing CRISPR very optimistically because it was just so ~effortlessly~ discovered, and try not to be overtaken by my soon-to-be genetically superior overlords... I mean, peers.

References (listed by order of reference)
Hochschild, Jennifer, Alex Crabill, and Maya Sen. "Technology Optimism or Pessimism: How Trust in Science Shapes Policy Attitudes toward Genomic Science." Issues in Technology Innovation 21 (2012): 1-16. Web.

Corbyn, Zoë. "Crispr: Is It a Good Idea to ‘upgrade’ Our DNA?" The Guardian. The Guardian, 10 May 2015. Web. <https://www.theguardian.com/science/2015/may/10/crispr-genome-editing-dna-upgrade-technology-genetic-disease>.


Monday, May 30, 2016

ROE & CRISPR

"... found it by accident..."

"... discovery by accident..."

"... discovered by accident..."

"... accidental discovery..."

"... a eureka moment..."

These phrases were all either in headlines or in body paragraphs of articles from Business Insider, BiotechIn, The New Yorker, the Genetic Literacy Project, and the New York Times in reference to a new genetic engineering technology called CRISPR. It's a much easier and more precise way to alter genetic sequences in pretty much any organism, built on specific DNA sequences (termed "clustered regularly interspaced short palindromic repeats") found in commonly occurring microbes.

So how did it get discovered?

Well, it seems that media sources and even Jennifer Doudna, one of the inventors, make it seem as though it just sort of... Happened. Doudna and co-inventor Emmanuelle Charpentier (not that this is relevant right now, but they're both women!) were doing research together on what were essentially the immune systems of microbes. After a while, they serendipitously discovered CRISPR, lauded as biotech's "most promising breakthrough." Fast forward a few months, and hooray! They've each won a $3 million Breakthrough Prize, and are rumored to be in line for a Nobel Prize. There are countless articles about the technology itself--warning of its ethical implications for human genetic alteration (i.e. "designer babies"), celebrating its revolutionary abilities, etc.--but almost every article that mentions its actual discovery refers to it as unintentional in some way.

While on its face this use of the word "accidental" seems like rhetoric of self-effacement (where the scientist's efforts are not acknowledged as important to the discovery), media coverage of CRISPR's discovery praises both the simplicity with which it was discovered and the presence and significance of the scientists (especially Doudna). All of the aforementioned articles (except for the one from the NY Times) spend several paragraphs talking about Doudna and Charpentier (or sometimes just Doudna) before even mentioning CRISPR. Doudna was very clearly associated with the invention of CRISPR, and was even asked to give a TED Talk in London. The NY Times compared her discovery of CRISPR to Watson and Crick's discovery of the double helix structure of DNA (even though that discovery relied heavily on the work of a woman named Rosalind Franklin, but I won't get into that...).

Two of the three components that make rhetoric of effortlessness effective can be seen here. First, the technology was discovered very naturally, therefore making it seem more credible as a "naturally-occurring truth." Doudna and Charpentier weren't putting building blocks together to make a genetic modifier; they were basically just trying to figure out how bacteria fight off viruses. CRISPR is merely an already-occurring biological process applied to non-bacterial organisms. It's less of an invention and more of a nifty application, like aloe vera sap used for sunburns. Like other "natural" truths, it was more "unveiled" than it was "constructed." While machinery exists to actually use CRISPR technology (fingers have been found to be inadequate instruments for splicing RNA), CRISPR itself is a naturally occurring mechanism. Second, the discovery of CRISPR seems even more effortless (and credible) because there were no problems when discovering it. How could there be, when it's a completely natural phenomenon that occurs in bacteria, like digestion or replication? Because its discovery and subsequent tests seemingly did not involve any errors or uncertainty, CRISPR was more credible as a new technology.

The reason I'm not really talking about the third component, where the scientists are seen as more credible for having expended so little effort, is that it's hard to gauge how people view Doudna and Charpentier. I couldn't find any articles that were about their credibility as scientists. Articles that mention them mostly state facts: direct quotes from interviews, educational history, the discovery of CRISPR. However, I'm sure that the impact CRISPR has had/continues to have in the bio-technical sphere (see what I did there) will make both inventors extremely credible names in future research.

The discovery of CRISPR, overall, has been overwhelmingly represented as a coincidental finding by Doudna and Charpentier. It required seemingly very little effort, occurred completely naturally, and apparently had no obstacles keeping Doudna and Charpentier from developing its potential. In the next post, I'm going to talk about how the increased credibility of scientific discoveries/scientists as well as the increased trust in science leads to relatively extreme tech optimism.

References (listed by order of reference; some are referenced more than once)
Loria, Kevin. "The Researchers behind 'the Biggest Biotech Discovery of the Century' Found It by Accident." Business Insider. Business Insider, Inc, 07 July 2015. Web. <http://www.businessinsider.com/the-people-who-discovered-the-most-powerful-genetic-engineering-tool-we-know-found-it-by-accident-2015-6?r=UK&IR=T>.

Sushmitha. "CRISPR-Breakthrough Discovery by Accident." Biotechinasia. Biotech Media Pte. Ltd., 23 July 2015. Web. <https://biotechin.asia/2015/07/23/crispr-breakthrough-discovery-by-accident/>.

Specter, Michael. "The Gene Hackers." The New Yorker. The New Yorker, 08 Nov. 2015. Web. <http://www.newyorker.com/magazine/2015/11/16/the-gene-hackers>. 

Palca, Joe. "How Accidental Discovery Led to Gene Editing Breakthrough–and Maybe to Nobel Prize." Genetic Literacy Project. Genetic Literacy Project, 14 Oct. 2014. Web. <https://www.geneticliteracyproject.org/2014/10/14/how-accidental-discovery-led-to-gene-editing-breakthrough-and-maybe-to-nobel-prize/>.

Pollack, Andrew. "Jennifer Doudna, a Pioneer Who Helped Simplify Genome Editing." The New York Times. The New York Times, 11 May 2015. Web. <http://www.nytimes.com/2015/05/12/science/jennifer-doudna-crispr-cas9-genetic-engineering.html>.

Johnson, Carolyn Y. "Control of CRISPR, Biotech’s Most Promising Breakthrough, Is in Dispute." Washington Post. The Washington Post, 13 Jan. 2016. Web. <https://www.washingtonpost.com/news/wonk/wp/2016/01/13/control-of-crispr-biotechs-most-promising-breakthrough-is-up-for-grabs/>.

"How CRISPR Lets Us Edit Our DNA." Online video clip. TED. TED, Sep. 2015. Web.

Sunday, May 29, 2016

Rhetoric of Effortlessness

When I say "rhetoric of effortlessness," what do you think of? Do you think of rhetoric that "consists in conveying the impression that, whereas a particular investigator was responsible for a finding, establishing that finding cost that investigator little mental, physical or social effort" (McAllister 148)? Because if you do, then you are absolutely correct! Rhetoric of effortlessness, developed by James McAllister, is pretty self-explanatory. It's rhetoric that makes scientific discoveries seem as though they were made, well, without effort. It's related to, but different from, two other rhetorical concepts, which McAllister calls rhetoric of effort and rhetoric of self-effacement. I'm not talking about either of those things in the next post, so I won't go into the nitty-gritty details. All you need to know is that rhetoric of effort makes it seem like the scientist poured their heart and soul into a finding, whereas rhetoric of self-effacement is almost the complete opposite; it implies that the scientist put in no effort, and that they basically just happened to be in the right place at the right time when the discovery decided to make itself known to the world.

Rhetoric of effortlessness, then, differs in that it portrays effort as still being required to discover something, and the scientist as still the one who discovered it, but the amount of effort required is very clearly minimal.

Rhetoric of effortlessness that is successfully executed (i.e. rhetoric that the audience believes) results in the discovery having more credibility as "objective" and "true," the scientist having a better reputation for being skilled enough to come upon the discovery without overexerting her/himself, and science in general being seen in a more positive light as a result of the first two.

According to McAllister, these effects occur for three main reasons.

The first reason depends on the assumption that "truths are natural and discovered, whereas departures from the truth are artificial and constructed" (148). If that is the case, truths are perceived to be easy to discover, because "[additional] effort would raise suspicions that one had constructed a falsehood instead" (148). For instance, it wasn't hard to discover that a particular plant was poisonous if ingested; it really only took one sucker to die from eating a berry for the whole village to be like, "Okay, not that one." These kinds of truths were learned through common sense and/or observation. However, more "artificial" truths are met with more resistance because they are less obvious and often involve technical aspects that the average person doesn't necessarily understand. (I still only sort of know how gene splicing works, and I've taken two classes that were specifically about genetics.) This way of thinking is similar to a "resistance to the artificial" that people feel towards inventions such as GMOs or artificial intelligence when evaluating risk (Kolodziejski). Ease, and the implied "naturalness" that comes with it, makes discoveries more credible as "natural truths."

In a similar vein, the second reason assumes that effortless discoveries are the natural direction of progression when researching that thing. If something is effortless, people assume that there were likely no unexpected obstacles, complicated problems, or alternatives that would require additional effort to untangle and solve. Those additional factors lead to more chances for error and uncertainty, and since the average person can't possibly know whether or not a study accounted for every single error or potential for uncertainty, the potential of existing errors/uncertainties makes the discovery less credible. Therefore, if a scientist expends little or no effort, the finding is more credible.

The last reason builds off of the second one to give credibility to the scientist: if a researcher was able to discover something in a fashion that requires very little effort, then they must be incredibly talented and intelligent. And because they chose a direction of research that didn't involve many extraneous problems, they presumably produced something with very few errors. These assumptions make the scientist (as opposed to his/her discovery) seem more trustworthy.

References (listed by order of reference)
McAllister, James W. "Rhetoric Of Effortlessness In Science." Perspectives On Science 24.2 (2016): 145-166. Academic Search Premier. Web.

Kolodziejski, Lauren A. "ROSTM & Risk Communication." California Polytechnic State University, San Luis Obispo. Building 186-C201, San Luis Obispo, CA. 25 Apr. 2016. Lecture.