#12, Question 3

Sorry this is late but I was busy celebrating the Resurrection and eating jelly beans this wknd.

(Tis possible that the above is an excuse and I just forgot.)

The DMCA essentially prohibits both circumvention and reverse engineering except in specific (and, to me, unclear) scenarios where the motivations, process, and precise details of the use of previously existing software are taken into account. In reading the EFF’s Coder’s Rights Reverse Engineering FAQ, I could discern no real definition of what the DMCA deems legal and illegal. Instead, every other sentence seemed to say something to the tune of “If you’re anywhere near this gray area, consult a lawyer.” Regarding circumvention, the regulation was presumably constructed to protect publishers of media content from losing profits to piracy by preventing the bypass of access controls. The reverse engineering guidelines seemed to me a lot more nuanced and unclear about what was legal and what was not. In the majority of the cases discussed in the EFF’s FAQ, the concept that determined legality was something called fair use. But even this term is loosely defined: “The fair use doctrine allows users to make unauthorized copies in certain circumstances. Courts have found that reverse engineering for interoperability, for example, can be a fair use.”

As a result, the legal ability to make copies of any digital media for any purpose (piracy, entertainment, research, individual use, historical preservation, etc.) is a swamp of possible traps.

I think that in order to protect entire industries (movies, music, etc.) in an age where companies’ output is trending toward being sold almost exclusively in digital format and over the Internet, there has to be some control over how people can access the files and information these companies put out. As is the case with a lot of lawmaking, the problem is not with the act itself but with the intent and the next steps, which are hard to monitor or control through policy. If a family purchases several CDs and then rips the audio to put on their phones so they don’t have to bother with taking 10 CDs on a road trip, there are few problems. However, when the same process of ripping a CD results in the tracks being distributed online such that the need to purchase anything becomes obsolete and the record company has no way of profiting, many agree that an act of theft has occurred. Yet several of the articles argue that total control over access to digital media stunts the possible creative and productive power of society, putting limitations on how media can be used and who can access it, as well as how material can be studied and recorded for scholarly and historical purposes.

Anecdotal support for the creativity argument:
At the risk of a SWAT team immediately arriving at Lewis Hall in the next 5 minutes, I confess to creating a pretty next-level volleyball warmup soundtrack in high school by mixing a bunch of mp3 files together, and I’ll leave it to the reader to decide whether I paid for all (any) of them.

The same issue of intent is inherent in the question of circumvention and reverse engineering, but the additional factor of safety is perhaps more pertinent. If a programmer or security analyst (as mentioned in one of the articles) can access his or her pacemaker and protect it from outside threats, such as being hacked or bugging out and switching to the wrong mode while a patient is climbing the steps of the London Underground, then it makes sense for access to be available. But how do companies protect against access to their products with malicious intent, prevent accidental misuse or inexperienced mistakes that lead to tragedy, and protect their own business interests by preventing competitors from copying their products? Giving the masses access to control over a giant, blade-wielding farm machine’s software sounds a bit dangerous to me (sorry I’m so articulate when it comes to farming equipment). Likewise, giving access to the code of a self-driving car with GPS destinations sounds like a recipe for a really fun “friend,” something like the one below, that could be directed straight to your house by an Idahoan basement-dweller (sorry, Idaho).

[Image: Screen Shot 2017-04-18 at 10.32.38 AM]

(I spent a lot of time putting ^^that masterpiece^^ together and am very proud of it.)

It seems that we would have to put a lot of trust in the competence and good heart of general society in order to give them the power to manipulate what hundreds of talented engineers have spent months or years developing for a defined ethical purpose. It’s a lot harder to monitor the ethics of what’s going on in people’s backyards or basements or underground lairs than at bona fide businesses. I’m not saying that I don’t have that trust, but it definitely gives me pause (see above demon tractor again for perspective).

#11, Question 1

The motivation for developing self-driving cars is fairly clear. For companies, coming out with new and particularly revolutionary products presents an economic edge. In general, self-driving cars can eventually offer safer and more environmentally friendly transportation, provide on-demand access to transportation for people who may not own a car, and allow people to spend less of their time in transit and more time on other, possibly more productive or relaxing, activities (reading, doing homework, socializing, etc.). Although most of the articles mentioned that at this time and in the near future, a “driver” must be in the driver’s seat to intervene or take over in certain situations that algorithms are uncertain about, presumably in the future there will be less need for this and all occupants of the car can act as passengers (or perhaps even an empty car could travel to a location to pick people up).

However, similar to arguments against automation in general, self-driving cars eliminate the jobs of people such as truck or taxi drivers. They also make us dependent on the technology, such that if the systems crashed and we had lost the ability to drive ourselves, we would no longer be able to get around. Additionally, as was discussed in one of the articles, the algorithms in self-driving cars have to make moral choices about which possible outcome is the most desirable, such as limiting the number of casualties or the amount of damage. In many scenarios, I think that people would be hesitant to give this decision to a computer program. If a thief has just shot and killed a bank teller and is running across a street, and the computer program must decide whether to hit the fleeing pedestrian or put me and my newborn child in terrible danger, I’m not sure that’s a set of variables that can be communicated to a computer program quickly enough for it to make the same decision that people would agree to be moral.
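
To make concrete why handing those variables to a machine is so uncomfortable, here’s a minimal, purely hypothetical sketch of how a “least harm” choice might be encoded as cost minimization. Every maneuver, probability, and severity weight below is invented for illustration; real systems are vastly more complicated, but the core problem is the same: someone has to pick the numbers.

```python
# Purely hypothetical sketch: a "least harm" choice as cost minimization.
# All maneuvers, probabilities, and severity weights are invented for
# illustration; the moral judgment hides inside the numbers themselves.

def expected_harm(maneuver):
    """Weighted sum of the harms a maneuver might cause."""
    return sum(prob * severity for prob, severity in maneuver["risks"])

# Each risk is a (probability, severity) pair for one possible bad outcome.
candidate_maneuvers = [
    {"name": "brake hard",  "risks": [(0.3, 5), (0.1, 8)]},
    {"name": "swerve left", "risks": [(0.2, 9)]},
    {"name": "continue",    "risks": [(0.9, 10)]},
]

# The program just picks the maneuver with the lowest expected harm --
# long before any emergency, an engineer decided what "severity 9" means.
best = min(candidate_maneuvers, key=expected_harm)
print(best["name"])  # -> "swerve left" with these made-up numbers
```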

Eventually, I think that self-driving cars will make the roads safer. Computer programs don’t get distracted or fall asleep as easily, and they can make complicated, calculated judgments quite a bit faster than people can. Accidents could be avoided at higher rates, overcorrection could be eliminated, lives could be saved. Computers can also communicate with each other more efficiently than people can, so perhaps when self-driving cars eventually become the norm, accidents could be reduced even further as cars communicate with each other about their next moves, their surroundings, their passengers and intended routes, the pedestrians around them, etc. Additionally, traffic and routing could become much more manageable with computers driving cars.

The politics and legal issues of accidents will also be drastically changed by self-driving cars. Who is at fault in an accident involving two cars? Certainly neither of the people in the cars bears responsibility if neither had control. Should the company that developed the car’s software be held responsible for the accident? What about more serious cases where a death is involved? Can the company be held legally responsible for the death of a driver or a pedestrian if unforeseen circumstances caused the software to make decisions with serious consequences? These are serious questions that need to be addressed by legislators and politicians as the market for self-driving cars expands. Fortunately, as the New York Times article revealed, the government is supportive of the potential for reduced automobile deaths and so is very willing to work with companies on regulations, rules, and laws related to self-driving cars.

I don’t think I personally want a self-driving car until they start being more universally adopted. To protect my own safety and the safety of those in the car with me, I think there are quite a few kinks to be worked out before I feel comfortable trusting the system. That being said, most of the articles also mentioned a manual override feature. So, perhaps I would own a self-driving car and only use the automatic functionality as I began to get more comfortable and accustomed to it. I was unclear on whether manual override occurred only in situations where the automatic driving software detected a situation it found itself unable to cope with, or whether the driver could select manual override at any time.

#10, Question 1

Trolling is basically the posting of inflammatory content on the Internet solely in the hopes of inciting argument or adverse reactions. The same thing can be done in person (ex: “Hey Jimmy, your mom looks good today”). Unfortunately, internet trolling gained a title and its own helping of media attention because it tends to outperform its low-tech counterpart in magnitude, tactlessness, and lifespan. Burns can go viral, hundreds of strangers can join in, and fights or posts made in the heat of the moment can live on in perpetuity even after being deleted if anyone has taken a screenshot or something similar. In high school I tweeted something in anger after my friend was expelled that I realized hours later sounded like a threat. I deleted it, but a parent had already printed it off and called the school. Administrators were pretty shocked, and I almost got kicked out of National Honor Society (oops). This isn’t exactly an example of trolling because I wasn’t intending to incite anything, but it is an example of how the Internet as a means for expressing ourselves has some pretty important distinguishing factors.

Additionally, the perceived distance that people feel from the targets of their trolling on the Internet dehumanizes the victim. It’s easy for me to laugh at and make stupid jokes about a celebrity because they feel like a character rather than a human being. However, if Tiger Woods were standing in front of me right now, I would hesitate to say anything unkind about him. Internet trolling puts people at a distance from others that often allows for cruel objectification. This was demonstrated in the podcast episode and article about the woman who directly contacted and wrote about her Internet troll, who had made a fake Twitter account impersonating her recently deceased father. Once he connected his victim more directly to a human being he could interact with, he felt remorse for the unkind things he had done.

If a tech company such as Facebook or Twitter has rules, policies, or agreements that state the terms of use for a user, then it has at least some responsibility to enforce said rules, at the very least when violations are reported to it. Beyond any ethical obligations, it also makes business sense to prevent or suppress harassment and stalking on your platform so that people feel more comfortable using it.

Anonymity on the Internet is both a blessing and a curse. It can provide people avenues to seek help, get advice, learn new things, and perhaps even intervene in bullying situations where they wouldn’t have if their identities were made clear. However, it also helps people to feel as though no one will ever know that any of their online interactions or posts originated from them, so it separates people from the responsibility they feel for their own actions.

Trolling is a problem on the Internet because it creates an outlet for bullies that is nearly free from ramifications. Not only that, but I feel that it can take bullying from what we think of on playgrounds or in high school text messages into adulthood and people’s careers. Trolling can also change the audience of an act of bullying or a targeted attack. People feel less trepidation about liking a hateful Facebook status or showing their friends a girlfight on Twitter than they would if these things were happening in person. In addition, people from other high schools, other states, even other countries can join in online taunting, which creates a dangerous environment if totally unregulated.

Project 3 Individual Reflection

Privacy Paradox, Option 2

I wasn’t particularly shocked by a lot of the information that the podcasts shared, but they did help me to keep thinking about how much I value my own privacy and how to go about protecting it. At the end of Dr. Chawla’s data science course last year, we looked at a made-up but realistic case study that involved a woman with a chronic invisible health condition, a data brokerage firm, a hacktivist group, and an employer health program that collected employee data voluntarily. That story helped me become more aware of a lot of the ethical and legal complications of collecting and using big data, and why someone would place such a high value on their personal privacy. The challenges and podcasts didn’t really cause me to make any major changes in my technology habits, but I did change a few permissions on the apps installed on my phone and downloaded the Privacy Badger add-on for Chrome so I can see what is tracking me from any given web page. Listening to the podcasts and hearing different perspectives did, however, make me revisit a few concerns I’ve had, and it will probably make me a bit more wary about keeping track of permissions, privacy agreements, accounts, etc. One reaction I had was to almost immediately think about sending the podcasts to members of my family. I know that they haven’t had the same experience or exposure to technology that I have, and making them more aware could help them protect themselves from feeling violated by personal information collection or usage.

I think it’s hard to choose a side in the Privacy Paradox debate, and I think most of us fall somewhere in the middle. Very rarely are people willing to give up their technological conveniences, particularly if some product or service (ex: targeted advertising, Google Maps, Alexa) has become a part of their daily lives. Making any kind of reversal in technology adoption is often not worth it to people, even if they claim to value their privacy. I tend to fall in that range. I value my privacy and am aware of the information I’m sharing, but to some degree I know that at least some of my personal information being collected, analyzed, bought, and sold is beyond my control.

Technology and its uses have moved forward faster than any kind of regulatory legislation ever could, and a lot of the powerful lobbyists come from big tech. Privacy is something worth protecting in my opinion. I don’t want to have higher insurance rates because I Googled mental health resources in high school, or because three of my friends on Facebook were pregnant before they turned 18. However, there’s not much I can do other than weigh what technologies I value and start cutting off sources of data that I directly share such as Facebook, Uber, and the cloud. Pictures of me will be taken at stoplights, my medical data will be stored online, my purchase and travel history will be neatly packaged up by my credit card company, and my family will all have similar data collected that will contribute to the analysis of me that algorithms can create. If this means that my kids will eventually have incredibly accurate career recommendations from their data snapshots, awesome. If it means that I might not be offered a job because a company can tell that I’m trying to start a family with my husband and they don’t want to have to grant me maternity leave, not so great.

#9, Question 3

~This is the earliest I’ve ever turned anything in.~

My understanding of “Fake News” is basically false information composed to look like a news story and designed to attract attention. I’ve heard the term almost exclusively in the context of Facebook. Before reading the articles, I had assumed that the articles and the sites housing them were politically motivated. While this may have been true in some cases, I learned from the articles that dubious publishers were often simply taking advantage of inflammatory subjects in order to make money from the web traffic that outright false headlines attract. Several of the articles mentioned Macedonian teens writing the stories to watch their bank accounts grow. One of the articles featured an interview with a man whose company owns several domains that publish fake news. He claims that his motivation is to expose fake news and “highlight the extremism of the white nationalist alt-right,” but he’s made a career out of publishing fake stories and attracting web traffic without a punchline at the end to reveal that the story is a hoax.

Personally, I find the content very annoying but also fairly harmless to me; I trust myself to fact-check and have discussions with educated people about the truth. However, I think that the content can be dangerous in the hands of someone who might not understand how easy it is to breach journalistic integrity on the Internet. I would venture to put my early-high-school self in that category.

When scrolling through my Facebook news feed, I didn’t notice any fake news. I did get sucked into watching this video, but I’ll consider it time well spent. I didn’t really see any news at all, other than maybe the trending topics. I mostly saw Notre Dame-related events that friends had said they were attending, videos of animals, and videos of food. Turns out, Facebook knows me embarrassingly well. During election season, when I felt like I saw the most fake news, I stopped clicking on articles after they saturated my feed. Also, as I tend to lean pretty heavily left, I didn’t see as many as some Facebook users may have because, as fake news publisher Jestin Coler said, “We’ve tried to do similar things to liberals. It just has never worked, it never takes off. You’ll get debunked within the first two comments and then the whole thing just kind of fizzles out.” However, if I did see friends sharing fake news stories, I would never make any effort to correct them because I typically don’t feel well-informed enough to talk about politics on Facebook.

I’m pretty sure that I don’t think that social media platform providers should be held responsible for shared content. I think this might be a violation of First Amendment rights. If I want to post a fake news story, I should be allowed. Maybe I’m posting it ironically, or as part of an experiment. Maybe I really believe it, and I’m posting it for my friends to read just as I would talk to them about it in person. That said, I recognize the power Facebook has to make the truth known in cases where, as evidenced by the recent political climate and election season, it so often is not. Perhaps stories could be marked as suspicious with an icon if a Facebook algorithm deems them to be so. The posts could still be made, but with a disclaimer that the information may not be reliable.

I think that news and content aggregators bear more of the responsibility for verifying news stories. In a perfect world, people would know to double-check their news, particularly if it seems wildly unlikely. I tend to agree with Edward Snowden’s plea: “Stop Using Facebook for Your News.” However, if Facebook and Google are to be the new mediums for the transmission of news, perhaps we should have made that transition with a bit more trepidation or caution. They are being treated like radio, television, newspapers, and traditional news sites, seen as reliable and fact-checked sources of honest reporting by unknowing members of the public (the electorate!). It seems like a Catch-22: people need to be made aware of fake news in order for news aggregators to carry on the way they haphazardly fire out stories, but in order for people to be made aware of the presence of fake news, the news needs to be marked as suspicious by content aggregators. I think that a safer approach than marking news as completely fake is to mark it as unverified or suspicious, so that people don’t treat it with the same level of trust but will be inclined to look into the issue further.
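
For what it’s worth, here’s a minimal sketch of what an “unverified” label might look like under the hood. Everything in it — the signals, the weights, the threshold, the example domains — is invented for illustration; I have no idea how Facebook’s actual systems work.

```python
# Hypothetical sketch of the "mark as unverified" idea above. The signals,
# weights, and threshold are invented for illustration and do not describe
# any real platform's pipeline.

KNOWN_OUTLETS = {"nytimes.com", "reuters.com", "apnews.com"}
UNVERIFIED_THRESHOLD = 0.5

def suspicion_score(story):
    """Combine a few plausible signals into a rough 0-1 suspicion score."""
    score = 0.0
    if story["source_domain"] not in KNOWN_OUTLETS:
        score += 0.4  # unrecognized publisher
    if story["all_caps_words"] > 3:
        score += 0.3  # SHOUTY headline
    if story["shares"] > 10_000 and story["independent_citations"] == 0:
        score += 0.3  # viral but uncorroborated
    return min(score, 1.0)

def label(story):
    # Rather than declaring a story outright "fake," attach a softer
    # warning so readers know to check other sources themselves.
    if suspicion_score(story) >= UNVERIFIED_THRESHOLD:
        return "Unverified: consider checking other sources"
    return None

story = {"source_domain": "totally-real-news.example", "all_caps_words": 5,
         "shares": 50_000, "independent_citations": 0}
print(label(story))  # -> the warning label, with these made-up numbers
```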

I certainly don’t rely on Facebook for all of my news, but I can see where only seeing friends’ posts that share similar viewpoints to my own would put me in an echo chamber. I do think this is a problem, but I don’t think I face it to the extent that many people do. The “echo chamber” is a problem because it lends itself to minimal discussion, compromise, or healthy debate. Basically, it’s cultivating groups of people who don’t understand or respect each other’s viewpoints and not teaching them how to talk to each other about sensitive and important topics.

In response to the question about our “post-fact” future, as one of the articles said, “Every time a new medium expands the possible audience of mass media, and opens up new spaces for new voices to be heard, it upsets the delicate balances of power that rested upon the previous media structure.”

I like to think that people care enough about the truth to find a new balance with this new medium where people understand their sources a bit better. Unfortunately, this upheaval of structure just so happened to coincide with an important presidential election. :)))))

#8, Question all

I’ll start this post by saying that of all the topics we’ve covered so far, this might be the one that I’ve understood the least. This is probably because corporate personhood is something that I haven’t given much thought to, or even had the term to identify the concept. Disclaimer accomplished, I’ll do my best to use what I learned in the articles to address the questions and communicate my semi-formed thoughts and opinions.

The concept of Corporate Personhood, from what I understand, is that corporations are equivalent to persons in a variety of ways. There was some conspiracy theorizing and dispute about the word “persons” in the Fourteenth Amendment being used in place of “citizens” to protect corporations’ powers, but I had never heard the word “person” used to describe anything other than a human being until the National Review article mentioned that the United States Code specifies that its use of the word “persons” includes “corporations, companies, associations, firms, partnerships, societies, and joint stock companies, as well as individuals.” I don’t think that anyone is claiming that corporations should be treated exactly like people. The right to vote is one example of a right that should not be extended to corporations. However, as Kent Greenfield mentions in his article, to dismiss corporate personhood completely is to remove a lot of what makes our capital markets work: investing and corporate accountability. Separating a corporation from the people that make it up (both the shareholders and the employees) is a difficult thing for me to do, in large part likely because I don’t really understand business and economics, but to free a corporation from the moral obligations that individual people hold seems reckless to me. I think it’s a very complicated web to determine which specific rights and responsibilities of an individual should extend to a corporation and which should not.

Regarding the Muslim Registry case study, I do believe that tech workers and companies are right to pledge not to work on an immigration database based on their moral views. To make the point more clearly: if a corporation could stand to gain much in the business world by assassinating the CEO of a competitor, the removal of any moral or ethical weight from the decision would tell it to kill this person. (I recognize that in this hypothetical situation there are also laws and PR to be encountered if the murder were committed, but the point stands.) Beneath any corporation are people, and people cannot disregard their moral systems of belief in making decisions under the guise of a corporation’s grey moral waters just because it is not a human person. There is complexity in applying a moral code to an entire company because people’s views vary so drastically, but often to remain silent in ethical matters is to inadvertently take a stance.

As I briefly mentioned before, I think most people would disagree with the notion of total Corporate Personhood or its total dismissal. There is some middle ground to be established, I hope, where corporations continue to operate successfully with some level of accountability while still maintaining and promoting moral practices within themselves.

#7, Question 2

This week’s readings made me think of the trailer for a movie coming out in April called The Circle. I don’t have an eloquent way of working it into my blog post, and I don’t know a ton about the movie, but here! (Also, fun to hear Emma Watson using her American accent.)

I think that a lot of us find it unsettling to know all of the things that Google knows about us. Reading about my purchasing habits being manipulated so thoroughly by habit science was also a bit disconcerting. In theory, I don’t have a problem with my data being collected and analyzed for targeted advertising and the like, but I can somewhat identify with the people who felt spooked by Target seeming to be “spying” on them. I may feel violated by a company seeming to treat sensitive life events as if they were public knowledge. However, I don’t find it unethical as long as a person is made aware that the data is being collected and will be analyzed. I also think that employees looking up individual customers’ data should be strictly forbidden.

What bothers me the most about data mining and its subsequent uses in advertising and business interactions with clients is that very private information can be disclosed to other parties. I think that the person with the Target credit card has consented to having their information collected and analyzed, but the conclusions that a company draws and acts on can make very private knowledge public, like in the case of the father who found pregnancy-related ads in the mail addressed to his high-school-aged daughter. What if someone had been looking over the Facebook user’s shoulder when the ad appeared revealing his closeted homosexuality? A toned-down example of this is YouTube suggested videos. I appreciate them, I use them, and I get sucked into watching promotional videos as well, but I don’t want my mom knowing that I’m watching videos about how to make killer jungle juice when I’m just trying to show her a video of a dog flopping around in sandals. I think of myself as signing over a lot of the rights to the data that I give companies, and I’m not offended by the conclusions they draw from it, but I think that more care needs to be given to the delivery of these data applications and possible breaches of confidentiality. One thing that none of the articles emphasized is that we can also benefit a lot from companies’ use of our data in new and innovative ways. Developing medical solutions, identifying concerning mental health trends, suggesting reading material, and yes, even helping families buy diapers when they are expecting are all positive examples.
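
As a rough illustration of how an inference like the Target one could work, here’s a toy sketch. The products and weights are completely made up; the real system was a proprietary statistical model, not anything this simple.

```python
# Toy sketch of purchase-based inference, loosely inspired by the Target
# pregnancy-prediction story. Products and weights are completely made up;
# the real system was a proprietary statistical model, not this.

PREGNANCY_SIGNALS = {
    "unscented lotion": 0.3,
    "prenatal vitamins": 0.6,
    "cotton balls (bulk)": 0.2,
}

def pregnancy_score(purchases):
    """Sum the weights of any signal products in a purchase history."""
    return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in purchases)

history = ["unscented lotion", "prenatal vitamins", "bread"]
if pregnancy_score(history) > 0.5:
    # This is the step the post worries about: acting on a private
    # inference (mailing coupons) discloses it to anyone who sees the mail.
    print("send baby-related coupons")
```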

I would be interested to see further survey results about whether people who are bothered by data collection, or angered by the way companies analyze their personal data, would continue using free services or make changes in their lifestyles to protect their privacy more heavily. In other words, how much do they value their privacy? From the Atlantic article, it seems like most people don’t care enough to make significant changes (although this could perhaps be associated with the psychology of habit formation). Alternatively, the article mentions that people feel resigned to surveillance and data collection as an unavoidable reality. That would probably sum up the way I feel as well. I don’t think it’s absolutely unethical despite the possible repercussions, but I do think that companies need to consider those repercussions, on a case-by-case basis, before taking action.

Most of the time, I don’t find online advertising to be very invasive. The advertising that does start to bother me is when I am locked out of the content I’m trying to access until I address the ad or watch it for a certain amount of time (ex: YouTube ads). I also find that ads with sound, or ads that expand on mouseover, interfere pretty significantly with my web browsing experience. I do use Adblock, because I think it came with my Chrome installation (or I just don’t remember installing it), but whenever a site asks me to turn it off I always comply, because I feel bad for the companies whose business models are seriously harmed by me being too picky to have a dog food ad on the right side of a page while I’m reading an article. I don’t think ad blocking is unethical, but I do think that it could have a large impact on the availability of free online services. On a somewhat unrelated note, I usually get a laugh out of the recent trend of shaming people for clicking no on popup ads.