Christopher Bartel is less worried about the distinction Luck attempts to draw; Bartel argues that virtual pedophilia is real child pornography, which is already morally reprehensible and illegal across the globe.
While violence is easy to see in online games, there is a much more substantial moral value at play, and that is the politics of virtual worlds. Ludlow and Wallace chronicle how the players in massive online worlds have begun to form groups and guilds that often confound the designers of the game and are at times in conflict with those that make the game. Their contention is that designers rarely realize that they are creating a space where people intend to live large portions of their lives and engage in real economic and social activity, and thus the designers have moral duties somewhat equivalent to those who may write a political constitution (Ludlow and Wallace). According to Purcell, there is little commitment to democracy or egalitarianism in online games, and this needs to change if more and more of us are going to spend time living in these virtual worlds.
A persistent concern about the use of computers, and especially computer games, is that this could result in anti-social behavior and isolation. Yet studies might not support these hypotheses (Gibba et al.). With the advent of massively multiplayer games, as well as video games designed for families, the social isolation hypothesis is even harder to believe.
These games do, however, raise gender equality issues. James Ivory used online reviews of games to complete a study showing that male characters outnumber female characters in games, and that the female characters that do appear tend to be overly sexualized (Ivory). Soukup suggests that gameplay in these virtual worlds is most often oriented to masculine styles of play, thus potentially alienating women players.
And those women who do participate in gameplay at the highest level play roles in gaming culture that are very different from those of the largely heterosexual white male gamers, often leveraging their sexuality to gain acceptance (Taylor et al.). McMahon and Ronnie Cohen have studied how gender plays a role in the making of ethical decisions in the virtual online world, with women more likely than men to judge a questionable act as unethical (McMahon and Cohen). Marcus Johansson suggests that we may be able to mitigate virtual immorality by punishing virtual crimes with virtual penalties in order to foster more ethical virtual communities (Johansson). The media have raised moral concerns about the way that childhood has been altered by the use of information technology (see for example Jones). Many applications are now designed specifically for toddlers, encouraging them to interact with computers from as early an age as possible.
Since children may be susceptible to media manipulation such as advertising, we have to ask if this practice is morally acceptable or not. Depending on the particular application being used, it may encourage solitary play that may lead to isolation, but other applications are more engaging, with both the parents and the children playing together (Siraj-Blatchford). It should also be noted that pediatricians have advised that there are no known benefits to early media use amongst young children, but there are potential risks (Christakis). Studies have shown that sedentary lifestyles amongst children in England have resulted in the first measured decline in strength since World War Two (Cohen et al.).
It is not clear if this decline is directly attributable to information technology use, but it may be a contributing factor. Malware and computer virus threats are growing at an astonishing rate. Security industry professionals report that while certain types of malware attacks, such as spam, are falling out of fashion, newer types of attacks focused on mobile computing devices and the hacking of cloud computing infrastructure are on the rise, outstripping any small relief seen in the slowing down of older forms of attack (Cisco Systems; Kaspersky Lab). What is clear is that this type of activity will be with us for the foreseeable future.
In addition to the largely criminal activity of malware production, we must also consider the related but more morally ambiguous activities of hacking, hacktivism, commercial spyware, and informational warfare. Each of these topics has its own suite of subtle moral ambiguities; we will explore some of them here. While there may be wide agreement that the conscious spreading of malware is of questionable morality, there is an interesting question as to the morality of malware protection and anti-virus software.
With the rise in malicious software there has been a corresponding growth in the security industry which is now a multi-billion dollar market. Even with all the money spent on security software there seems to be no slowdown in virus production, in fact quite the opposite has occurred.
This raises an interesting business ethics concern: what value are customers receiving for their money from the security industry? The massive proliferation of malware has been shown to be largely beyond the ability of anti-virus software to completely mitigate. There is an important lag between the time a new piece of malware is detected by the security community and the eventual release of the security patch and malware removal tools. "The anti-virus modus operandi of receiving a sample, analyzing the sample, adding detection for the sample, performing quality assurance, creating an update, and finally sending the update to their users leaves a huge window of opportunity for the adversary … even assuming that anti-virus users update regularly."
(Aycock and Sullins) This lag is constantly exploited by malware producers, and in this model there is an ever-present security hole that is impossible to fill. Thus it is important that security professionals do not overstate their ability to protect systems; by the time a new malicious program is discovered and patched, it has already done significant damage, and there is currently no way to stop this (Aycock and Sullins). In the past most malware creation was motivated by hobbyists and amateurs, but this has changed and now much of this activity is criminal in nature (Cisco Systems; Kaspersky Lab). Aycock and Sullins argue that relying on a strong defense is not enough; the situation requires a counteroffensive reply as well, and they propose an ethically motivated malware research and creation program.
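The update lag described above can be sketched with a toy signature-based scanner. Everything here is invented for illustration (the class, its methods, and the sample bytes correspond to no real anti-virus product); the point is only that a novel sample is invisible until the vendor's analyze-and-update cycle completes.

```python
# A toy model of signature-based anti-virus detection, illustrating the
# update lag: new malware is undetectable until its signature has been
# analyzed, packaged, and delivered to users.
import hashlib

class SignatureScanner:
    def __init__(self) -> None:
        self.signatures: set[str] = set()

    def publish_update(self, sample: bytes) -> None:
        """Vendor side: analyze a captured sample and ship its signature."""
        self.signatures.add(hashlib.sha256(sample).hexdigest())

    def is_malicious(self, file_bytes: bytes) -> bool:
        """User side: flag a file only if its signature is already known."""
        return hashlib.sha256(file_bytes).hexdigest() in self.signatures

scanner = SignatureScanner()
new_malware = b"...novel malicious payload..."

# Window of opportunity: the sample circulates before any update exists.
print(scanner.is_malicious(new_malware))   # False -- undetected

# Only after the vendor's receive/analyze/QA/update cycle does detection work.
scanner.publish_update(new_malware)
print(scanner.is_malicious(new_malware))   # True
```

Until the vendor-side cycle completes, the scanner necessarily reports the novel sample as clean, which is exactly the window of opportunity the quoted passage describes.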
This idea does run counter to the majority opinion regarding the ethics of learning and deploying malware. Most computer scientists and researchers in information ethics agree that all malware is unethical (Edgar; Himma; Neumann; Spafford; Spinello). According to Aycock and Sullins, these worries can be mitigated by open research into understanding how malware is created, in order to better fight this threat. When malware and spyware are created by state actors, we enter the world of informational warfare and a new set of moral concerns.
Every developed country in the world experiences daily cyber-attacks, with the United States as the major target. The majority of these attacks seem to be just probing for weaknesses, but they can devastate a country's internet, as the cyber-attacks on Estonia and Georgia demonstrated. While the Estonian and Georgian attacks were largely designed to obfuscate communication within the target countries, more recently informational warfare has been used to facilitate remote sabotage.
The now famous Stuxnet virus used to attack Iranian nuclear centrifuges is perhaps the first example of weaponized software capable of remotely damaging physical facilities (Cisco Systems). The coming decade will likely see many more cyber weapons deployed by state actors along well-known political fault lines, such as those between Israel-America-Western Europe and Iran, and between America-Western Europe and China (Kaspersky Lab). The moral challenge here is to determine when these attacks are considered a severe enough challenge to the sovereignty of a nation to justify military reactions, and to react in a justified and ethical manner to them (Arquilla; Denning; Kaspersky Lab). The primary moral challenge of informational warfare is determining how to use weaponized information technologies in a way that honors our commitments to just and legal warfare.
Since warfare is already a morally questionable endeavor it would be preferable if information technologies could be leveraged to lessen violent combat. For instance, one might argue that the Stuxnet virus did damage that in generations before might have been accomplished by an air raid incurring significant civilian casualties—and that so far there have been no reported human casualties resulting from Stuxnet. On the other hand, these new informational warfare capabilities might allow states to engage in continual low level conflict eschewing efforts for peacemaking which might require political compromise.
As was mentioned in the introduction above, information technologies are in a constant state of change and innovation. The internet technologies that have brought about so much social change were scarcely imaginable just decades before they appeared. Even though we may not be able to foresee all possible future information technologies, it is important to try to imagine the changes we are likely to see in emerging technologies.
James Moor argues that moral philosophers need to pay particular attention to emerging technologies and help influence their design early on, before they adversely affect moral change (Moor). Some potential technological concerns now follow. Information technology has an interesting growth pattern that has been observed since the founding of the industry. Intel engineer Gordon E. Moore noticed that the number of components that could be installed on an integrated circuit doubled every year for a minimal economic cost, and he thought it might continue that way for another decade or so from the time he noticed it (Moore). History has shown that his predictions were rather conservative.
This doubling of speed and capabilities, along with a halving of cost, has proven to continue every 18 or so months since, and shows little evidence of stopping. Nor is this phenomenon limited to computer chips; it is also present in all information technologies. Some theorists take this exponential growth to support the transhumanist hypothesis that information technology will eventually allow us to radically transform, or even transcend, the human condition. If this is correct, there could be no more profound change to our moral values.
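As a rough arithmetic illustration of the claim above (the baseline figure is hypothetical, and real-world growth has not been perfectly regular), the compounding effect of a doubling every 18 months can be computed directly:

```python
# Illustrative only: compound growth under a doubling period of 18 months.
# The baseline component count is a made-up example, not historical data.

def doublings(years: float, doubling_period_months: float = 18.0) -> float:
    """Number of doublings occurring in the given span of years."""
    return (years * 12.0) / doubling_period_months

def projected_capacity(baseline: float, years: float) -> float:
    """Capacity after `years`, assuming a doubling every 18 months."""
    return baseline * 2.0 ** doublings(years)

if __name__ == "__main__":
    base = 1_000  # hypothetical component count at year 0
    for y in (3, 15, 30):
        print(f"after {y:2d} years: {projected_capacity(base, y):,.0f} components")
```

Even over a single working lifetime, the arithmetic yields growth of many orders of magnitude, which is why observers treat the trend as transformative rather than incremental.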
For example Mary Midgley argues that the belief that science and technology will bring us immortality and bodily transcendence is based on pseudoscientific beliefs and a deep fear of death. In a similar vein Sullins argues that there is a quasi-religious aspect to the acceptance of transhumanism and the acceptance of the transhumanist hypothesis influences the values embedded in computer technologies that are dismissive or hostile to the human body. While many ethical systems place a primary moral value on preserving and protecting the natural, transhumanists do not see any value in defining what is natural and what is not and consider arguments to preserve some perceived natural state of the human body as an unthinking obstacle to progress.
Not all philosophers are critical of transhumanism. As an example, Nick Bostrom of the Future of Humanity Institute at Oxford University argues that, putting aside the feasibility question, we must conclude that there are forms of posthumanism that would lead to long and worthwhile lives, and that it would be overall a very good thing for humans to become posthuman if it is at all possible. Artificial Intelligence (AI) refers to the many longstanding research projects directed at building information technologies that exhibit some or all aspects of human-level intelligence and problem solving.
Artificial Life (ALife) is a project that is not as old as AI and is focused on developing information technologies and/or synthetic biological technologies that exhibit life functions typically found only in biological entities. A more complete description of logic and AI can be found in the entry on logic and artificial intelligence. ALife essentially sees biology as a kind of naturally occurring information technology that may be reverse-engineered and synthesized in other kinds of technologies. Both AI and ALife are vast research projects that defy simple explanation.
Instead, the focus here is on the moral values that these technologies impact and the way some of these technologies are programmed to affect emotion and moral concern. Alan Turing made the now famous claim that "I believe that in about fifty years' time…." A description of the test and its implications for philosophy outside of moral values can be found in the entry on the Turing Test. Turing's prediction may have been overly ambitious, and in fact some have argued that we are nowhere near the completion of Turing's dream.
For example, Luciano Floridi argues that while AI has been very successful as a means of augmenting our own intelligence, as a branch of cognitive science interested in intelligence production it has been a dismal disappointment (Floridi). For argument's sake, assume Turing is correct, even if he is off in his estimation of when AI will succeed in creating a machine that can converse with you. Yale professor David Gelernter worries that there would be certain uncomfortable moral issues raised.
Gelernter suggests that consciousness is a requirement for moral agency and that we may treat anything without it in any way that we want without moral regard. Sullins counters this argument by noting that consciousness is not required for moral agency. For instance, nonhuman animals and the other living and nonliving things in our environment must be accorded certain moral rights, and indeed, any Turing-capable AI would also have moral duties as well as rights, regardless of its status as a conscious being (Sullins). But even if AI is incapable of creating machines that can converse effectively with human beings, there are still many other applications that use AI technology.
Many of the information technologies we discussed above, such as search, computer games, data mining, malware filtering, and robotics, utilize AI programming techniques.
Thus it may be premature to dismiss progress in the realm of AI. Artificial Life (ALife) is an outgrowth of AI and refers to the use of information technology to simulate or synthesize life functions. The problem of defining life has been an interest in philosophy since its founding.
See the entry on life for a look at the concept of life and its philosophical ramifications. If scientists and technologists were to succeed in discovering the necessary and sufficient conditions for life and then successfully synthesize it in a machine or through synthetic biology, then we would be treading on territory that has significant moral impact. Mark Bedau has been tracing the philosophical implications of ALife for some time now and argues that there are two distinct forms of ALife, each of which would thus have different moral effects if and when we succeed in realizing these separate research agendas (Bedau; Bedau and Parke). One form of ALife is completely computational and is in fact the earliest form of ALife studied.
ALife is inspired by the work of the mathematician John von Neumann on self-replicating cellular automata, which von Neumann believed would lead to a computational understanding of biology and the life sciences. Artificial Life programs are quite different from AI programs. Where AI is intent on creating or enhancing intelligence, ALife is content with very simple-minded programs that display life functions rather than intelligence.
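Von Neumann's full self-replicating automaton is far too elaborate to reproduce here, but a minimal cellular automaton in the same family, Conway's Game of Life, shows the idea: purely local rules, with no built-in intelligence, yield patterns that move, grow, and die. This sketch is illustrative only.

```python
# A minimal cellular automaton in the tradition von Neumann began:
# Conway's Game of Life. Each cell lives or dies based only on its eight
# neighbors, yet lifelike behavior emerges from purely local rules.
from collections import Counter

def step(live: set) -> set:
    """Advance one generation; `live` is the set of live (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbors,
    # or has 2 live neighbors and is currently alive.
    return {c for c, n in neighbor_counts.items()
            if n == 3 or (n == 2 and c in live)}

if __name__ == "__main__":
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    gen = glider
    for _ in range(4):
        gen = step(gen)
    # After 4 generations the glider reappears shifted by (+1, +1):
    # a pattern that "travels" across the grid under fixed local rules.
    print(sorted(gen))
```

The "glider" is the simplest well-known example of such emergent behavior, which is why ALife researchers treat cellular automata as a natural laboratory for life functions.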
The primary moral concern here is that these programs are designed to self-reproduce and in that way resemble computer viruses; indeed, successful ALife programs could become vectors for malware. The second form of ALife is much more morally charged. This form of ALife is based on manipulating actual biological and biochemical processes in such a way as to produce novel life forms not seen in nature. Scientists at the J. Craig Venter Institute were able to synthesize an artificial bacterium. While the media paid attention to this breakthrough, they tended to focus on the potential ethical and social impacts of the creation of artificial bacteria.
Craig Venter himself launched a public relations campaign trying to steer the conversation about issues relating to creating life.
This first episode in the synthesis of life gives us a taste of the excitement and controversy that will be generated when more viable and robust artificial protocells are synthesized. The ethical concerns raised by Wet ALife, as this kind of research is called, are more properly the jurisdiction of bioethics see entry on Theory and Bioethics. But it does have some concern for us here in that Wet ALife is part of the process of turning theories from the life sciences into information technologies. This will tend to blur the boundaries between bioethics and information ethics.
Just as software ALife might lead to dangerous malware, so too might Wet ALife lead to dangerous bacteria or other disease agents.
Critics suggest that there are strong moral arguments against pursuing this technology and that we should apply the precautionary principle here, which states that if there is any chance of a technology causing catastrophic harm, and there is no scientific consensus suggesting that the harm will not occur, then those who wish to develop that technology or pursue that research must prove it to be harmless first (see Epstein). Mark Bedau and Mark Triant argue against too strong an adherence to the precautionary principle, suggesting that instead we should opt for moral courage in pursuing such an important step in human understanding of life. They appeal to the Aristotelian notion of courage: not a headlong and foolhardy rush into the unknown, but a resolute and careful step forward into the possibilities offered by this research.
Information technologies have not been content to remain confined to virtual worlds and software implementations. These technologies are also interacting directly with us through robotics applications. Robotics is an emerging technology but it has already produced a number of applications that have important moral implications.
Technologies such as military robotics, medical robotics, personal robotics, and the world of sex robots are just some of the already existent uses of robotics that impact on and express our moral commitments (see Capurro and Nagenborg; Lin et al.). There have already been a number of valuable contributions to the growing field of robotic ethics (roboethics).
For example, in Wallach and Allen's book Moral Machines: Teaching Robots Right from Wrong, the authors present ideas for the design and programming of machines that can functionally reason on moral questions, as well as examples from the field of robotics where engineers are trying to create machines that can behave in a morally defensible way. The introduction of semi- and fully autonomous machines into public life will not be simple. Towards this end, Wallach has also contributed to the discussion on the role of philosophy in helping to design public policy on the use and regulation of robotics.
Military robotics has proven to be one of the most ethically charged robotics applications. Today these machines are largely remotely operated telerobots or semi-autonomous, but over time they are likely to become more and more autonomous due to the necessities of modern warfare (Singer). In the first decade of war in the 21st century, robotic weaponry has been involved in numerous killings of both soldiers and noncombatants, and this fact alone is of deep moral concern. Gerhard Dabringer has conducted numerous interviews with ethicists and technologists regarding the implications of automated warfare (Dabringer). Many ethicists are cautious in their acceptance of automated warfare, with the provision that the technology is used to enhance just warfare practices (see Lin et al.).
A key development in the realm of information technologies is that they are not only the object of moral deliberations but are also beginning to be used as a tool in moral deliberation itself. Since artificial intelligence technologies and applications are a kind of automated problem solver, and moral deliberations are a kind of problem, it was only a matter of time before automated moral reasoning technologies would emerge.
This is still only an emerging technology, but it has a number of very interesting moral implications, which will be outlined below. The coming decades are likely to see a number of advances in this area, and ethicists need to pay close attention to these developments as they happen. Susan and Michael Anderson have collected a number of articles regarding this topic in their book Machine Ethics, and Rocci Luppicini has a section of his anthology devoted to this topic in the Handbook of Research on Technoethics. Patrick Grim has been a longtime proponent of the idea that philosophy should utilize information technologies to automate and illustrate philosophical thought experiments (Grim et al.).
Peter Danielson has also written extensively on this subject, beginning with his book Modeling Rationality, Morality, and Evolution; much of the early research in the computational theory of morality centered on using computer models to elucidate the emergence of cooperation between simple software AI or ALife agents (Sullins). Luciano Floridi and J. W.
Sanders argue that information as it is used in the theory of computation can serve as a powerful idea that can help resolve some of the famous moral conundrums in philosophy, such as the nature of evil. They propose that along with moral evil and natural evil, both concepts familiar to philosophy (see the entry on the Problem of Evil), we add a third concept they call artificial evil. Floridi and Sanders contend that if we do this, then we can evaluate the morally charged actions of artificial agents (Floridi and Sanders). Evil can then be equated with something like information dissolution, where the irretrievable loss of information is bad and the preservation of information is good (Floridi and Sanders). This idea can move us closer to a way of measuring the moral impacts of any given action in an information environment.
Early in the twentieth century the American philosopher John Dewey (see entry on John Dewey) proposed a theory of inquiry based on the instrumental uses of technology. Dewey had an expansive definition of technology, which included not only common tools and machines but information systems such as logic, laws, and even language as well (Hickman). This is a helpful standpoint to take, as it allows us to advance the idea that an information technology of morality and ethics is not impossible.
It also allows us to take seriously the idea that the relations and transactions between human agents and those that exist between humans and their artifacts have important ontological similarities. While Dewey could only dimly perceive the coming revolutions in information technologies, his theory is useful to us still because he proposed that ethics was not only a theory but a practice, and that solving problems in ethics is like solving problems in algebra (Hickman). If he is right, then an interesting possibility arises: namely, that ethics and morality are computable problems, and therefore it should be possible to create an information technology that can embody moral systems of thought.
Engineers do not argue in terms of reasoning by categorical imperatives. Instead, as Bunge puts it: "In short, the rules he comes up with are based on fact and value. I submit that this is the way moral rules ought to be fashioned, namely as rules of conduct deriving from scientific statements and value judgments. In short, ethics could be conceived as a branch of technology" (Bunge). Taking this view seriously implies that the very act of building information technologies is also the act of creating specific moral systems within which human and artificial agents will, at least occasionally, interact through moral transactions.
Information technologists may therefore be in the business of creating moral systems, whether they know it or not and whether or not they want that responsibility. The most comprehensive literature that argues in favor of the prospect of using information technology to create artificial moral agents is that of Luciano Floridi, and of Floridi with Jeff W. Sanders. Floridi recognizes that issues raised by the ethical impacts of information technologies strain our traditional moral theories.
To relieve this friction, he argues that what is needed is a broader philosophy of information. After making this move, Floridi claims that information is a legitimate environment of its own, one that has its own intrinsic value that is in some ways similar to the natural environment and in other ways radically foreign; either way, the result is that information is in its own right worthy of ethical concern.
Floridi uses these ideas to create a theoretical model of moral action using the logic of object-oriented programming. His model has seven components: (1) the moral agent a; (2) the moral patient p (or, more appropriately, reagent); (3) the interactions of these agents; (4) the agent's frame of information; (5) the factual information available to the agent concerning the situation that agent is attempting to navigate; (6) the environment the interaction is occurring in; and (7) the situation in which the interaction occurs (Floridi).
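Since Floridi frames the model in the idiom of object-oriented programming, its seven components can be rendered as an explicit data structure. The sketch below is purely illustrative: the class names, field names, and sample values are hypothetical choices of ours, not Floridi's own formalism.

```python
# An illustrative decomposition of the seven components of Floridi's model
# of moral action into object-oriented structure. All names are invented
# for illustration; Floridi specifies the components only abstractly.
from dataclasses import dataclass, field

@dataclass
class Agent:                # (1) the moral agent a
    name: str
    frame: list = field(default_factory=list)   # (4) the agent's frame of information
    facts: list = field(default_factory=list)   # (5) factual information available

@dataclass
class Patient:              # (2) the moral patient p (or "reagent")
    name: str

@dataclass
class MoralAction:          # (3) an interaction between agent and patient
    agent: Agent
    patient: Patient
    description: str
    environment: str        # (6) the environment the interaction occurs in
    situation: str          # (7) the situation in which it occurs

action = MoralAction(
    agent=Agent("a", frame=["privacy norms"], facts=["p's data is exposed"]),
    patient=Patient("p"),
    description="a discloses p's personal data",
    environment="a social networking site",
    situation="a routine information exchange",
)
print(action.description)
```

Rendering the components explicitly like this makes clear why Floridi's model lends itself to computational treatment: each part of a moral situation becomes a distinct, inspectable object.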
Note that there is no assumption about the ontology of the agents concerned in the moral relationship modeled (Sullins). There is additional literature which critiques and expands the idea of automated moral reasoning (Adam; Anderson and Anderson; Johnson and Powers; Schmidt; Wallach and Allen). While scholars recognize that we are still some time from creating information technology that would be unequivocally recognized as an artificial moral agent, there are strong theoretical arguments in favor of the eventual possibility, and therefore these are an appropriate concern for those interested in the moral impacts of information technologies.