IT and Ethics

Source: KakuWiki

Ain't good and bad the same as in times of old?

Ethics in its classical sense describes the rules and standards that regulate the behaviour of an individual towards others. On the one hand, most 'golden rules' apply in the Internet age as well. On the other hand, the Internet has brought along a range of new ethical questions, some of which have made it necessary to reconsider old ideas (everything concerning 'intellectual property' is a good example). Let us imagine meeting Socrates or any other classical sage and asking him questions - he would easily answer those dealing with his known world. But what would he think of things like spam, trolling or 'letters from a Nigerian prince'?

Or is this just another tempest in a teapot? Not quite: as has been said before, the Internet is, above all, people. It is a huge community where people physically thousands of miles apart are able to influence each other directly. And as seen before, without a certain critical mass of ethics, we will get a cyberdump instead of cyberspace.

'Death by Dung' is most likely if disregard for ethics were to prevail. Just as there are sociopathic businesspeople and politicians in real life who would damage the environment for personal profit, similar 'lower life forms' exist online as well.

Different points of view

Herman Tavani, in his Ethics and Technology, talks about cyberethics, as the terms computer ethics and even Internet ethics do not sufficiently acknowledge the humans behind the technology. According to him, cyberethics can be approached from different points of view, for instance:

  • IT - ethical challenges stemming from the adoption of new technologies.
  • Philosophy - putting tech-related ethical questions into a larger, 'Big Picture' context.
  • Social and behavioural sciences - measuring the impact of new technologies on social institutions and various groups in society.
  • Information sciences - ethical problems related to legal topics (e.g. copyright), censorship and freedom of speech online.

Tavani also proposes three different approaches:

  • Professional ethics - predominantly the view of computer, natural and information sciences, the issues include professionalism, responsibilities, risks, safety and reliability, codes of conduct etc.
  • Philosophical ethics - the philosophical and legal view on issues like privacy, anonymity, copyright, freedom of speech etc.
  • Descriptive ethics - the view of social sciences on e.g. the impact of technology on various institutions (government, education etc) and social groups (e.g. by sex/gender, age, ethnicity etc).

Some ethical theories

Michael J. Quinn in his Ethics for the Information Age has listed different ethical theories that might be used in an information society:

  • Subjective Relativism (Moral Relativism) - while Relativism denies the existence of universal morality, Subjective Relativism proposes that each individual has his/her own Right and Wrong (the maxim "What’s right for you may not be right for me").
  • Cultural Relativism - this theory sees the Right and Wrong in the context of specific cultures, capable of changing both in time (different eras) and space (different locations).
  • Divine Command Theory - as the ethical cornerstone of three large 'book religions' (Judaism, Christianity, Islam), this theory bases the Right and Wrong on the divine will and commands conveyed in the Scriptures.
  • Ethical Egoism - this theory is perhaps best seen in the novels by Ayn Rand. According to it, the long-term personal benefit should be the sole criterion for the Right; barter is seen as a foundational principle in human relationships - while Ethical Egoism does not rule out helping others, it is only considered reasonable in case of mutual benefit.
  • Kantianism - the theory rests on the works of the German philosopher Immanuel Kant, who tried to formulate universal ethics by setting a universal code of conduct. His main thesis, known as the Categorical Imperative, has two formulations:
    1. the principle of autonomy (First Formulation): Act only from moral rules that you can at the same time will to be universal moral laws.
    2. the principle of motives (Second Formulation): Act so that you always treat both yourself and other people as ends in themselves, and never only as a means to an end.
  • Act Utilitarianism (also Direct Utilitarianism) - the theory of the English philosophers Jeremy Bentham and John Stuart Mill has utility as its central tenet (the greatest happiness principle: an action is right (or wrong) to the extent that it increases (or decreases) the total happiness of the affected parties). Note that according to this principle, it is possible to act right for wrong reasons, and vice versa.
  • Rule Utilitarianism (also Indirect Utilitarianism) - this theory applies utility as a measuring stick to rules rather than directly to actions; an act is deemed right if the rule mandating it is right. According to the theory, the right rules are the ones that, when used as a moral code, bring more happiness (to all parties combined) than other rules. The approach is somewhat similar to Kant's, but while Kant stresses motives, this theory considers the actual results.
  • The Social Contract Theory was first formulated by Thomas Hobbes in his book Leviathan and later added to by Locke and Jean-Jacques Rousseau. According to it, the society should strive to develop a set of rules that make sense to everyone (making people follow them voluntarily). For instance, driving on the right (or in some places, left) could be a common example - drivers keep to the right not for fearing the police but to avoid confusion and possible crashes.
  • The Theory of Justice by John Rawls stems from two assumptions:
    1. Each person may claim a “fully adequate” number of basic rights and liberties, so long as these claims are consistent with everyone else having a claim to the same rights and liberties.
    2. Any social and economic inequalities must satisfy two conditions: first, they are associated with positions in society that everyone has a fair and equal opportunity to assume (e.g. by obtaining the necessary education); and second, they are "to be to the greatest benefit of the least-advantaged members of society" (the difference principle; an example could be progressive taxation).
  • Virtue Ethics can be traced back to ancient Greece (perhaps most notably, Aristotle). According to it, a right action is one that a virtuous person, acting in character, would do in the same circumstances. A virtuous person is one who possesses and lives out the virtues - the character traits human beings need in order to flourish and be truly happy. Aristotle also distinguishes between the intellectual and moral virtues, considering the latter more important (they are developed through habit and practice rather than through instruction).

Main fields of discussion

The theories above have become most contested in three large technology-related fields: rewarding creativity (e.g. copyright and similar issues), privacy, and censorship. While the latter two have acquired new dimensions in the IT era, the first faces perhaps the most radical changes of the three. Additionally, the digital divide (where some parts of the world are online and others are not, leaving the latter disadvantaged), information security, and social media (journalism is no longer limited to professional journalists) all pose new ethical challenges.

There are also many ethical questions that are entirely new. A good example is domain squatting - buying up a large number of domains in the hope that someone will need some of them later, or snatching a domain away before the interested party can act (in both cases, potential profit is the main motive). The latter case is better regulated today, allowing the squatted domain to be 'returned' to the justified party (e.g. someone registering a domain built around the Coca-Cola trademark would likely have to hand it over to the corporation soon enough). There are still risks of both identity theft and extortion.

Half empty, or half full?

An example of the ambiguity of ethical considerations is the list of online dangers formulated by Attila Krajci in 2000 (included in the book by Pinter; see the references below), where each danger can also be given a positive point of view:

  • Trust: "You never know who is on the other side" vs "you can have a carte blanche, ridding you of earlier loads".
  • Authenticity: "What you find cannot be trusted" vs "you can look at the information itself rather than external authority".
  • Sense of reality: "Things go unreal if you are online too much" vs "sometimes, the cyberspace is what someone needs in order to open up".
  • Alienation: "net addicts get alienated from others" vs "sometimes a way to escape is necessary".
  • Identity: "you can be whoever you want until you do not know anymore who you are" vs "you can be whoever you want and stay yourself".
  • Aggression: "computer games make you aggressive" vs "games can teach very different things".
  • Extremes: "Internet has porn, pedophiles and brainwashers" vs "sometimes one needs to see wrong to know right".
  • Communication: "Internet does not allow using the whole spectrum of communication" vs "Internet communication adds new ways of communicating, sometimes by seemingly truncating them".
  • Noise: "you get lost in the mass of information" vs "there will be totally new ways to extract what you need".

Thus, it is not possible to say that one is right and the other is not. A similar approach is also used by Stephen Northcutt in his IT Ethics Handbook, describing a large number of ethical dilemmas and providing two radically different answers from different viewpoints.

The book by Pinter also describes two approaches to IT - the technophile and technophobe views. The former sees the Internet as a kind of Cyber-Athens (in the classical, ancient sense): the agora, or public meeting place, is even more effective in cyberspace, promoting direct democracy and a free society. The latter view, on the contrary, suggests that the result will be an Orwellian surveillance society with Big Brother watching everywhere; and should technology become advanced enough, the machines may come to question the necessity of humans ("You're a plague and we are the cure").

A middle way between the two extremes has been sought for a long time. An interesting initiative was the Technorealism movement, which started with an eponymous manifesto in 1998. While their ideas had varying weight (see the critical commentary on the manifesto) and the movement predated the social media era, a similar balancing force is still needed in today's world.

Tavani's phases of cyberethics

The phases of cyberethics formulated by Herman Tavani roughly parallel the generations of computing in IT history. To illustrate the difference from today, he also uses a legendary quote attributed to the then-CEO of IBM, Thomas J. Watson: "I think there is a world market for maybe five computers" (alternate versions mention four or six).

Phase I

1950s and 1960s - standalone (non-networked) mainframes. The first attempts at artificial intelligence brought along the first ethical questions in IT:

  • Can machines think? If yes, should we build a thinking machine?
  • If machines can be intelligent, then what does it mean to be human?

Privacy was also raised early on, mostly in the context of Big Brother and large databases.

Phase II

1970s and 1980s - the rise of the business sector and the first networks (local and wide area). The main ethical questions include

  • personal privacy (adding the network and business aspects to the former phase).
  • rise of 'intellectual property' - the problems related to unauthorized copying.
  • beginning of computer crime, at first in the form of pranks and intrusion (unlawful entry).

Phase III

Since around 1990 - the Web era. Additional issues include

  • freedom of speech
  • anonymity
  • legislation
  • trust
  • public vs private information

Phase IV

Near future - merging technologies, ubiquitous computing, smart objects and things, chips, bioinformatics, probably nanocomputing.

Moral transparency of technology

Cyberethics is often descriptive (non-normative; it avoids judgement) rather than normative (judging an act or situation as right or wrong). However, the normative approach has its place, and in some cases, the degree of normativity depends on the technology in question:

  • Transparent - everything is clear, the users understand both the technology (at least on the base level) and related moral choices (e.g. phone network and the ethics of surveillance).
  • Non-transparent with known features - the users understand the main principles but may not realize the related moral choices (e.g. Google).
  • Non-transparent with unknown features - the users understand neither the principles (a black box) nor any moral factors (e.g. the Internet of Things).

Should ethics be codified?

Some consider it a bureaucratic waste of time, but a company's internal rules, security policy and various other documents are largely drafted the same way. Besides having legal status, such documents help find suitable people (someone disagreeing with the code of conduct from day one is likely unsuitable for other reasons as well), and even the drafting process itself can help propagate, introduce and discuss the issues.

An example of a code at a large company is provided by IBM.


The basic nature of ethics has not changed in the information era, but there are many new questions, and some old ones have gained new viewpoints. The importance of the field has grown, however: due to the ubiquity of IT, the ethical choices made there significantly influence many other fields (especially those where the technologies start out as non-transparent). Therefore, the basic points of ethics should be codified in the future as well.


References

  • HIMANEN, Pekka. The Hacker Ethic and the Spirit of the Information Age. Random House, New York, 2001.
  • NORTHCUTT, Stephen. IT Ethics Handbook: Right and Wrong for IT Professionals. Syngress, 2004.
  • PINTER, Robert (ed). Information Society: Coursebook. Gondolat - Új Mandátum, 2008.
  • QUINN, Michael J. Ethics for the Information Age. International Edition. 6th ed. Pearson, 2015.
  • TAVANI, Herman T. Ethics & Technology: Ethical Issues in an Age of Information and Communication Technology. John Wiley & Sons, Danvers, 2007.