The ethics of cyber-enhancements

Transhumanism, n.:  The belief or theory that the human race can evolve beyond its current physical and mental limitations, especially by means of science and technology.

Cyborg, n.:  a person whose body contains mechanical or electrical devices and whose abilities are greater than the abilities of normal humans

Humanity, n.:  the quality or state of being human

There are two blogs that I check every day, Brad DeLong’s Grasping Reality with Both Hands and Mike the Mad Biologist.   The other day Brad had a post, (Early) Monday Idiocy: Chris Bertram Wages War on Eyeglasses, Surrenders to Cyborgs.  It was a critique of a critique of a post by econoblogger Noah Smith, Rise of the Cyborgs.  You can follow the links if you’re interested in the details.  The purpose of this post is to clarify comments I left on DeLong’s blog, respond to questions and criticisms, and to provide another forum for continuing the discussion.

The core of Bertram’s complaint was “Employers could make it a condition of employment that workers undergo the necessary cyber-modifications.”  While we’re not at that point yet, if we’re going to speculate on the merits of a cyborg future then it’s a point worth raising.   Employers requiring that their employees be literate is legitimate.   Requiring that their employees undergo sterilization is not.   To me, requiring a cyborg enhancement would be on par with forced sterilization.   If and when a cyborg future is imminent, Bertram’s concern will need to be addressed.   It would behoove us to address it before a cyborg/transhuman future is imminent.  The issues I raise here are more philosophical than practical.   To what extent and under what circumstances are cyber-modifications of human beings ethical?   I think where one stands on the issue depends to a considerable degree on whether one considers a cyber-enhanced (cyborg) state a future step in the evolutionary process or regards cyber-enhancements as something fundamentally different from evolution and unnatural.  I belong to the latter camp.  That stated, there are circumstances where I believe cyber-enhancements are reasonable and justified, e.g., control of a prosthetic.

DeLong’s criticism of Bertram is a good starting point for getting to why I believe what I believe.  DeLong:

Bertram … got upset at those of us who pointed out that an argument that calls for the abolition of eyeglasses, canes, literacy, etc. has little purchase.

Bertram made no such argument.   The issue he raised was of cyber-modifications potentially being forced (extorted) by employers.  Eyeglasses, canes, and literacy are not cyber-modifications.  With respect to Bertram’s complaint, DeLong’s criticism is a non sequitur.  This is not a subtle point and I responded to that effect:  

Skepticism of cyborgs does not imply a negative outlook towards eyeglasses, canes, literacy, etc. People who seek eyeglasses or artificial hips are seeking to be fully human – to gain or regain functions that the vast majority of their peers possess. In contrast, to become a cyborg would be to reject one’s humanity. A cyborg possesses traits that no human possesses or will ever possess.

A world which features the extrapolations of technology that Smith describes would be Hell.

The extrapolations called out by Smith were:  direct mental control of machines/mind-internet interface, augmented intelligence, augmented learning, mood modification, artificial sensory input.  My concluding sentence is not an anti-tech argument.  It was the specific extrapolations of technology that I found objectionable, not technology in general.  (NB:  It is physical modification of a body that is not injured or otherwise non-performing that I find objectionable.)  My corollary to “If it ain’t broke then don’t fix it.” is “If it is broke then fix it.”  If you’re deaf then by all means get a hearing aid.  If you’re depressed then take a long lunch and go for a walk in the park or, if it’s a clinical depression, get some counseling.  If you have ADD then by all means get the assistance you need to be able to focus on your schoolwork and have the same opportunities for success as your non-ADD peers do.

I’m a scientist.  I appreciate the human urge to solve problems and understand how the world works.  There’s lots of technology that I think is great – but not all of it:

Using one’s ingenuity or the ingenuity of others to work around the limitations of the body is a very human trait, an admirable one I think. I admire Lilienthal and the Wright brothers. In contrast, while I might respect the knowledge and command of science that it would take to genetically engineer winged ‘humans’ I would not admire the result. The idea of genetic engineering or technological augmentation to confer non-human traits strikes me as escapist.

The last point is obviously a matter of philosophy.   Technology can be very beneficial but not always.  There are limits.  That outlook begs the question, where do you draw the line?  Some commenters probed to assess just that.  Byomtov on DeLong’s blog:

So does the cataract surgery I had earlier this year make me a cyborg, because my distance vision is now excellent at an age when it would not normally be?

No. Your eyes are still your tissue so you are not a cyborg.  Being above-average for your demographic group doesn’t qualify as transhuman.

Maynard Handley:

Suppose I engineered a human with green skin, who can photosynthesize. Is that human less human than a human with old-fashioned white/yellow/brown/black skin? How or why?
How’s he different from a human who’s been engineered to fix his pancreas so that he no longer has diabetes? Or a human that’s been engineered to fix her broken BRCA1 gene so that she won’t get breast cancer?

Saying that the essence of humanity dwells in their genome seems a crazy position. How is that different from flat out racism Nazi style, or the most benighted forms of monarchic/aristocratic thinking (like Egyptian and Persian monarchs marrying their sisters because nobody else’s bloodline is good enough)?

I think that engineered photosynthetic skin would be akin to a tattoo rather than the skin color you end up with as a result of your parents’ genetics.   Adopting the former would be a matter of choice whereas the latter is not.   That stated, the answer to the green skin question is easy.  An engineered human with photosynthetic skin would be less human than a non-modified person because humans and plants are from different Kingdoms.   The human-plant hybrid would be less than fully human by virtue of its genome.   (I’m presuming that a human-plant hybrid would never occur naturally, i.e., without human action to create it.)  The possibility of a human-plant hybrid begs the question, how much human genetic material is required for a hybrid organism to qualify as “human”?  Greater than 50%?  “One drop”?  How low does the percentage of human DNA need to be before you could harvest the hybrid, serve it for dinner, and not face any legal sanctions? Even if you could legally serve it for dinner, could you consider yourself morally and ethically clean?

Back to Maynard’s questions:  The answer to his ‘humanity in the genome’ question is also easy.  The difference between believing the essence of humanity resides in the genome and Nazi-style racism is that the former is the belief that your genome makes you human irrespective of the details of that genome and the latter is the belief that some genomes are superior to others because of physical traits they give rise to.

Maynard’s second question, re use of genetic engineering to cure or prevent disease, is a bit harder.  Genetic engineering and cyber-enhancements are not one and the same but the ethical issues overlap.   Addressing genetic engineering issues is relevant to addressing cyber-enhancement ones.   Engineering a pancreas to cure diabetes or modifying a gene to prevent breast cancer are both worthwhile undertakings as far as I’m concerned.  As with use of eyeglasses, canes, cataract surgery, and artificial hips, the engineered cures in those cases would be reasonable because they would enable the recipient to “gain or regain functions that the vast majority of their peers possess”.    That does get me to think harder about where I draw the line though.   What’s an engineered cure and what’s an artificial enhancement?  Right now I’m in Potter Stewart territory, “I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description, and perhaps I could never succeed in intelligibly doing so. But I know it when I see it…”

A little digression on evolution, genetic engineering, and cyber-enhancements:

  1. Genetically-engineered traits may be heritable.  Cyber-enhancements are not heritable.  If a trait is not heritable can it legitimately be considered part of the evolutionary process?
  2. While evolution is modification of the genome, the modifications do not follow from active intervention.  They come about as a secondary effect.   The activities an organism undertakes, in combination with environmental feedback, result in preferential selection of traits.  The selection of traits follows from the organism’s interaction with the environment.   It is not driven unilaterally by the will of the organism.   Setting aside ethical versus unethical for the moment, that changes follow from interaction between organism and environment seems to me an essential distinction between “natural” and “unnatural”.   If changes do not follow from interaction with the environment I don’t believe you can legitimately call them evolutionary.

In the interest of better establishing where I draw the line between ethical and unethical modifications I spent some time reading.   A piece by Margaret Somerville was helpful in clarifying my thoughts (emphasis mine):

There is an enormous amount of good that can be achieved with our new technoscience, especially regenerative medicine. The issue is where we draw the line between ethical and unethical use of it. One approach I find helpful is to ask whether we are using it to repair nature when it fails or to do something that is impossible in nature. The former usually raises far fewer ethical concerns than the latter, although, of course, it’s not the only relevant question in deciding on ethics. However, as I know from personal experience and criticism when I’ve used this approach, “transhumanists see the very concept of the specifically ‘natural’ as problematically nebulous at best, and an obstacle to progress at worst,” and “the natural” as having no inherent moral value.

I do see “the natural” as having inherent moral value.

Another distinction that might help to distinguish ethical technoscience interventions from unethical ones is whether the intervention affects the intrinsic being or essence of a person – for instance, their sense of self or consciousness – or is external to that. The former, I propose, are always unethical, the latter may not be.

Which leads to a related issue at a much more general level: transhumanists do not accept that there is any “essential natural essence to being human” that must be respected, an essence that I believe we must hold on trust, untampered with, for future generations. It is difficult to define what constitutes this essence, without referring to a soul or at least a “human spirit” – the latter of which does not require any religious belief, but does require that we see ourselves as more than just machines.

The distinction between intrinsic and extrinsic is an important one.  I believe that modifications to intrinsic being are problematic – certainly so if they aren’t restorative in nature.  I also share her skepticism of extrinsic modifications.

I also found a post by Massimo Pigliucci to be helpful. He addresses the all-or-nothing fallacy (emphasis mine):

[In his essay “Transhumanism F.A.Q.: Is Aging a Moral Good?”] Kyle Munkittrick begins his own response to critics of transhumanism by stating that if anyone has a problem with technology addressing the issues of disease, aging and death then “by this logic no medical intervention or care should be allowed after the age of 30.” This, of course, is a classic logical fallacy known as a false dichotomy. Munkittrick would like his readers to take one of two stands: either no technological improvement of our lives at all, or accept whatever technology can do for you. But this is rather silly, as there are plenty of other, more reasonable, intermediate positions. It is perfectly legitimate to pick and choose which technologies we want (I vote against the atomic bomb, for instance, but in favor of nuclear energy, if it can be pursued in an environmentally sound way).

Pigliucci again:

Technology can surely help us, but it is also (perhaps mostly) a matter of ethical choices: the problem will be seriously addressed only when people abandon the naive and rather dangerous idea that technology can solve all our problems, so that we can continue to indulge in whatever excesses we like.

This post is a working document.  I’ll close this version of it with my (lightly edited) reply to dr2chase who stated (again at DeLong’s blog):

We’re completely stupid about change — we oppose it tooth and nail, till we suddenly forget that it ever happened.

Sure. Many of us instinctively oppose change – at least to the extent that it affects us directly – and to uniformly oppose change would be poor practice. Change can be for the better or for the worse. A belief that change will on balance always be for the better and that the future is destined to be an improvement over the past is as naive as categorically opposing change. Just because we can (or can potentially) do something doesn’t mean that we should. We should think through how proposed changes could play out – both in terms of intended consequences and potential unintended effects. Smith’s ledger listed only the assets of a cyber-enhanced future. (Not assets as far as I’m concerned but, for the sake of argument, let’s designate them assets.) Bertram hypothesized a liability. Commenters to Brad’s original post hypothesized some other liabilities. I might have found Smith’s pro-cyber-enhancement post more compelling if he had engaged the liabilities.  Probably not, but I might have.

UPDATE#1 12/30/2015:

Maynard responded to my points above on DeLong’s blog – here.   Our views are pretty much diametrically opposed.   Perhaps not tonight but I do plan to respond to his points.

In my reading the last two nights I didn’t find much in the way of humanist arguments against cyber-augmentation.   (The bulk of what I found was written from a religious conservative perspective.)  Pigliucci was about it.  That surprised me a bit.  Maybe there’s a significant body of work that I haven’t tapped into but I think I’m a good enough scout that I would have tapped into it given a couple hours of searching.

Brad had a post today, Today’s Economic History: Writing Is (and Other Things Are) Not Naturally Human.  The lead-in (emphasis mine):

For Homer and his audience, writing is unnatural and un-human: “many deadly signs on a folded tablet…”.  What is natural to humans–what we were back in the environment of evolutionary adaptation when we were in biological equilibrium–is grunting bands of 50 or so making their way across the Horn of Africa with their stone tools. Since then, the language Singularity, the agriculture Singularity, the writing Singularity, and perhaps now a fourth have changed human life in many ways beyond all recognition. “What is natural to humans” almost invariably means “what I expect to happen”, which is roughly the same as “what I learned about how things were, were done, and ‘ought’ to be done back when I was a child”.

That’s a poor definition of “natural”.  It casts “natural” as a purely social construct rather than a characterization of the material properties of an object.  (“Natural” things may involve change.   Change may or may not be natural.)  Where we draw the line between natural and unnatural is a social construct but the basis of the partition is material.   I think you’re hard-pressed to find something more naturally human than language and, by extension, literacy and writing.  We are not born literate but our capacity for literacy is innate.   We don’t need to take some drug or magic foodstuff to achieve it, let alone undergo any cyber-augmentation.   The vast majority of us are born with the capacity to become literate.   All we need to acquire the skill is for our elders to train us.   That’s about as natural as it gets.

UPDATE#2  12/30/2015:

What not-purely-restorative cyber-modifications to the human body might qualify as ethical and which ones would not?   One criterion that comes to mind is that, in order to be ethical, the cyber-enhancement would need to be separable from the recipient: self-awareness would have to remain independent of the enhancement, such that its removal would not affect self-awareness.  Removal might have major physical consequences but, presuming survival, self-awareness and sense of agency would not be affected.  The sensory inputs the owner received might change as a result of removal, and their perception of the world would change with them, but their self-awareness would not.  Where one would cross from human into not-fully-human territory would be in ceding some or all of one’s sense of agency to the cyber-enhancement.   Restating my point made above, I do not believe that cyber-enhancements can legitimately be considered evolutionary unless the resulting traits are heritable.   I’m not sure what the catchall term for non-heritable enhancements is but they are not evolutionary.

One thought on “The ethics of cyber-enhancements”

  1. Re: “If a trait is not heritable can it legitimately be considered part of the evolutionary process?”

    Answer: depends on which evolutionary process you are talking about. My general definition is that evolution consists of:

    1) generation of lots of variation in “somethings”.

    2) a selection process that filters out “bad somethings” from “good somethings”.

    3) memory, or a way or ways to pass along the “good somethings” to the future.

    Example: “something” = hypotheses; selection process = the scientific method (experimental verification, peer-review, statistical analysis to guard against coincidences, etc.); memory = scientific lectures, seminars, papers and texts; result = science evolves.
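    The three-part definition above (variation, selection, memory) is essentially the skeleton of a genetic algorithm, and can be sketched as a minimal selection loop. The bit-string “somethings” and the fitness function here are my own illustrative assumptions, not anything from the discussion:

    ```python
    import random

    def evolve(population, fitness, generations=50, mutation_rate=0.1):
        """Minimal variation/selection/memory loop over bit-string 'somethings'."""
        for _ in range(generations):
            # 1) Variation: each individual spawns a randomly mutated copy.
            variants = [
                [bit ^ (random.random() < mutation_rate) for bit in ind]
                for ind in population
            ]
            # 2) Selection: filter the pool of parents + variants, keeping the fitter half.
            pool = sorted(population + variants, key=fitness, reverse=True)
            # 3) Memory: the surviving "good somethings" are passed along to the next generation.
            population = pool[: len(population)]
        return population

    # Hypothetical fitness: "good somethings" are bit strings with many ones.
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(8)]
    best = evolve(pop, fitness=sum)[0]
    ```

    Because the fittest parent always survives each round, the best fitness in the population never decreases, whatever the mutation rate.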

    In other words, I see human technology and human intellectual development as forms (perhaps meta-forms) of evolution – how else would creatures developed by evolution work? Evidence: we have all seen cars and phones and computers evolve in our lifetimes. (And Microsoft Vista rapidly go extinct.)

    So cybernetic technology can be inherited, in the sense that it is passed on to the future via technical papers and drawings similar to biological traits being passed on by genes. It is not biological evolution, but it is evolution (all the way down). Natural and unnatural remind me of elan vital thinking. Biological and non-biological seem better terms to use to me. Not too long from now gene therapies will be able to make inheritable biological changes and again these could either be by choice or against people’s will depending on what society evolves into. In principle biology is just nanotech machinery so eventually perhaps all cybernetic additions will be achievable by gene therapy and your objection will be moot. Perhaps it would be best to focus on how to evolve better societies which will not use technologies unethically rather than on restricting technological development.

    (Just some random thoughts from my random-thought generator, as yet not rigorously filtered by a selection process.)
