Which components of human cognition to target first?

What can we improve about a brain that will help make sure that potential future intelligence-amplified humans are nice?

This post is an initial examination of mental components that we could target to improve the ethical reasoning of intelligence-amplified humans. It is a rough sketch and will probably be updated as I understand more.

Say that we are developing tools that can change and improve how different parts of the brain function.

Say that we also have a bit of time before these tools come around, so we can try to apply foresight to figure out which parts of the human brain we want to target for improvement first. 1

It would be useful to know whether there are particular abilities that would be strategic to target first, and to use this knowledge to guide the development of intelligence-amplifying neurotechnology.

Technologies that could do this include iterated embryo selection, gene therapy to increase neurogenesis or neuroplasticity, and advanced brain-computer interfaces like neural dust (there may be others that are unknown to me or yet to be imagined); a future post will explore these in more detail.

Mental abilities that could be improved include:  

  • empathy
  • ethical reasoning or philosophical success
  • metacognition
  • intelligence / problem solving ability

“Ethical reasoning” requires some clarification. Assuming the development of intelligence-amplification (IA) technology, we want the augmented humans to make good decisions: to be aligned with human values, or the set of values that humans would have if they thought long and hard about it. 2

Let’s start with this assumption:

It is strongly preferred to increase a human’s moral reasoning, or “ability and commitment to develop good goals”, before (or at least at the same time as) increasing a human’s intelligence, or “ability to accomplish a set of goals”.
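
To make the intuition behind this assumption concrete, here is a minimal toy model in Python. The functional form and the threshold are invented for illustration; the only point is that capability multiplies whatever goal quality is already there:

```python
# Toy model: impact = capability * (goal_quality - harm_threshold).
# All numbers and the functional form are made up for illustration;
# this is not an empirical claim about real cognitive enhancement.

def impact(capability: float, goal_quality: float, harm_threshold: float = 0.5) -> float:
    """Positive when goals are better than the threshold, negative otherwise.
    Raising capability amplifies whichever sign is already there."""
    return capability * (goal_quality - harm_threshold)

for capability in (1.0, 2.0, 4.0):
    for goal_quality in (0.3, 0.5, 0.8):
        print(f"capability={capability:.1f}, goal_quality={goal_quality:.1f} "
              f"-> impact={impact(capability, goal_quality):+.2f}")
```

On this toy model, doubling capability while goal quality sits below the threshold simply doubles the harm, which is the intuition behind “moral reasoning first, or at least simultaneously”.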

So, how do we accomplish this for a human?

Some hypotheses:

Increase empathy? It could be that by experiencing a System-1 sense of empathy, feeling and relating to the other person, an augmented human is more inclined to take the preferences of other people into account. But it could be that those preferences, even when well understood, only take up a fraction of the intelligence-amplified human’s values (an example of successful perspective-taking but low empathy). 3

Increase metacognition? By examining one’s own thought processes, noticing uncertainty, and developing a tight update loop between prediction and observation, one may be able to develop positive goals and make good decisions more quickly. 4
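
One measurable piece of that update loop is calibration: log probabilistic predictions, score them against what actually happened, and watch whether the score improves over time. Here is a minimal sketch; the logged predictions are hypothetical, and the Brier score is just one reasonable choice of metric:

```python
# Calibration tracker: store (forecast probability, outcome) pairs and
# compute the Brier score, the mean squared error of the forecasts.
# 0.0 is perfect; always answering 50% scores 0.25.

from typing import List, Tuple

def brier_score(forecasts: List[Tuple[float, bool]]) -> float:
    return sum((p - float(happened)) ** 2 for p, happened in forecasts) / len(forecasts)

# Hypothetical week of everyday predictions.
log = [(0.9, True), (0.7, True), (0.6, False), (0.8, True), (0.3, False)]
print(f"Brier score: {brier_score(log):.3f}")  # lower is better
```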

Perhaps moral reasoning will naturally increase with intelligence? It could also be the case that moral reasoning (in the sense of knowing what is best to do) can only be improved through intelligence. Morality is hard to figure out; it’s more complicated than “do what feels nice”, because human intuition isn’t equipped to deal with scope. Some progress in moral reasoning is made by high-IQ philosophers and economists, and if we had more such people working to figure out morality, we would have a better idea of what to do with augmented intelligence.
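
Scope insensitivity is a concrete example of that failure: in classic contingent-valuation studies, people’s willingness to help grows far more slowly than the number of individuals at stake. The sketch below caricatures this mismatch; the logarithmic response curve is purely illustrative, not fitted psychophysics:

```python
import math

# Caricature of scope insensitivity: the stakes grow linearly with the
# number of individuals affected, while the modeled intuitive response
# grows only logarithmically. The log form is illustrative, not a
# fitted psychological model.

baseline = 2_000
for n_affected in (2_000, 20_000, 200_000):
    stakes_ratio = n_affected / baseline
    response_ratio = math.log10(n_affected) / math.log10(baseline)
    print(f"{n_affected:>7,} affected -> stakes x{stakes_ratio:>5.0f}, "
          f"intuitive response x{response_ratio:.2f}")
```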

Some follow-up questions:

  • What is the correlation between rationality and intelligence? If they turn out to be strongly correlated, that’s great, but we’re not sure yet (a sketch of what measuring this could look like follows this list). 5
  • What are possible failure modes I haven’t considered here? 6
  • Perhaps “intelligence” is too broad a term. Human intelligence is composed of many interacting submodules. What are the most relevant submodules, and can we increase them independently of the others?
    • It could be that some component of intelligence, such as long-term memory, is easiest to augment first, but improving it alone will probably not lead to improved ethical reasoning.
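
As a sketch of what answering the first question could even look like, the relevant quantity is something like a Pearson correlation between an intelligence measure and scores on a rationality battery. Everything below is synthetic placeholder data, not a real result:

```python
from statistics import correlation  # Pearson's r; available in Python 3.10+

# Entirely synthetic, hypothetical numbers for illustration only.
# A real answer needs real psychometric data (e.g., IQ scores paired
# with a Stanovich-style rationality battery).
iq_scores          = [95, 100, 105, 110, 115, 120, 125, 130]
rationality_scores = [40,  48,  45,  60,  55,  70,  68,  75]

print(f"Pearson r (synthetic data): {correlation(iq_scores, rationality_scores):.2f}")
```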

Notes

  1. This is similar to asking: how does the value-loading problem in AI change when applied to humans? I want to understand what is different about starting from a human brain, and what opportunities that may give us.
  2. Learning what to value is a problem in itself: see indirect normativity (https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/) and one approach to indirect normativity, CEV (http://intelligence.org/files/CEV.pdf).
  3. More has been written about the neural bases of empathy here: http://ssnl.stanford.edu/sites/default/files/pdf/zaki2012_neuroscienceEmpathy.pdf  and here: http://ssnl.stanford.edu/sites/default/files/pdf/zaki2014_motivatedEmpathy.pdf. This incidentally comes from the lab where I used to work.
  4. See http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3318765/ and http://en.wikipedia.org/wiki/Metacognition#Components 
  5. See “Project guide: How IQ predicts metacognition and philosophical success” and Stanovich, “Rationality and Intelligence”.
  6. One way to find more failure modes for human augmentation would be to look at sci-fi novels. We shouldn’t assign much weight to these (for any given specific sci-fi scenario I’d assign a <1% probability to it happening as predicted), but they could suggest classes of failure modes to think about.

2 thoughts on “Which components of human cognition to target first?”

  1. Diego Caleiro

    Empathy increase correlates inversely with choosing correctly in moral dilemmas, because it correlates strongly with outgroup hatred.

    Most of the characteristics you have laid out there are correlated with high levels of activity in the cortical frontal lobe. Maybe a sheer increase in frontal lobe size can generate most of the easily accessible benefits.
