
Which components of human cognition to target first?



What can we improve about a brain to help make sure that potential future intelligence-amplified humans are nice?

This post is an initial examination of mental components that we could target to improve the ethical reasoning of intelligence-amplified humans. It is a rough sketch and will probably be updated as I understand more.



Say that we are developing tools that can change and improve how different parts of the brain function.

Say that we also have a bit of time before these tools come around, so we can try to apply foresight to figure out which parts of the human brain we want to target for improvement first. 1

It would be useful to know if there are particular abilities that would be strategic to target first, and use this to guide the development of intelligence-amplifying neurotechnology.

Technologies that could do this include iterated embryo selection, gene therapy to increase neurogenesis or neuroplasticity, and advanced brain-computer interfaces like neural dust; there may be others that are unknown to me or yet to be imagined. (A future post will explore these in more detail.)

Mental abilities that could be improved include:  

  • empathy
  • ethical reasoning or philosophical success
  • metacognition
  • intelligence / problem solving ability

“Ethical reasoning” requires some clarification. Assuming the development of IA technology, we want augmented humans to make good decisions and to be aligned with human values, or with the set of values that humans would have if they thought long and hard about it. 2

Let’s start with this assumption:

It is strongly preferred to increase a human’s moral reasoning, or “ability and commitment to develop good goals”, before, or at least at the same time as, increasing a human’s intelligence, or “ability to accomplish a set of goals”.

So, how do we accomplish this for a human?

Some hypotheses:

Increase empathy? It could be that by experiencing a System-1 sense of empathy, feeling and relating to another person, an augmented human would be more inclined to take other people’s preferences into account. But it could be that those preferences, even when well understood, would only take up a fraction of the intelligence-amplified human’s values (an example of successful perspective-taking but low empathy). 3

Increase metacognition? By examining one’s own thought processes, noticing uncertainty, developing a tight update loop between prediction and observation, one may more quickly be able to develop positive goals and make good decisions. 4
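One measurable piece of that “tight update loop” is calibration: record probabilistic predictions, compare them against outcomes, and score the gap. Here is a minimal Python sketch of that bookkeeping; the CalibrationLog class and its method names are my own illustration, not an existing tool.

```python
# Toy calibration tracker: log probabilistic predictions, then score them
# against observed outcomes. All names here are illustrative, not a real library.

class CalibrationLog:
    def __init__(self):
        self.records = []  # list of (forecast probability, outcome 0 or 1)

    def record(self, prob, outcome):
        """Store one forecast (prob in [0, 1]) and what actually happened."""
        self.records.append((prob, outcome))

    def brier_score(self):
        """Mean squared forecast error: 0.0 is perfect, 0.25 is a coin flip."""
        return sum((p - o) ** 2 for p, o in self.records) / len(self.records)

log = CalibrationLog()
log.record(0.9, 1)  # felt 90% sure; it happened
log.record(0.8, 0)  # felt 80% sure; it did not
print(round(log.brier_score(), 3))  # 0.325 -> overconfident on this tiny sample
```

A Brier score that falls over time is one crude, trainable proxy for the metacognitive skill described above.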

Perhaps moral reasoning will naturally increase with intelligence? It could also be the case that moral reasoning (in the sense of knowing what is best to do) will only be improved through intelligence. Morality is hard to figure out: it’s more complicated than “do what feels nice”, because human intuition isn’t equipped to deal with scope (in classic studies, people report roughly the same willingness to pay to save 2,000 or 200,000 birds). Some progress in moral reasoning is made by high-IQ philosophers and economists, and if we had more of the same working to figure out morality, we would have a better idea of what to do with augmented intelligence.


Some followup questions:

  • What is the correlation between rationality and intelligence? If rationality and intelligence turn out to be strongly correlated, that’s encouraging, but the evidence isn’t settled yet (see the toy sketch after this list). 5
  • What are possible failure modes I haven’t considered here? 6
  • Perhaps “intelligence” is too broad a term. Human intelligence is composed of many interacting submodules. What are the most relevant submodules, and can we increase them independently of the others?
    • It could be the case that some component of intelligence is easiest to augment first, such as long-term memory. But this will probably not lead to improved ethical reasoning.
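As a placeholder for the correlation question above: the quantity in question is just a Pearson coefficient over paired test scores. A minimal sketch, with numbers invented purely to illustrate the computation:

```python
import math

# Hypothetical paired scores: (IQ, rationality-test score). The numbers are
# invented purely to illustrate the computation; they are not real data.
pairs = [(95, 12), (105, 15), (110, 14), (120, 19), (130, 18)]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

xs, ys = zip(*pairs)
print(round(pearson(xs, ys), 2))  # ~0.84 on this invented sample
```

The hard part is not this arithmetic but finding a rationality measure that isn’t just an IQ test in disguise (see the Stanovich reference in note 5).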

Notes

  1. This is similar to asking: how does the value-loading problem in AI change when applied to humans? I want to understand what is different about starting from a human brain, and what opportunities that may give us.
  2. Learning what to value is a problem in itself: see indirect normativity (https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/) and one approach to indirect normativity, CEV (http://intelligence.org/files/CEV.pdf).
  3. More has been written about the neural bases of empathy here: http://ssnl.stanford.edu/sites/default/files/pdf/zaki2012_neuroscienceEmpathy.pdf  and here: http://ssnl.stanford.edu/sites/default/files/pdf/zaki2014_motivatedEmpathy.pdf. This incidentally comes from the lab where I used to work.
  4. See http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3318765/ and http://en.wikipedia.org/wiki/Metacognition#Components 
  5. See “Project guide: How IQ predicts metacognition and philosophical success” and Stanovich, “Rationality and Intelligence”.
  6. One way to find more failure modes for human augmentation would be to look into sci-fi novels. Not that we should assign much weight to these (for any given specific sci-fi scenario I’d assign a <1% probability of it happening as predicted), but they could give us classes of failure modes to think about.

Research Agenda, Round 1: an initial survey



You can view this post in the HTML text below, or in the nicely formatted PDF here: Research Agenda Version 1: Initial Survey.pdf

This post is a list of questions about various technologies that send information into or out of a brain, or change the way the brain processes information. By defining these questions, I hope to develop a better understanding of the large possibility space in modern brain technology. 1

Background Motivation

One potential way to increase human effectiveness would be to improve the functioning of certain parts of the human brain. 2 We could examine the input, processing, and output stages of the information flow, and look for ways to understand, improve, and extend each of those stages. Eventually, we may be able to create tools that improve the parts of the brain that make good decisions, solve tough problems, invent new ideas, understand moral reasoning, or experience empathy. If human brains became better at such mental abilities, I believe it would have positive ripple effects in many other areas of human activity.

Breadth-First Approach

This article examines many possible pathways to that target. The initial approach is to take an even survey of the potential tools to add to our toolbox, holding off on getting attached to specific solutions or discarding broad classes of solutions for lack of known specifics.

It’s organized in rough order of levels of information: perception, language, motion, physiology, cellular biology; then macro- and micro-circuit systems in the brain. 3 Finally, there is an initial list of items to be understood in ethics and strategy.

Giving shape to a possibility space

A larger goal in this document is to provide a starting framework for researching technologies at multiple levels. I think that answers to these questions will help us start to navigate and define the possibility space in brain technology.

The questions and categories in this post don’t form a complete survey; they’re the ones that are fruitful for conversation, conservative enough to post on the internet, and known to me at the time of writing. 4

Over time, I may return to this post and continue to add more specific questions or categories.

Enjoy!


Technologies

MEDIA

Information – Software [Input + Output]

What are core design principles for software that harnesses group intelligence?

  • Prediction Markets
  • Large-scale citizen science, e.g., Eyewire, Foldit
  • Wikis/Forums
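On the prediction-markets item: one well-documented design principle is Hanson’s logarithmic market scoring rule (LMSR), where an automated market maker quotes prices from a cost function and traders pay cost differences. A minimal two-outcome sketch (variable and function names are my own):

```python
import math

# Hanson's logarithmic market scoring rule (LMSR) for a two-outcome market.
# B is the liquidity parameter: larger B means prices move less per share.
B = 10.0

def cost(q_yes, q_no):
    """LMSR cost function C(q) = B * ln(exp(q_yes/B) + exp(q_no/B))."""
    return B * math.log(math.exp(q_yes / B) + math.exp(q_no / B))

def price_yes(q_yes, q_no):
    """Instantaneous YES price, i.e. the market's implied probability."""
    e_yes, e_no = math.exp(q_yes / B), math.exp(q_no / B)
    return e_yes / (e_yes + e_no)

# A trader buys 5 YES shares into a fresh market, paying the cost difference.
q_yes, q_no = 0.0, 0.0
charge = cost(q_yes + 5, q_no) - cost(q_yes, q_no)
q_yes += 5
print(round(charge, 2))                  # ~2.81 paid for the 5 shares
print(round(price_yes(q_yes, q_no), 3))  # ~0.622 implied probability of YES
```

The design choice that matters is the liquidity parameter B: it bounds the market maker’s worst-case loss at B·ln(2) for two outcomes while keeping prices responsive to trades.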

Perception [Input]

“Immersive Media” = Virtual Reality + Gesture Tracking + Haptic Feedback

Virtual Reality

  • Are there good examples (prototypes, concept sketches, or from science fiction) of using Virtual Reality or Augmented Reality for the following:
    • Information visualization
    • Data analysis
    • Life and physical science education (biology, chemistry, physics)
    • Math education (linear algebra with actual 4D vectors)
    • Medical operations
    • Clinical psychology
    • Rationality training / real world cognitive bias or puzzle solving exercises
    • Formal research experiments in social psychology or perception
  • Are there efforts to combine avatar control with natural language processing and generation, to create a platform for artificially intelligent character avatars? This could be a service/engine for building many kinds of games / applications.

Hardware

  • Timelines for contact lenses, optical projection.

Natural Language Understanding and Generation [Input + Output]

Timelines.

  • Given a long piece of text, the ability to generate a natural-sounding summary of its most important ideas (a toy baseline sketch follows this list).
  • Given a psychographic profile, ability to generate a simple story from the perspective of the character.
  • Upcoming milestones.
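The summarization milestone above is open research, but a crude extractive baseline illustrates the shape of the problem: score each sentence by the frequency of its words across the text and keep the top scorers. A toy sketch (function name is mine; this is nowhere near a natural-sounding abstractive summary):

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Crude baseline: rank sentences by average word frequency in the text."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # Emit the chosen sentences in their original order.
    return ' '.join(s for s in sentences if s in top)

doc = ("Neural dust is a proposed wireless brain interface. "
       "Neural dust motes would be very small sensors. "
       "Unrelatedly, the weather was pleasant today.")
print(extractive_summary(doc, 2))  # keeps the two on-topic sentences
```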

Motion Sensors [Output]

  • Gesture Tracking: Kinect, Leap, etc.
  • Worn on body: Myo

Motion Actuators [Input]

  • After the vision problem is refined/solved (Rift C1?), haptic feedback will be the bottleneck to immersive VR.
    • Alternative/creative approaches beyond the glove or suit? Air compression, sound waves, nanomaterials?
  • Have there been studies on using haptic feedback for mood regulation, for neuroplastic training in healthy adults to develop extra senses, or just for “data sensualization” of information?

BODY – EXTERNAL

External Bio Sensors [Output]

  • Wearables – ECG, Respiration, etc.
    • Low information bandwidth, high amount of maker activity already.

Question: Biosensors

Is it the case that (1) combinations of today’s external sensors (EEG, ECG, etc.) along with virtual reality and haptics can be used in radically different ways? Or is it the case that (2) their applications are confined to meditation, neurofeedback, and focus training, and more advanced types of applications must wait for smaller BioMEMS or implantables? Right now, (2) seems more likely, given the number of people already exploring this space versus the number of genuinely new potential applications.

Question: Parasympathetic Nervous System – Regulation

What studies show benefits to cognition from regulating physiology (as can be done with current biosensors)? Can this actually help people focus better? What is the highest recorded percent increase in concentration, creativity, problem solving, or related metrics in healthy adults using biofeedback?


BODY – INTERNAL

Bio-MEMS [Input + Output]

(article)

  • Can BioMEMS also act as actuators/controllers/builders, or are they mostly sensors?

Bioengineering [Input, Processing, Output]

  • What types of genes, and how many genes, are addressable with modern gene therapy?
  • What kinds of neural tissues have had success with stem cell therapy?
  • Exploratory engineering: the hippocampus continuously generates new cells (neurogenesis). Could an increase in the rate of hippocampal neurogenesis influence its higher-level performance (say, spatial learning)? An initial study shows the brain is resilient to decreased neurogenesis, but the door remains open to experiments that increase neurogenesis.

Synthetic Bio [Input + Output]


BRAIN

Chemicals [Processing]

(There are a number of chemicals, some more common than others, that affect mood and mental state. I do not necessarily believe they should be used, but I find it useful to understand the principles behind their effects.)

  • Are there studies on combining chemical stimulants with macro-scale stimulation, e.g., TMS?
  • What about with immersive media, virtual reality, video games, group therapy circles, CBT, or other high-level psychological interventions?

The following sections are organized according to the general types of neuroengineering technologies in Ed Boyden’s MIT class.

Brain – Macro Circuit Reading [Output]

Noninvasive mapping and measurement.

  • PET, photoacoustic, MEG, EEG, fMRI, infrared imaging, x-rays.

Brain – Macro Circuit Stimulation [Input, Processing ]

Macrocircuit control.

  • Magnetic, electrical, ultrasonic, chemical, pharmacological/pharmacogenetic, thermal.

Brain – Micro Circuit Reading [Output]

Development of invasive mapping and measurement.

  • electrodes
  • nanoprobes, nanoparticles
  • optical imaging and optical microscopy
  • endoscopy
  • multiphoton microscopy, electron microscopy, light scattering
  • bioluminescence

Brain – Micro Circuit Stimulation [Input, Processing ]

Development of microcircuit control.

  • DBS, infrared optical stimulation, optogenetics
  • nanoparticle-mediated control, uncaging
  • signaling control

Ethics and Strategy

  • What is an appropriate target demographic for different levels of brain technology?
    • For discussions specific to cognitive enhancement, this book (Cognitive Enhancement, Hildt and Franke, 2013) offers an excellent, detailed treatment of the ethics of cognitive enhancement from multiple views; the introductory chapter provides an overview.
  • Examine the relationship between neuroscience, intelligence amplification, and artificial intelligence safety.
    • Likelihood of neuro research to contribute to neuromorphic AI (seems likely).
    • Likelihood of various fields in neuroscience to lead to amplification of various forms of intelligence.
      • Opportunities to bolster moral reasoning / empathy in parallel with or before other forms of intelligence. (This would become very important as the strength of the intelligence amplification (IA) technology increases).
      • Amount of overlap between research contributing to intelligence amplification and research contributing to neuromorphic AI (some research areas may be completely separate and safer to pursue).
    • Likelihood of intelligence amplification to lead to improvements in AI safety (seems unlikely by itself, better chance when combined with improved moral reasoning / rationality).
    • Are there feasible ways to make IA tools available only to select research scientists (such as those advancing technology safety)?
      • Advancing activity in all fields in science and technology equally could have a neutral or negative effect, because of the high risks from some emerging technologies.
    • Overall benefits or costs of IA neuro research.
    • See also: Luke Muehlhauser on Intelligence Amplification and Friendly AI
  • Estimating the actual value of technological development, and the replaceability of a particular project.
    • If one desires to make a large social impact, one must take into account the expected value of making particular technologies when (1) very similar things could be made by others a few years down the road, and/or (2) their functionality may eventually be replaced by more advanced technologies. (Example: creating wearables now vs. personally working on BioMEMS now vs. waiting for BioMEMS to arrive while doing something else.)
      • Consideration: the value of the project is the value of having the information or the use of the tool sooner than we would have otherwise (see the toy calculation after this list). 5
        • However, counterfactuals (and relative impact) are hard or impossible to compute well.
        • There may be arguments that this is not a well-founded concern, or that even if it is, it may not be practical to give it much weight. For now, I believe this consideration does matter when determining what to prioritize.
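A toy version of the “sooner than otherwise” calculation, with every number invented purely for illustration: if a competing team would build the same tool a couple of years later anyway, only the accelerated window counts toward your impact.

```python
# Toy counterfactual-value estimate; every number is an invented placeholder.
benefit_per_year = 100.0   # value units per year once the tool exists
tool_lifetime = 10.0       # years the tool stays useful (naive credit window)
years_accelerated = 2.0    # how much sooner it exists because of this project

naive_value = benefit_per_year * tool_lifetime               # 1000.0
counterfactual_value = benefit_per_year * years_accelerated  # 200.0
print(counterfactual_value / naive_value)  # 0.2 -> 80% of naive credit vanishes
```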



Notes 6

  1. If you are interested in answering some of these questions, and are meticulous enough to read footnotes, you might be an excellent person to write or coauthor future posts on this blog. Drop me a line if that sounds interesting to you. 🙂
  2. It is my working hypothesis that strategic implementation of technologies that improve brain function would make humans more effective at the activities that matter (and would otherwise have a net positive effect), but this is not guaranteed. The second section on “Ethics and Strategy” offers some initial reasons this might not be true.
  3. A common ordering of levels in neuroscience is Cognitive > Systems > Cellular > Molecular.
  4. Disclaimer: I will acknowledge some potential reasons not to publish a blog post like this one. A detailed discussion about creating certain brain technologies could pose an information hazard (specifically, idea or attention hazards). Another potential pitfall is that it might distract me or other people from more important activities we could be doing otherwise (opportunity cost). Because the topics are relatively well known and the blog has little social momentum, these concerns don’t worry me for now, but they may be revisited in the future.
  5. This view is described in the first few pages of Chapter 15 of Nick Bostrom’s “Superintelligence.”
  6. Thanks to Madeeha Ghori for helpful feedback on this post.