
Equipping future humans to solve future problems: support for intelligence amplification


Why is it important to find ways to improve humans’ ability to learn, think, and perform ethical reasoning?

Here is a supporting argument for intelligence amplification that I’ll call “increasing the number of future decisions that are made by enhanced decision-makers”. I haven’t seen it formalized before, so I’ve written it out here for reference. I find it appealing, but I might be missing important perspectives that would suggest otherwise. What do you think?

Increasing the number of future decisions that are made by enhanced decision-makers

It could be the case that human brains, however exceptional, are not complex enough to figure out the best long-term course for our species, and that technological power will develop faster than our ability to ensure its safety.

One strategy we can follow is to develop further insight about what to value (e.g. through cause prioritization research), and to build our social and technical capacity to pursue those values as we discover them. One way to build social and technical capacity is through improving human cognition.

As some humans develop more augmented cognition, they will be in a much better position to steer technological development in a positive direction. Therefore, one of the best investments we can make now is to augment cognition as quickly as possible – increasing the number of future decisions that are made by enhanced decision-makers.

There are several steps:


(1) As a general strategic consideration: if you aren’t in a good position to answer a question or accomplish a goal, the next best move is to put yourself in a position where you are much more likely to be able to answer the question or accomplish the goal.

You can do this by getting more information and a better definition of the problem, and building up resources or abilities that lead to answering it. 1


(2) The future contains overwhelming problems and enormous opportunities. Like it or not, we are involved in a high-stakes game with the future of earth-originating life. There is no guarantee that the steps to solve the problems or reach those opportunities will be calibrated to our species’ current level of ability. In fact, some seem extremely difficult and beyond our current level of skill. 2 With so much hanging in the balance, we really, really need to radically improve humanity’s capacity to solve problems.


(3) One way to put ourselves in a better position is to increase human cognition. In this context, I intend “cognition” to refer to a very broad set of things: “the set of information processing systems that we can use to solve problems and pursue opportunities.”3

Some avenues for increasing human intelligence are collective intelligence (e.g. organizations, incentive structures, discussion platforms), individual tools (e.g. mind-mapping), psychological methods (e.g. education and rationality training), and neural activity (e.g. neurotechnology).

(4) As some humans develop more augmented cognition, they will occupy a much better position to steer the future in a positive direction. 4


Therefore, one of the best investments we can make now is to augment cognition as quickly as possible – increasing the number of future decisions that are made by enhanced decision-makers. (This does not mean we ought to put off working on those important problems now in the hopes that we can completely “pass the workload” to future augmented humans. Nor is it a claim that, if we reach extraordinary success at intelligence amplification, other problems will be immediately solved. It is merely to support intelligence amplification as one of many possible valuable investments in the longer-term future.)
I welcome commentary on (1), and am fairly confident of (2). The main part I want to focus on is (3).

Questions for further research

Is (3) (increasing human intelligence) a reliably good way of fulfilling the action proposed in (1) (put ourselves in a better position to solve problems)?

If so, which realms of intelligence amplification (say, out of collective intelligence, individual tools, psychological methods, and neural activity) hold the most promise? 5 



  1. See more on my strategy for navigating large open possibility spaces: Actions to Take When Approaching a Large Possibility Space.

    Question: Beyond gathering information, skills, and resources, are there other actions you can do to better put yourself in a position to answer the questions or accomplish the goals?

  2. Problems: most importantly, existential risks. See this introduction to existential risk reduction, and a more detailed description of 12 prominent existential risks.

    Opportunities: Among the opportunities that we get to pursue, I’m pretty jazzed about dancing in fully-immersive VR, developing better control over what genes do, figuring out more about where space came from, and the potential to explore mindspace while implemented in another substrate.

  3. See Engelbart’s framework for augmenting the human intellect, which I will expand in a future post. 
  4. Or, it could be that the gains in intelligence that we develop are not enough, but the intelligence-augmented humans can then develop a more advanced IA technology. This chain of successive upgrades could go on for a number of iterations, or could lead to recursive self-improvement of the initial generation of intelligence-augmented humans. In this case, we would be “equipping future humans to equip themselves to make more optimal decisions.”
  5. A final note: This blog aims to provide a more technical, actionable projects list towards “upgrading cognition”, so we can spur the necessary scientific and technological development while we still have time. This post is partially motivated by the urgency in Bostrom’s unusually poetic writing in “Utopia”:

    Upgrade cognition!

    Your brain’s special faculties: music, humor, spirituality, mathematics, eroticism, art, nurturing, narration, gossip!  These are fine spirits to pour into the cup of life.  Blessed you are if you have a vintage bottle of any of these.  Better yet, a cask!  Better yet, a vineyard!

    Be not afraid to grow.  The mind’s cellars have no ceilings!

    What other capacities are possible?  Imagine a world with all the music dried up: what poverty, what loss.  Give your thanks, not to the lyre, but to your ears for the music.  And ask yourself, what other harmonies are there in the air, that you lack the ears to hear?  What vaults of value are you witlessly debarred from, lacking the key sensibility?

    Had you but an inkling, your nails would be clawing at the padlock.

    Your brain must grow beyond any genius of humankind, in its special faculties as well as its general intelligence, so that you may better learn, remember, and understand, and so that you may apprehend your own beatitude.

    Oh, stupidity is a loathsome corral!  Gnaw and tug at the posts, and you will slowly loosen them up.  One day you’ll break the fence that held your forebears captive.  Gnaw and tug, redouble your effort!

Responsible Brainwave Technologies workshop notes


Last week, we joined the inaugural workshop for the Center for Responsible Brainwave Technologies.


Top row: John Chuang (U.C. Berkeley I-School), Brian Behlendorf (EFF), Stephen Frey (Cognitive Technology), Joel Murphy (OpenBCI), Conor Russomanno (OpenBCI), Dirk Rodenburg (CeReB), Yannick Roy (Neurogadget), Johnny Liu (Neurosky). Bottom row: Gary Wolf (Quantified Self), Ramses Alcaide (U. Michigan), Serena Purdy (CeReB), Ariel Garten (Interaxon), Ari Gross (CeReB), Francis X. Shen (U. Minnesota Shen Neurolaw Lab)

We did a “deep dive” into several of the ethical issues in the near-term development of brainwave technologies.

Right now, here are some of the most salient issues with consumer-grade brainwave technology:

Lack of consumer awareness

  • People overestimate the capability of the technology (“can this read my mind?”)
  • People underestimate the capability of the technology, and release their personal brainwave data without understanding how it could be used

Consent to collect brainwave data

  • Monitoring the cognitive and emotional states of employees in transportation, construction, office work, etc. without consent.
  • Monitoring the cognitive and emotional states of people with physical or mental disabilities

Unintended uses of public brainwave data

As described in CeReB’s 2014 white paper:

The situation is analogous to that of DNA data. Information – such as the presence or absence of a predisposition to a particular illness – that is currently captured in DNA data may not as yet be identified or exploited. It is entirely conceivable that the data now available will eventually provide a rich source of information about a broad range of genetically endowed potentialities and predispositions. Knowledge of some of these by third parties might be benign, while knowledge of others may provide third parties with the power to do harm through discrimination and/or unsolicited intervention. The same may well be true of brainwave data.
We believe that the data generated, stored, and used by brainwave technology must be handled in a manner that reflects its current and potential sensitivity, both as a personal “signature” and as a conveyor of information about an individual’s capacity or level of functioning.

Summary table from CeReB’s initial white paper (2014).

Considerations for technology developers

A notable number of the current problems in the table are (at least partially) countered by:

  • users understanding what the data is
  • users maintaining control over how their data is used

It’s quite nice when we can address problems from the technology side, rather than going through public policy. The Data Locker Project, MIT’s OpenPDS, and Samsung’s SAMI are some nice projects that are working to give users more control over how their data is used, which could be generally applied to biodata. Figshare is designed specifically for sharing data with researchers.

What’s next

As brainwave technology becomes more advanced, CeReB will examine the resulting social and ethical issues, and provide resources for those working at a technological or policy level.

If you are interested in participating in future publications with CeReB, contact Dirk Rodenburg.

Which components of human cognition to target first?


What are things that we can improve about a brain, that will help make sure that potential future intelligence-amplified humans are nice?

This post is an initial examination of mental components that we could target to improve the ethical reasoning of intelligence-amplified humans. It is a rough sketch and will probably be updated as I understand more.


Say that we are developing tools that can change and improve how different parts of the brain function.

Say that we also have a bit of time before these tools come around, so we can try to apply foresight to figure out which parts of the human brain we want to target for improvement first. 1

It would be useful to know if there are particular abilities that would be strategic to target first, and use this to guide the development of intelligence-amplifying neurotechnology.

Technologies that could do this include iterated embryo selection, gene therapy to increase neurogenesis or neuroplasticity, and an advanced brain-computer interface like neural dust (there may be others that are unknown to me or yet-to-be-imagined). (A future post will explore these in more detail).

Mental abilities that could be improved include:  

  • empathy
  • ethical reasoning or philosophical success
  • metacognition
  • intelligence / problem solving ability

“Ethical reasoning” requires some clarification. Assuming the development of IA technology, we want the augmented humans to make good decisions – to be aligned with human values, or the set of values that humans would have if they thought long and hard about it. 2

Let’s start with this assumption:

It is strongly preferable to increase a human’s moral reasoning, or “ability and commitment to develop good goals”, before – or at least at the same time as – increasing a human’s intelligence, or “ability to accomplish a set of goals”.

So, how do we accomplish this for a human?

Some hypotheses:

Increase empathy? It could be that a person who experiences a System-1 sense of empathy – feeling and relating to the other person – is more inclined to take the other person’s preferences into account. But it could also be that those preferences, even when well understood, only take up a fraction of the intelligence-amplified human’s values (an example of successful perspective-taking but low empathy). 3

Increase metacognition? By examining one’s own thought processes, noticing uncertainty, and developing a tight update loop between prediction and observation, one may be able to develop positive goals and make good decisions more quickly. 4

Perhaps moral reasoning will naturally increase with intelligence? It could also be the case that moral reasoning (in the sense of knowing what is best to do) will only be improved through intelligence. Morality is hard to figure out – it’s more complicated than “do what feels nice” – because human intuition isn’t equipped to deal with scope. Some progress in moral reasoning is made by high-IQ philosophers and economists, and if we had more of them working to figure out morality, we would have a better idea of what to do with augmented intelligence.
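The scope point can be made concrete with a toy model: suppose intuitive valuation grows roughly logarithmically with the number of beings affected, while the actual stakes grow linearly. The functional forms and constants below are hypothetical, chosen only to illustrate the divergence:

```python
import math

def intuitive_valuation(n_affected):
    # Toy scope-insensitive intuition: feels about the same whether
    # n is 2,000 or 200,000, because it scales with log(n).
    return 75 + 5 * math.log10(n_affected)

def linear_valuation(n_affected, value_per_unit=0.04):
    # If each affected being matters equally, value scales with n.
    return value_per_unit * n_affected

for n in (2_000, 20_000, 200_000):
    intuition = intuitive_valuation(n)  # ~91.5, ~96.5, ~101.5
    actual = linear_valuation(n)        # ~80, ~800, ~8000
```

A hundredfold increase in stakes barely moves the intuitive response in this sketch, which is the sense in which intuition alone is a poor guide at scale.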


Some followup questions:

  • What is the correlation between rationality and intelligence? If they are correlated, that would be convenient, but we’re not sure yet. 5
  • What are possible failure modes I haven’t considered here? 6
  • Perhaps “intelligence” is too broad a term. Human intelligence is composed of many interacting submodules. What are the most relevant submodules, and can we increase them independently of the others?
    • It could be the case that some component of intelligence is easiest to augment first, such as long-term memory. But this will probably not lead to improved ethical reasoning.


  1. This is similar to asking: how does the value-loading problem in AI change when applied to humans? I want to understand what is different about starting from a human brain – what opportunities we may have.
  2. Learning what to value is a problem in itself: see indirect normativity, and one approach to indirect normativity, CEV.
  3. More has been written about the neural bases of empathy here and here; the latter, incidentally, comes from the lab where I used to work.
  4. See and 
  5. See “Project guide: How IQ predicts metacognition and philosophical success” and Stanovich, Rationality and Intelligence.
  6. One way to find more failure modes for human augmentation would be to look into sci-fi novels. Not that we assign huge weight to these – for any given specific sci-fi scenario I’d assign a <1% probability of it happening as predicted – but they could give us classes of failure modes to think about.

Research Agenda, Round 1: an initial survey


You can view this post in the html text below, or the nicely-formatted pdf here: Research Agenda Version 1: Initial Survey.pdf

This post is a list of questions about various technologies that send information into or out of a brain, or change the way the brain processes information. By defining these questions, I hope to develop a better understanding of the large possibility space in modern brain technology. 1

Background Motivation

One potential way to increase human effectiveness would be to improve the functioning of certain parts of the human brain. 2 We could examine the input, processing, and output stages of the information flow, and look for ways to understand, improve, and extend each of those stages. Eventually, we may be able to create tools that improve the parts of the brain that make good decisions, solve tough problems, invent new ideas, understand moral reasoning, or experience empathy. If human brains became better at such mental abilities, I believe it would have positive ripple effects into many other areas of human activity.

Breadth-First Approach

This article examines many possible pathways to that target. The initial approach is to take an even survey of the potential tools to add to our toolbox. Our investigation will hold off on getting attached to specific solutions, or on discarding broad classes of solutions for lack of known specifics.

It’s organized in rough order of levels of information: perception, language, motion, physiology, and cellular biology; then macro- and micro-circuit systems in the brain. 3 Finally, there is an initial list of items to be understood in ethics and strategy.

Giving shape to a possibility space

A larger goal in this document is to provide a starting framework for researching technologies at multiple levels. I think that answers to these questions will help us start to navigate and define the possibility space in brain technology.

The questions and categories in this post don’t form a complete survey – they’re the ones that are fruitful for conversation, conservative enough to post on the internet, and known to me at the time of writing. 4

Over time, I may return to this post and continue to add more specific questions or categories.




Information – Software [Input + Output]

What are core design principles for software that harnesses group intelligence?

  • Prediction markets
  • Large-scale citizen science, e.g. Eyewire, Foldit
  • Wikis/forums
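One concrete design principle behind prediction-market software is the automated market maker, such as Hanson’s logarithmic market scoring rule (LMSR), which lets the market always quote a price (a probability) even with few traders. A minimal sketch – the liquidity parameter b and all quantities are illustrative:

```python
import math

def lmsr_cost(quantities, b=100.0):
    # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b)).
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    # Instantaneous price of each outcome; prices sum to 1 and can be
    # read as the market's current probability estimate.
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def trade_cost(quantities, outcome, shares, b=100.0):
    # A trader pays the difference in the cost function to buy shares.
    after = list(quantities)
    after[outcome] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)

# A fresh two-outcome market starts at even odds, and buying shares of
# outcome 0 pushes its price (probability estimate) above 0.5.
market = [0.0, 0.0]
paid = trade_cost(market, outcome=0, shares=50.0)
market[0] += 50.0
```

A nice property for a subsidizer: the market maker’s worst-case loss is bounded (b·ln n for n outcomes), so the cost of eliciting the crowd’s probability estimate is capped up front.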

Perception [Input]

“Immersive Media” = Virtual Reality + Gesture Tracking + Haptic Feedback

Virtual Reality

  • Are there good examples (prototypes, concept sketches, or from science fiction) of using Virtual Reality or Augmented Reality for the following:
    • Information visualization
    • Data analysis
    • Life and physical science education (biology, chemistry, physics)
    • Math education (linear algebra with actual 4D vectors)
    • Medical operations
    • Clinical psychology
    • Rationality training / real world cognitive bias or puzzle solving exercises
    • Formal research experiments in social psychology or perception
  • Are there efforts to combine avatar control with natural language processing and generation, to create a platform for artificially intelligent character avatars? This could be a service/engine for building many kinds of games / applications.


  • Timelines for contact lenses, optical projection.

Natural Language Understanding and Generation [ Input + Output ]


  • Given a long piece of text, ability to generate a natural-sounding summary of most important ideas of the text.
  • Given a psychographic profile, ability to generate a simple story from the perspective of the character.
  • Upcoming milestones.

Motion Sensors [Output]

  • Gesture Tracking: Kinect, Leap, etc.
  • Worn on body: Myo

Motion Actuators [Output]

  • After the vision problem is refined/solved (Rift C1?), haptic feedback will be the bottleneck to immersive VR.
    • Alternative/creative approaches to the glove or suit? Air compression, sound waves, nanomaterials?
  • Have there been studies on using haptic feedback for mood regulation, neuroplastic training in healthy adults to develop extra senses, or just information “data sensualization”?


External Bio Sensors [Output]

  • Wearables – ECG, Respiration, etc.
    • Low information bandwidth, high amount of maker activity already.

Question: Biosensors

Is it the case that (1) combinations of today’s external sensors (EEG, ECG…) along with virtual reality/haptics can be used in radically different ways? Or is it the case that (2) their applications are confined to ‘meditation / neurofeedback / focus training’, and more advanced types of applications must wait for smaller BioMEMS or implantables? Right now, (2) seems more likely, given the number of people exploring vs. the number of genuinely new potential applications.

Question: Parasympathetic Nervous System – Regulation

What studies show the benefits to cognition of regulating physiology (as can be done with current biosensors)? Can this actually help people focus better? What is the highest recorded percent increase in concentration, creative problem solving, or related metrics in healthy adults, using biofeedback?


Bio-MEMS [Input + Output]


  • Can BioMEMS also act as actuators/controllers/builders (or are they mostly sensors?)
  • Bioengineering [Input, Processing, Output]

    • What types of genes / how many genes are addressable with modern gene therapy?
    • What kinds of neural tissues have had success with stem cell therapy?
    • Exploratory engineering: the hippocampus continuously generates new cells (neurogenesis). Could an increase in the rate of hippocampal neurogenesis influence its higher-level performance (say, spatial learning)? An initial study shows the brain is resilient to decreased neurogenesis, but the door remains open to experiments that increase neurogenesis.

Synthetic Bio [Input + Output]


Chemicals [Processing]

(There are a number of chemicals that affect mood and mental state, more and less common. I do not necessarily believe they should be used, but find it useful to understand the principles behind their effects.)

  • Are there studies on the combination of chemical stimulants with macro-scale stimulation, e.g. TMS?
  • What about with immersive media, virtual reality, video games, group therapy circles, CBT, or other high-level psychological interventions?

The following sections are organized according to the general types of neuroengineering technologies in Ed Boyden’s MIT class.

Brain – Macro Circuit Reading [Output]

Noninvasive mapping and measurement.

  • PET, photoacoustic, MEG, EEG, fMRI, infrared imaging, x-rays.

Brain – Macro Circuit Stimulation [Input, Processing ]

Macrocircuit control.

  • Magnetic, electrical, ultrasonic, chemical, pharmacological/pharmacogenetic, thermal.

Brain – Micro Circuit Reading [Output]

Development of invasive mapping and measurement.

  • Electrodes
  • nanoprobes, nanoparticles
  • optical imaging and optical microscopy
  • endoscopy
  • multiphoton microscopy, electron microscopy, light scattering
  • bioluminescence

Brain – Micro Circuit Stimulation [Input, Processing ]

Development of microcircuit control.

  • DBS, infrared optical stimulation, optogenetics
  • nanoparticle-mediated control, uncaging
  • signaling control

Ethics and Strategy

  • What is an appropriate target demographic for different levels of brain technology?
    • For discussions specific to cognitive enhancement, this book (Cognitive Enhancement, Hildt and Franke, 2013) offers an excellent, detailed discussion on the ethics of cognitive enhancement from multiple views. The introductory chapter offers an overview discussion.
  • Examine the relationship between neuroscience, intelligence amplification, and artificial intelligence safety.
    • Likelihood of neuro research to contribute to neuromorphic AI (seems likely).
    • Likelihood of various fields in neuroscience to lead to amplification of various forms of intelligence.
      • Opportunities to bolster moral reasoning / empathy in parallel with or before other forms of intelligence. (This would become very important as the strength of the intelligence amplification (IA) technology increases).
      • Amount of overlap between research contributing to intelligence amplification and research contributing to neuromorphic AI (some research areas may be completely separate and safer to pursue).
    • Likelihood of intelligence amplification to lead to improvements in AI safety (seems unlikely by itself, better chance when combined with improved moral reasoning / rationality).
    • Are there feasible ways to make IA tools available only to select research scientists (such as those advancing technology safety)?
      • Advancing activity in all fields in science and technology equally could have a neutral or negative effect, because of the high risks from some emerging technologies.
    • Overall benefits or costs of IA neuro research.
    • See also: Luke Muehlhauser on Intelligence Amplification and Friendly AI
  • Estimating the actual value of technological development, and the replaceability of a particular project.
    • If one desires to make a large social impact, one must take into account the expected value of making particular technologies, when (1) very similar things could be made by others a few years down the road, and/or (2) their functionality may eventually be replaced by more advanced technologies. (Example: creating wearables now vs. personally working on BioMEMS now vs. waiting for BioMEMS to arrive while doing something else.)
      • Consideration: The value of the project is the value of having the information or use of the tool sooner than we would have otherwise. 5
        • However, counterfactuals (and relative impact) are hard or impossible to compute well.
        • There may be some arguments for why this is not a well-founded concern, or even if it is well-founded, that it may not be practical to give it a lot of weight. For now, I believe this consideration does matter when determining what to prioritize.
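The “value of having the tool sooner” consideration can be sketched as a toy calculation: if a project delivers some value each year, and someone else would have built it a few years later anyway, your counterfactual impact is roughly the value delivered during that gap (optionally time-discounted). This is a sketch, not a real cost-effectiveness model, and every parameter below is made up for illustration:

```python
def counterfactual_value(annual_value, gap_years, discount_rate=0.0):
    # Sum the (discounted) value delivered during the years before the
    # tool would have existed anyway.
    return sum(
        annual_value / (1 + discount_rate) ** t
        for t in range(gap_years)
    )

# Building a tool now that others would ship 3 years later, at a
# hypothetical 10 units of value per year:
impact = counterfactual_value(annual_value=10.0, gap_years=3)
# With a discount rate, the same head start is worth somewhat less:
impact_discounted = counterfactual_value(10.0, 3, discount_rate=0.10)
```

The model makes the qualitative point in the bullet above explicit: as the replaceability gap shrinks toward zero, the counterfactual value of building the tool yourself shrinks with it, whatever the tool’s absolute usefulness.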


Notes 6

  1. If you are interested in answering some of these questions, and are meticulous enough to read footnotes, you might be an excellent person to write or coauthor future posts on this blog. Drop me a line if that sounds interesting to you. 🙂
  2. It is my working hypothesis that strategic implementation of technologies that improve brain function would make humans more effective at the activities that matter (and would otherwise have a net positive effect), but this is not guaranteed. The second section on “Ethics and Strategy” offers some initial reasons this might not be true.
  3. A common ordering of levels in neuroscience is Cognitive > Systems > Cellular > Molecular.
  4. Disclaimer: I will acknowledge some potential reasons not to publish a blog post like this one. A detailed discussion about creating certain brain technologies could pose an information hazard (specifically, idea or attention hazards). Another potential pitfall is that it might distract myself or other people from more important activities that we could be doing otherwise (opportunity cost). Because the topics are relatively well-known and the blog has little social momentum, these disclaimers don’t concern me for now, but they may be revisited in the future.
  5. This view is described in the first few pages of Chapter 15 of Nick Bostrom’s “Superintelligence.”
  6. Thanks to Madeeha Ghori for helpful feedback on this post.