Category Archives: Strategy

Equipping future humans to solve future problems: support for intelligence amplification



Support for intelligence amplification

Why is it important to find ways to improve humans’ ability to learn, think, and perform ethical reasoning?

Here is a supporting argument for intelligence amplification that I’ll call “Increasing the number of future decisions that are made by enhanced decision-makers”. I haven’t seen it formalized before, so I’ve written it out here for reference. I find it appealing, but I might be missing important perspectives that would suggest otherwise. What do you think?

Increasing the number of future decisions that are made by enhanced decision-makers

It could be the case that human brains, no matter how exceptional, are not complex enough to figure out the best long-term course for our species. Technological power may well develop faster than our ability to ensure its safety.

One strategy we can follow is to develop further insight about what to value (i.e. through cause prioritization research), and build our social and technical capacity to pursue those values as we discover them. One way to build social and technical capacity is through improving human cognition.

As some humans develop more augmented cognition, they will hold a much better position to steer technological development in a positive direction. Therefore, one of the best investments we can make now is to augment cognition as fast as possible – increasing the number of future decisions that are made by enhanced decision-makers.

There are several steps:

(1)

As a general strategic consideration, if you aren’t in a good position to answer a question or accomplish a goal, the next best move is to put yourself in a position where you are much more likely to be able to answer the question or accomplish the goal.

You can do this by getting more information and a better definition of the problem, and building up resources or abilities that lead to answering it. 1

(2)

The future contains overwhelming problems and enormous opportunities. Like it or not, we are involved in a high-stakes game with the future of earth-originating life. There is no guarantee that the steps to solve the problems or reach those opportunities will be calibrated to our species’ current level of ability. In fact, some seem extremely difficult and beyond our current level of skill. 2 With so much hanging in the balance, we really, really need to radically improve humanity’s capacity to solve problems. 

(3)

One way to put ourselves in a better position is to increase human cognition. In this context, I intend “cognition” to refer to a very broad set of things: “the set of information processing systems that we can use to solve problems and pursue opportunities.”3

Some avenues for increasing human intelligence are collective intelligence (e.g. organizations, incentive structures, discussion platforms), individual tools (e.g. mindmapping), psychological methods (e.g. education and rationality training), and neural activity (e.g. neurotechnology).

As some humans develop more augmented cognition, they will occupy a much better position to steer the future in a positive direction. 4

=>

Therefore, one of the best investments we can make now is to augment cognition as fast as possible – increasing the number of future decisions that are made by enhanced decision-makers. (This does not mean we ought to put off working on those important problems now in the hope that we can completely “pass the workload” to future augmented humans. Nor is it a claim that, if we reach extraordinary success at intelligence amplification, other problems will be immediately solved. It is merely to support intelligence amplification as one of many possible valuable investments in the longer-term future.)
I welcome commentary on (1), and am fairly confident of (2). The main part I want to focus on is (3).

Questions for further research

Is (3) (increasing human intelligence) a reliably good way of fulfilling the action proposed in (1) (put ourselves in a better position to solve problems)?

If so, which realms of intelligence amplification (say, out of collective intelligence, individual tools, psychological methods, and neural activity) hold the most promise? 5 


 

Notes

  1. See more on my strategy for navigating large open possibility spaces: Actions to Take When Approaching a Large Possibility Space.

    Question: Beyond gathering information, skills, and resources, are there other actions you can take to better position yourself to answer the questions or accomplish the goals?

  2. Problems: most importantly, existential risks. See this introduction to existential risk reduction, and a more detailed description of 12 prominent existential risks.

    Opportunities: Among the opportunities that we get to pursue, I’m pretty jazzed about dancing in fully-immersive VR, developing better control over what genes do, figuring out more about where space came from, and the potential to explore mindspace while implemented in another substrate.

  3. See Engelbart’s framework for augmenting the human intellect, which I will expand in a future post. 
  4. Or, it could be that the gains in intelligence that we develop are not enough, but the intelligence-augmented humans can then develop a more advanced IA technology. This chain of successive upgrades could go on for a number of iterations, or could lead to recursive self-improvement of the initial intelligence-augmented humans. In this case, we would be “Equipping future humans to equip themselves to make more optimal decisions.”
  5. A final note: This blog aims to provide a more technical, actionable projects list towards “upgrading cognition”, so we can spur the necessary scientific and technological development while we still have time. This post is partially motivated by the urgency in Bostrom’s unusually poetic writing in “Utopia”:

    Upgrade cognition!

    Your brain’s special faculties: music, humor, spirituality, mathematics, eroticism, art, nurturing, narration, gossip!  These are fine spirits to pour into the cup of life.  Blessed you are if you have a vintage bottle of any of these.  Better yet, a cask!  Better yet, a vineyard!

    Be not afraid to grow.  The mind’s cellars have no ceilings!

    What other capacities are possible?  Imagine a world with all the music dried up: what poverty, what loss.  Give your thanks, not to the lyre, but to your ears for the music.  And ask yourself, what other harmonies are there in the air, that you lack the ears to hear?  What vaults of value are you witlessly debarred from, lacking the key sensibility?

    Had you but an inkling, your nails would be clawing at the padlock.

    Your brain must grow beyond any genius of humankind, in its special faculties as well as its general intelligence, so that you may better learn, remember, and understand, and so that you may apprehend your own beatitude.

    Oh, stupidity is a loathsome corral!  Gnaw and tug at the posts, and you will slowly loosen them up.  One day you’ll break the fence that held your forebears captive.  Gnaw and tug, redouble your effort!

Neurotechnology forecasting: stepping stones in financial and technical viability



Context: I’m working to better understand how neurotechnology will develop in the upcoming decades. This post outlines a potential pathway where neurotechnology is developed first for medical purposes, shortly followed by cognitive enhancement. 1

An Initial Assumption: Eventually, neurotechnology (say, from neural implants or neural stem cells) will be developed enough to restore people with Alzheimer’s disease to normal-level memory capacity (not necessarily restoring older memories, just retaining new ones). We can make this estimate based on current knowledge about:
– incentives: the current economic incentive for research to treat Alzheimer’s
– knowledge: current scientific knowledge
– base cases: success of initial research
– players: current organizations and people who are working to further develop this research

(I would guess the timeline for this is somewhere around 20 years, although the next claim does not depend on that number.)

Medical Applications, then Enhancement Applications:  Once the above-mentioned medical neurotechnology is developed:
– the remaining technical hurdles to turn it into a cognitive-enhancing product will be significantly smaller, and
– the social and economic incentives to do so will be large enough,
such that it is very likely the technical hurdles will be overcome to adapt it into a cognitive-enhancing neurotechnology. 2

(The timeline for the step from medical to cognitive-enhancing applications is somewhere around 2-10 years. The general prediction is that the medical->consumer step will be shorter than the research->medical step).

So it is very likely that the development of medical-grade neurotechnology will be followed shortly after by consumer-grade enhancement neurotechnology.
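The two timeline claims above can be combined into a toy Monte Carlo sketch. Everything here is illustrative: the distributions (a rough ~20-year spread for the research->medical step, a uniform 2-10 years for the medical->consumer step) are my assumptions, not data.

```python
import random

random.seed(0)

def sample_arrival_years(n=100_000):
    """Sample total years until consumer-grade enhancement neurotech,
    using the post's rough numbers as assumed distributions."""
    totals = []
    for _ in range(n):
        # research -> medical: centered on ~20 years (assumed spread of 5)
        medical = max(random.gauss(20, 5), 0)
        # medical -> consumer: the post's 2-10 year range, taken as uniform
        consumer_step = random.uniform(2, 10)
        totals.append(medical + consumer_step)
    return sorted(totals)

totals = sample_arrival_years()
median = totals[len(totals) // 2]
p10, p90 = totals[len(totals) // 10], totals[9 * len(totals) // 10]
print(f"median: {median:.1f} yr, 80% interval: {p10:.1f}-{p90:.1f} yr")
```

Making the assumptions explicit like this at least turns the forecast into something that can be argued with number by number.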

Can anyone poke holes in my example, or suggest ways to *test* this hypothesis? How can we develop better methods for making this type of tech forecasting claim?


 

Notes

  1. This post tries to model “stepping stones” in technological progress. I’m guessing there is some kind of model that would predict that once Technology X is developed, Technology Y is very likely to also be developed within N years. I’m still learning about methods in technology forecasting, so this approach may not actually work. Let me know if you can tell me why.
  2. “Cognitive-enhancing” isn’t an all-or-nothing state. This is more a claim that cognitive-enhancing neurotechnology significantly more advanced than today’s will probably not be developed through a direct effort, because social and economic incentives for strong cognitive enhancement are not yet large enough. It seems more likely to be developed indirectly, first passing through the medical research route.

Which components of human cognition to target first?



What are things that we can improve about a brain, that will help make sure that potential future intelligence-amplified humans are nice?

This post is an initial examination of mental components that we could target to improve the ethical reasoning of intelligence-amplified humans. It is a rough sketch and will probably be updated as I understand more.


 

Say that we are developing tools that can change and improve how different parts of the brain function.

Say that we also have a bit of time before these tools come around, so we can try to apply foresight to figure out which parts of the human brain we want to target for improvement first. 1

It would be useful to know if there are particular abilities that would be strategic to target first, and use this to guide the development of intelligence-amplifying neurotechnology.

Technologies that could do this include iterated embryo selection, gene therapy to increase neurogenesis or neuroplasticity, and an advanced brain-computer interface like neural dust (there may be others that are unknown to me or yet-to-be-imagined). (A future post will explore these in more detail).

Mental abilities that could be improved include:  

  • empathy
  • ethical reasoning or philosophical success
  • metacognition
  • intelligence / problem solving ability

“Ethical reasoning” requires some clarification. Assuming the development of IA technology, we want the augmented humans to make good decisions – to be aligned with human values, or the set of values that humans would have if they thought long and hard about it. 2

Let’s start with this assumption:

It is strongly preferred to increase a human’s moral reasoning, or “ability and commitment to develop good goals”, before or at least at the same time as increasing a human’s intelligence, or “ability to accomplish a set of goals”.

So, how do we accomplish this for a human?

Some hypotheses:

Increase empathy? It could be that by experiencing a System-1 sense of empathy – feeling and relating to the other person – an augmented human is more inclined to take into account the preferences of other people. But it could be that those preferences, even when well understood, only take up a fraction of the intelligence-amplified human’s values (an example of successful perspective-taking but low empathy). 3

Increase metacognition? By examining one’s own thought processes, noticing uncertainty, developing a tight update loop between prediction and observation, one may more quickly be able to develop positive goals and make good decisions. 4

Perhaps moral reasoning will naturally increase with intelligence? It could also be the case that moral reasoning (in the sense of knowing what is best to do) will only be improved through intelligence. Morality is hard to figure out; it’s more complicated than “do what feels nice”, because human intuition isn’t equipped to deal with scope. Some progress in moral reasoning is made by high-IQ philosophers and economists, and if we had more of the same working to figure out morality, we would have a better idea of what to do with augmented intelligence.

 

Some followup questions:

  • What is the correlation between rationality and intelligence?  If it is true that rationality and intelligence are correlated, then that’s great, but we’re not sure yet. 5
  • What are possible failure modes I haven’t considered here? 6
  • Perhaps “intelligence” is too broad a term. Human intelligence is composed of many interacting submodules. What are the most relevant submodules, and can we increase them independently of the others?
    • It could be the case that some component of intelligence is easiest to augment first, such as long-term memory. But this will probably not lead to improved ethical reasoning.

Notes

  1. This is similar to asking: how does the value-loading problem in AI change when applied to humans? I want to understand what is different about starting from a human brain – what opportunities we may have.
  2. Learning what to value is a problem in itself: see indirect normativity  (https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/) and one approach to indirect normativity, CEV ( http://intelligence.org/files/CEV.pdf ).
  3. More has been written about the neural bases of empathy here: http://ssnl.stanford.edu/sites/default/files/pdf/zaki2012_neuroscienceEmpathy.pdf  and here: http://ssnl.stanford.edu/sites/default/files/pdf/zaki2014_motivatedEmpathy.pdf. This incidentally comes from the lab where I used to work.
  4. See http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3318765/ and http://en.wikipedia.org/wiki/Metacognition#Components 
  5. See Project guide: How IQ predicts metacognition and philosophical success and Stanovich – Rationality and Intelligence
  6. One way to find more failure modes for human augmentation would be to look into sci fi novels. Not that we assign huge weight to these – for any given specific sci fi scenario I’d assign a <1% probability to it happening as predicted, but could give us classes of failure modes to think about.

Research Agenda, Round 1: an initial survey



You can view this post in the html text below, or the nicely-formatted pdf here: Research Agenda Version 1: Initial Survey.pdf

This post is a list of questions about various technologies that send information into or out of a brain, or change the way the brain processes information. By defining these questions, I hope to develop a better understanding of the large possibility space in modern brain technology. 1

Background Motivation

One potential way to increase human effectiveness would be to improve the functioning of certain parts of the human brain. 2 We could examine the input, processing, and output stages of the information flow, and look for ways to understand, improve, and extend each of those stages. Eventually, we may be able to create tools that improve the parts of the brain that make good decisions, solve tough problems, invent new ideas, understand moral reasoning, or experience empathy. If human brains became better at such mental abilities, I believe it would have positive ripple effects into many other areas of human activity.

 Breadth-First Approach

This article examines many possible pathways to that target. The initial approach is to take an even survey of the potential tools to add to our toolbox. Our investigation will hold off on getting attached to specific solutions, or discarding broad classes of solutions for lack of known specifics.

It’s organized in rough order of levels of information: perception, language, motion, physiology, cellular biology; then macro- and micro-circuit systems in the brain. 3 Finally, there is an initial list of items to be understood in ethics and strategy.

Giving shape to a possibility space

A larger goal in this document is to provide a starting framework for researching technologies at multiple levels. I think that answers to these questions will help us start to navigate and define the possibility space in  brain technology.

The questions and categories in this post don’t form a complete survey; they’re the ones that are fruitful for conversation, conservative enough to post on the internet, and known to me at the time of writing. 4

Over time, I may return to this post and continue to add more specific questions or categories.

Enjoy!


Technologies

MEDIA

Information- Software [Input + Output]

What are core design principles for software that harnesses group intelligence?

  •         Prediction Markets
  •         Large-scale citizen science i.e. Eyewire, Foldit
  •         Wikis/Forums
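As one concrete design principle behind the first item, prediction markets typically run on an automated market maker. Below is a minimal sketch of Hanson’s logarithmic market scoring rule (LMSR) for a binary question; the liquidity parameter b = 100 is an arbitrary illustrative choice, not a recommendation.

```python
import math

def lmsr_cost(q_yes, q_no, b=100.0):
    """LMSR cost function: C(q) = b * ln(exp(q_yes/b) + exp(q_no/b))."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price_yes(q_yes, q_no, b=100.0):
    """Instantaneous YES price = the market's implied probability."""
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

def buy_yes(q_yes, q_no, shares, b=100.0):
    """What a trader pays to buy `shares` YES shares at state (q_yes, q_no)."""
    return lmsr_cost(q_yes + shares, q_no, b) - lmsr_cost(q_yes, q_no, b)

# An empty market starts at probability 0.5; buying YES pushes the price up.
print(price_yes(0, 0))  # 0.5
cost = buy_yes(0, 0, 50)
print(round(cost, 2), round(price_yes(50, 0), 3))
```

The nice property for group intelligence: the market maker always quotes a price, so information gets aggregated even with few traders, and the worst-case subsidy is bounded by b * ln(2).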

Perception [Input]

“Immersive Media” = Virtual Reality + Gesture Tracking + Haptic Feedback

Virtual Reality

  • Are there good examples (prototypes, concept sketches, or from science fiction) of using Virtual Reality or Augmented Reality for the following:
    • Information visualization
    • Data analysis
    • Life and physical science education (biology, chemistry, physics)
    • Math education (linear algebra with actual 4D vectors)
    • Medical operations
    • Clinical psychology
    • Rationality training / real world cognitive bias or puzzle solving exercises
    • Formal research experiments in social psychology or perception
  • Are there efforts to combine avatar control with natural language processing and generation, to create a platform for artificially intelligent character avatars? This could be a service/engine for building many kinds of games / applications.

Hardware

  • Timelines for contact lenses, optical projection.

Natural Language Understanding and Generation [ Input + Output ]

Timelines.

  • Given a long piece of text, ability to generate a natural-sounding summary of most important ideas of the text.
  • Given a psychographic profile, ability to generate a simple story from the perspective of the character.
  • Upcoming milestones.

Motion Sensors [Output]

  • Gesture Tracking: Kinect, Leap, etc.
  • Worn on body: Myo

Motion Actuators [Output]

  • After refining/solving the vision problem (Rift C1?), haptic feedback will be the bottleneck to immersive VR.
    • Alternative/creative approaches beyond a glove or suit? Air compression, sound waves, nanomaterials?
  • Have there been studies on using haptic feedback for mood regulation, neuroplastic training in healthy adults to develop extra senses, or just information “data sensualization”?

 BODY – EXTERNAL

External Bio Sensors [Output]

  • Wearables – ECG, Respiration, etc.
    • Low information bandwidth, high amount of maker activity already.

Question: Biosensors:

Is it the case that (1) combinations of today’s external sensors (EEG, ECG…) along with Virtual Reality/haptics can be used in radically different ways? Or is it the case that (2) their applications are confined to ‘meditation / neurofeedback / focus training’, and more advanced types of applications must wait for smaller BioMEMS or implantables? Right now, (2) seems more likely given the number of people exploring versus the number of genuinely new potential applications.

Question: Parasympathetic Nervous System – Regulation

What studies show the benefits of moderating physiology on cognition (as can be done with current biosensors)? Can this actually help people focus better? What is the highest recorded percent increase in concentration, creativity, problem solving, or related metrics in healthy adults, using biofeedback?


BODY – INTERNAL

Bio-MEMS [Input + Output]


  • Can BioMEMS also act as actuators/controllers/builders (or are they mostly sensors)?

Bioengineering [Input, Processing, Output]

  • What types of genes / how many genes are addressable with modern gene therapy?
  • What kinds of neural tissues have had success with stem cell therapy?
  • Exploratory engineering: the hippocampus continuously generates new cells (neurogenesis). Could an increase in the rate of hippocampal neurogenesis influence its higher-level performance (say, spatial learning)? An initial study shows the brain is resilient to decreased neurogenesis, but the door remains open to experiments that increase neurogenesis.

Synthetic Bio [Input + Output]


 BRAIN

Chemicals [Processing]

(There are a number of chemicals that affect mood and mental state, more and less common. I do not necessarily believe they should be used, but find it useful to understand the principles behind their effects.)

  • Are there studies on the combination of chemical stimulants with macro-scale stimulation, i.e. TMS?
  • What about with immersive media, virtual reality, video games, group therapy circles, CBT, or other high-level psychological interventions?

The following sections are organized according to the general types of neuroengineering technologies in Ed Boyden’s MIT class.

Brain – Macro Circuit Reading [Output]

Noninvasive mapping and measurement.

  • PET, photoacoustic, MEG, EEG, fMRI, infrared imaging, x-rays.

Brain – Macro Circuit Stimulation [Input, Processing ]

Macrocircuit control.

  • Magnetic, electrical, ultrasonic, chemical, pharmacological/pharmacogenetic, thermal.

Brain – Micro Circuit Reading [Output]

Development of invasive mapping and measurement.

  • Electrodes
  • nanoprobes, nanoparticles
  • optical imaging and optical microscopy
  • endoscopy,
  • multiphoton microscopy, electron microscopy, light scattering,
  • bioluminescence,

Brain – Micro Circuit Stimulation [Input, Processing ]

Development of microcircuit control.

  • DBS, infrared optical stimulation, optogenetics,
  • nanoparticle-mediated control, uncaging
  • signaling control.

Ethics and Strategy

  • What is an appropriate target demographic for different levels of brain technology?
    • For discussions specific to cognitive enhancement, this book (Cognitive Enhancement, Hildt and Franke, 2013) offers an excellent, detailed discussion on the ethics of cognitive enhancement from multiple views. The introductory chapter offers an overview discussion.
  • Examine the relationship between neuroscience, intelligence amplification, and artificial intelligence safety.
    • Likelihood of neuro research to contribute to neuromorphic AI (seems likely).
    • Likelihood of various fields in neuroscience to lead to amplification of various forms of intelligence.
      • Opportunities to bolster moral reasoning / empathy in parallel with or before other forms of intelligence. (This would become very important as the strength of the intelligence amplification (IA) technology increases).
      • Amount of overlap between research contributing to intelligence amplification and research contributing to neuromorphic AI (some research areas may be completely separate and safer to pursue).
    • Likelihood of intelligence amplification to lead to improvements in AI safety (seems unlikely by itself, better chance when combined with improved moral reasoning / rationality).
    • Are there feasible ways to make IA tools available only to select research scientists (such as those advancing technology safety)?
      • Advancing activity in all fields in science and technology equally could have a neutral or negative effect, because of the high risks from some emerging technologies.
    • Overall benefits or costs of IA neuro research.
    • See also: Luke Muehlhauser on Intelligence Amplification and Friendly AI
  • Estimating the actual value of technological development, and the replaceability of a particular project.
    • If one desires to make a large social impact, they must take into account the expected value of making particular technologies, when (1) very similar things could be made by others a few years down the road, and/or (2) their functionality may eventually be replaced by more advanced technologies. (Example: creating wearables now vs. personally working on BioMEMS now vs. waiting for BioMEMS to arrive while doing something else.)
      • Consideration: The value of the project is the value of having the information or use of the tool sooner than we would have otherwise. 5
        • However, counterfactuals (and relative impact) are hard or impossible to compute well.
        • There may be some arguments for why this is not a well-founded concern, or even if it is well-founded, that it may not be practical to give it a lot of weight. For now, I believe this consideration does matter when determining what to prioritize.

 


Notes 6

  1. If you are interested in answering some of these questions, and are meticulous enough to read footnotes, you might be an excellent person to write or coauthor future posts on this blog. Drop me a line if that sounds interesting to you. 🙂
  2. It is my working hypothesis that strategic implementation of technologies that improve brain function would make humans more effective at the activities that matter (and would otherwise have a net positive effect), but this is not guaranteed. The second section on “Ethics and Strategy” offers some initial reasons this might not be true.
  3. A common ordering of levels in neuroscience is Cognitive > Systems > Cellular > Molecular.
  4. Disclaimer: I will acknowledge some potential reasons not to publish a blog post like this one. A detailed discussion about creating certain brain technologies could pose an information hazard (specifically, idea or attention hazards). Another potential pitfall is that it might distract myself or other people from more important activities that we could be doing otherwise (opportunity cost). Because the topics are relatively well-known and the blog has little social momentum, these disclaimers don’t concern me for now, but they may be revisited in the future.
  5. This view is described in the first few pages of Chapter 15 of Nick Bostrom’s “Superintelligence.”
  6. Thanks to Madeeha Ghori for helpful feedback on this post.

Pathfinding: how to navigate open-ended possibilities



Context

Our group currently faces a large possibility space.  We’re excited by the completion of the Exploratorium project. There’s an awesome network of people, and interesting projects that could come next. Before jumping ahead, I want to explore (and get feedback on) a few general strategies that a group can take to navigate a large possibility space. This post is less about our group in particular, and more about general strategy for making decisions. Later, we’ll discuss how to apply this approach to the cognitive technology field.

Intro

The early beginnings of most creative activities face a large possibility space, with many open-ended options. You might be looking for a good career, choosing a college major, designing a product for a start-up, or planning a new research project. You may also have very complex goals, such as trying to ensure humanity survives the next century, increasing the amount of information that can flow in and out of a brain, or being able to interpret the subtleties of the human genome.

There are many possible sub-goals, design decisions, and activities that could be explored, perhaps within some larger constraints.

When faced with many possibilities, a common failure mode is decision paralysis. Actual progress grinds to a halt. Mental resources are spent on “waiting to find more information” and “waiting for the pieces to crystallize in my head a bit more.”

Gathering information and letting ideas crystallize are important and necessary. The next step is to push them forward intentionally, rather than waiting for them to happen.

Eventually, you aim to reduce the space to more discrete options, where you are presented with path A, B, or C and can compare values between them. Once the problem is formalized, you can apply decision theory to decide between the options. In the large possibility space, we’re still defining what the options are.
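Once the space has been reduced to discrete options A, B, and C, the decision-theory step is just an expected-value comparison. A toy sketch follows; every probability and payoff is invented purely for illustration:

```python
# Each option maps to (probability, payoff) pairs over its possible outcomes.
# All numbers here are made up for illustration.
options = {
    "A": [(0.6, 10), (0.4, -2)],
    "B": [(0.9, 4), (0.1, 0)],
    "C": [(0.2, 30), (0.8, -2)],
}

def expected_value(outcomes):
    """Sum of probability-weighted payoffs for one option."""
    return sum(p * v for p, v in outcomes)

# Rank the options from best to worst expected value.
ranked = sorted(options, key=lambda k: expected_value(options[k]),
                reverse=True)
best = ranked[0]
print(best, expected_value(options[best]))
```

The hard part, of course, is everything before this snippet: defining the options and estimating the numbers. The arithmetic at the end is trivial once the space is well-defined, which is the point of the pathfinding work.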

I’ll start by looking at three (of many possible) general actions:

 1) Develop a better understanding of the possibility space.
  • A better definition of the game could come from:
    • more information about the opportunities and constraints that you face
    • a better understanding of your own possible moves and actions
2) Build up resources that will still be useful in most relevant future projects.
  • Resources could include social or technical skills, a social network, money, a portfolio of completed projects, etc.
3) Increase your intelligence and ability to solve problems. 
  • One way to approach this is to improve your tools/algorithms, and increase your speed/experience at using your set of tools/algorithms.

Pathfinding: Actions to Take When Approaching a Large Possibility Space (2016). Click the image to expand to full size. You can view, share, and modify the original XMind file for this flowchart here.

Chipping Away At Uncertainty

Question generation is especially interesting, so let’s explore that for a minute.

The general goal is to find questions that will give shape to the space of possibilities, and cast light on the constraints. The mental motion is: “Come up with a list of fairly specific questions, where if I knew the answer to those questions, I would have a clearer idea of what a better strategy would be.”

In other words, notice where there is uncertainty about your task or the task space. Come up with a list of specific questions that, when answered correctly, will make it better understood.

Example Questions:

  • What is the best possible outcome – what would be a “home run”?
  • If you accomplished nothing else, what is the most important outcome at the end of this project?
  • Are there metrics you can use that approximate the success of the project?
  • What is the worst possible thing that can happen?
  • What are the constraints? What directions can we definitely rule out?
  • What are the fundamental units of the system? What are they made out of? What is the lowest level at which we can work?
  • What activities would multiply / enable / build a springboard for projects in the future?
  • Has someone tried approaching this kind of space before? Are there concrete examples of success or failure? (See reference class forecasting).
  • Are there well-defined processes that could automate part of the plan?
  • What hasn’t been done before, that ought to be?
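The criterion “questions whose answers would make the strategy clearer” has a standard formalization in decision theory: the value of information. Here is a toy sketch; the actions, states, and payoffs are entirely hypothetical, chosen only to show the computation:

```python
# Toy value-of-information calculation. "Actions" are strategies we could
# pursue; "states" are answers to an open question; payoffs are invented.
payoff = {
    "build_tool":  {"tech_ready": 10, "tech_far": -4},
    "do_research": {"tech_ready": 3,  "tech_far": 5},
}
state_probs = {"tech_ready": 0.5, "tech_far": 0.5}

def expected_value(action):
    """Expected payoff of an action over our current beliefs."""
    return sum(p * payoff[action][s] for s, p in state_probs.items())

# Best we can do acting under uncertainty right now.
ev_now = max(expected_value(a) for a in payoff)

# Best we could do if the question were answered first:
# in each state, pick the action that is best for that state.
ev_informed = sum(p * max(payoff[a][s] for a in payoff)
                  for s, p in state_probs.items())

voi = ev_informed - ev_now  # how much answering the question is worth
print(ev_now, ev_informed, voi)
```

A question with high value of information is exactly one where the best strategy flips depending on the answer; questions whose answers wouldn’t change what you do are worth little, however interesting they are.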

– If you’re looking for more concrete examples, I wrote two more examples of using this method here as a followup note.

– What else would you add to the list of questions? What other actions would you take in an open possibility space? I would love your feedback: email stephenfrey5 at gmail dot com.

Looking ahead

This was a first rough pass at describing the general approach to large possibility spaces. Specifically, I wanted to introduce the method of “generating and answering questions.”

(I’ll aim to provide more concrete examples in a followup post. This page may be updated once I have more feedback and information. If you know anything about decision theory and want to email me your thoughts, that would be lovely.)

Future posts will use this method as a tool to explore future actions in the cognitive technology field.

Thanks to Madeeha Ghori for giving helpful feedback on this post. 

 

Cognitive Technology: a beginning and a question



What is Cognitive Technology?
The current stated goal of the Cognitive Technology group is to “create tools to understand, extend, and improve human cognition.” This statement is quite broad (intentionally so). This post will discuss what this means so far, and how we can further refine the target.

The word “cognitive” represents an interest in a range of technologies, from low-level monitoring and stimulation of neural circuits, to higher-level interfaces such as virtual reality. As the technology becomes more powerful, I anticipate these levels will become more connected to each other, and I want to start a design conversation about how to pursue that in safe and extremely positive ways.

Cognitive Technology could be described as an extension of Cognitive Science. It asks a question: “How can we apply the knowledge from cognitive science to create tools that help improve mental ability?” Many people are working on different parts of this question, but we aim to intentionally pursue interdisciplinary research and development.

Understand, Extend, and Improve
How does the brain work, and how can we improve it?

We can create tools to understand the brain. Brain technologies are like telescopes that give us a window into the brain. As we create better telescopes, we will get a clearer picture of how the brain produces thoughts and feelings. In turn, more understanding will give us more ideas of areas we could extend and improve the brain.

We can create tools to improve the brain – to become better at problem solving, more focused, aware of cognitive biases, empathetic, and creative. The knowledge gained in the cognitive sciences and neuroscience can be applied to intentionally improve the way our brains work.

We can create tools to extend the brain. Through informational tools, haptic devices, immersive environments, robotics, and other actuators, we can amplify the amount of information coming out of the brain and use it to increase our abilities.

What’s Next?
The above writing expresses my current view of an “applied” approach to brain technology. In the future, I hope to get more information, find critical flaws in the way I think about this, and adjust course many times. Please, make suggestions or point out holes in these viewpoints – your feedback is incredibly useful.

In the next post, I will discuss (and look for feedback on) a general approach for taking actions in a large possibility space. Then we can think about how to apply this approach to future group activities in brain technology.

– Stephen