Author Archives: Stephen Frey

Equipping future humans to solve future problems: support for intelligence amplification


Why is it important to find ways to improve humans’ ability to learn, think, and reason ethically?

Here is a supporting argument for intelligence amplification that I’ll call “Increasing the number of future decisions that are made by enhanced decision-makers”. I haven’t seen it formalized before, so I’ve written it out here for reference. I find it appealing, but I might be missing important perspectives that would suggest otherwise. What do you think?

Increasing the number of future decisions that are made by enhanced decision-makers

It could be the case that human brains, no matter how exceptional, are not complex enough to figure out the best long-term course for our species. Technological power will develop faster than our ability to ensure its safety.

One strategy we can follow is to develop further insight about what to value (i.e. through cause prioritization research), and build our social and technical capacity to pursue those values as we discover them. One way to build social and technical capacity is through improving human cognition.

As some humans develop more augmented cognition, they will be in a much better position to steer technological development in a positive direction. Therefore, one of the best investments we can make now is to augment cognition as fast as possible – increasing the number of future decisions that are made by enhanced decision-makers.

There are several steps:


(1) As a general strategic consideration, if you aren’t in a good position to answer a question or accomplish a goal, the next best move is to put yourself in a position where you are much more likely to be able to answer the question or accomplish the goal.

You can do this by gathering more information, defining the problem more precisely, and building up the resources or abilities that lead to answering it. 1


(2) The future contains overwhelming problems and enormous opportunities. Like it or not, we are involved in a high-stakes game with the future of earth-originating life. There is no guarantee that the steps to solve the problems or reach those opportunities will be calibrated to our species’ current level of ability. In fact, some seem extremely difficult and beyond our current level of skill. 2 With so much hanging in the balance, we really, really need to radically improve humanity’s capacity to solve problems.


(3) One way to put ourselves in a better position is to increase human cognition. In this context, I intend “cognition” to refer to a very broad set of things: “the set of information processing systems that we can use to solve problems and pursue opportunities.”3

Some places to increase human intelligence are collective intelligence (e.g. organizations, incentive structures, discussion platforms), individual tools (e.g. mindmapping), psychological methods (e.g. education and rationality training), and neural activity (e.g. neurotechnology).

(4) As some humans develop more augmented cognition, they will occupy a much better position to steer the future in a positive direction. 4


Therefore, one of the best investments we can make now is to augment cognition as fast as possible – increasing the number of future decisions that are made by enhanced decision-makers. (This does not mean we ought to put off working on those important problems now in the hope that we can completely “pass the workload” to future augmented humans. Nor is it a claim that, if we reach extraordinary success at intelligence amplification, other problems will be immediately solved. This is merely to support intelligence amplification as one of many possible valuable investments in the longer-term future.)
I welcome commentary on (1), and am fairly confident of (2). The main part I want to focus on is (3).

Questions for further research

Is (3) (increasing human intelligence) a reliably good way of carrying out the action proposed in (1) (putting ourselves in a better position to solve problems)?

If so, which realms of intelligence amplification (say, out of collective intelligence, individual tools, psychological methods, and neural activity) hold the most promise? 5 



  1. See more on my strategy for navigating large open possibility spaces: “Actions to Take When Approaching a Large Possibility Space”.

    Question: Beyond gathering information, skills, and resources, are there other actions you can do to better put yourself in a position to answer the questions or accomplish the goals?

  2. Problems: most importantly, existential risks. See this introduction to existential risk reduction, and a more detailed description of 12 prominent existential risks.

    Opportunities: Among the opportunities that we get to pursue, I’m pretty jazzed about dancing in fully-immersive VR, developing better control over what genes do, figuring out more about where space came from, and the potential to explore mindspace while implemented in another substrate.

  3. See Engelbart’s framework for augmenting the human intellect, which I will expand in a future post. 
  4. Or, it could be that the gains in intelligence that we develop are not enough, but the intelligence-augmented humans can then develop a more advanced IA technology. This chain of successive upgrades could go on for a number of iterations, or could lead to recursive self-improvement of the initial intelligence-augmented humans. In this case, we would be “Equipping future humans to equip themselves to make more optimal decisions.”
  5. A final note: This blog aims to provide a more technical, actionable projects list towards “upgrading cognition”, so we can spur the necessary scientific and technological development while we still have time. This post is partially motivated by the urgency in Bostrom’s unusually poetic writing in “Utopia”:

    Upgrade cognition!

    Your brain’s special faculties: music, humor, spirituality, mathematics, eroticism, art, nurturing, narration, gossip!  These are fine spirits to pour into the cup of life.  Blessed you are if you have a vintage bottle of any of these.  Better yet, a cask!  Better yet, a vineyard!

    Be not afraid to grow.  The mind’s cellars have no ceilings!

    What other capacities are possible?  Imagine a world with all the music dried up: what poverty, what loss.  Give your thanks, not to the lyre, but to your ears for the music.  And ask yourself, what other harmonies are there in the air, that you lack the ears to hear?  What vaults of value are you witlessly debarred from, lacking the key sensibility?

    Had you but an inkling, your nails would be clawing at the padlock.

    Your brain must grow beyond any genius of humankind, in its special faculties as well as its general intelligence, so that you may better learn, remember, and understand, and so that you may apprehend your own beatitude.

    Oh, stupidity is a loathsome corral!  Gnaw and tug at the posts, and you will slowly loosen them up.  One day you’ll break the fence that held your forebears captive.  Gnaw and tug, redouble your effort!

Notes from the 2015 Neurogaming Conference

Posted by

Last week, I presented a poster at the 2015 ESCoNS Summit1 and Neurogaming Conference. 2


Here are my main takeaways from the speakers and presenters. These are rough notes, loosely organized by concept.

This post offers some unusual business models and design advice for projects that involve neurofeedback, virtual reality, biodata processing, and/or near-term artificial intelligence applications in entertainment and health industries. 3

Business models for Biosensor devices and applications

Tim Chang, Mayfield: Vitamin gummy bears are an interesting deployment strategy. It’s good for you but it’s also fun. For wellness solutions that you want to deliver, you can use games, storytelling, and make it visually appealing.

How do you create a “performance enhancing” service? Here is a sort of playbook for how to pull it off:

– A device to track activity
– AI software / cloud to process it and get insights
– A human coach / inspirational figure
– A social support community

I’m a big believer in this hybrid model. You can have machine learning and AI but you also need humans. You’re not scared of notifications on your watch. You need a coach to kick you in the butt, and a sympathetic peer group.

A promising business model is the “Device as a Service”: the service is attached to a device and comes with subscriptions, app stores, coaching, etc. For example, you could have a fitness tracking service that costs $100 / year. It comes with a device for free, and an app store where developers can add upgrades.

“Future media will consist of the buying and selling of experiences.”

Augmented Reality – challenges and potential

Brian Selzer, Daqri:

What makes AR different from VR, and how important is that difference? The key word for me is context. VR is an experience –  a context you can completely control. That’s why we’ll see the first big successes come out with VR, with games and experiences.

When you open up the experience with AR, you need to relate the application  to the environment. That is very challenging. You need to deal with the physics of light and object recognition. The difference between AR and VR is the level of transparency. Eventually, I suspect that we’ll be able to tune the transparency dial in headset displays: to go from complete transparency (AR) to complete immersion (VR).

The VR Industry is about to explode

Walter Greenleaf, CSO, Pear Therapeutics: The VR industry is about to boom. Smartphones will be the VR platforms of the next few years. Everyone who has a smartphone five years from now will be using a virtual environment.

Fortunately, because we’ve seen this technology coming, the principles and heuristics for VR have been worked out. The enabling technology is here. Now we can slap on the paradigms that we’ve spent a long time developing in research labs.

Sensors everywhere

We will always have more data than we know what to do with

Stan Williams, Neuromorphic Computing Lead, HP Labs: We will probably always have more data than we know what to do with, because we will continually add more sensors to the world. There are two levels to that:

  • The more we learn the more we figure out there is to learn.
  • The horizon of what we think is possible is receding from us faster than we’re moving forward.

The simple estimates from a few years ago of how much data we’re crunching are probably off by a few orders of magnitude. The industry will need to find a way to integrate information from a huge amount of sensory data, plus a large amount of processing at many different levels.

AI as a personal trainer

Liesl Capper, Leader, Watson Life, IBM: Human interaction with AI personalities is enabled by the “willing suspension of disbelief.” If you put enough stuff together that gives the illusion of human intelligence, people will accept it because they want to believe that someone is taking them on and caring about them.

Tim Chang, Mayfield: (On AI-assisted living) AI transforms the feedback loop and the ability to model anything. The more I use these neurogaming systems, the more the cloud is forming a model of me.

 Design Projects and Business Ideas

Here are two project ideas that occurred to me at the conference. If you succeed with one of these, you’re welcome. Email me if you’re interested in this kind of thing so I can connect you with other people.

“Stickers”: Portable Biosensor Hardware Design

One of the main roadblocks to integrating realtime biodata with other applications is that the design of the EEG or heart rate monitoring hardware is too big and noticeable.

You could enable more casual biodata monitoring with “wireless biosensor stickers.” One for reading EEG on the frontal lobe, one for heart rate variability, and one for skin conductance. They could wirelessly transmit data to a cell phone. Technical challenges to developing these would include battery life for the stickers, and the ability to transmit a strong enough signal. Design challenges would be creating a sticker that contains wires and a small battery, that doesn’t look goofy on your forehead.
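To give a flavor of the software side, here is a minimal sketch of the kind of processing a phone app could run on data streamed from a heart-rate sticker: computing RMSSD, a standard time-domain heart rate variability metric. The RR interval values below are made up for illustration.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals.

    A standard time-domain heart rate variability (HRV) metric: higher
    RMSSD generally indicates more beat-to-beat variation (a calmer,
    more parasympathetic-dominant state).
    """
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR intervals (ms), as a sticker might stream them to a phone:
calm = [850, 870, 840, 880, 845, 875, 850]      # high beat-to-beat variation
stressed = [700, 702, 701, 699, 700, 701, 700]  # metronome-like heartbeat

print(rmssd(calm) > rmssd(stressed))  # True: the calm trace shows higher HRV
```

A real pipeline would first have to detect beats in a noisy optical or ECG signal; this sketch assumes clean RR intervals have already been extracted.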

Virtual Reality Anything

The KZero report on the virtual reality consumer market estimated that the VR market would take in $90m in revenue in 2014, $2.3bn in 2015, $3.8bn in 2016, $4.6bn in 2017, and $5.2bn in 2018: “Cumulatively across the period of 2014 to 2018 we have forecasted the consumer virtual reality market to be worth $16.2bn.”

Note the projected leap from $90 million to $2.3 billion from 2014 to 2015. Oculus, Samsung, and possibly others are each planning a consumer VR headset release in 2015. Since this report was released in early 2014, I would expect the projections to be pushed back about a year.

Even stupid apps will probably get a lot of downloads while the rest of the developer world realizes VR is a thing and catches up.

Potential VR Design Projects: If I had a Unity developer and 6 months, and I was optimizing only for earning money within the next 18 months, I would aim to create 4-5 stupidly simple games for the mass market to play on a VR cell phone.

If I wanted to feel like the product itself was slightly useful for the world, but still earn money, I would explore applications in psychotherapy, social skill training, and data visualization. Email me if you have ideas or want to connect with others who are interested in therapeutic, educational, or dataviz applications of VR.


  1. “The Entertainment Software and Cognitive Neurotherapeutics Society (ESCoNS) is the premier academic society for scientists and game researchers who are at the forefront of researching novel ways to develop scientifically valid neurosoftware for the treatment of brain disorders”.
  2. The Neurogaming crowd is an eclectic mix of brain-training research from academia, EEG headset companies, other biosensor hardware makers, virtual reality developers, psychotherapists, and educators. It’s very much in the vein of the more immediate potential applications of (what I’ve described as) “cognitive technology.”

    The word “neurogaming” sounds a lot like hype-ish neurobunk, and is used as catch-all term for a large collection of biosensor + media technologies. The word “cognitive technology” describes a similar set of technologies with applications beyond gaming, which is why I prefer it.

  3. A few notes:

    – Sentences that are not in quotation marks are my own paraphrasing of the speaker.

    – Note that these are comments from people who are selected for being a good speaker at a tech conference, and so are backed by the experience / perspective of the speaker rather than research / other forecasting methods.

    – I put more weight on the business advice, and varying amounts of weight on the tech design or tech forecasting comments, depending on the speaker.

Responsible Brainwave Technologies workshop notes


Last week, we joined the inaugural workshop for the Center for Responsible Brainwave Technologies.


Top row: John Chuang (U.C. Berkeley I-School), Brian Behlendorf (EFF), Stephen Frey (Cognitive Technology), Joel Murphy (OpenBCI), Conor Russomanno (OpenBCI), Dirk Rodenburg (CeReB), Yannick Roy (Neurogadget), Johnny Liu (Neurosky). Bottom row: Gary Wolf (Quantified Self), Ramses Alcaide (U. Michigan), Serena Purdy (CeReB), Ariel Garten (Interaxon), Ari Gross (CeReB), Francis X. Shen (U. Minnesota Shen Neurolaw Lab)

We did a “deep dive” into several of the ethical issues in the near-term development of brainwave technologies.

Right now, here are some of the most salient issues with consumer-grade brainwave technology:

Lack of consumer awareness

  • People overestimate the capability of the technology (“can this read my mind?”)
  • People underestimate the capability of the technology, and release their personal brainwave data without understanding how it could be used

Consent to collect brainwave data

  • Monitoring the cognitive and emotional states of employees in transportation, construction, office work, etc. without consent.
  • Monitoring the cognitive and emotional states of people with physical or mental disabilities

Unintended uses of public brainwave data

As described in CeReB’s 2014 white paper:

The situation is analogous to that of DNA data. Information – such as the presence or absence of a predisposition to a particular illness – that is currently captured in DNA data may not as yet be identified or exploited. It is entirely conceivable that the data now available will eventually provide a rich source of information about a broad range of genetically endowed potentialities and predispositions. Knowledge of some of these by third parties might be benign, while knowledge of others may provide third parties with the power to do harm though discrimination and/or unsolicited intervention. The same may well be true of brainwave data.
We believe that the data generated, stored, and used by brainwave technology must be handled in a manner that reflects its current and potential sensitivity, both as a personal “signature” and as a conveyor of information about an individual’s capacity or level of functioning.

Summary table from CeReB’s initial white paper (2014).

Considerations for technology developers

Many of the problems in the summary table are (at least partially) countered by:

  • users understanding what the data is
  • users maintaining control over how their data is used

It’s quite nice when we can address problems from the technology side, rather than going through public policy. The Data Locker Project, MIT’s OpenPDS, and Samsung’s SAMI are some nice projects that are working to give users more control over how their data is used, which could be generally applied to biodata. Figshare is designed specifically for sharing data with researchers.

What’s next

As brainwave technology becomes more advanced, CeReB will examine the resulting social and ethical issues, and provide resources for those working at a technological or policy level.

If you are interested in participating in future publications with CeReB, contact Dirk Rodenburg.

Biofeedback and game-based learning: potential applications


How can biofeedback devices and video games be combined to create interactive learning experiences? What is game-based learning with biofeedback? 1

Biofeedback can optimize game content based on the user’s emotional state

Here’s how I would describe the basic premises of “game-based learning with biofeedback”:

– Say we want people to learn the content of a course in math, biology, economics, programming, etc. The audience could be students in a traditional K-12 school or university, or adults who want to learn for research or personal enrichment.

– In the past, the content was most often delivered through textbooks. Currently, there is a large effort to deliver educational content through well-designed games and interactive media, and ed-tech researchers are working to find ways to make those educational programs more effective. 2

– No matter how great your content and media are, there are still other factors that affect how much you learn; these factors include the learner’s motivation, attention, and general emotional state. A human teacher can often detect these and adjust his or her interaction with the student accordingly.

– It turns out that neuroscientists and physiologists know a decent amount about how to measure and interpret a person’s emotional state based on physiological data like EEG, heart rate variability, and Galvanic Skin Response. Recently, many consumer-grade biosensor devices have been developed and released.
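As a toy illustration of that last premise, here is a sketch (my own simplification, not any product’s actual algorithm) of estimating “engagement” from an EEG trace as a beta/theta band-power ratio, a commonly used proxy, computed with a plain DFT on a synthetic signal:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Sum of DFT power in the [f_lo, f_hi] Hz band (naive O(n^2) DFT, fine for a demo)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += re * re + im * im
    return power

fs, n = 128, 256  # two seconds of synthetic "EEG" at 128 Hz
# Strong 20 Hz (beta) component plus a weaker 6 Hz (theta) component:
eeg = [2.0 * math.sin(2 * math.pi * 20 * t / fs) + 1.0 * math.sin(2 * math.pi * 6 * t / fs)
       for t in range(n)]

theta = band_power(eeg, fs, 4, 8)    # 4-8 Hz
beta = band_power(eeg, fs, 13, 30)   # 13-30 Hz
engagement = beta / theta            # > 1 suggests a "focused" state in this toy model
print(engagement > 1)  # True for this beta-dominant synthetic signal
```

Real EEG is far noisier, and real systems use windowed FFTs plus artifact rejection; the point is only that a usable scalar “state” signal can be derived from raw samples in a few lines.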


Two use cases for biofeedback + gaming + learning

1) Improving cognitive control (direct feedback)

Adam Gazzaley, director of the Neuroscape lab at UCSF, built a custom video game designed to increase cognitive factors (like attention span, multitasking ability, memory, and so on) in older adults. Results showed that 12 hours of playing the game over a month significantly increased the cognitive function of a test group of 60- to 85-year-olds compared to a control group. (See the paper: “Video game training enhances cognitive control in older adults,” Anguera et al., 2013.) A commercial example of this is, which does neurofeedback focus training for students with ADHD. 3

2) Adapt videogame content to the user’s emotional state (indirect feedback).

We can measure the learner’s engagement and emotional state, and feed that information back into the educational game. The game can speed up, slow down, change communication styles, or otherwise interact with the student in a more personalized way to sustain their attention and motivation. 4
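The adaptive loop described above can be sketched as a simple proportional controller. The linear relationship between difficulty and measured arousal below is purely a stand-in assumption of mine; real biosensor signals would be noisy and nonlinear.

```python
def simulate_session(target_arousal=0.6, gain=0.5, steps=50):
    """Toy adaptive loop: nudge game difficulty so measured arousal tracks a target.

    Illustrative assumption: the player's measured arousal rises linearly
    with difficulty (arousal = 0.9 * difficulty). A real system would read
    this value from EEG / heart rate / skin conductance sensors.
    """
    difficulty = 0.1
    for _ in range(steps):
        arousal = 0.9 * difficulty           # stand-in for a biosensor reading
        error = target_arousal - arousal
        difficulty += gain * error           # speed up if bored, ease off if stressed
        difficulty = min(max(difficulty, 0.0), 1.0)
    return difficulty, 0.9 * difficulty

difficulty, arousal = simulate_session()
print(round(arousal, 3))  # settles at the 0.6 target
```

The same loop structure works for any content knob (pacing, hint frequency, communication style); only the sensor model and the controller gain change.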

One commercial example of this is QNeuro: “It’s an educational game called Axon Infinity: Space Academy with a futuristic, outer space style in which you learn math skills and put them into play in missions when fighting aliens. The catch here is that if you use an EEG, the game can adapt and gets more difficult based on brain readings.”


Game-based learning with biofeedback looks for ways to take advantage of the development of consumer-level biosensors and video games for learning. 5


  1.  Context: Last week I presented a poster at the 2015 Neurogaming Conference. The Neurogaming crowd is an eclectic mix of brain-training research from academia, EEG headset companies, other biosensor hardware makers, virtual reality developers, psychotherapists, and educators. It’s very much in the vein of the more immediate potential applications of (what I’ve described as) “cognitive technology.”

    The word “neurogaming” sounds a lot like hype-ish neurobunk, and is used as catch-all term for a large collection of biosensor + media technologies. The word “cognitive technology” describes a similar set of technologies with applications beyond gaming, which is why I prefer it.

  2. See Stanford’s Learning Analytics Lab and Education’s Digital Future.
  3. I haven’t seen reviews of efficacy of – I’ve only encountered them as an emerging player in the ADHD neurofeedback market.
  4. Other examples outside of education include Nevermind, which increases the scariness of a videogame as it detects increasing levels of stress, and NeuroMage, where you can use your “level of attention” to….cast spells and stuff.
  5. Of course, there are still huge challenges in correctly interpreting the data, and turning it into insights that are actually useful enough within the video game to make it worth attaching sensors to your body.

Increased learning and memory through neurogenesis has a likely upper limit: a Q&A with Neurocrates


This post is a question-and-answer conversation with an imaginary scholar named Neurocrates.

It uses the Neurotic Method.

Stephen: Hello, Neurocrates.

Neurocrates: You named me that just to make a pun about methodology, didn’t you?

S: Maybe. Thank you for visiting here today. I understand that you study neurogenesis and its effects on cognitive performance.

N: Yes. Take what I say with a grain of salt.

S: Ok. Let’s start with the essentials. What is neurogenesis?

N: Neurogenesis is the creation of new neurons.

The brain has stem cells, just like other parts of the body, and new neurons come from neural stem cells. The neural stem cells produce a handful of progenitor cells, which then divide many times to amplify the population of new neurons. These neural stem cells are found in the olfactory bulb (which is responsible for your sense of smell) and the dentate gyrus of the hippocampus (which is involved in learning and memory).

Neurogenesis is interesting to researchers because it has been shown to cause significant increases in learning, memory, and cognitive performance. When we increase neurogenesis in normal rats, they become much better at solving puzzles and remembering things. 1

S: Okay, so we have a continual stream of new neurons that is generated from neural stem cells. What happens to these neurons as they mature and integrate into the existing neural network?

N: As these new neuron cells mature and begin to integrate into the network of surrounding older cells, about 80% of them are “pruned” and die off. This pruning is how a functional network is established – the neurons that survive are those that receive the most relevant data, and are activated the most times.

You can compare the formation of the brain’s functional network to sculpting a statue out of a block of stone: the network is what is left after carving away a lot of raw material. The raw material that remains – the neurons, in this case – is what retains meaning. The function of the network is to parse information meaningfully. When neurons receive input, they do small parts of pattern detection, like finding edges. The most salient information is chiseled into the neural network after lots of other, less relevant data has passed through.

So the more neurons you have, the more features you can represent. I’ll give you a simple example. Here are three pictures of a toy dinosaur that are 50, 150, and 250 pixels across. You can see how pictures with more pixels can represent more details of the toy dinosaur.

[Images: the same toy dinosaur rendered at 50, 150, and 250 pixels across.]
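The pixel analogy can be made quantitative with a little numerical sketch of my own: “render” a detailed signal at three resolutions by block averaging, and measure how much detail each resolution loses.

```python
import math

def downsample_reconstruct(signal, n_blocks):
    """Average the signal into n_blocks equal blocks, then expand back.

    Each block average plays the role of one "pixel" (or one feature-
    detecting neuron): fewer blocks means a coarser representation.
    """
    block = len(signal) // n_blocks
    out = []
    for i in range(n_blocks):
        chunk = signal[i * block:(i + 1) * block]
        avg = sum(chunk) / len(chunk)
        out.extend([avg] * block)
    return out

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# A signal with coarse structure plus fine detail (the "toy dinosaur"):
n = 750
signal = [math.sin(2 * math.pi * t / n) + 0.3 * math.sin(2 * math.pi * 40 * t / n)
          for t in range(n)]

errors = {res: mse(signal, downsample_reconstruct(signal, res))
          for res in (50, 150, 250)}
print(errors[50] > errors[150] > errors[250])  # True: more "pixels", less lost detail
```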

S: The dentate gyrus in the hippocampus is one of the most dense cell layers. But new neurons are continually being added to it. How does it allow for new representations to integrate into a network that is already tightly packed?

N: Here’s one nifty thing about neurons:

After they have matured and incorporated into the network (about four weeks after they are generated), they are more excitable for a short period of time. If there is new data entering the network that isn’t precisely represented by the old neurons, these newer, more excitable neurons are more likely to come to represent it. They will be recruited more often than their neighboring cells, and will form the basis for a unique sparse representation of the new data.

Therefore, at the behavioral level, there is a continual, unique critical period for the small set of new cells in the hippocampus that have “come of age” and are ready to receive their representations from the outside world.

This also explains part of how we have temporal segregation of memory: I can roughly distinguish events that happened yesterday from events that happened two weeks ago, partially because different sets of neurons had “come of age” and were representing the episodic information in different places.

S: What determines which neuron the new data gets sent to in the first place? What is the ‘routing function’ between sensory input and new neurons?

N: Each neuron detects a feature, so the more neurons you have, the finer the resolution at which your network can detect features.

S: How do you test the relationship between neurogenesis on the biological level and “pattern separation” at the behavioral level?

N: One of the most common memory tasks in rodent studies is contextual spatial navigation. Rats are put into environments with slightly different decorations, and need to remember which hallway the food is stored in. As neurogenesis decreases, the rodents can distinguish between contexts at less fine-grained resolution.

But there are still unanswered questions. It could be the case that new neurons are increasing the overall plasticity in the network. Or, it could be the case that for any hippocampal task, the “resolution of pattern recognition” or other output is increased by new neurons, but it is hard to establish that.

S: Are these new neurons being generated at a constant rate, or is there something that makes them generate more or less?

N: Two of the main factors that influence neurogenesis are the level of stress in the environment, and the timing of development in the organism. Stressful environments cause a decrease in neurogenesis. Neurogenesis is especially high during childhood and decreases with age.

S: Cool. How does neurogenesis contribute to learning and forming new memories?

N: The clearest function of new neurons is enabling better “pattern separation”. As more neurons are integrated into the functional network, the network can store higher resolution representations of new kinds of data. This pattern separation occurs most frequently in the dentate gyrus of the hippocampus.
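Pattern separation can be illustrated with a toy model of my own (a k-winners-take-all sparse code over random weights, not an actual hippocampal simulation): similar inputs receive sparse codes that overlap only partially, so the network keeps them distinguishable.

```python
import random

random.seed(0)

def sparse_code(x, weights, k):
    """Indices of the k most strongly activated units (k-winners-take-all)."""
    activations = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    ranked = sorted(range(len(activations)), key=lambda i: activations[i], reverse=True)
    return set(ranked[:k])

dim, n_neurons, k = 20, 200, 10
weights = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_neurons)]

x1 = [random.gauss(0, 1) for _ in range(dim)]
x2 = [xi + random.gauss(0, 0.3) for xi in x1]   # a similar but distinct input

c1, c2 = sparse_code(x1, weights, k), sparse_code(x2, weights, k)
overlap = len(c1 & c2) / k

print(len(c1) == k)                               # True: the code is k-sparse
print(len(c1 & sparse_code(x1, weights, k)) / k)  # 1.0: identical input, identical code
print(overlap)                                    # similar input: overlap between 0 and 1
```

Adding more neurons to this toy network is analogous to neurogenesis: more units means more distinct sparse codes are available for representing new, similar patterns.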


S: Now for the key question: It seems like increasing the rate at which these new neurons form could increase the resolution of new information that could be represented in the network. More new information could be stored in the same amount of time. Is that true?

In other words, would simply linearly adding neurons increase learning?

N: Yes, but there is likely to be an upper limit. It’s possible that the rate of learning might plateau as you increase neurogenesis past a certain point. The reason is that a large number of new neurons might not fully integrate into the functional network: the neurons need time to adjust, form a sparse representation, and connect with existing neurons.

It may be possible to overcome this effect by broadening the locations over which the new neurons are added (instead of adding them only to the hippocampus). It may also be possible to implant neural stem cells in other parts of the cortex to increase its growth.


S: So you’re suggesting that current models of how neurons integrate into the hippocampal network predict that there might be an upper limit to the gains realized by adding new neurons to the hippocampus. Are there ways to overcome that upper limit? Or might we realize significant gains before reaching that upper limit, such as to make it worthwhile to pursue?

N: In either case, one outstanding research question is the relationship between cognition and the size and rate of growth of both the hippocampus and cortex. Significant gains could be realized through linear increases in the growth of new neurons. As a very loose proxy, the human brain is 4.8 times the size predicted for a hypothetical monkey of the same body weight, and the human neocortex is 35% larger than predicted for a primate with as large a brain. Depending on the test, humans can arguably be much more than 4.8 times smarter than a monkey. 2

But that is a good question. The exact point of the upper limit of neurogenesis has never been tested. There are some things you could do if you wanted to research this more:

First, we could model this computationally! This would be an interesting research question: based on a computational model of the hippocampus, what is the upper limit on the rate of neurogenesis that you can sustain before it starts to interfere with the representations? What happens when the rate of neurogenesis is 2x, 5x, or 20x the normal rate? Do the new neurons interfere with sparse coding, or does the system scale? Does the system require higher amounts of input (as could be achieved through sensory extension) in order to scale, or can it sustainably increase the learning rate while the amount of incoming data stays constant?

If you like digging through literature, here is a literature research question: you could find experiments that involve increasing neurogenesis in the hippocampus, and find the corresponding % increase in performance on a memory task. Then plot the increase in neurogenesis on the X-axis against the increase in performance on the Y-axis. You would then ask how to extrapolate the line: would it plateau, stay linear, or curve upwards? This is also a good computational question, depending on an accurate model of how the hippocampus works.
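That plotting exercise can be sketched in a few lines. The data points below are invented placeholders standing in for values you would extract from the literature; the sketch fits a linear model and a saturating model by least squares and asks which extrapolation the points support.

```python
# Hypothetical (invented) data: x = % increase in neurogenesis,
# y = % increase in memory-task performance. Real values would come
# from the literature survey described above.
xs = [0, 20, 40, 60, 80, 100]
ys = [0, 9, 16, 21, 24, 26]

def sse(model, xs, ys):
    """Sum of squared errors of a model over the data points."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys))

# Linear model y = m*x, closed-form least squares through the origin:
m = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
linear_err = sse(lambda x: m * x, xs, ys)

# Saturating model y = a*x/(b+x), fit by a coarse grid search:
best = None
for a in range(1, 81):
    for b in range(1, 201):
        err = sse(lambda x: a * x / (b + x), xs, ys)
        if best is None or err < best[0]:
            best = (err, a, b)
saturating_err, a, b = best

print(saturating_err < linear_err)  # True: the saturating curve fits these points better
```

With real data, the interesting question is whether the best-fit saturating curve’s asymptote sits far above the measured range (plenty of headroom) or just above it (the upper limit is near).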

If you like playing in the lab, here is a biological experiment: What are the upper limits to the current methods for increasing neurogenesis? Is there a way to produce 5x more than has been done before?

Finally, here’s one more question, which I’m not sure how to answer: if simply tweaking neurogenesis had big payoffs, wouldn’t evolution have already done it? Maybe not – perhaps the in-between steps to higher neurogenesis weren’t viable, or the right intervention can’t be produced through genetic tweaks at all, so artificial interventions are needed. Also, humans haven’t had that much time to evolve, so the possibilities within our genome haven’t been fully explored.

S: Thanks for the feedback, Neurocrates!

N: No prob. May you spawn many new feature-detecting goo balls.


  • It is clear that adding more neurons to the hippocampus increases learning and memory. It is possible that the increase in learning and memory would asymptote after some point.
  • This asymptote is predicted by current models of how new neurons integrate into the existing network: neurons form sparse representations, and there is not a neat one-to-one mapping between neurons and concepts.
  • However, there may still be significant gains realized before hitting this upper limit.



Neurotechnology forecasting: stepping stones in financial and technical viability

Posted by

Context: I’m working to better understand how neurotechnology will develop in the upcoming decades. This post outlines a potential pathway where neurotechnology is developed first for medical purposes, shortly followed by cognitive enhancement. 1

An Initial Assumption: Eventually, neurotechnology (say, from neural implants or neural stem cells) will be developed enough to restore people with Alzheimer’s disease to normal-level memory capacity (not necessarily restoring older memories, just retaining new ones). We can make this estimate based on current knowledge about:
– incentives: the current economic incentive for research to treat Alzheimer’s
– knowledge: current scientific knowledge
– base cases: success of initial research
– players: current organizations and people who are working to further develop this research

(I would guess the timeline for this is somewhere around 20 years, although the next claim does not depend on that number.)

Medical Applications, then Enhancement Applications:  Once the above-mentioned medical neurotechnology is developed:
– the remaining technical hurdles to turn it into a cognitive-enhancing product will be significantly smaller, and
– the social and economic incentives to do so will be large enough,
such that it is very likely the technical hurdles will be overcome to adapt it into a cognitive-enhancing neurotechnology. 2

(The timeline for the step from medical to cognitive-enhancing applications is somewhere around 2-10 years. The general prediction is that the medical->consumer step will be shorter than the research->medical step).

So it is very likely that the development of medical-grade neurotechnology will be followed shortly after by consumer-grade enhancement neurotechnology.
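One way to make the two-stage claim testable is to write it down as an explicit probabilistic model and argue about the parameters. Here is a hedged sketch in Python, where the lognormal medians simply encode the rough guesses above (~20 years, then ~5 years) and the spreads are arbitrary assumptions:

```python
import math
import random
import statistics

def sample_totals(n=10_000, seed=0):
    """Monte Carlo over a two-stage timeline:
    research -> medical, then medical -> enhancement.
    Lognormal parameters are illustrative placeholders, not estimates."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        medical = rng.lognormvariate(math.log(20), 0.4)  # median ~20 yr
        enhance = rng.lognormvariate(math.log(5), 0.5)   # median ~5 yr
        totals.append(medical + enhance)
    return totals

totals = sample_totals()
print("median years to enhancement tech:", round(statistics.median(totals), 1))
```

Writing the claim this way forces the hidden assumptions into the open – the independence of the two stages, and the spread of each – which is exactly what critics can then attack.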

Can anyone poke holes in my reasoning, or suggest ways to *test* this hypothesis? How can we develop better methods for making this type of tech forecasting claim?



  1. This post tries to model “stepping stones” in technological progress. I’m guessing there is some kind of model that would predict that once Technology X is developed, Technology Y is very likely to also be developed within N years. I’m currently trying to learn about methods in technology forecasting, and so this may not actually work. Let me know if you can tell me why.
  2. “Cognitive-enhancing” isn’t an all-or-nothing state. This is more of a statement that cognitive enhancing neurotechnology that is significantly more advanced than today’s will probably not be developed through a direct effort, because social and economic incentives for strong cognitive enhancement are not large enough yet. It seems more likely to be developed indirectly, first passing through the medical research route.

Can Stem Cells Be Used to Enhance Cognition? – A Survey

Posted by

In the recent article “Can Stem Cells Be Used to Enhance Cognition?” (2015) (html, pdf), Goldberg and Blurton-Jones examine the potential to enhance cognition through (1) endogenous neurogenesis and (2) stem cell transplantation. This post is a summary of their findings.

This post goes into fairly deep technical detail. If you prefer, you can get the gist of the story in the summary section below.

An important but separate issue is whether neural stem cell implantation is ethical or desirable; social and ethical implications of this research are discussed elsewhere. 1  This post simply examines what is possible so far.

Stem cells are defined by two key attributes:
(i) they can self-renew, dividing to create near-perfect copies of themselves
(ii) they can differentiate to produce distinct mature cell types

The equilibrium between cell loss and cell replacement is well maintained by stem cells in most adult tissues, with the exceptions of the pancreas, heart, and brain. As the brain gets older, its ability to maintain itself by producing new neurons is reduced (Rossi et al., 2008). This loss of cell equilibrium in the brain is strongly correlated with age-related neurodegenerative disorders like Parkinson’s and Alzheimer’s disease.

Summary: multiple mechanisms connecting neurogenesis and cognition

Stem cells have direct and indirect influences on cognition through multiple mechanisms. 
Enhancing cognition through neural stem cells may be more complicated than “add new stem cells to your favorite areas of the brain to linearly improve their functioning.”
However, there is a growing body of evidence that transplantation of neural stem cells does indeed have reliable positive effects on cognition, and is a promising method for both treating neurological disorders and improving cognition in healthy adults. 2

Figure 8.2. Considerable evidence supports the notion that both adult neurogenesis and neural stem cell transplantation can contribute to cognition. Factors such as exercise, selective serotonin reuptake inhibitors (SSRIs), and inflammation can modulate adult neurogenesis, leading to enhanced or impaired cognition. BDNF, brain-derived neurotrophic factor; bFGF, basic fibroblast growth factor; ChAT, choline acetyltransferase; GDNF, glial cell–derived neurotrophic factor; IFN, interferon; IGF, insulin-like growth factor; IL, interleukin; TNF, tumor necrosis factor; NGF, nerve growth factor.

The Role of Endogenous Neurogenesis in Cognition

Until recently, scientists did not have strong evidence that the adult brain generates new neurons. Then, in 1998, Eriksson et al. demonstrated that humans exhibit adult neurogenesis in two key areas:

    • The dentate gyrus of the hippocampus
    • The subventricular zone (SVZ) of the lateral ventricles 3

Areas of neurogenesis in the adult rat brain. Red – confirmed neurogenesis; pink – possible neurogenesis. From Gould (2007).

Since then, researchers have found that adult neurogenesis occurs in the hippocampus and SVZ throughout life 4, and that these newborn neurons form connections to other parts of the brain (emphasis is my own):

Furthermore, adult hippocampal neurogenesis seems to be substantial; roughly 700 new neurons are generated in each hippocampus per day and up to one third of all hippocampal granule cell neurons are replaced during one’s adult life. 5 
Ernst et al. (2014) most recently used the radiocarbon dating approach to provide compelling new data that SVZ-derived newborn neurons can also migrate into the adjacent striatum in humans, giving rise to cholinergic interneurons.
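As a back-of-envelope check on the quoted figures – using an assumed granule-cell count of roughly 15 million per hippocampus, which is not stated in the article – the daily and lifetime numbers hang together:

```python
# Back-of-envelope check on the quoted figures. The granule-cell count
# is an assumed order-of-magnitude figure, not from the article.
new_per_day = 700
granule_cells = 15_000_000          # assumption

annual_turnover = new_per_day * 365 / granule_cells
print(f"annual turnover: {annual_turnover:.1%}")   # ~1.7% per year

# If each day's new neurons replace randomly chosen old ones, the
# fraction of the original population replaced over a ~40-year adult
# life is 1 - (1 - daily_rate) ** days:
daily_rate = new_per_day / granule_cells
days = 40 * 365
replaced = 1 - (1 - daily_rate) ** days
print(f"fraction replaced over 40 years: {replaced:.0%}")
```

This simple random-replacement model gives roughly half the population replaced over 40 years, somewhat above the quoted “one third” – consistent with renewal being confined to a subpopulation of granule cells rather than the whole hippocampus.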

The existence of endogenous neural stem cells (NSCs) means that we can potentially insert new, healthy neurons where old neurons are dead or dysfunctional. It also means that we could add neurons to various regions of the brain to enhance their capability.

These profound discoveries continue to provoke the question: what are the functional consequences of adult neurogenesis?

The role of adult neurogenesis in cognition, as studied in the dentate gyrus of the hippocampus

The hippocampus plays a critical role in encoding and retrieving memories. It is also one of the places in the brain where neurogenesis would be most practical, since newly-formed neurons integrate more readily into the surrounding functional network:

Support for this finding came in 2006, when studies showed that newborn granule cells of the dentate gyrus are more highly activated by a novel exploration task compared with mature neurons of the same region (Ramirez-Amaya et al., 2006). Possibly as a result of this increased excitability, newborn granule neurons also integrate more readily into memory-associated engrams than mature granule cells (Ge et al., 2007 and Tashiro et al., 2006).

Additionally, researchers found the corresponding negative correlation: patients with damaged hippocampi (from chemotherapy or radiation treatment) experienced significantly decreased neurogenesis, which correlated with difficulties in memory, executive function, attention, and visuospatial function. 6

Endogenous Stem Cells in Aging and Disease

Neurogenesis seems to be modulated by chemokines (part of the immune system’s signaling proteins) in blood plasma. Age-related decline of neurogenesis seems to be partially caused by an increase in chemokine levels in the blood:

Villeda et al. (2011) showed that aged blood could reduce both neurogenesis and cognitive function in young mice. Furthermore, aged mice injected with the plasma of young mice exhibited increased hippocampal neurogenesis. Interestingly, the effects of old and young plasma seem to be mediated via specific chemokines such as CCL11: injection of this chemokine alone into young mice impaired both neurogenesis and cognition. Thus the peripheral immune system and changes in inflammatory state that occur with age seem to play a critical role in age-associated changes in neurogenesis.

Improving Cognition by Enhancing Neurogenesis

Factors that modulate adult neurogenesis:

  • exercise 7
  • selective serotonin reuptake inhibitors (SSRIs) 8
  • inflammation 9

Things that are known to contribute to adult neurogenesis:

  • Chronic low levels of the microglial-derived insulin-like growth factor 1 (IGF-1) and interferon
  • Radial glia
  • Astrocyte-derived molecules
    • In addition to their direct involvement in neurogenesis, astrocyte-derived molecules also seem to make distinct contributions
    • S100B – infusion of S100B for 7 days led to a significant increase in both neurogenesis and hippocampus-dependent function (Kleindienst et al., 2005)
    • Reports have suggested, for example, that (CNS) central nervous system–specific T cells are required for spatial learning and the maintenance of hippocampal neurogenesis in adulthood (Ziv et al., 2006).
  • Antidepressants
    • SSRIs
      • However, evidence that the mood-improving effects of antidepressants did not depend on neurogenesis, but rather neuronal remodeling, arose in 2008; the blockade of neurogenesis failed to diminish the antidepressant activity of several SSRIs (Bessa et al., 2009).
  • Trophic Factors
    • Intrahippocampal infusion of BDNF also has been reported to increase neurogenesis in adult rats (Scharfman et al., 2005),
    • other studies have shown that BDNF infusion can improve cognition (Blurton-Jones et al., 2009)
    • HOWEVER, the precise mechanistic relationship between BDNF, neurogenesis, and cognition has yet to be directly examined.
  • Noninvasive methods
  • BDNF and IGF-1 are thought to be the principal factors modulating the effects of exercise on learning and mood disorders,
    • IGF-1 and VEGF are more strongly implicated in hippocampal neurogenesis (Ding et al., 2006 and Nichol et al., 2009).
    • Both BDNF and IGF-1 gene expression are elevated after only a few days of exercise in rats and are crucial to the cognitive benefits of exercise, because blockade of growth factor signaling prevents exercise-induced improvements (Berchtold et al., 2005 and Trejo et al., 2001)


Improving Cognition with Stem Cell Transplantation

Neural Stem Cell Transplantation in Aging

Injection of neural stem cells (NSCs) into the lateral ventricle improved spatial learning and memory in rats:

One of the first studies to examine the potential effect of NSC transplantation in aging injected human NSCs into the lateral ventricle of aged rats, leading to improved performance in spatial learning and memory as measured in the Morris water maze task (Qu et al., 2001).

Neural Stem Cell Transplantation in Neurological Disease

We found that murine NSC transplantation in the triple transgenic (3xTg-AD) model of AD improves cognition in Morris water maze and novel object recognition tasks via a BDNF-dependent mechanism (Blurton-Jones et al., 2009).

NSC transplantation can improve cognitive function following ablation of CA1 hippocampal neurons (Yamasaki et al., 2007)

Syngeneic NSCs were transplanted bilaterally into the striatum of aged α-synuclein mice, and 1 month later motor and cognitive behavior was examined. NSCs survive in the striatum and begin to differentiate into glia (glial fibrillary acidic protein) and neurons (doublecortin) (Figure 8.4). In these initial studies we found robust improvements not only in motor function but also in cognitive function, and, again, BDNF seems to be central to these improvements.

Stem Cell Transplantation in the Healthy Adult Brain

Until recently, few studies had shown any enhanced benefit of stem cell transplantation in normally functioning adult brains.

Han et al. (2013) showed that human glial progenitor transplantation into the frontal cortex of immune-deficient neonatal mice led to significant enhancements in the cognitive function of adult and aged mice. In contrast, transplantation of murine glial progenitors had no such effect.

Question: Why did human glial progenitor cells improve cognition in mice but transplantation of the mouse-derived glial progenitor cells didn’t?

One potential explanation for the differential effect between human and murine progenitors may relate to species-specific differences in calcium wave propagation, a mechanism by which astrocytes communicate. Human glial progenitor calcium waves propagated at least threefold faster than did mouse cells, which may be attributable to their much larger size and structural complexity.

These differences also likely contributed to the heightened basal level of excitatory transmission and enhanced long-term potentiation that also were observed in this study.

Another study (Park et al., 2013) modified NSCs to overexpress choline acetyltransferase (ChAT), which contributes to acetylcholine synthesis. Results:

Interestingly, this group found significant improvements in passive avoidance, Morris water maze performance, and spontaneous locomotor activity in aged mice receiving ChAT-expressing stem cells.

However, it is unknown how much the immune system plays a complicating factor in these studies.

Contribution of Different Transplanted Cell Types to Cognition

Neuronal Replacement

  • The traditional goal of NSC transplantation was to replace dead or dysfunctional neurons in neurological disorders. In the context of Parkinson’s disease, this has been disappointing, yielding only mild effects on motor movement and no significant effects on cognition.
  • Arguments can be made for transplantation of NSCs and progenitors of other cell types (like astrocytes) that might influence cognition via more indirect mechanisms. (Again, see the complicating figure).

Glial Precursor and Astrocyte Transplantation

Most studies of glial progenitor transplantation focused on spinal cord disease and injury.

One exception: Bruckner and Arendt (1992) compared the capacity of fetal brain tissue and purified astrocytes to improve ethanol-induced cognitive deficits.

  • Found that astrocytes, but not fetal brain tissue grafts, could restore memory (as measured via radial arm maze). 10
  • Bradbury et al. (1995) reported that the astrocyte-induced cognitive improvements likely resulted from altered immune and trophic activity.
  • Finally, recent developments in cell reprogramming research have identified that astrocytes may be a more useful parent cell type than the commonly used skin fibroblast (Tian et al., 2011).

Remaining Risks

  • Tumorigenesis. Because the neural stem cells replicate, they could be the seed of a tumor.
  • Delivering Stem Cells to the Brain. There is no easy way to do it.


There are two currently-known methods for adapting neural stem cells to enhance cognition:
  • increasing endogenous neurogenesis
  • directly transplanting stem cells into the brain
Further research aims to safely adapt these methods for clinical use.

Stem cells have direct and indirect influences on cognition through multiple mechanisms. There is a growing body of evidence that transplantation of neural stem cells does indeed have reliable positive effects on cognition, and is a promising method for both treating neurological disorders and improving cognition in healthy adults.

In future posts, I will examine more potential methods for inducing neurogenesis and neuroplasticity, and their potential effects on cognition.


  1. For an initial look at the ethics of neural stem cell transplantation, see Master 2007. A framework for assessing the ethics of general cognitive enhancement is here
  2. For example, Wu et al. (2008) found that neural stem cells with transgenic expression of human nerve growth factor (hNGF) implanted into rat brains showed remarkably improved capacity to integrate into host tissue, and continued secretion of neurotrophic factor over time. For more details on their hNGF-expressing neural stem cells, see my rough notes from the paper.
  3.  Along with the subgranular zone of dentate gyrus, the subventricular zone serves as a source of neural stem cells in the process of adult neurogenesis. It harbors the largest population of proliferating cells in the adult brain of rodents, monkeys and humans. – Gates (2004) 
  4.  (Ernst et al., 2014 and Spalding et al., 2013)
  5.  Ernst et al. (2014)
  6.  Raffa et al., 2006 and Staat and Segatore, 2005
  7.  reviewed by Cotman et al., 2007, Cotman and Berchtold, 2002, van Praag, 2008 and van Praag et al., 2005
  8.  (Madsen et al., 2000, Perera et al., 2007, Li et al., 2009 and Peng et al., 2008)
  9.  Belarbi et al., 2012
  10.  Bruckner and Arendt (1992)

Brain technology is sexy without neuro-bunk pseudoscience

Posted by

In one line: Technically-grounded, iterative work makes progress in the long run. Projects can be good enough to attract support and excitement without making inflated claims.

Why this matters: While talking with people about some potential product ideas (“an adaptive learning software with neurofeedback”, “a media platform that translates your brain waves into visual art”), I find my words teetering on the verge of sounding like the language of neurobunk:

“makes you smarter”
“makes you more creative”
“reads your mind”
“understands what you’re thinking”
“does MAGIC with BRAIN”

I really, really want these things to exist. It feels fun just to say them. But if we look at the actual technology, we can’t claim that it is so straightforward, or reliable, or has such pronounced effects. At least not yet. The technology is still young. Listen to your bullshit meter.

I believe we can actually make progress on down-to-earth technological objectives that target precise sub-areas of cognition. And that is the way for a cognitive technology group to make a tangible impact.

The following claims are specific, true and still really awesome:

– [A substrate] improves some aspects of working memory, such as digit span, digit manipulation and pattern recognition memory, but the results related to spatial memory, executive function and attention are equivocal. 1

– Fluid intelligence can be increased by physical activity, playing a musical instrument, making art, improving motor skills, meditation, daydreaming, getting a good night’s sleep. 2

– Neurofeedback training has effects equal to or stronger than Ritalin or other drugs at treating ADHD, for some patients. 3

– “EEG signals can be used to drive a quadcopter.” 4

Let’s examine some case studies, contrasting two kinds of groups:

– Groups that do real, awesome work creating scientifically-validated cognitive technology.
– Groups that dabble in science-related fields and attract media attention through inflated stories and feel-good nonsense.


TMS

TMS increases the excitability of neurons, enabling some subjects to complete cognitive tasks faster.

Hype article
Allan Snyder wants to make a ‘creativity cap’ that gives anyone savant-like abilities.

Claims to silver-bullet ‘cure-all’ cognitive enhancers like a creativity cap are a strong warning sign for BS.

But holy cow, TMS is still actually a safe, proven way to improve mood and cure depression in some patients. Hooray!

Brain Training

“Lumosity brain training is a simple online tool to allow anyone to achieve their full potential”

“How to add 2.75 IQ points per hour of training”

Those just sound silly.
Brain training exercises make people better at solving specific games that test working memory, but unfortunately do not translate to other tests of general intelligence.

Conclusion: We want to develop tools that are so good that they speak for themselves, without relying on inflated soundbites to attract attention.

The community we attract will be more scientific and rational – the types who are also the most likely to support our projects or partner with us.



Which components of human cognition to target first?

Posted by

What are things that we can improve about a brain, that will help make sure that potential future intelligence-amplified humans are nice?

This post is an initial examination of mental components that we could target to improve the ethical reasoning of intelligence-amplified humans. It is a rough sketch and will probably be updated as I understand more.


Say that we are developing tools that can change and improve how different parts of the brain function.

Say that we also have a bit of time before these tools come around, so we can try to apply foresight to figure out which parts of the human brain we want to target for improvement first. 1

It would be useful to know if there are particular abilities that would be strategic to target first, and use this to guide the development of intelligence-amplifying neurotechnology.

Technologies that could do this include iterated embryo selection, gene therapy to increase neurogenesis or neuroplasticity, and an advanced brain-computer interface like neural dust (there may be others that are unknown to me or yet-to-be-imagined). (A future post will explore these in more detail).

Mental abilities that could be improved include:  

  • empathy
  • ethical reasoning or philosophical success
  • metacognition
  • intelligence / problem solving ability

“Ethical reasoning” requires some clarification. Assuming the development of IA technology, we want the augmented humans to make good decisions, to be aligned with human values, or the set of values that humans would have if they thought long and hard about it. 2

Let’s start with this assumption:

It is strongly preferred to increase a human’s moral reasoning, or “ability and commitment to develop good goals”, before or at least at the same time as increasing a human’s intelligence, or “ability to accomplish a set of goals”.

So, how do we accomplish this for a human?

Some hypotheses:

Increase empathy? It could be that through experiencing a System-1 sense of empathy, feeling and relating to another person, an augmented human is more inclined to take into account the preferences of other people. But it could be that those preferences, even when well understood, only take up a fraction of the intelligence-amplified human’s values (an example of successful perspective-taking but low empathy). 3

Increase metacognition? By examining one’s own thought processes, noticing uncertainty, developing a tight update loop between prediction and observation, one may more quickly be able to develop positive goals and make good decisions. 4
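One concrete form of that prediction-observation loop is keeping a calibration log and scoring it; the Brier score is a standard way to do this. A minimal sketch in Python, where the logged forecasts are hypothetical examples:

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    0.0 is perfect; always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical log of (stated probability, outcome) pairs:
log = [(0.9, 1), (0.7, 1), (0.8, 0), (0.6, 1), (0.95, 1)]
print(round(brier_score(log), 3))
```

Tracking this number over time gives the tight feedback loop the paragraph describes: you notice systematic over- or under-confidence and adjust.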

Perhaps moral reasoning will naturally increase with intelligence? It could also be the case that moral reasoning (in the sense of knowing what is best to do) will only be improved through intelligence. Morality is hard to figure out; it’s more complicated than “do what feels nice”, because human intuition isn’t equipped to deal with scope. Some progress in moral reasoning is made by high-IQ philosophers and economists, and if we had more of the same working to figure out morality, we would have a better idea of what to do with augmented intelligence.


Some followup questions:

  • What is the correlation between rationality and intelligence? If they are correlated, that’s great, but we’re not sure yet. 5
  • What are possible failure modes I haven’t considered here? 6
  • Perhaps “intelligence” is too broad a term. Human intelligence is composed of many interacting submodules. What are the most relevant submodules, and can we increase them independently of other submodules?
    • It could be the case that some component of intelligence is easiest to augment first, such as long-term memory. But this will probably not lead to improved ethical reasoning.


  1. This is similar to asking: how does the value-loading problem in AI change when applied to humans? I want to understand what is different about starting from a human brain – what opportunities we may have.
  2. Learning what to value is a problem in itself: see indirect normativity, and one approach to indirect normativity, CEV.
  3. More has been written about the neural bases of empathy; some of this work incidentally comes from the lab where I used to work.
  4. See and 
  5. See Project guide: How IQ predicts metacognition and philosophical success and Stanovich – Rationality and Intelligence
  6. One way to find more failure modes for human augmentation would be to look into sci-fi novels. Not that we assign huge weight to these – for any given specific sci-fi scenario I’d assign a <1% probability of it happening as predicted – but they could give us classes of failure modes to think about.

Research Agenda, Round 1: an initial survey

Posted by

You can view this post in the html text below, or the nicely-formatted pdf here: Research Agenda Version 1: Initial Survey.pdf

This post is a list of questions about various technologies that send information into or out of a brain, or change the way the brain processes information. By defining these questions, I hope to develop a better understanding of the large possibility space in modern brain technology. 1

Background Motivation

One potential way to increase human effectiveness would be to improve the functioning of certain parts of the human brain. 2 We could examine the input, processing, and output stages of the information flow, and look for ways to understand, improve, and extend each of those stages. Eventually, we may be able to create tools that improve the parts of the brain that make good decisions, solve tough problems, invent new ideas, understand moral reasoning, or experience empathy. If human brains became better at such mental abilities, I believe it would have positive ripple effects into many other areas of human activity.

 Breadth-First Approach

This article examines many possible pathways to that target. The initial approach is to take an even survey of the potential tools to add to our toolbox. Our investigation will hold off on getting attached to specific solutions, or on discarding broad classes of solutions for lack of known specifics.

It’s organized in rough order of levels of information: perception, language, motion, physiology, cellular biology; then macro- and micro-circuit systems in the brain. 3 Finally, there is an initial list of items to be understood in ethics and strategy.

Giving shape to a possibility space

A larger goal in this document is to provide a starting framework for researching technologies at multiple levels. I think that answers to these questions will help us start to navigate and define the possibility space in brain technology.

The questions and categories in this post don’t form a complete survey – they’re the ones that are fruitful for conversation, conservative enough to post on the internet, and known to me at the time of writing. 4

Over time, I may return to this post and continue to add more specific questions or categories.




Information – Software [Input + Output]

What are core design principles for software that harnesses group intelligence?

  •         Prediction Markets
  •         Large-scale citizen science, e.g. Eyewire, Foldit
  •         Wikis/Forums
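For the prediction-market bullet, one well-studied design principle is Hanson’s logarithmic market scoring rule (LMSR), an automated market maker that turns individual trades into a running probability estimate. A minimal sketch; the liquidity parameter b and the trade sizes are arbitrary choices:

```python
import math

def lmsr_cost(q, b=100.0):
    """Hanson's LMSR cost function.
    q = outstanding shares per outcome; b = liquidity parameter."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Current implied probabilities for each outcome."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

def buy(q, outcome, shares, b=100.0):
    """Cost a trader pays to buy `shares` of `outcome`; moves the price."""
    q2 = list(q)
    q2[outcome] += shares
    return q2, lmsr_cost(q2, b) - lmsr_cost(q, b)

q = [0.0, 0.0]              # fresh binary market: implied 50/50
q, cost = buy(q, 0, 50)     # a trader bets on outcome 0
print([round(p, 3) for p in lmsr_prices(q)], round(cost, 2))
```

The design principle it illustrates: the market maker always quotes a price, traders profit only by moving prices toward their true beliefs, and the group estimate is just the current price vector.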

Perception [Input]

“Immersive Media” = Virtual Reality + Gesture Tracking + Haptic Feedback

Virtual Reality

  • Are there good examples (prototypes, concept sketches, or from science fiction) of using Virtual Reality or Augmented Reality for the following:
    • Information visualization
    • Data analysis
    • Life and physical science education (biology, chemistry, physics)
    • Math education (linear algebra with actual 4D vectors)
    • Medical operations
    • Clinical psychology
    • Rationality training / real world cognitive bias or puzzle solving exercises
    • Formal research experiments in social psychology or perception
  • Are there efforts to combine avatar control with natural language processing and generation, to create a platform for artificially intelligent character avatars? This could be a service/engine for building many kinds of games / applications.


  • Timelines for contact lenses, optical projection.

Natural Language Understanding and Generation [ Input + Output ]


  • Given a long piece of text, ability to generate a natural-sounding summary of most important ideas of the text.
  • Given a psychographic profile, ability to generate a simple story from the perspective of the character.
  • Upcoming milestones.

Motion Sensors [Output]

  • Gesture Tracking: Kinect, Leap, etc.
  • Worn on body: Myo

Motion Actuators [Output]

  • After refining/solving the vision problem (Rift C1?), haptic feedback will be the bottleneck to immersive VR.
    • Alternative/creative attempts beyond the glove or suit? Air compression, sound waves, nanomaterials?
  • Have there been studies on using haptic feedback for mood regulation, neuroplastic training in healthy adults to develop extra senses, or just “data sensualization”?


External Bio Sensors [Output]

  • Wearables – ECG, Respiration, etc.
    • Low information bandwidth, high amount of maker activity already.

Question: Biosensors:

Is it the case that (1) combinations of today’s external sensors (EEG, ECG…) along with Virtual Reality/ haptics can be used in radically different ways? Or is it the case that (2) their applications are confined to ‘meditation / neurofeedback / focus training’, and more advanced types of applications must wait for smaller BioMEMS or implantables? Right now, (2) seems more likely given the amount of people exploring vs. amount of actually new potential applications.

Question: Parasympathetic Nervous System – Regulation

What studies show the benefits of moderating physiology on cognition (as can be done with current biosensors)? Can this actually help people focus better? What is the highest recorded percent increase in concentration, creativity problem solving or related metrics in healthy adults, using biofeedback?


Bio-MEMS [Input + Output]


  • Can BioMEMS also act as actuators/controllers/builders (or are they mostly sensors)?

Bioengineering [Input, Processing, Output]

  • What types of genes, and how many, are addressable with modern gene therapy?
  • What kinds of neural tissues have had success with stem cell therapy?
  • Exploratory engineering: the hippocampus continuously generates new cells (neurogenesis). Could an increase in the rate of hippocampal neurogenesis improve its higher-level performance (say, spatial learning)? An initial study shows the brain is resilient to decreased neurogenesis, but the door remains open to experiments that increase neurogenesis.

Synthetic Bio [Input + Output]


Chemicals [Processing]

(There are a number of chemicals, both common and less common, that affect mood and mental state. I do not necessarily believe they should be used, but I find it useful to understand the principles behind their effects.)

  • Are there studies on the combination of chemical stimulants with macro-scale stimulation, e.g. TMS?
  • What about with immersive media, virtual reality, video games, group therapy circles, CBT, or other high-level psychological interventions?

The following sections are organized according to the general types of neuroengineering technologies in Ed Boyden’s MIT class.

Brain – Macro Circuit Reading [Output]

Noninvasive mapping and measurement.

  • PET, photoacoustic, MEG, EEG, fMRI, infrared imaging, x-rays.

Brain – Macro Circuit Stimulation [Input, Processing ]

Macrocircuit control.

  • Magnetic, electrical, ultrasonic, chemical, pharmacological/pharmacogenetic, thermal.

Brain – Micro Circuit Reading [Output]

Development of invasive mapping and measurement.

  • Electrodes
  • Nanoprobes, nanoparticles
  • Optical imaging and optical microscopy
  • Endoscopy
  • Multiphoton microscopy, electron microscopy, light scattering
  • Bioluminescence

Brain – Micro Circuit Stimulation [Input, Processing ]

Development of microcircuit control.

  • DBS, infrared optical stimulation, optogenetics
  • Nanoparticle-mediated control, uncaging
  • Signaling control

Ethics and Strategy

  • What is an appropriate target demographic for different levels of brain technology?
    • For discussions specific to cognitive enhancement, Cognitive Enhancement (Hildt and Franke, 2013) offers an excellent, detailed treatment of the ethics of cognitive enhancement from multiple viewpoints; its introductory chapter provides a good overview.
  • Examine the relationship between neuroscience, intelligence amplification, and artificial intelligence safety.
    • Likelihood of neuro research to contribute to neuromorphic AI (seems likely).
    • Likelihood of various fields in neuroscience to lead to amplification of various forms of intelligence.
      • Opportunities to bolster moral reasoning / empathy in parallel with or before other forms of intelligence. (This would become very important as the strength of the intelligence amplification (IA) technology increases).
      • Amount of overlap between research contributing to intelligence amplification and research contributing to neuromorphic AI (some research areas may be completely separate and safer to pursue).
    • Likelihood of intelligence amplification to lead to improvements in AI safety (seems unlikely by itself, better chance when combined with improved moral reasoning / rationality).
    • Are there feasible ways to make IA tools available only to select research scientists (such as those advancing technology safety)?
      • Advancing activity in all fields in science and technology equally could have a neutral or negative effect, because of the high risks from some emerging technologies.
    • Overall benefits or costs of IA neuro research.
    • See also: Luke Muehlhauser on Intelligence Amplification and Friendly AI
  • Estimating the actual value of technological development, and the replaceability of a particular project.
    • If one desires to make a large social impact, one must take into account the expected value of building a particular technology when (1) very similar things could be made by others a few years down the road, and/or (2) its functionality may eventually be replaced by more advanced technologies. (Example: creating wearables now vs. personally working on BioMEMS now vs. waiting for BioMEMS to arrive while doing something else.)
      • Consideration: The value of the project is the value of having the information or use of the tool sooner than we would have otherwise. 5
        • However, counterfactuals (and relative impact) are hard or impossible to compute well.
        • There may be some arguments for why this is not a well-founded concern, or even if it is well-founded, that it may not be practical to give it a lot of weight. For now, I believe this consideration does matter when determining what to prioritize.
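The consideration above — that a project’s counterfactual value is roughly the value of having the tool sooner than we otherwise would — can be written as a toy model. All quantities and the function name here are hypothetical placeholders of my own, not estimates:

```python
def earlier_availability_value(annual_value, years_sooner):
    """Toy model: when an equivalent tool would eventually be built anyway,
    a project's counterfactual value is roughly the benefit delivered during
    the extra years the tool exists because of you (units are arbitrary)."""
    return annual_value * years_sooner

# Hypothetical: a wearable delivering 10 units/year of value, built 3 years
# before an equivalent would otherwise appear (and superseded after that),
# contributes roughly 30 units rather than its full lifetime value.
value_now = earlier_availability_value(annual_value=10, years_sooner=3)
```

The hard part, as noted above, is that `annual_value` and `years_sooner` are counterfactual quantities that are difficult or impossible to estimate well.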


Notes 6

  1. If you are interested in answering some of these questions, and are meticulous enough to read footnotes, you might be an excellent person to write or coauthor future posts on this blog. Drop me a line if that sounds interesting to you. 🙂
  2. It is my working hypothesis that strategic implementation of technologies that improve brain function would make humans more effective at the activities that matter (and would otherwise have a net positive effect), but this is not guaranteed. The second section on “Ethics and Strategy” offers some initial reasons this might not be true.
  3. A common ordering of levels in neuroscience is Cognitive > Systems > Cellular > Molecular.
  4. Disclaimer: I will acknowledge some potential reasons not to publish a blog post like this one. A detailed discussion about creating certain brain technologies could pose an information hazard (specifically, idea or attention hazards). Another potential pitfall is that it might distract me or others from more important activities we could be doing otherwise (opportunity cost). Because the topics are relatively well known and the blog has little social momentum, these concerns don’t worry me for now, but they may be revisited in the future.
  5. This view is described in the first few pages of Chapter 15 of Nick Bostrom’s “Superintelligence.”
  6. Thanks to Madeeha Ghori for helpful feedback on this post.