Monthly Archives: May 2015

Equipping future humans to solve future problems: support for intelligence amplification



Support for intelligence amplification

Why is it important to find ways to improve humans’ ability to learn, think, and perform ethical reasoning?

Here is a supporting argument for intelligence amplification that I’ll call “Increasing the number of future decisions that are made by enhanced decision-makers”. I haven’t seen it formalized before, so I’ve written it out here for reference. I find it appealing, but I might be missing important perspectives that would suggest otherwise. What do you think?

Increasing the number of future decisions that are made by enhanced decision-makers

It could be the case that human brains, no matter how exceptional, are not complex enough to figure out the best long-term course for our species. Technological power will develop faster than our ability to ensure its safety.

One strategy we can follow is to develop further insight about what to value (i.e. through cause prioritization research), and build our social and technical capacity to pursue those values as we discover them. One way to build social and technical capacity is through improving human cognition.

As some humans develop more augmented cognition, they will be in a much better position to steer technological development in a positive direction. Therefore, one of the best investments we can make now is to augment cognition as fast as possible – increasing the number of future decisions that are made by enhanced decision-makers.

There are several steps:

(1)

As a general strategic consideration, if you aren’t in a good position to answer a question or accomplish a goal, the next best move is to put yourself in a position where you are much more likely to be able to answer the question or accomplish the goal.

You can do this by getting more information and a better definition of the problem, and by building up resources or abilities that help you answer it. 1

(2)

The future contains overwhelming problems and enormous opportunities. Like it or not, we are involved in a high-stakes game with the future of earth-originating life. There is no guarantee that the steps to solve the problems or reach those opportunities will be calibrated to our species’ current level of ability. In fact, some seem extremely difficult and beyond our current level of skill. 2 With so much hanging in the balance, we really, really need to radically improve humanity’s capacity to solve problems. 

(3)

One way to put ourselves in a better position is to increase human cognition. In this context, I intend “cognition” to refer to a very broad set of things: “the set of information processing systems that we can use to solve problems and pursue opportunities.” 3

Some avenues for increasing human intelligence include collective intelligence (e.g. organizations, incentive structures, discussion platforms), individual tools (e.g. mind mapping), psychological methods (e.g. education and rationality training), and neural activity (e.g. neurotechnology).

As some humans develop more augmented cognition, they will occupy a much better position to steer the future in a positive direction. 4

=>

Therefore, one of the best investments we can make now is to augment cognition as fast as possible – increasing the number of future decisions that are made by enhanced decision-makers. (This does not mean we ought to put off working on those important problems now in the hopes that we can completely “pass the workload” to future augmented humans. Nor is it a claim that, if we reach extraordinary success at intelligence amplification, other problems will be immediately solved. This is merely to support intelligence amplification as one of many possible valuable investments in the longer-term future.)

I welcome commentary on (1), and am fairly confident of (2). The main part I want to focus on is (3).

Questions for further research

Is (3) (increasing human intelligence) a reliably good way of fulfilling the action proposed in (1) (put ourselves in a better position to solve problems)?

If so, which realms of intelligence amplification (say, out of collective intelligence, individual tools, psychological methods, and neural activity) hold the most promise? 5 


 

Notes

  1. See more on my strategy for navigating large open possibility spaces: Actions to Take When Approaching a Large Possibility Space.

    Question: Beyond gathering information, skills, and resources, are there other actions you can do to better put yourself in a position to answer the questions or accomplish the goals?

  2. Problems: most importantly, existential risks. See this introduction to existential risk reduction, and a more detailed description of 12 prominent existential risks.

    Opportunities: Among the opportunities that we get to pursue, I’m pretty jazzed about dancing in fully-immersive VR, developing better control over what genes do, figuring out more about where space came from, and the potential to explore mindspace while implemented in another substrate.

  3. See Engelbart’s framework for augmenting the human intellect, which I will expand on in a future post.
  4. Or, it could be that the gains in intelligence that we develop are not enough, but the intelligence-augmented humans can then develop a more advanced IA technology. This chain of successive upgrades could go on for a number of iterations, or could lead to recursive self-improvement of the initial intelligence-augmented humans. In this case, we would be “Equipping future humans to equip themselves to make better decisions.”
  5. A final note: This blog aims to provide a more technical, actionable list of projects for “upgrading cognition”, so we can spur the necessary scientific and technological development while we still have time. This post is partially motivated by the urgency in Bostrom’s unusually poetic writing in “Letter from Utopia”:

    Upgrade cognition!

    Your brain’s special faculties: music, humor, spirituality, mathematics, eroticism, art, nurturing, narration, gossip!  These are fine spirits to pour into the cup of life.  Blessed you are if you have a vintage bottle of any of these.  Better yet, a cask!  Better yet, a vineyard!

    Be not afraid to grow.  The mind’s cellars have no ceilings!

    What other capacities are possible?  Imagine a world with all the music dried up: what poverty, what loss.  Give your thanks, not to the lyre, but to your ears for the music.  And ask yourself, what other harmonies are there in the air, that you lack the ears to hear?  What vaults of value are you witlessly debarred from, lacking the key sensibility?

    Had you but an inkling, your nails would be clawing at the padlock.

    Your brain must grow beyond any genius of humankind, in its special faculties as well as its general intelligence, so that you may better learn, remember, and understand, and so that you may apprehend your own beatitude.

    Oh, stupidity is a loathsome corral!  Gnaw and tug at the posts, and you will slowly loosen them up.  One day you’ll break the fence that held your forebears captive.  Gnaw and tug, redouble your effort!

Notes from the 2015 Neurogaming Conference



Last week, I presented a poster at the 2015 ESCoNS Summit 1 and Neurogaming Conference. 2


Here are my main takeaways from the speakers and presenters. These are rough notes, loosely organized by concept.

This post offers some unusual business models and design advice for projects that involve neurofeedback, virtual reality, biodata processing, and/or near-term artificial intelligence applications in entertainment and health industries. 3

Business models for Biosensor devices and applications

Tim Chang, Mayfield: Vitamin gummy bears are an interesting deployment strategy: they’re good for you, but they’re also fun. For the wellness solutions you want to deliver, you can use games and storytelling, and make them visually appealing.

How to create a “performance enhancing” service? Here is a sort of playbook of how to pull it off:

– A device to track activity
– AI software / cloud to process it and get insights
– A human coach / inspirational figure
– A social support community

I’m a big believer in this hybrid model. You can have machine learning and AI but you also need humans. You’re not scared of notifications on your watch. You need a coach to kick you in the butt, and a sympathetic peer group.

A promising business model is the “Device as a Service”: the service is attached to a device and comes with subscriptions, an app store, coaching, etc. For example, you could offer a fitness tracking service for $100/year that comes with a free device and an app store where developers can add upgrades.

“Future media will consist of the buying and selling of experiences.”

Augmented Reality – challenges and potential

Brian Selzer, Daqri:

What makes AR different from VR, and how important is that difference? The key word for me is context. VR is an experience –  a context you can completely control. That’s why we’ll see the first big successes come out with VR, with games and experiences.

When you open up the experience with AR, you need to relate the application to the environment. That is very challenging. You need to deal with the physics of light and object recognition. The difference between AR and VR is the level of transparency. Eventually, I suspect that we’ll be able to tune the transparency dial in headset displays: to go from complete transparency (AR) to complete immersion (VR).

The VR Industry is about to explode

Walter Greenleaf, CSO, Pear Therapeutics: The VR industry is about to boom. Smartphones will be the VR platforms of the next few years. Everyone who has a smartphone 5 years from now will be using a virtual environment.

Fortunately, because we’ve seen this technology coming, the principles and heuristics for VR have been worked out. The enabling technology is here. Now we can slap on the paradigms that we’ve spent a long time developing in research labs.

Sensors everywhere

We will always have more data than we know what to do with

Stan Williams, Neuromorphic Computing Lead, HP Labs: We will probably always have more data than we know what to do with, because we will continually add more sensors to the world. There are two levels to that:

  • The more we learn the more we figure out there is to learn.
  • The horizon of what we think is possible is receding from us faster than we’re moving forward.

The simple estimates, made a few years ago, of how much data we would be crunching are probably off by a few orders of magnitude. The industry will need to find a way to integrate information from a huge amount of sensory data, plus a large amount of processing at many different levels.

AI as a personal trainer

Liesl Capper, Leader, Watson Life, IBM: Human interaction with AI personalities is enabled by the “willing suspension of disbelief.” If you put enough stuff together that gives the illusion of human intelligence, people will accept it because they want to believe that someone is taking them on and caring about them.

Tim Chang, Mayfield: (On AI-assisted living) AI transforms the feedback loop and the ability to model anything. The more I use these neurogaming systems, the more the cloud is forming a model of me.


 Design Projects and Business Ideas

Here are two project ideas that occurred to me at the conference. If you succeed with one of these, you’re welcome. Email me if you’re interested in this kind of thing so I can connect you with other people.

 “Stickers:” Portable Biosensor Hardware Design

One of the main roadblocks to integrating realtime biodata with other applications is that the design of the EEG or heart rate monitoring hardware is too big and noticeable.

You could enable more casual biodata monitoring with “wireless biosensor stickers”: one for reading EEG over the frontal lobe, one for heart rate variability, and one for skin conductance. They could wirelessly transmit data to a cell phone. Technical challenges would include battery life for the stickers and the ability to transmit a strong enough signal. Design challenges would include creating a sticker that contains wires and a small battery but doesn’t look goofy on your forehead.
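On the software side, the phone app would mostly be buffering and batching sensor samples. Here is a minimal sketch of that idea, assuming hypothetical sensor labels and a 250 Hz sample rate (this is my own illustration, not any existing device’s API):

```python
import time
from dataclasses import dataclass, field

@dataclass
class Sample:
    sensor: str       # e.g. "eeg_frontal", "hrv", "eda" (hypothetical labels)
    timestamp: float  # seconds since epoch
    value: float      # raw reading in the sensor's native units

@dataclass
class StickerTransmitter:
    """Buffer samples and send them in batches so the radio can sleep
    between transmissions (a common way to stretch battery life)."""
    batch_size: int = 64
    buffer: list = field(default_factory=list)

    def add_sample(self, sample: Sample) -> None:
        self.buffer.append(sample)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if not self.buffer:
            return
        # Stand-in for a real wireless link (e.g. Bluetooth Low Energy).
        print(f"transmitting {len(self.buffer)} samples")
        self.buffer.clear()

# Usage: simulate one second of a 250 Hz EEG sticker.
tx = StickerTransmitter()
for _ in range(250):
    tx.add_sample(Sample("eeg_frontal", time.time(), 0.0))
tx.flush()
```

The batching trade-off (latency vs. radio-on time) is exactly where the battery-life challenge mentioned above would show up.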

Virtual Reality Anything

The KZero report on the consumer virtual reality market estimated that the VR market would take in $90m in revenue in 2014, $2.3bn in 2015, $3.8bn in 2016, $4.6bn in 2017, and $5.2bn in 2018: “Cumulatively across the period of 2014 to 2018 we have forecasted the consumer virtual reality market to be worth $16.2bn.” (http://www.kzero.co.uk/blog/consumer-virtual-reality-market-worth-13bn-2018/)

Note the projected leap from $90 million to $2.3 billion between 2014 and 2015. Oculus, Samsung, and possibly others are each planning a consumer VR headset release in 2015. Since this report was released in early 2014, I suspect the projections should be pushed back by about a year.

Even stupid apps will probably get a lot of downloads while the rest of the developer world realizes VR is a thing and catches up.

Potential VR Design Projects: If I had a Unity developer and 6 months, and I was optimizing only for earning money within the next 18 months, I would aim to create 4-5 stupidly simple games for the mass market to play on a VR cell phone.

If I wanted to feel like the product itself was slightly useful for the world, but still earn money, I would explore applications in psychotherapy, social skill training, and data visualization. Email me if you have ideas or want to connect with others who are interested in therapeutic, educational, or dataviz applications of VR.


 Notes

  1. “The Entertainment Software and Cognitive Neurotherapeutics Society (ESCoNS) is the premier academic society for scientists and game researchers who are at the forefront of researching novel ways to develop scientifically valid neurosoftware for the treatment of brain disorders”.
  2. The Neurogaming crowd is an eclectic mix of brain-training research from academia, EEG headset companies, other biosensor hardware makers, virtual reality developers, psychotherapists, and educators. It’s very much in the vein of the more immediate potential applications of (what I’ve described as) “cognitive technology.”

    The word “neurogaming” sounds a lot like hype-ish neurobunk, and is used as a catch-all term for a large collection of biosensor + media technologies. The word “cognitive technology” describes a similar set of technologies with applications beyond gaming, which is why I prefer it.

  3. A few notes:

    – Sentences that are not in quotation marks are my own paraphrasing of the speaker.

    – Note that these are comments from people selected for being good speakers at a tech conference, so they are backed by the speaker’s experience and perspective rather than by research or other forecasting methods.

    – I put more weight on the business advice, and varying amounts of weight on the tech design or tech forecasting comments, depending on the speaker.

Responsible Brainwave Technologies workshop notes



Last week, we joined the inaugural workshop for the Center for Responsible Brainwave Technologies.


Top row: John Chuang (U.C. Berkeley I-School), Brian Behlendorf (EFF), Stephen Frey (Cognitive Technology), Joel Murphy (OpenBCI), Conor Russomanno (OpenBCI), Dirk Rodenburg (CeReB), Yannick Roy (Neurogadget), Johnny Liu (Neurosky). Bottom row: Gary Wolf (Quantified Self), Ramses Alcaide (U. Michigan), Serena Purdy (CeReB), Ariel Garten (Interaxon), Ari Gross (CeReB), Francis X. Shen (U. Minnesota Shen Neurolaw Lab)

We did a “deep dive” into several of the ethical issues in the near-term development of brainwave technologies.

Right now, here are some of the most salient issues with consumer-grade brainwave technology:

Lack of consumer awareness

  • People overestimate the capability of the technology (“can this read my mind?”)
  • People underestimate the capability of the technology, and release their personal brainwave data without understanding how it could be used

Consent to collect brainwave data

  • Monitoring the cognitive and emotional states of employees in transportation, construction, office work, etc. without consent.
  • Monitoring the cognitive and emotional states of people with physical or mental disabilities

Unintended uses of public brainwave data

As described in CeReB’s 2014 white paper:

The situation is analogous to that of DNA data. Information – such as the presence or absence of a predisposition to a particular illness – that is currently captured in DNA data may not as yet be identified or exploited. It is entirely conceivable that the data now available will eventually provide a rich source of information about a broad range of genetically endowed potentialities and predispositions. Knowledge of some of these by third parties might be benign, while knowledge of others may provide third parties with the power to do harm though discrimination and/or unsolicited intervention. The same may well be true of brainwave data.
We believe that the data generated, stored, and used by brainwave technology must be handled in a manner that reflects its current and potential sensitivity, both as a personal “signature” and as a conveyor of information about an individual’s capacity or level of functioning.

Summary table from CeReB’s initial white paper (2014).

Considerations for technology developers

A notable number of the problems in the summary table above are (at least partially) addressed by:

  • users understanding what the data is
  • users maintaining control over how their data is used

It’s quite nice when we can address problems from the technology side, rather than going through public policy. The Data Locker Project, MIT’s OpenPDS, and Samsung’s SAMI are some nice projects that are working to give users more control over how their data is used, which could be generally applied to biodata. Figshare is designed specifically for sharing data with researchers.

What’s next

As brainwave technology becomes more advanced, CeReB will examine the resulting social and ethical issues, and provide resources for those working at a technological or policy level.

If you are interested in participating in future publications with CeReB, contact Dirk Rodenburg.

Biofeedback and game-based learning: potential applications



How can biofeedback devices and video games be combined to create interactive learning experiences? What is game-based learning with biofeedback? 1

Biofeedback can optimize game content based on the user’s emotional state

Here’s how I would describe the basic premises of “game-based learning with biofeedback”:

– Say we want people to learn the content in a course in math, biology, economics, programming, etc. The audience could be students in a traditional K-12 school or university, or adults who want to learn for research or personal enrichment.

– In the past, the content was most often delivered through textbooks. Currently, there is a large effort to deliver educational content through well-designed games and interactive media, and ed-tech researchers are working to find ways to make those educational programs more effective. 2

– No matter how great your content and media are, there are still other factors that affect how much you learn; these factors include the learner’s motivation, attention, and general emotional state. A human teacher can often detect these and adjust his or her interaction with the student accordingly.

– It turns out that neuroscientists and physiologists know a decent amount about how to measure and interpret a person’s emotional state based on physiological data like EEG, heart rate variability, and Galvanic Skin Response. Recently, many consumer-grade biosensor devices have been developed and released.


Two use cases for biofeedback + gaming + learning

1) Improving cognitive control (direct feedback)

Adam Gazzaley, director of the Neuroscape lab at UCSF, built a custom video game designed to increase cognitive factors (like attention span, multitasking ability, memory, and so on) in older adults. Results showed that 12 hours of playing the game over a month significantly increased the cognitive function of a test group of 60- to 85-year-olds compared to a control group. (See the paper “Video game training enhances cognitive control in older adults,” Anguera et al., 2013.) A commercial example of this is neuropl.us, which does neurofeedback focus training for students with ADHD. 3

2) Adapting video game content to the user’s emotional state (indirect feedback)

We can measure the learner’s engagement and emotional state, and feed that information back into the educational game. The game can speed up, slow down, change communication styles, or otherwise interact with the student in a more personalized way to sustain their attention and motivation. 4
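To make the idea concrete, here is a minimal sketch of such an adaptation loop (my own illustration, not any particular product’s algorithm): compute a crude engagement index from EEG band power and heart rate variability, then nudge the game’s difficulty toward a target engagement level. The signal names, weights, and thresholds are all assumptions.

```python
def engagement_index(theta_power: float, beta_power: float, hrv_ms: float) -> float:
    """Crude, hypothetical engagement score in [0, 1]: a higher beta/theta
    ratio and moderate heart rate variability are read as 'engaged'."""
    ratio_score = min((beta_power / max(theta_power, 1e-6)) / 2.0, 1.0)
    hrv_score = max(0.0, 1.0 - abs(hrv_ms - 60.0) / 60.0)  # peaks near 60 ms
    return 0.7 * ratio_score + 0.3 * hrv_score

def adapt_difficulty(difficulty: float, engagement: float,
                     target: float = 0.6, gain: float = 0.1) -> float:
    """If the learner is comfortably engaged, ramp difficulty up a little;
    if engagement drops below the target, ease off. Clamp to [0, 1]."""
    return min(max(difficulty + gain * (engagement - target), 0.0), 1.0)

# Usage: one tick of the game loop with made-up sensor readings.
difficulty = 0.5
e = engagement_index(theta_power=4.0, beta_power=6.0, hrv_ms=55.0)
difficulty = adapt_difficulty(difficulty, e)
print(round(e, 2), round(difficulty, 2))
```

A real system would smooth the signals over time and validate the index against behavior, but the feedback structure would look roughly like this.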

One commercial example of this is QNeuro: “It’s an educational game called Axon Infinity: Space Academy with a futuristic, outer space style in which you learn math skills and put them into play in missions when fighting aliens. The catch here is that if you use an EEG, the game can adapt and gets more difficult based on brain readings.”

Summary

Game-based learning with biofeedback looks for ways to take advantage of the development of consumer-level biosensors and video games for learning. 5


Notes

  1.  Context: Last week I presented a poster at the 2015 Neurogaming Conference. The Neurogaming crowd is an eclectic mix of brain-training research from academia, EEG headset companies, other biosensor hardware makers, virtual reality developers, psychotherapists, and educators. It’s very much in the vein of the more immediate potential applications of (what I’ve described as) “cognitive technology.”

    The word “neurogaming” sounds a lot like hype-ish neurobunk, and is used as a catch-all term for a large collection of biosensor + media technologies. The word “cognitive technology” describes a similar set of technologies with applications beyond gaming, which is why I prefer it.

  2. See Stanford’s Learning Analytics Lab and Education’s Digital Future.
  3. I haven’t seen reviews of efficacy of neuropl.us – I’ve only encountered them as an emerging player in the ADHD neurofeedback market.
  4. Other examples outside of education include Nevermind, which increases the scariness of a videogame as it detects increasing levels of stress, and NeuroMage, where you can use your “level of attention” to….cast spells and stuff.
  5. Of course, there are still huge challenges in correctly interpreting the data, and turning it into insights that are actually useful enough within the video game to make it worth attaching sensors to your body.

Increased learning and memory through neurogenesis has a likely upper limit: a Q&A with Neurocrates



This post is a question-and-answer conversation with an imaginary scholar named Neurocrates.

It uses the Neurotic Method.


Stephen: Hello, Neurocrates.

Neurocrates: You named me that just to make a pun about methodology, didn’t you?

S: Maybe. Thank you for visiting here today. I understand that you study neurogenesis and its effects on cognitive performance.

N: Yes. Take what I say with a grain of salt.

S: Ok. Let’s start with the essentials. What is neurogenesis?

N: Neurogenesis is the creation of new neurons.

The brain has stem cells, just like other parts of the body, and new neurons come from neural stem cells. The neural stem cells produce a handful of progenitor cells, which then divide many times to amplify the population of new neurons. These neural stem cells are found in the olfactory bulb (which is responsible for your sense of smell) and the dentate gyrus of the hippocampus (which is involved in learning and memory).

Neurogenesis is interesting to researchers because it has been shown to cause significant increases in learning, memory, and cognitive performance. When we increase neurogenesis in normal rats, they become much better at solving puzzles and remembering things. 1

S: Okay, so we have a continual stream of new neurons that is generated from neural stem cells. What happens to these neurons as they mature and integrate into the existing neural network?

N: As these new neuron cells mature and begin to integrate into the network of surrounding older cells, about 80% of them are “pruned” and die off intentionally. This pruning is how a functional network is established – the neurons that survive are those that receive the most relevant data, and are activated the most times.

You can compare the formation of the brain’s functional network to sculpting a statue out of a block of stone: the network is what is left after carving out a lot of raw material. The remaining raw material – neurons, in this case – is what retains meaning. The function of the network is to parse information meaningfully. When neurons receive input, they do small parts of pattern detection, like finding edges. The most salient information is chiseled into the neural network, after lots of other less relevant data has passed through.

So the more neurons you have, the more features you can represent. I’ll give you a simple example. Here are three pictures of a toy dinosaur that are 50, 150, and 250 pixels across. You can see how pictures with more pixels can represent more details of the toy dinosaur.


S: The dentate gyrus in the hippocampus is one of the most dense cell layers. But new neurons are continually being added to it. How does it allow for new representations to integrate into a network that is already tightly packed?

N: Here’s one nifty thing about neurons:

After they have matured and incorporated into the network (about four weeks after they are generated), they are more excitable for a short period of time. If there is new data entering the network that isn’t precisely represented by the old neurons, these newer, more excitable neurons are more likely to come to represent it. They will be recruited more often than their neighboring cells, and will form the basis for a unique sparse representation of the new data.

Therefore, at the behavioral level, there is a continually renewed critical period for the small set of new cells in the hippocampus that have “come of age” and are ready to receive their representations from the outside world.

This also explains part of how we have temporal segregation of memory: I can roughly distinguish events that happened yesterday from events that happened two weeks ago, partially because there were new neurons that had ‘come of age’ and were representing the episodic information in different places.
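As a toy illustration of that recruitment dynamic (my own simplification, not a biophysical model): give recently generated units a temporary excitability bonus, and they tend to win the competition to represent novel inputs.

```python
import random

def recruit(neurons, stimulus_novelty):
    """Pick the neuron that responds most strongly to a stimulus.
    Each neuron is (age_in_weeks, tuning_match); neurons in an assumed
    4-6 week critical window get a temporary excitability bonus."""
    def drive(neuron):
        age, match = neuron
        bonus = 0.5 if 4 <= age <= 6 else 0.0
        return match + bonus * stimulus_novelty + random.gauss(0, 0.05)
    return max(neurons, key=drive)

# Usage: each tuple is (age_in_weeks, how well the neuron's tuning matches
# the *current* stimulus). A familiar stimulus matches an old neuron well;
# a novel stimulus matches nothing well, so the excitable young unit wins.
familiar = [(52, 0.6), (52, 0.3), (5, 0.1)]
novel    = [(52, 0.1), (52, 0.1), (5, 0.1)]
print(recruit(familiar, stimulus_novelty=0.0))  # usually the (52, 0.6) neuron
print(recruit(novel, stimulus_novelty=1.0))     # usually the (5, 0.1) neuron
```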

S: What determines which neuron the new data gets sent to in the first place? What is the ‘routing function’ between sensory input and new neurons?

N: Each neuron detects a feature. So the more neurons you have, the finer the resolution at which your network can detect things.

S: How do you test the relationship between neurogenesis on the biological level and “pattern separation” at the behavioral level?

N: One of the most common memory tasks in rodent studies is contextual spatial navigation. Rats are put into environments with slightly different decorations, and need to remember which hallway the food is stored in. As neurogenesis decreases, the rodents can distinguish contexts at a less fine-grained resolution.

But there are still unanswered questions. It could be the case that new neurons are increasing the overall plasticity in the network. Or, it could be the case that for any hippocampal task, the “resolution of pattern separation” or other output is increased by new neurons, but it is hard to establish that.

S: Are these new neurons being generated at a constant rate, or is there something that makes them generate more or less?

N: Two of the main factors that influence neurogenesis are the level of stress in the environment, and the timing of development in the organism. Stressful environments cause a decrease in neurogenesis. Neurogenesis is especially high during childhood and decreases with age.

S: Cool. How does neurogenesis contribute to learning and forming new memories?

N: The clearest mechanism by which new neurons contribute is enabling better “pattern separation”. As more neurons are integrated into the functional network, the network can store higher resolution representations of new kinds of data. This pattern separation occurs with the highest frequency in the dentate gyrus of the hippocampus.


S: Now for the key question: It seems like increasing the rate at which these new neurons form could increase the resolution of new information that could be represented in the network. More new information could be stored in the same amount of time. Is that true?

In other words, would simply linearly adding neurons increase learning?

N: Yes, but there is likely to be an upper limit. It’s possible that the rate of learning might plateau as you increase neurogenesis past a certain point. The reason is that the large number of new neurons might not fully integrate into the functional network. The neurons need to adjust to form a sparse representation and connect with existing neurons, and they need time to do that.

It may be possible to overcome this effect by broadening the locations over which the new neurons are added (instead of adding them only to the hippocampus). It may also be possible to implant neural stem cells in other parts of the cortex to increase its growth.


S: So you’re suggesting that current models of how neurons integrate into the hippocampal network predict that there might be an upper limit to the gains realized by adding new neurons to the hippocampus. Are there ways to overcome that upper limit? Or might we realize significant gains before reaching that upper limit, such as to make it worthwhile to pursue?

N: In either case, one outstanding research question is the relationship between cognition and the size and rate of growth of both the hippocampus and cortex. Significant gains could be realized through linear increases in the growth of new neurons. As a very loose proxy, the human brain is 4.8 times the size predicted for a hypothetical monkey of the same body weight, and the human neocortex is 35% larger than predicted for a primate with as large a brain. Depending on the test, humans can arguably be much more than 4.8 times smarter than a monkey. 2

But that is a good question. The exact point of the upper limit of neurogenesis has never been tested. There are some things you could do if you wanted to research this more:

First, we could model this computationally! This would be an interesting research question: based on a computational model of the hippocampus, what is the upper limit of the rate of neurogenesis that you can sustain before it starts to interfere with the representations? What happens when the rate of neurogenesis is 2x, 5x, 20x the normal rate? Do the new neurons interfere with sparse coding, or does the system scale? Does the system require higher amounts of input (as could be achieved through sensory extension) in order to scale, or can it sustainably increase the learning rate while the amount of incoming data stays constant?
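Here is a rough starting sketch of that in-silico question, under my own simplifying assumptions (random sparse codes in a growing pool of units, with average pairwise overlap as a crude proxy for interference); it is not an established hippocampal model.

```python
import numpy as np

def mean_overlap(n_patterns, n_neurons, sparsity=0.05, seed=0):
    """Store random sparse codes in a pool of n_neurons and return the
    average pairwise overlap (a crude proxy for interference)."""
    rng = np.random.default_rng(seed)
    active = max(1, int(sparsity * n_neurons))
    codes = np.zeros((n_patterns, n_neurons))
    for i in range(n_patterns):
        codes[i, rng.choice(n_neurons, active, replace=False)] = 1.0
    overlaps = (codes @ codes.T) / active    # fraction of shared active units
    return float(overlaps[~np.eye(n_patterns, dtype=bool)].mean())

# Usage: grow the pool as if neurogenesis ran at 1x, 2x, 5x, or 20x the
# baseline rate over the same period, with a fixed stream of 200 inputs.
baseline_new_neurons = 500
for multiplier in (1, 2, 5, 20):
    n = 10_000 + baseline_new_neurons * multiplier
    print(multiplier, round(mean_overlap(200, n), 4))
```

A more faithful version would let only the recently added, more excitable units encode new inputs and would model the integration delay, which is where the plateau discussed above could show up.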

If you like digging through literature, here is a literature research question: you could find experiments that involve increasing neurogenesis in the hippocampus, and find the corresponding percentage increase in performance on a memory task. Then plot the increase in neurogenesis on the X-axis against the increase in performance on the Y-axis. You would then ask how to extrapolate the line: would it plateau, stay linear, or curve upwards? The extrapolation also depends on an accurate computational model of how the hippocampus works.
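The plotting-and-extrapolation step might look something like this sketch; the data points below are placeholders I made up to show the mechanics, not values from any study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data: (% increase in neurogenesis, % increase in performance
# on a memory task). Replace with values extracted from the literature.
neuro = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
perf = np.array([4.0, 9.0, 15.0, 22.0, 27.0])

def linear(x, a):
    return a * x

def saturating(x, vmax, k):
    return vmax * x / (k + x)   # Michaelis-Menten-style plateau

(a,), _ = curve_fit(linear, neuro, perf)
(vmax, k), _ = curve_fit(saturating, neuro, perf, p0=(30.0, 50.0))

# Extrapolate beyond the observed range: do the two fits diverge?
for x in (400.0, 1000.0):
    print(x, round(linear(x, a), 1), round(saturating(x, vmax, k), 1))
```

If the linear and saturating fits diverge sharply outside the observed range, that is exactly the region where new experiments (or a better hippocampal model) would be most informative.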

If you like playing in the lab, here is a biological experiment: What are the upper limits to the current methods for increasing neurogenesis? Is there a way to produce 5x more than has been done before?

Finally, here’s one more question, which I’m not sure how to answer:  if simply tweaking neurogenesis had big payoffs, wouldn’t evolution have already done it? Maybe not, or maybe the in-between steps to higher neurogenesis weren’t viable. It could be that the correct intervention couldn’t be produced through genetic tweaks – artificial interventions are needed. Also, humans haven’t had that much time to evolve, so the possibilities within our genome haven’t been fully explored.

S: Thanks for the feedback, Neurocrates!

N: No prob. May you spawn many new feature-detecting goo balls.


Summary

  • It is clear that adding more neurons to the hippocampus increases learning and memory. It is possible that the increase in learning and memory would asymptote after some point.
  • This asymptote is predicted by current models of how new neurons integrate into the existing network: neurons form sparse representations, and there is not a neat one-to-one mapping between neurons and concepts.
  • However, there may still be significant gains realized before hitting this upper limit.

Notes

 

Neurotechnology forecasting: stepping stones in financial and technical viability



Context: I’m working to better understand how neurotechnology will develop in the upcoming decades. This post outlines a potential pathway where neurotechnology is developed first for medical purposes, shortly followed by cognitive enhancement. 1

An Initial Assumption: Eventually, neurotechnology (say, from neural implants or neural stem cells) will be developed enough to restore people with Alzheimer’s disease to normal-level memory capacity (not necessarily restoring older memories, just the ability to retain new ones). We can make this estimate based on current knowledge about:
– incentives: the current economic incentive for research to treat Alzheimer’s
– knowledge: current scientific knowledge
– base cases: success of initial research
– players: current organizations and people who are working to further develop this research

(I would guess the timeline for this is somewhere around 20 years, although the next claim does not depend on that number.)

Medical Applications, then Enhancement Applications:  Once the above-mentioned medical neurotechnology is developed:
– the remaining technical hurdles to turn it into a cognitive-enhancing product will be significantly smaller, and
– the social and economic incentives to do so will be large enough,
such that it is very likely the technical hurdles will be overcome to adapt it into a cognitive-enhancing neurotechnology. 2

(The timeline for the step from medical to cognitive-enhancing applications is somewhere around 2-10 years. The general prediction is that the medical->consumer step will be shorter than the research->medical step).

So it is very likely that the development of medical-grade neurotechnology will be followed shortly after by consumer-grade enhancement neurotechnology.

Can anyone poke holes in my example, or suggest ways to *test* this hypothesis? How can we develop better methods for making this type of tech forecasting claim?
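One way to start testing it is to restate the claim as explicit probability distributions and check what they imply, then compare against how past technologies actually moved from research to medical to consumer use. Here is a minimal Monte Carlo sketch; every parameter is a guess of mine, included only to make the structure of the claim concrete.

```python
import random

def sample_consumer_arrival(n_trials=100_000, start_year=2015):
    """Sample years until medical-grade memory restoration arrives, then add
    the (assumed shorter) medical -> consumer-enhancement step."""
    arrivals = []
    for _ in range(n_trials):
        research_to_medical = random.lognormvariate(3.0, 0.3)  # ~20-year median (assumption)
        medical_to_consumer = random.uniform(2.0, 10.0)        # the 2-10 year step from the post
        arrivals.append(start_year + research_to_medical + medical_to_consumer)
    arrivals.sort()
    return arrivals[len(arrivals) // 2], arrivals[int(0.9 * n_trials)]

median, p90 = sample_consumer_arrival()
print(f"median ~{median:.0f}, 90th percentile ~{p90:.0f}")
```

If the stepping-stone claim is right, tightening the first distribution (e.g. by tracking milestones in Alzheimer’s research) should move the whole forecast much more than tightening the second.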


 

Notes

  1. This post tries to model “stepping stones” in technological progress. I’m guessing there is some kind of model that would predict that once Technology X is developed, Technology Y is very likely to also be developed within N years. I’m currently trying to learn about methods in technology forecasting, so this approach may not actually work. If you can tell me why it wouldn’t, let me know.
  2. “Cognitive-enhancing” isn’t an all-or-nothing state. This is more of a statement that cognitive enhancing neurotechnology that is significantly more advanced than today’s will probably not be developed through a direct effort, because social and economic incentives for strong cognitive enhancement are not large enough yet. It seems more likely to be developed indirectly, first passing through the medical research route.