Mapping Minds: From Neuronal Cartography To Digital Personhood

Image credit: Ecliptic Graphic on Unsplash

Part 2/2: From Brain Maps to Digital Selves

A living being’s digital consciousness raises three questions: What would “uploading” require? What could it deliver? And what ethical boundaries must be drawn?

The notion of “moving” a person’s mind to a digital substrate recurs in policy discussions and investor pitches. A disciplined assessment starts by defining terms. Whole-brain emulation (WBE) refers to an executable model that reproduces an individual’s cognitive capacities using data derived from that individual’s brain. Classic roadmaps make the requirement explicit: anatomical completeness down to relevant scales, biophysical fidelity or validated abstractions for synapses and ion channels, rules for learning and plasticity, and a means of updating state as experience accrues. Against that bar, current capability is far short; any credible horizon is multi-decadal.

Why such caution? First, no dataset exists for humans that jointly captures complete wiring, dynamics, and neuromodulatory state at the necessary resolution and span. Even for small animals where complete wiring diagrams now exist, translating structure into behaviourally faithful, scalable simulations remains work in progress. The fruit-fly achievements—whole-brain connectomes with network-level insights—underscore progress, but also highlight the gulf between a mapped brain of ~10^5 neurons and a human brain with ~10^11 neurons and far greater biochemical complexity.
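The size of that gulf is easy to state as arithmetic. The sketch below uses the order-of-magnitude figures from the text plus one assumption not in the original: a commonly cited rough average of ~10^4 synapses per human neuron.

```python
# Back-of-envelope scale comparison between a mapped fly brain and a human
# brain. All figures are order-of-magnitude assumptions, not measurements.

FLY_NEURONS = 1e5          # ~10^5 neurons in the Drosophila whole-brain connectome
HUMAN_NEURONS = 1e11       # ~10^11 neurons, the figure used in the text
SYNAPSES_PER_NEURON = 1e4  # rough average often cited for human cortex (assumption)

neuron_ratio = HUMAN_NEURONS / FLY_NEURONS
human_synapses = HUMAN_NEURONS * SYNAPSES_PER_NEURON

print(f"Neuron-count gap: {neuron_ratio:.0e}x")           # 1e+06x
print(f"Estimated human synapses: {human_synapses:.0e}")  # 1e+15
```

A million-fold neuron gap, before accounting for neuromodulation, glia, or plasticity, is why mapped fly brains do not translate directly into human-scale plans.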

Second, the compute and storage budgets for human-scale emulation remain uncertain by orders of magnitude. Plausible paths involve aggressive abstraction—reducing detailed biophysics to validated, learnable units—yet those abstractions must be proven against multi-modal data, not assumed. Existing roadmaps therefore read as boundary-setting documents, not near-term engineering plans.
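The "orders of magnitude" uncertainty can be made concrete with a toy storage estimate. The per-synapse byte counts below are hypothetical scenarios chosen for illustration; the point is the spread between them, not any single number.

```python
# Illustrative storage budgets for human-scale emulation under different
# assumptions about how much state each synapse requires. All parameters
# are hypothetical; only the spread between scenarios is the point.

SYNAPSES = 1e15  # ~10^11 neurons x ~10^4 synapses each (rough assumption)

scenarios = {
    "weight only (4 bytes/synapse)": 4,
    "weight + dynamic state (64 bytes/synapse)": 64,
    "detailed biophysics (1 KB/synapse)": 1024,
}

for label, bytes_per_synapse in scenarios.items():
    petabytes = SYNAPSES * bytes_per_synapse / 1e15
    print(f"{label}: ~{petabytes:,.0f} PB")
```

A few petabytes versus an exabyte, from one modelling choice alone, is why abstractions must be validated against data rather than assumed.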

What is realistic to expect within planning horizons of 10–20 years are computational brain twins for selected disorders: models initialised with patient data to test medication or neurostimulation settings in silico. These are not conscious entities and would be regulated as clinical decision tools or device controllers. Over longer horizons, if measurement, modelling, and compute converge, the question shifts from feasibility to governance.
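The workflow a clinical brain twin enables is a parameter sweep: evaluate candidate settings against a patient-fitted model before touching the patient. The sketch below is a toy version of that pattern; the response function is invented for illustration, whereas a real twin would be fitted to the patient’s imaging and physiology data.

```python
# Toy sketch of an in-silico parameter sweep, the pattern a clinical brain
# twin would support. The response model is invented for illustration only.

def predicted_symptom_score(stim_amplitude_ma: float) -> float:
    """Hypothetical U-shaped response: too little or too much stimulation
    both leave symptoms high; an intermediate setting minimises them."""
    optimal = 2.5  # hypothetical patient-specific optimum
    return 1.0 + (stim_amplitude_ma - optimal) ** 2

# Candidate neurostimulation amplitudes, 0.5 mA to 5.0 mA in 0.5 mA steps
candidates = [round(0.5 * i, 1) for i in range(1, 11)]
best = min(candidates, key=predicted_symptom_score)
print(f"Best candidate amplitude: {best} mA")  # 2.5 mA
```

Regulators would treat such a model as a decision-support tool: it narrows the search space, while the clinician retains responsibility for the setting actually applied.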

The space-exploration angle illustrates both promise and uncertainty. If emulations became feasible, missions could dispatch “mind payloads” instead of bodies: gram-scale probes carrying compute and storage, operating at extremely low power for decades, and communicating asynchronously over light-hour or light-year distances. Re-embodiment would depend on robotics and local fabrication. This would change mass budgets and life-support assumptions, making trajectories to the outer planets or interstellar probes more tractable. However, legal personhood, duty of care during transit, and liability for autonomous actions would require international agreement before such missions could be commissioned. These are governance questions, not engineering afterthoughts.
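Why communication must be asynchronous follows directly from light-travel time. The distances below are approximate averages (assumptions for illustration), and the arithmetic is just distance divided by the speed of light.

```python
# One-way signal delays that force asynchronous operation for any
# "mind payload" mission. Distances are approximate averages (assumptions).

LIGHT_SPEED_KM_S = 299_792.458

targets_km = {
    "Mars (average)": 225e6,
    "Neptune (average)": 4.5e9,
    "Proxima Centauri": 4.0e13,
}

for name, km in targets_km.items():
    hours = km / LIGHT_SPEED_KM_S / 3600
    print(f"{name}: ~{hours:,.1f} hours one way")
```

Minutes to Mars, hours to the outer planets, and years to the nearest star: no interactive supervision is possible, so an emulated payload would have to act autonomously, which is exactly where the liability questions begin.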

A frequently raised concept is dual-medium consciousness: an individual and an autonomous digital counterpart co-existing. Contemporary science offers no evidence that consciousness can be duplicated or split across substrates; a copied system, if feasible, would most plausibly be an independent continuation with its own experiences, obligations, and rights. Policy frameworks will need to define identity continuity, mental privacy, and economic participation (remuneration and taxation for digital labour) long before such systems are possible, to avoid coercion or exploitation in intermediate technologies. The OECD’s recommendation on responsible neurotechnology and neurorights jurisprudence in Chile both signal the direction of travel: cognitive liberty and mental privacy are becoming legal categories, not academic thought experiments.

Closer to the ground, dense brain maps promise incremental gains that matter now. Understanding attention, memory, emotion, and social cognition at circuit level can improve mental-health care by reducing trial-and-error prescribing, enabling earlier interventions, and making closed-loop neuromodulation safer and more effective. These advances support education and workforce wellbeing through better diagnostics and targeted cognitive training, without invoking emulation. They also inform public-health approaches to aggression and impulse control by addressing underlying conditions—sleep, trauma, neuro-chemistry—under robust safeguards against coercive use.

There is no currently addressable market for “uploads.” Adjacent spend, however, is significant. Space agencies and robotics firms procure mission-critical autonomy inspired by neural principles; hospitals and payers buy clinical digital twins and decision-support tools; insurers and device makers fund closed-loop neurostimulation. These draw on mapping but stop well short of person emulation, with procurement based on certification, safety, and measurable outcomes.

The field is therefore best read as a series of emerging markets. If feasibility improves over decades, scenarios include data-centre-bound emulated researchers for specific analytic tasks, mind-payload exploration for deep space, and continuity services that securely archive high-fidelity neural state for future scientific use under strict consent. Each scenario requires governance on rights to memories, autonomy boundaries, revocation, and inheritance—areas where international standards bodies and national courts have begun to move.

Ethics cannot be an afterthought. The OECD’s framework, UNESCO and Council of Europe deliberations, and early national moves (Chile) all point to mental privacy, identity, and agency as protected interests. A practical compliance posture for companies includes explicit consent and provenance for neural data; audit trails for model training; restrictions on secondary use; and impact assessments for bias and harm. These are familiar patterns from digital health and AI governance and should be embedded now to avoid regulatory reversals later.

High-quality human circuit atlases and clinically validated brain twins are plausible in 10–20 years, contingent on automation, standards, and outcome-linked validation. Executable emulations of individuals, if ever achievable, sit on a 50–75-year horizon or beyond, requiring breakthroughs in measurement, theory of mind, and computation that cannot be scheduled. Investors should read the opportunity accordingly: near-term returns from medical and device applications; optionality in cognitive science and AI; and a tightly governed research lane for person-scale emulation.

Why invest now? The same data and models that would be prerequisites for any distant emulation deliver value today in treatment precision, device efficacy, and safer neurotechnology. Funding the stack—multi-modal mapping, validated abstractions, and closed-loop control—creates immediate health and productivity benefits while keeping open a long-term, evidence-grounded option on the deeper questions of mind and identity.
