- ^1 The Latin American Observatory of Human Rights and Enterprises, NeuroRights Research Line, Universidad Externado de Colombia, Bogotá, Colombia
- ^2 Faculty of Engineering, University of Los Andes, Bogotá, Colombia
In this article, we present comments on the five NeuroRights proposed by the NeuroRights Initiative. This analysis offers critical views on challenges concerning free will, enhancement, identity, algorithmic bias, and privacy. Our paper focuses on conceptual, practical, ethical, and logical problems that need to be considered in order to determine whether those NeuroRights should become global policy. We believe that, although they constitute an innovative proposal from neuroethics and neurolaw, several concerns cast doubt on the advisability of incorporating them.
Neuroright to Free Will
The NeuroRights Initiative defines it as follows: “individuals should have ultimate control over their own decision making, without unknown manipulation from external neurotechnologies” (NeuroRights Initiative, 2021). Nevertheless, it seems conceptually problematic to propose a NeuroRight under this label. Free will is a fundamental problem that has haunted philosophers for more than two millennia (Harris, 2012; Kane, 2012; O'Connor and Franklin, 2021), and the debate surrounding it is far from settled. At present, there are two main positions: compatibilism and incompatibilism (Muñoz, 2012). On the one hand, compatibilism is the thesis that it is metaphysically possible for determinism to be true and for people to have free will (McKenna and Pereboom, 2016; van Inwagen, 2017). On the other hand, hard incompatibilism is the thesis that our actions are either deterministic or truly random events, and that both possibilities exclude free will and moral responsibility (Pereboom, 2003).
Furthermore, given that neuroscience has allegedly built a case against free will through a series of well-known experiments (Libet et al., 1983; Haggard and Eimer, 1999; Soon et al., 2008, 2013; Fried et al., 2011), it seems contradictory to suggest the creation of a “Neuro” Right to free will. From this perspective, elevating “free will” to a category of human rights seems deeply problematic in conceptual terms (Muñoz, 2019; Borbón et al., 2020). We therefore consider that a right carrying the philosophical baggage of free will should not be incorporated. If the aim is to protect consent to the use of neurotechnologies, this protection should be included within the existing right to informed consent.
Neuroright to Equal Access to Mental Augmentation
Another concern that should be addressed is the ethical and practical repercussions of promoting access to enhancement. The NeuroRights Initiative proposes to establish guidelines that regulate the development and application of enhancement neurotechnologies based on the principle of justice, and that guarantee equality of access for all citizens (Yuste et al., 2017; NeuroRights Initiative, 2021).
Neurotechnological research in this area is currently directed toward enhancing users' cognitive capacities in a variety of tasks (Valeriani et al., 2017; Cinel et al., 2019; Belkacem et al., 2020; Kaimara et al., 2020). However, we find it problematic to create a new right that promotes access to enhancement technologies, as this could lead to transhumanist applications that should be treated with caution.
In that direction, as the alteration of human nature1 becomes a social fact, the freedom of those who do not wish to be enhanced could be significantly affected. New social, labor, and academic standards would emerge, placing pressure on people who could not bear being treated as inferior in these fields compared to their enhanced peers. This contradicts the proposed NeuroRight to free will, in the sense that people would not be giving consent free of defects but rather yielding to the new social norms created by this right (Borbón et al., 2021).
Furthermore, a NeuroRight to enhancement, if not adequately limited, may imply that the State must assume a new financial burden to provide and subsidize these technologies for vulnerable groups of citizens. Considering that most public health systems exclusively finance therapeutic interventions, the State should not assume this new burden of guaranteeing enhancement with public resources. This appears to be happening in Chile, where the “Neuroprotection Bill,” promoted by the NeuroRights Initiative, states verbatim in Article 10 that: “The State will guarantee the promotion and equitable access to advances in neurotechnology and neuroscience” (Senado de Chile, 2020). Unfortunately, the way the text was drafted provides no clarity on the scope, limits, and obligations of the proposal, raising more questions than answers.
In addition, this right could be considered obsolete in developing countries, such as those of Latin America, some of which cannot even provide access to the most basic needs, such as nutrition or health care, or guarantee human rights (Cheru, 2016; Ezzati et al., 2018; Macarayan et al., 2018). Consequently, the gaps between developed and developing countries would widen, increasing power asymmetries. An ethical proposal should instead be guided toward the extensive regulation of enhancement applications, through new laws and international treaties. Failure to do so would possibly leave the door open to the unlimited corporate interests of the companies that develop neurotechnologies, since the State would be financing, with public funds, the acquisition of technologies whose purposes are neither therapeutic nor related to public health, in the name of a new and ambiguous human right (Borbón et al., 2021).
On the other hand, it is important to adapt this proposal to diverse cultural and social contexts, especially those of Latin American countries. Normalizing the possibility of mental augmentation, and even making it a human right, may conflict with religious precepts and the cosmologies of indigenous groups, for whom the modification of human nature and intimate interaction with neurotechnologies would not necessarily be viewed favorably (Borbón et al., 2021). In this sense, we propose not to incorporate this new human right as a subjective faculty to claim mental augmentation.
Neuroright to Personal Identity
The NeuroRights Initiative describes the right to personal identity as follows: “Boundaries must be developed to prohibit technology from disrupting the sense of self” (NeuroRights Initiative, 2021). Nonetheless, since it is assumed that the use of neurotechnology for enhancement purposes is inevitable and that equitable access should be promoted, it must be acknowledged that any intervention in the brain might alter the mind and thus potentially threaten personal identity (Kraemer, 2013; Klein et al., 2015; Mackenzie and Walker, 2015; Iwry et al., 2017; Gilbert et al., 2019).
In that sense, a NeuroRight to cognitive enhancement creates a problematic antinomy with the right to personal identity. Depending on how self, identity, and authenticity are defined, prohibiting technology from altering these personal traits may amount to prohibiting neurotechnologies in general. Indeed, one of the great challenges is drawing the limits of the definition of personal identity and of its disruption. Furthermore, it is not simple to state a priori in which ways neurotechnologies can threaten the self in order to regulate them. In general, we should certainly strive toward establishing some kind of precautionary principle when considering NeuroRights (Inglese and Lavazza, 2021). A broader discussion therefore needs to take place before this right is established.
Neuroright to Protection From Algorithmic Bias
As technology advances and artificial intelligence algorithms become more intertwined with our daily lives and our mental data, attention to the potential harm of algorithmic biases has dramatically increased. In this scenario, the NeuroRights Initiative (2021) proposes to establish explicit countermeasures against bias, such as incorporating input from relevant user groups into training datasets. Although we believe the intention behind the proposal is positive, some aspects need to be taken into consideration.
First, the fight against bias should go beyond treating every algorithmic bias as something to be eliminated. As Danks and London (2017) note, many of these biases are neutral or can even be beneficial in our efforts to achieve our diverse goals. These authors describe different types of algorithmic biases and highlight how, in some cases, one algorithmic bias can be used to mitigate the effect of another, helping the system perform according to the relevant ethical and legal standards. In this sense, claiming that we must eliminate all biases oversimplifies the complex problem they pose.
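The idea that one bias can deliberately counteract another can be illustrated with a minimal sketch (the dataset and weighting scheme below are hypothetical, not taken from Danks and London): inverse-frequency sample weights introduce an intentional counter-bias so that an over-represented group no longer dominates a training objective.

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Assign each sample a weight inversely proportional to the
    frequency of its group, so over-represented groups no longer
    dominate a training objective (a deliberate counter-bias)."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # weight = n / (k * count[g]); a perfectly balanced dataset
    # would give every sample a weight of 1.0
    return [n / (k * counts[g]) for g in groups]

# Hypothetical, skewed dataset: group "A" is over-sampled 3:1
groups = ["A", "A", "A", "B"]
weights = inverse_frequency_weights(groups)
# Each "A" sample now counts for less; the total weight assigned
# to each group is equal, offsetting the sampling bias
```

This is, of course, only a schematic instance of the broader point: the corrective weighting is itself a bias, introduced on purpose to serve an ethical standard rather than eliminated.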
Moreover, this right mentions the need to include input from user groups to address bias at its foundation. In the technological sector, sharing the data used to train intelligent systems should be a baseline definition of transparency, from which others can actively work toward improving the accountability of algorithms (Buolamwini and Gebru, 2018). However, assembling an unbiased dataset is not always possible or sustainable. The data and algorithms deployed commercially are usually protected by copyright and patents, which turns the collection of representative datasets into an expensive and demanding task, among other things because the sources of information are limited by privacy concerns. We are not necessarily against establishing this right, but these considerations, together with further study of the matter, must be taken into account.
Neuroright to Mental Privacy
As the gap between our mental information and technology narrows, data privacy issues are gaining relevance for neurotechnologies. The fact that such sensitive data unveils the intentions and internal states of its users demands protection, both by raising awareness and by using advanced security techniques (Yuste et al., 2017). In this sense, the NeuroRights Initiative (2021) defines mental privacy as follows: “any data obtained from measuring neural activity should be kept private.”
In this regard, techniques such as federated learning are being developed to protect and secure valuable information. These strategies provide local data processing so that information can be used by AI algorithms while maintaining the integrity and privacy of users. Reliable federated learning systems are already being deployed on mobile networks (Kang et al., 2020), but we are still far from being able to use them in practice with our neural data. Ideally, these promising techniques will work with brain data in the near future; in the meantime, however, we cannot simply stand idly by and keep our mental data secret.
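The core idea behind federated learning can be sketched in a few lines (a toy one-parameter model with illustrative function names, not the API of any real framework such as the one in Kang et al., 2020): each client computes an update on its own private data, and only model parameters, never the raw data, are sent to the server for aggregation.

```python
def local_update(w, private_data, lr=0.05):
    """One gradient-descent step on a client's private data for a
    toy linear model y = w * x (illustrative only)."""
    grad = sum(2 * (w * x - y) * x for x, y in private_data) / len(private_data)
    return w - lr * grad

def federated_average(client_weights):
    """Server-side aggregation: a plain average of the parameters
    returned by the clients. Raw data never leaves the devices."""
    return sum(client_weights) / len(client_weights)

# Two clients, each holding a private dataset the server never sees
clients = [
    [(1.0, 2.0), (2.0, 4.0)],   # roughly y = 2.0 * x
    [(1.0, 2.2), (3.0, 6.3)],   # roughly y = 2.1 * x
]
w = 0.0  # global model parameter
for _ in range(50):  # communication rounds
    updates = [local_update(w, data) for data in clients]
    w = federated_average(updates)
# w converges to a compromise slope near 2.07, learned without
# either client ever revealing its individual data points
```

The design choice worth noting is that privacy here is architectural: the server only ever receives aggregated parameters, which is precisely why such schemes are attractive for sensitive neural data.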
Furthermore, this right can adversely affect the need to protect users from algorithmic biases. While we strive to secure our individual data, it will become increasingly arduous to obtain databases that are sufficiently representative of collectives, thereby making it difficult to develop fairer algorithms without potentially harmful biases. Moreover, this restriction would greatly limit the innovation and development of neurotechnologies, since the value of these devices comes from being able to build robust models by comparing the data of numerous individuals.
Are Neuro-Rights Necessary?
Here we have shown the conceptual inconvenience of a right to free will, and the practical and ethical issues involving enhancement, privacy, identity, and bias. Beyond the antinomies that may arise, it is worth questioning the need to create a new category of human rights at all. Most national and international legal systems already protect freedom, consent, equality, integrity, privacy, and information, so we are skeptical of the advisability of creating a new category of rights. Moreover, considering that the creation of new rights implies general and far from exhaustive descriptions, we do not see how they could effectively regulate neurotechnological advances. We propose instead that justice operators be prepared to adequately interpret constitutional rights in light of the challenges presented by neurotechnologies. Likewise, clear, extensive, and sufficient national and international regulations must be established to satisfactorily address the limits of neurotechnologies.
All things considered, we suggest that the NeuroRights proposal be extensively reviewed, and that the scope and limits of each right be adequately analyzed before any attempt to incorporate them. To this end, we propose expanding the academic and political discussion, especially by integrating the views of more Latin American countries beyond Chile. Our comments, of course, do not claim to be absolute truth, nor can we answer with certainty whether and how NeuroRights should become global policy, but we hope that these brief considerations will serve to enrich the discussion.
Author Contributions
All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's Note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Acknowledgments
Some ideas of this manuscript have appeared previously in Spanish in Borbón et al. (2020) and in English in Borbón et al. (2021).
Footnotes
1. ^The concept of human nature is, without a doubt, a term that is difficult to define and on which scholars rarely reach consensus. We therefore want to direct the conversation above all toward the discussions around posthumanism. Fukuyama (2002), a defender of concepts such as human nature, defines it as follows: “human nature is the sum of the behavior and characteristics that are typical of the human species, arising from genetic rather than environmental factors” (p. 130). Pepperell (2003), a defender of posthumanism, maintains that: “the posthuman era begins when we no longer find it necessary, or possible, to distinguish between humans and nature; a time when we truly move from the human to the posthuman condition of existence” (p. 161).
References
Belkacem, A., Jamil, N., Palmer, J., Ouhbi, S., and Chen, C. (2020). Brain computer interfaces for improving the quality of life of older adults and elderly patients. Front. Neurosci. 14:692. doi: 10.3389/fnins.2020.00692
Borbón, D., Borbón, L., and Laverde, J. (2020). Análisis crítico de los NeuroDerechos Humanos al libre albedrío y al acceso equitativo a tecnologías de mejora. IUS ET SCIENTIA 6, 135–161. doi: 10.12795/IETSCIENTIA.2020.i02.10
Borbón, D., Borbón, L., and León, M. (2021). NeuroRight to equal access to mental augmentation: analysis from posthumanism, law and bioethics. Rev Iberoamericana De Bioética 16, 1–15. doi: 10.14422/rib.i16.y2021.006
Buolamwini, J., and Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proc. Mach. Learn. Res. 81, 77–91.
Cheru, F. (2016). Developing countries and the right to development: a retrospective and prospective African view. Third World Q. 37, 1268–1283. doi: 10.1080/01436597.2016.1154439
Cinel, C., Valeriani, D., and Poli, R. (2019). Neurotechnologies for human cognitive augmentation: current state of the art and future prospects. Front. Hum. Neurosci. 13:13. doi: 10.3389/fnhum.2019.00013
Danks, D., and London, A. (2017). Algorithmic bias in autonomous systems. Int. Joint Conf. Artif. Intell. 17, 4691–4697. doi: 10.24963/ijcai.2017/654
Ezzati, M., Pearson-Stuttard, J., Bennett, J., and Mathers, C. (2018). Acting on non-communicable diseases in low- and middle-income tropical countries. Nature 559, 507–516. doi: 10.1038/s41586-018-0306-9
Fried, I., Mukamel, R., and Kreiman, G. (2011). Internally generated preactivation of single neurons in human medial frontal cortex predicts volition. Neuron 69, 548–562. doi: 10.1016/j.neuron.2010.11.045
Fukuyama, F. (2002). Our Posthuman Future: Consequences of the Biotechnology Revolution. New York, NY: Farrar, Straus and Giroux.
Gilbert, F., Cook, M., and O'Brien, T. (2019). Embodiment and estrangement: results from a first-in-human “intelligent BCI” trial. Sci. Eng. Ethics 25, 83–96 doi: 10.1007/s11948-017-0001-5
Haggard, P., and Eimer, M. (1999). On the relation between brain potentials and the awareness of voluntary movements. Exp. Brain Res. 126, 128–133. doi: 10.1007/s002210050722
Inglese, S., and Lavazza, A. (2021). What should we do with people who cannot or do not want to be protected from neurotechnological threats?. Front. Hum. Neurosci. 15:703092. doi: 10.3389/fnhum.2021.703092
Iwry, J., Yaden, D., and Newberg, A. (2017). Noninvasive brain stimulation and personal identity: ethical considerations. Front. Hum. Neurosci. 11:281. doi: 10.3389/fnhum.2017.00281
Kaimara, P., Plerou, A., and Deliyannis, I. (2020). Cognitive enhancement and brain-computer interfaces: potential boundaries and risks. Adv. Exp. Med. Biol. 1194, 275–283. doi: 10.1007/978-3-030-32622-7_25
Kang, J., Xiong, Z., Niyato, D., Zou, Y., Zhang, Y., and Guizani, M. (2020). Reliable federated learning for mobile networks. IEEE Wireless Commun. 27, 72–80. doi: 10.1109/MWC.001.1900119
Klein, E., Brown, T., Sample, M., Truitt, A. R., and Goering, S. (2015). Engineering the brain: ethical issues and the introduction of neural devices. Hast. Center Rep. 45, 26–35. doi: 10.1002/hast.515
Kraemer, F. (2013). Authenticity or autonomy? When deep brain stimulation causes a dilemma. J. Med. Ethics 39, 757–760. doi: 10.1136/medethics-2011-100427
Libet, B., Gleason, C. A., Wright, E. W., and Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential). The unconscious initiation of a freely voluntary act. Brain 106, 623–642. doi: 10.1093/brain/106.3.623
Macarayan, E., Gage, A., Doubova, S., Guanais, F., Lemango, E., Ndiaye, Y., et al. (2018). Assessment of quality of primary care with facility surveys: a descriptive analysis in ten low-income and middle-income countries. Lancet Glob. Health 6, e1176–e1185. doi: 10.1016/S2214-109X(18)30440-6
Mackenzie, C., and Walker, M. (2015). “Neurotechnologies, personal identity, and the ethics of authenticity,” in Handbook of Neuroethics, eds J. Clausen and N. Levy (Dordrecht: Springer).
McKenna, M., and Pereboom, D. (2016). Free Will: A Contemporary Introduction. New York, NY: Routledge.
Muñoz, J. (2012). Hacia una sistematización de la relación entre determinismo y libertad. Daimon Rev Int de Filosofía 56, 5–19.
Muñoz, J. (2019). Chile—right to free will needs definition. Nature 574, 634. doi: 10.1038/d41586-019-03295-9
NeuroRights Initiative (2021). The Five Ethical NeuroRights. Available online at: https://neurorights-initiative.site.drupaldisttest.cc.columbia.edu/sites/default/files/content/The%20Five%20Ethical%20NeuroRights%20updated%20pdf_0.pdf (accessed April 3, 2021).
O'Connor, T., and Franklin, C. (2021). “Free Will”, The Stanford Encyclopedia of Philosophy, Spring 2021 Edn. Stanford, CA: Stanford University.
Pepperell, R. (2003). The Posthuman Condition: Consciousness Beyond the Brain. Portland, OR: Intellect.
Senado de Chile (2020). Boletín N° 13.828-19. Bill of Law Establishing Neuroprotection. Available online at: http://www.senado.cl/appsenado/templates/tramitacion/index.php?boletin_ini=13828-19
Soon, C., Brass, M., Heinze, H.-J., and Haynes, J.-D. (2008). Unconscious determinants of free decisions in the human brain. Nat. Neurosci. 11, 543–545. doi: 10.1038/nn.2112
Soon, C., He, A. H., Bode, S., and Haynes, J.-D. (2013). Predicting free choices for abstract intentions. Proc. Natl. Acad. Sci. U.S.A. 110, 6217–6222. doi: 10.1073/pnas.1212218110
Valeriani, D., Poli, R., and Cinel, C. (2017). Enhancement of group perception via a collaborative brain–computer interface. IEEE Trans. Biomed. Eng. 64, 1238–1248. doi: 10.1109/TBME.2016.2598875
Keywords: NeuroRights, neuroethics, enhancement, free will, mental privacy, bias, personal identity, neurolaw
Citation: Borbón D and Borbón L (2021) A Critical Perspective on NeuroRights: Comments Regarding Ethics and Law. Front. Hum. Neurosci. 15:703121. doi: 10.3389/fnhum.2021.703121
Received: 30 April 2021; Accepted: 30 September 2021;
Published: 25 October 2021.
Edited by:
Eric García-López, Instituto Nacional de Ciencias Penales, Mexico
Reviewed by:
Manuel Guerrero, Uppsala University, Sweden
Copyright © 2021 Borbón and Borbón. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Diego Borbón, diego.borbon01@est.uexternado.edu.co
†These authors have contributed equally to this work