Preprint
Legal Provisions on Medical Aid in Dying Encode Moral Intuition
Authors: Ivar R. Hannikainen, Jorge Suárez, Luis Espericueta, Maite Menéndez-Ferreras and David Rodríguez-Arias.
URL: https://www.researchgate.net/publication/376480431
Transitions toward legalizing euthanasia and assisted suicide across numerous jurisdictions have been accompanied by a growing recognition of the moral right to medical aid in dying. Here we draw on a comprehensive quantitative review of assisted dying laws, experimental survey evidence, and four decades of time-series data to explore the connection between legislative changes and shifts in moral attitudes. Laws on medical aid in dying typically place eligibility restrictions on the patient’s age and competence, the nature of their ailment, and their prognosis; and these restrictions shape people’s moral approval of a physician’s provision of aid in dying, both in a country where euthanasia is legal (Spain) and in one where it is illegal (United Kingdom). A historical look at euthanasia attitudes across numerous countries uncovered anticipatory growth in moral approval leading up to legalization, but no accelerated growth thereafter. Collectively, our findings suggest that legal frameworks for medical aid in dying crystallize patterns of moral intuition.
Article
De la biomejora moral a la IA para la mejora moral: asistentes morales artificiales en la era de los riesgos globales
Journal: Enrahonar, 2024
Authors: Pablo Neira Castro
DOI: https://doi.org/10.5565/rev/enrahonar.1522
In this article, I argue that the use of biomedical technologies for moral enhancement is an inadequate way to avert global risks, because it may give rise to problems similar to those it is meant to solve. For this reason, I defend the value of exploring other technologies for moral enhancement that pose fewer dangers, and I argue for a model of moral enhancement based on artificial intelligence. Specifically, I defend the use of SocrAI, an artificial moral assistant designed to improve our moral deliberation. To that end, I propose three criteria for evaluating and increasing its safety and effectiveness. I also point out the importance of taking structural and institutional issues (i.e., political, economic, social and cultural norms and incentives) into account in moral enhancement proposals, and I show how SocrAI can have an impact on them.
Article
Old by obsolescence: The paradox of aging in the digital era
Journal: Bioethics, 2024
Authors: Joan Llorca Albareda and Pablo García-Barranquero
DOI: https://doi.org/10.1111/bioe.13288
Geroscience and philosophy of aging have tended to focus their analyses on the biological and chronological dimensions of aging. Namely, one ages with the passage of time and by experiencing the cellular-molecular deterioration that accompanies this process. However, our concept of aging depends decisively on the social valuations held about it. In this article, we will argue that, if we study social aging in the contemporary world, a novel phenomenon can be identified: the paradox of aging in the digital era. If the social understanding of aging today is linked to unproductivity and obsolescence, then there is a possibility that, given the pace of change of digital technologies, we become obsolete at an early chronological and biological age and, therefore, feel old at a younger age. First, we will present the social dimension of aging based on Rowe and Kahn’s model of successful aging. We will also show that their notion of social aging hardly considers structural aspects, which weakens their approach. Second, starting from social aging in its structural sense, we will develop the paradox of aging in the digital era. On the one hand, we will explain how the institutionalization of aging has occurred in modern societies and how it is anchored in the concepts of obsolescence and productivity. On the other hand, we will identify the kind of obsolescence that digitalization produces and argue that it can make cohorts of biologically and chronologically young individuals obsolete, so that they would be personally and socially perceived as old.
Chapter
Ethics of Autonomous Weapon Systems
Book: Ethics of Artificial Intelligence, 2024
Authors: Juan Ignacio del Valle and Miguel Moreno
DOI: https://doi.org/10.1007/978-3-031-48135-2_9
The use of weapons without humans-in-the-loop in modern warfare has been a contentious issue for several decades, from land mines to more advanced systems like loitering munitions. With the emergence of artificial intelligence (AI), particularly machine learning (ML) technologies, the ethical difficulties in this complex field have increased. The challenges related to adherence to International Humanitarian Law (IHL) or to human dignity are compounded by ethical concerns related to AI, such as transparency, explainability, human agency, and autonomy. In this chapter, we aim to provide a comprehensive overview of the main issues and current positions in the field of autonomous weapons and technological warfare. We will begin by clarifying the concept of autonomy in warfare, an area that still needs attention, as evidenced by the latest discussions within the United Nations (UN) Convention on Certain Conventional Weapons (CCW). We will also introduce the current legal basis in this field and the problems in its practical application, and offer sound philosophical grounds to better understand this highly complex and multifaceted field.
Chapter
Ethics of Virtual Assistants
Book: Ethics of Artificial Intelligence, 2024
Authors: Juan Ignacio del Valle, Joan Llorca Albareda and Jon Rueda
DOI: https://doi.org/10.1007/978-3-031-48135-2_5
Among the many applications of artificial intelligence (AI), virtual assistants are one of the tools most likely to grow in the future. The development of these systems may play an increasingly important role in many facets of our lives. Therefore, given their present and future importance, it is worthwhile to analyze what kind of challenges they entail. In this chapter, we will provide an overview of the ethical aspects of artificial virtual assistants. First, we provide a conceptual clarification of the term ‘virtual assistant’, including different types of interfaces and recommender systems. Second, we address three ethical issues related to the deployment of these systems: their effects on human agency and autonomy and the cognitive dependence they would generate; the human obsolescence that a generalized extension of this dependence problem may cause; and the invasions of privacy that virtual assistants may cause. Finally, we outline the debates about the use of virtual assistants to improve human moral decisions and some areas in which these systems can be applied.
Chapter
The Singularity, Superintelligent Machines, and Mind Uploading: The Technological Future?
Book: Ethics of Artificial Intelligence, 2024
Authors: Antonio Diéguez and Pablo García-Barranquero
DOI: https://doi.org/10.1007/978-3-031-48135-2_12
This chapter discusses the question of whether we will ever have an Artificial General Superintelligence (AGSI) and how it would affect our species if it ever emerges. First, it explores various proposed definitions of AGSI and the potential implications of its emergence, including the possibility of collaboration or conflict with humans, its impact on our daily lives, and its potential for increased creativity and wisdom. The concept of the Singularity, which refers to the hypothetical future emergence of superintelligent machines that will take control of the world, is also introduced and discussed, along with criticisms of this concept. Second, the chapter considers the possibility of mind uploading (MU) and whether such MU would be a suitable means to achieve (true) immortality in this world, the ultimate goal of the proponents of this approach. It is argued that the technological possibility of achieving something like this is very remote and that, even if it were ever achieved, serious problems would remain, such as the preservation of personal identity. Third, the chapter concludes by arguing that the future we create will depend largely on how well we manage the development of AI. It is essential to develop governance of AI to ensure that critical decisions are not left in the hands of automated decision systems or those who create them. The importance of such governance lies not only in avoiding the dystopian scenarios of a future AGSI but also in ensuring that AI is developed in a way that benefits humanity.
Book
Ethics of Artificial Intelligence
Publisher: Springer Nature, 2024
Editors: Francisco Lara and Jan Deckers
DOI: https://doi.org/10.1007/978-3-031-48135-2
This book presents the reader with a comprehensive and structured understanding of the ethics of Artificial Intelligence (AI). It describes the main ethical questions that arise from the use of AI in different areas, as well as the contribution of various academic disciplines such as legal policy, environmental sciences, and philosophy of technology to the study of AI. AI has become ubiquitous and is significantly changing our lives, in many cases, for the better, but it comes with ethical challenges. These challenges include issues with the possibility and consequences of autonomous AI systems, privacy and data protection, the development of a surveillance society, problems with the design of these technologies and inequalities in access to AI technologies. This book offers specialists an instrument to develop a rigorous understanding of the main debates in emerging ethical questions around AI. The book will be of great relevance to experts in applied and technology ethics and to students pursuing degrees in applied ethics and, more specifically, in AI ethics.
Chapter
The moral status of AI entities
Book: Ethics of Artificial Intelligence, 2024
Authors: Joan Llorca Albareda, Paloma García and Francisco Lara
DOI: https://doi.org/10.1007/978-3-031-48135-2_4
The emergence of AI is posing serious challenges to standard conceptions of moral status. New non-biological entities are able to act and make decisions rationally. The question arises, in this regard, as to whether AI systems possess or can possess the necessary properties to be morally considerable. In this chapter, we have undertaken a systematic analysis of the various debates that are taking place about the moral status of AI. First, we have discussed the possibility that AI systems, by virtue of their new agential capabilities, can be understood as moral agents. Discussions between those defending mentalist and anti-mentalist positions have revealed many nuances and particularly relevant theoretical aspects. Second, given that an AI system can hardly be an entity qualified to be responsible, we have delved into the responsibility gap and the different ways of understanding and addressing it. Third, we have provided an overview of the current and potential capabilities of AI systems as moral patients. This has led us to analyze the possibilities of AI possessing moral patiency. In addition, we have addressed the question of the moral and legal rights of AI. Finally, we have introduced the two most relevant authors of the relational turn on the moral status of AI, Mark Coeckelbergh and David Gunkel, who have been led to defend a relational approach to moral life as a result of the problems associated with the ontological understanding of moral status.
Chapter
Exploring the Ethics of Interaction with Care Robots
Book: Ethics of Artificial Intelligence, 2024
Authors: María Victoria Martínez-López, Gonzalo Díaz-Cobacho, Aníbal M. Astobiza and Blanca Rodríguez
DOI: https://doi.org/10.1007/978-3-031-48135-2_8
The development of assistive robotics and anthropomorphic AI allows machines to increasingly enter the daily lives of human beings and gradually become part of them. Robots have made a strong entry into the field of assistive behaviour. In this chapter, we will ask to what extent technology can satisfy people’s personal needs and desires as compared to human agents in the field of care. The assistive technology industry burst out of the gate at the beginning of the century with very strong innovation and development, and it is currently attracting substantial public and private investment as well as public attention. We believe that a better-defined and more fundamental philosophical-ethical analysis of the values at stake in care robots is needed. To this end, we will focus on the current status of care robots (types of care robots, their functioning and their design), and we will provide a philosophical-ethical analysis that offers a solid framework for the debate surrounding the potential risks and benefits of implementing assistive robots in people’s daily lives.
Chapter
AI, Sustainability, and Environmental Ethics
Book: Ethics of Artificial Intelligence, 2024
Authors: Cristian Moyano-Fernández and Jon Rueda
DOI: https://doi.org/10.1007/978-3-031-48135-2_11
Artificial Intelligence (AI) developments are proliferating at an astonishing rate. Unsurprisingly, the number of meaningful studies addressing the social impacts of AI applications in several fields has been remarkable. More recently, several contributions have started exploring the ecological impacts of AI. Machine learning systems do not have a neutral environmental cost, so it is important to unravel the ecological footprint of these techno-scientific developments. In this chapter, we discuss the sustainability of AI from environmental ethics approaches. We examine the moral trade-offs that AI may cause in different moral dimensions and analyse prominent conflicts that may arise from human and more-than-human-centred concerns.
Article
Liberal eugenics, coercion and social pressure
Journal: Enrahonar, 2024
Authors: Blanca Rodríguez
DOI: https://doi.org/10.5565/rev/enrahonar.1520
When discussing genetic prenatal enhancement, we often encounter objections related to “eugenics.” Those who want to defend prenatal enhancement either try to avoid using the term “eugenics” or talk about “liberal eugenics”, implying that what was wrong with the old eugenics was its coercive character, and claiming that while old eugenics went against reproductive freedom, the new liberal eugenics promotes freedom. In this paper we first explore the objection that genetic enhancement is a form of eugenics that limits parental freedom. We then show how the same objection appears in other bioethical debates. Finally, we answer the objection, showing that genetic enhancement does not limit reproductive freedom in any important sense.
Commentary
The brain death criterion in light of value-based disagreement versus biomedical uncertainty
Journal: The American Journal of Bioethics, 2024
Authors: Daniel Martin, Gonzalo Díaz-Cobacho and Ivar R. Hannikainen
DOI: https://doi.org/10.1080/15265161.2023.2278566
Article
May Artificial Intelligence take health and sustainability on a honeymoon? Towards green technologies for multidimensional health and environmental justice
Journal: Global Bioethics, 2024
Authors: Cristian Moyano-Fernández, Jon Rueda, Janet Delgado and Txetxu Ausín
DOI: https://doi.org/10.1080/11287462.2024.2322208
The application of Artificial Intelligence (AI) in healthcare and epidemiology undoubtedly has many benefits for the population. However, due to its environmental impact, the use of AI can produce social inequalities and long-term environmental damage that may not be thoroughly contemplated. In this paper, we propose to consider the impacts of AI applications in medical care from the One Health paradigm and long-term global health. From the standpoint of health and environmental justice, rather than settling for a short and fleeting green honeymoon between health and sustainability brought about by AI, we should aim for a lasting marriage. To this end, we conclude by proposing that, in the upcoming years, it could be valuable and necessary to promote more interconnected health, call for environmental cost transparency, and increase green responsibility.
Article
The global governance of genetic enhancement technologies: Justification, proposals, and challenges
Journal: Enrahonar, 2024
Authors: Jon Rueda
DOI: https://doi.org/10.5565/rev/enrahonar.1519
The prospect of human genetic enhancement requires an institutional response, and probably the creation of new institutions. The governance of genetic enhancement technologies, moreover, needs to be global in scope. In this article, I analyze the debate on the global governance of human genetic enhancement. I begin by offering a philosophical justification for the need to adopt a global framework for the governance of technologies that would facilitate the improvement of non-pathological genetic traits. I then summarize the main concrete proposals that have recently emerged to govern genome editing at the global level. Finally, I discuss some impediments that limit the impetus for global governance of genetic enhancement.
Article
Strong bipartisan support for controlled psilocybin use as treatment or enhancement in a representative sample of US Americans: need for caution in public policy persists
Journal: AJOB Neuroscience, 2024
Authors: Julian D. Sandbrink, Kyle Johnson, Maureen Gill, David B. Yaden, Julian Savulescu, Ivar R. Hannikainen and Brian D. Earp
DOI: https://doi.org/10.1080/21507740.2024.2303154
The psychedelic psilocybin has shown promise both as a treatment for psychiatric conditions and as a means of improving well-being in healthy individuals. In some jurisdictions (e.g., Oregon, USA), psilocybin use for both purposes is or will soon be allowed, and yet public attitudes toward this shift are understudied. We asked a nationally representative sample of 795 US Americans to evaluate the moral status of psilocybin use in an appropriately licensed setting for either treatment of a psychiatric condition or well-being enhancement. Showing strong bipartisan support, participants rated the individual’s decision as morally positive in both contexts. These results can inform effective policy-making decisions around supervised psilocybin use, given robust public attitudes as elicited in the context of an innovative regulatory model. We did not explore attitudes to psilocybin use in unsupervised or non-licensed community or social settings.
Article
Advance Medical Decision-Making Differs Across First- and Third-Person Perspectives
Journal: AJOB Empirical Bioethics, 2024
Authors: James Toomey, Jonathan Lewis, Ivar R. Hannikainen and Brian D. Earp
DOI: http://dx.doi.org/10.2139/ssrn.4617951
Advance healthcare decision-making presumes that a prior treatment preference expressed with sufficient mental capacity (“T1 preference”) should trump a contrary preference expressed after significant cognitive decline (“T2 preference”). This assumption is much debated in normative bioethics, but little is known about lay judgments in this domain. This study (N = 1445 US Americans; gender-balanced sample) investigated participants’ judgments about which preference should be followed, and whether these judgments differed depending on a first-person (deciding for one’s future self) versus third-person (deciding for a friend or stranger) perspective. We found that participants were more likely to defer to the incapacitated T2 preference of a third party, while being more likely to insist on following their own capacitated T1 preference. Further, participants were more likely to conclude that others with substantial cognitive decline were still their “true selves,” which correlated with increased deference to their T2 preferences. These findings add to the growing evidence that lay intuitions concerning the ethical entitlement to have one’s decisions respected are more a function of the relationship between the decision and the decision-maker’s true self, and less a function of the cognitive threshold of mental capacity, as in traditional bioethical accounts.
Article
Human stem-cell-derived embryo models: When bioethical normativity meets biological ontology
Journal: Developmental Biology, 2024
Authors: Adrian Villalba, Jon Rueda and Íñigo de Miguel Beriain
DOI: https://doi.org/10.1016/j.ydbio.2024.01.009
The use of human stem-cell-derived embryo models in biomedical research has recently sparked intense bioethical debates. In this article, we delve into the ethical complexities surrounding these models and advocate for a deeper exploration of their biological ontology to discuss their bioethical normativity. We examine the ethical considerations arising from the implementation of these models, emphasizing varying viewpoints on their ethical standing and the ethical obligations associated with their development and utilization. We contend that a nuanced comprehension of their biological ontology is crucial for navigating these ethical quandaries. Furthermore, we underscore the indispensability of interdisciplinary cooperation among bioethicists, biologists, and philosophers to unravel the complex interplay between biological ontology and the normative framework of bioethics. Moreover, this article introduces a novel combinatorial approach to resolve the ethical dilemma surrounding these models. We propose a distinction between models that closely emulate natural embryos, based on the status of synthetic embryos, and those capable of reproducing specific dimensions of embryonic development. Such differentiation allows for nuanced ethical considerations while harnessing the value of these models in scientific research, paving the way for a more comprehensive ethical framework in the context of evolving biotechnologies.
Reply
Is ageing still undesirable? A reply to Räsänen
Journal: Journal of Medical Ethics, 2024
Authors: Pablo García-Barranquero, Joan Llorca Albareda and Gonzalo Díaz-Cobacho
DOI: https://doi.org/10.1136/jme-2023-109607
Article
Socratic nudges, virtual moral assistants and the problem of autonomy
Journal: AI & SOCIETY, 2024
Authors: Francisco Lara and Blanca Rodríguez
DOI: https://doi.org/10.1007/s00146-023-01846-3
Many of our daily activities are now made more convenient and efficient by virtual assistants, and the day when they can be designed to instruct us in certain skills, such as those needed to make moral judgements, is not far off. In this paper we ask to what extent it would be ethically acceptable for these so-called virtual assistants for moral enhancement to use subtle strategies, known as “nudges”, to influence our decisions. To achieve our goal, we will first characterise nudges in their standard use and discuss the debate they have generated around their possible manipulative character, establishing three conditions of manipulation. Secondly, we ask whether nudges can occur in moral virtual assistants that are not manipulative. After critically analysing some proposed virtual assistants, we argue in favour of one of them, given that by pursuing an open and neutral moral enhancement, it promotes and respects the autonomy of the person as much as possible. Thirdly, we analyse how nudges could enhance the functioning of such an assistant, and evaluate them in terms of their degree of threat to the subject’s autonomy and their level of transparency. Finally, we consider the possibility of using motivational nudges, which not only help us in the formation of moral judgements but also in our moral behaviour.
Article
Anthropological crisis or crisis in moral status: a philosophy of technology approach to the moral consideration of artificial intelligence
Journal: Philosophy & Technology, 2024
Authors: Joan Llorca Albareda
DOI: https://doi.org/10.1007/s13347-023-00682-z
The inquiry into the moral status of artificial intelligence (AI) is leading to prolific theoretical discussions. A new entity that does not share the material substrate of human beings is beginning to show signs of a number of properties that are central to the understanding of moral agency. This makes us wonder whether the properties we associate with moral status need to be revised or whether the new artificial entities deserve to enter the circle of moral consideration. This raises the foreboding that we are at the gates of an anthropological crisis: the properties bound to moral agency have in the past been exclusively possessed by human beings and have shaped the very definition of being human. In this article, I will argue that AI does not lead us to an anthropological crisis and that, if we attend to the history and philosophy of technology, we will notice that the debate on the moral status of AI uncritically starts from an anthropology of properties and loses sight of the relational dimension of technology. First, I will articulate three criteria for analyzing different anthropological views in the philosophy of technology. Second, I will propose six anthropological models: traditional, industrial, phenomenological, postphenomenological, symmetrical, and cyborg. Third, I will show how the emergence of AI breaks with the dynamics of increased relationality in the history and philosophy of technology. I will argue that this aspect is central to debates about the moral status of AI, since it sheds light on an aspect of moral consideration that has been obscured. Finally, I will reject entirely relational approaches to moral status and propose two hybrid possibilities for rethinking it.
Article
Introducing Complexity in Anthropology and Moral Status: a Reply to Pezzano
Journal: Philosophy & Technology, 2024
Authors: Joan Llorca Albareda
DOI: https://doi.org/10.1007/s13347-024-00709-z
Pezzano has offered some relevant considerations on my recently published article “Anthropological crisis or crisis in moral status”. He advocates the need to address, ontologically and anthropologically, the relation between human beings and technologies from the concept of property. Despite its centrality, this concept is taken for granted in the debates on the moral status of artificial intelligence (AI). Both proponents and detractors of the anthropology of properties adopt a position towards it without analyzing in depth what exactly we mean by property. In this reply, I intend to take the thesis put forward in my paper a step further on the basis of Pezzano’s commentary. I will defend the need to explore a complex, markedly technological anthropology, and I will outline the consequences this may have for the concept of moral status.
Article
Re-defining the human embryo: A legal perspective on the creation of embryos in research
Journal: EMBO reports, 2024
Authors: Iñigo De Miguel Beriain, Jon Rueda and Adrian Villalba
DOI: https://doi.org/10.1038/s44319-023-00034-0
The notion of the human embryo is not immutable. Various scientific and technological breakthroughs in reproductive biology have compelled us to revisit the definition of the human embryo over the past two decades. Somatic cell nuclear transfer, oocyte haploidisation and, more recently, human stem cell-derived embryo models have challenged this scientific term, which has both ethical and legal repercussions. Here, we offer a legal perspective to identify a universally accepted definition of ‘embryo’ which could help to ease and unify the regulation of such entities in different countries.
Article
Pluralism in the determination of death
Journal: Current Opinion in Behavioral Sciences, 2024
Authors: Gonzalo Díaz-Cobacho and Alberto Molina-Pérez
DOI: https://doi.org/10.1016/j.cobeha.2024.101373
Since the neurological criterion of death was established in medical practice in the 1960s, there has been a debate in the academic world about its scientific and philosophical validity, its ethical acceptability, and its political appropriateness. Among the many and varied proposals for revising the criteria for human death, we will focus on those that advocate allowing people to choose their own definition and criteria for death within a range of reasonable or tolerable alternatives. These proposals can be categorized under the rubric of pluralism in the determination of death. In this article, we will outline the main proposals and their rationales and provide a current overview of the state of the controversy.
Article
From neurorights to neuroduties: the case of personal identity
Journal: Bioethics Open Research, 2024
Authors: Aníbal Monasterio Astobiza and Íñigo de Miguel Beriain
DOI: https://doi.org/10.12688/bioethopenres.17501.1
The neurorights initiative has been postulated as a way of ensuring the protection of individuals from the advances of neurotechnology and artificial intelligence (AI). With the advancement of neurotechnology, the human nervous system may be altered, modified, intervened with, or otherwise controlled. However, how do neurorights safeguard legal interests when an individual consciously chooses to modify their experiences using neurotechnology? Neurorights—the protection of cognitive liberty, psychological continuity, free will, personal identity, and mental privacy—are challenged when individuals opt for ‘artificial memories’, implanted experiences, etc., disrupting their natural cognitive dimensions. The present article examines these complex dilemmas through a legal and ethical lens. Furthermore, it introduces the concept of a ‘neuroduty’ to preserve identity, a moral obligation that stands in stark contrast to the individual’s right to self-determination. In the same way that neurorights protect us from external interference in our nervous system, is it possible to think of a neuroduty to preserve our identity? This article explores the tensions between neurorights, neuroduty, and the potential misuse of neurotechnology.
Article
Is ageing undesirable? An ethical analysis
Journal: Journal of Medical Ethics, 2024
Authors: Pablo García-Barranquero, Joan Llorca Albareda and Gonzalo Díaz-Cobacho
DOI: http://dx.doi.org/10.1136/jme-2022-108823
The technical possibilities of biomedicine open up the opportunity to intervene in ageing itself with the aim of mitigating, reducing or eliminating it. However, before undertaking these changes or rejecting them outright, it is necessary to ask ourselves if what would be lost by doing so really has much value. This article will analyse the desirability of ageing from an individual point of view, without circumscribing this question to the desirability or undesirability of death. First, we will present the three most widely used arguments to reject biomedical interventions against ageing. We will argue that only the last of these arguments provides a consistent answer to the question of the desirability of ageing. Second, we will show that the third argument falls prey to a conceptual confusion that we will call the paradox of ageing: although ageing entails negative health effects, it leads to a life stage with valuable goods. Both valuations, one positive and the other negative, refer to two different dimensions of ageing: the chronological and the biological. We will defend that, by not adequately distinguishing these two types of ageing, it does not become apparent that all the valuable goods exclusive to ageing derive only from its chronological dimension. Third, we will argue that, if we just conceive ageing biologically, it is undesirable. We will elaborate on the two kinds of undesirable effects biological ageing has: direct and indirect. Finally, we will respond to potential objections by adducing that these are insufficient to weaken our argument.
Article
AI-powered recommender systems and the preservation of personal autonomy
Journal: AI & Society, 2023
Authors: Juan Ignacio del Valle and Francisco Lara
DOI: https://doi.org/10.1007/s00146-023-01720-2
Recommender Systems (RecSys) have been around since the early days of the Internet, helping users navigate the vast ocean of information and the ever-growing range of options available to us. The range of tasks for which one could use a RecSys is expanding as technical capabilities grow, with the disruption of Machine Learning representing a tipping point in this domain, as in many others. However, the increase in the technical capabilities of AI-powered RecSys did not come with a thorough consideration of their ethical implications and, despite being a well-established technical domain, the potential impacts of RecSys on their users are still under-assessed. This paper aims to fill this gap with regard to one of the main impacts of RecSys: personal autonomy. We first describe how technology can affect human values and a suitable methodology to identify these effects and mitigate potential harms: Value Sensitive Design (VSD). We use VSD to carry out a conceptual investigation of personal autonomy in the context of a generic RecSys and draw on a nuanced account of procedural autonomy to focus on two components: competence and authenticity. We provide the results of our inquiry as a value hierarchy and apply it to the design of a speculative RecSys as an example.
Article
Artificial moral experts: asking for ethical advice to artificial intelligent assistants
Journal: AI & Ethics, 2023
Authors: Blanca Rodríguez and Jon Rueda
DOI: https://doi.org/10.1007/s43681-022-00246-5
In most domains of human life, we are willing to accept that there are experts with greater knowledge and competencies that distinguish them from non-experts or laypeople. Despite this fact, the very recognition of expertise curiously becomes more controversial in the case of “moral experts”. Do moral experts exist? And, if they indeed do, are there ethical reasons for us to follow their advice? Likewise, can emerging technological developments broaden our very concept of moral expertise? In this article, we begin by arguing that the objections that have tried to deny the existence (and convenience) of moral expertise are unsatisfactory. After that, we show that people have ethical reasons to ask for a piece of moral advice in daily life situations. Then, we argue that some Artificial Intelligence (AI) systems can play an increasing role in human morality by becoming moral experts. Some AI-based moral assistants can qualify as artificial moral experts and we would have good ethical reasons to use them.
Article
Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates?
Journal: Science and Engineering Ethics, 2021
Authors: Francisco Lara
DOI: https://doi.org/10.1007/s11948-021-00318-5
Can Artificial Intelligence (AI) be more effective than human instruction for the moral enhancement of people? The author argues that it would only be so if the use of this technology were aimed at increasing the individual’s capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these proposals, this article proposes a virtual assistant that, through dialogue, neutrality and virtual reality technologies, can teach users to make better moral decisions on their own. The author concludes that, as long as certain precautions are taken in its design, such an assistant could do this better than a human instructor adopting the same educational methodology.
Article
Do people believe that machines have minds and free will? Empirical evidence on mind perception and autonomy in machines
Journal: AI & Ethics, 2023
Authors: Aníbal Monasterio Astobiza
DOI: https://doi.org/10.1007/s43681-023-00317-1
Recently, we are witnessing an unprecedented advance and development in Artificial Intelligence (AI). AI systems are capable of reasoning, perceiving, and processing spoken (and written) natural language, and their applications vary from recommendation systems, automated translation software, and prioritization of news in social media to self-driving cars and/or robotics. A dystopian narrative predicts that AI may reach a point of singularity, or a phase where machines surpass human beings in general intelligence and enslave us, but until that day comes, it is interesting to know how the general public perceives current artificial systems. Do people really attribute mind (i.e., mental states) and/or free will to artificial systems? Knowing how the general public perceives artificial systems is crucial because it could help understand how to apply AI in medicine, law, politics and other areas of human life. One study that I present here with a convenience sample (N = 25) suggests this is not the case. The general public does not perceive that artificial systems can have a mind, nor does it attribute free will to them (F(1, 47.6) = 5.57, p < 0.002).
Article
Neurorehabilitation of Offenders, Consent and Consequentialist Ethics
Journal: Neuroethics, 2022
Authors: Francisco Lara
DOI: https://doi.org/10.1007/s12152-022-09510-1
The new biotechnology raises expectations for modifying human behaviour through its use. This article focuses on the ethical analysis of the not-so-remote possibility of rehabilitating criminals by means of neurotechnological techniques. The analysis is carried out from a position that combines, on the one hand, a consequentialist conception of what is right and, on the other hand, an emphasis on individual liberties. As a result, it will first be defended that it is ethically appropriate to adopt a general predisposition to allow the neurorehabilitation of prisoners only if it is safe and if they give their consent. But, at the same time, reasons will be given for requiring, in certain circumstances, the exceptional use of neurotechnology to rehabilitate severely psychopathic prisoners, even against their will, from the same ethical perspective.
Article
Divide and Rule? Why Ethical Proliferation is not so Wrong for Technology Ethics
Journal: Philosophy & Technology, 2023
Authors: Joan Llorca Albareda and Jon Rueda
DOI: https://doi.org/10.1007/s13347-023-00609-8
Although the map of technology ethics is expanding, the growing subdomains within it may raise misgivings. In a recent and very interesting article, Sætra and Danaher have argued that the current dynamic of sub-specialization is harmful to the ethics of technology. In this commentary, we offer three reasons to diminish their concern about ethical proliferation. We argue, first, that the problem of demarcation is weakened if we attend to other sub-disciplines of technology ethics not mentioned by these authors. We claim, secondly, that the logic of sub-specializations is less problematic if one adopts mixed models (combining internalist and externalist approaches) in applied ethics. We finally reject the claim that clarity and distinction are necessary conditions for defining sub-fields within the ethics of technology, defending the porosity and constructive nature of ethical disciplines.
Article
The morally disruptive future of reprogenetic enhancement technologies
Journal: Trends in Biotechnology, 2023
Authors: Jon Rueda, Jonathan Pugh and Julian Savulescu
DOI: https://doi.org/10.1016/j.tibtech.2022.10.007
Emerging reprogenetic technologies may enable the enhancement of our offspring’s genes. Beyond raising ethical questions, these biotechnologies may change some aspects of future morality. In the reproductive field, biotechnological innovations may transform moral views about reproductive choices regarding what we consider to be just or even of equal standing.
Article
Innocence over utilitarianism: Heightened moral standards for robots in rescue dilemmas
Journal: European Journal of Social Psychology, 2023
Authors: Jukka Sundvall, Marianna Drosinou, Ivar R. Hannikainen et al.
DOI: https://doi.org/10.1002/ejsp.2936
Research in moral psychology has found that robots, more than humans, are expected to make utilitarian decisions. This expectation is found specifically when contrasting utilitarian action to deontological inaction. In a series of eight experiments (total N = 3752), we compared judgments about robots’ and humans’ decisions in a rescue dilemma with no possibility of deontological inaction. A robot’s decision to rescue an innocent victim of an accident was judged more positively than the decision to rescue two people culpable for the accident (Studies 1–2b). This pattern repeated in a large-scale web survey (Study 3, N = ∼19,000) and reversed when all victims were equally culpable/innocent (Study 5). Differences in judgments about humans’ and robots’ decisions were largest for norm-violating decisions. In sum, robots are not always expected to make utilitarian decisions, and their decisions are judged differently from those of humans based on other moral standards as well.
Article
Science, misinformation and digital technology during the Covid-19 pandemic
Journal: History and Philosophy of the Life Sciences, 2021
Authors: Aníbal Monasterio Astobiza
DOI: https://link.springer.com/article/10.1007/s40656-021-00424-4
Three interdependent factors are behind the current distorted narrative of the Covid-19 pandemic: (1) science’s culture of “publish or perish”, (2) misinformation spread by traditional media and social digital media, and (3) distrust of technology for tracing contacts and its privacy-related issues. In this short paper, I wish to tackle how these three factors have added up to give rise to a negative public understanding of science in times of a health crisis, such as the current Covid-19 pandemic, and finally, how to confront all these problems.
Philosophy, Engineering and Technology fPET 2023. Universidad de Delft, 19-21/04/2023
Reprogenetic technologies, future value change, and the axiological underpinnings of reproductive choice – Jon Rueda.
Biorobots as objects, tools or companions?: An ethical approach to understand bio-hybrid systems – Rafael Mestre & Aníbal Monasterio Astobiza.
XVI Congreso Internacional de Ética y Filosofía Política. Universidad Jaume I de Castellón, 2-4/02/2023
Sentido y límites de una ética de las máquinas – Francisco Lara.
Ética para avatares – Blanca Rodríguez López.
Estatus moral e inteligencia artificial: una panorámica de las problemáticas filosóficas de la consideración moral de la inteligencia artificial – Joan Llorca Albareda, Paloma García Díaz.
Ética, Trabajo e Inteligencia Artificial – Gonzalo Díaz-Cobacho, Joan Llorca Albareda.
VI Congreso Iberoamericano de Filosofía. Universidad de Oporto, 23-27/01/2023
La moralidad y la política de las cosas: la disolución político-moral del dualismo sujeto-objeto en las tecnologías de la inteligencia artificial – Joan Llorca Albareda.
Pluralismo en la determinación de la muerte, una posibilidad poco estudiada – Gonzalo Díaz-Cobacho.
Workshop on “Patient autonomy in the face of new technologies in medicine”. Uehiro Centre For Practical Ethics, Universidad de Oxford, 20/01/2023
Reprogenetic technologies, techno-moral changes, and the future of reproductive autonomy – Jon Rueda.
International Workshop “Logic, Philosophy and History of Medicine”. Universidad de Sevilla, 06-07/10/2022
No. We don’t want to get old – Pablo García-Barranquero, Joan Llorca Albareda, Gonzalo Díaz-Cobacho.
Arqus Research Forum “Artificial Intelligence and its applications”. Facultad de Ingenierías Informática y de Telecomunicación de la Universidad de Granada, 26-28/09/2022
Panel Ethics of AI – Joan Llorca Albareda, Juan Ignacio Del Valle.
16th World Congress of Bioethics. Universidad de Basilea, 22/07/2022
“Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocations – Jon Rueda.
Oxford Bioxphi Summit, Universidad de Oxford, 30/06-01/07/2022
Is the Distinction between Killing and Letting Die Really Universal? – Blanca Rodríguez López.
Workshop “Inteligencia Artificial, Ética de la Mejora Humana y Derechos Fundamentales”. Universidad de La Laguna, 04-05/05/2022
Autonomía y asistentes morales virtuales – Francisco Lara Sánchez & Blanca Rodríguez López.
Expertos morales artificiales. Pidiendo consejo ético a asistentes artificiales inteligentes – Blanca Rodríguez López & Jon Rueda.
Algunas consideraciones éticas y metafísicas sobre la RV – Aníbal Monasterio Astobiza.
Mejoramiento humano y transhumanismo: evaluando sus principales diferencias – Pablo García-Barranquero, Antonio Diéguez, Marcos Alonso.
La ética de hacer ética de la mejora humana – Jon Rueda.
Agencia y naturaleza de las agencias morales artificiales – Paloma García Díaz.
Aspectos éticos de sistemas recomendadores relativos a la autonomía humana – Juan Ignacio Del Valle.
Agencia (moral) algorítmica en el paradigma conexionista – Joan Llorca Albareda.
SOPhiA-Salzburg Conference for Young Analytic Philosophy. Universidad de Salzburgo, 11/09/2021
Genetic Enhancement, Human Extinction, and the Best Interests of Posthumanity – Jon Rueda.
Workshop
Inteligencia artificial, Ética de la Mejora humana y Derechos humanos.
GetTEC met on 4 and 5 May 2022 at the Universidad de La Laguna in Tenerife with the research group Género, Ciudadanía y Culturas. Aproximaciones desde la Teoría Feminista (GNCACADAF).
At the workshop, the members of the research group presented the state of their current research; an enriching dialogue was also held with the members of the group led by Professor María José Guerra Palmero.