An Ethical Dilemma in the Technological Era
Exploring the ethical challenges and governance frameworks for research that can be used for both beneficial and harmful purposes in AI, biotechnology, and emerging technologies.
Imagine a drug developed to relieve chronic pain that can be turned into a chemical weapon, an algorithm designed to optimize medical diagnoses that ends up discriminating against population groups, or artificial intelligence research created to improve education but used to generate disinformation at scale. This is the paradox of "dual use" facing contemporary science: the advances with the greatest potential to positively transform our society are precisely those that carry the greatest risks if misused [1].
The concept of "dual use" describes this ambiguous nature of emerging technologies, where the same applications can serve both beneficial and harmful purposes, complicating ethical decision-making [1]. In 2025, as artificial intelligence, biotechnology, and other powerful tools become increasingly integrated into our daily lives, the governance of this dilemma has become one of the most pressing challenges for the global community.
The same technologies with the greatest potential for societal benefit also pose the most significant risks when misused.
Dual-use research refers to scientific knowledge and technological developments that, although generated for beneficial purposes, have the potential to be diverted toward malicious ends or to cause significant harm to society, security, or human rights [1].
This duality is particularly evident in fields such as artificial intelligence, where the same algorithms that enable revolutionary advances in medical diagnosis can be used to build mass surveillance systems or autonomous weapons [7]. The transformative nature of these technologies demands a reevaluation of ethical frameworks and social norms, introducing complex challenges that require not only ethical reflection but also a redesign of the conceptual tools used to navigate their implications [1].
- Beneficial purpose: technologies developed for positive societal impact, medical advancement, and improved quality of life.
- Harmful potential: the same technologies can be repurposed or misused with harmful consequences.
- Governance challenge: developing frameworks that maximize benefits while minimizing the risks of misuse.
The increasing convergence of technologies amplifies dual-use risks, creating new ethical challenges that existing governance frameworks struggle to address.
The tension between scientific research and its ethical implications is not new. Historical events starkly demonstrated the consequences of research without adequate ethical oversight:
- Nazi human experiments (World War II): prisoners were subjected to brutal procedures without their consent [3].
- Tuskegee Syphilis Study (1932-1972): nearly 400 African American men with syphilis were observed without treatment, despite penicillin being available as an effective therapy since 1944 [3].
- Willowbrook hepatitis studies (beginning in 1954): children with mental disabilities were deliberately inoculated with the hepatitis virus to study the natural history of the disease [3].
These episodes led to the development of foundational research-ethics instruments such as the Nuremberg Code (1947), the Declaration of Helsinki (1964), and the Belmont Report (1979) [3]. However, the speed and scope of current emerging technologies present unprecedented challenges that these frameworks did not fully anticipate.
In 2024, an international consortium of researchers designed a pioneering study to evaluate the effectiveness of different governance frameworks in artificial intelligence projects with clear dual-use potential. The experiment involved 15 development teams working on a facial recognition algorithm with applications in both medical diagnosis (early detection of rare diseases) and mass surveillance.
| Ethical Performance Indicator | Group A (Reactive) | Group B (Continuous) | Group C (Ethics by Design) |
|---|---|---|---|
| Early Risk Detection | 24% | 68% | 92% |
| Algorithm Transparency | 45% | 72% | 88% |
| Bias Mitigation | 38% | 75% | 91% |
| Effective Contingency Plans | 28% | 65% | 87% |
| Team Satisfaction | 52% | 78% | 85% |
The results were revealing: teams operating under the "ethics by design" framework (Group C) showed significantly superior performance in all measured ethical indicators, particularly in early identification of potential dual-use risks (92% effectiveness compared to 24% in the reactive group) [1].
The qualitative analysis showed that teams in Groups B and C developed what researchers called "ethical muscle" - a strengthened capacity to anticipate and respond to ethical dilemmas that arose during development, something notably absent in Group A.
Various ethical frameworks have emerged to guide stakeholders in navigating the complex landscape of dual-use in research, emphasizing key principles:
| Ethical Principle | Application in Dual-Use Research | Implementation Mechanisms |
|---|---|---|
| Justice and Equity | Prevention of discriminatory biases in algorithms and technologies | Regular equity audits, diverse datasets |
| Transparency | Explainability of algorithms and decision-making processes | Detailed documentation, reporting standards, public disclosure |
| Accountability | Clarity in responsibility chains when malicious uses occur | Accountability frameworks, oversight protocols |
| Privacy | Protection against mass surveillance and intrusions | Encryption, anonymization, strict access controls |
| Non-maleficence | Prevention of harm to vulnerable groups and society | Ethical impact assessments, risk scenario analysis |
| Beneficence | Maximization of positive social benefits | Public good-oriented design, equitable access to benefits |
These principles align well with those used in bioethics, which shares similarities with digital ethics in addressing new forms of agents and environments [1]. The convergence highlights the need to incorporate ethical considerations from the beginning of the design process to ensure that emerging systems benefit society while preserving individual rights [1].
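As an illustration of the "regular equity audits" mechanism in the table, the following sketch computes a demographic parity gap, one common fairness metric, over a sample of a model's decisions. The field names, sample data, and tolerance threshold are hypothetical; a real audit would use metrics and thresholds agreed by policy.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Largest gap in favorable-outcome rates across groups.

    `decisions` is a list of (group, outcome) pairs, where outcome is
    1 for a favorable decision and 0 otherwise.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit data: (demographic group, model decision)
audit_sample = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
                ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

gap, rates = demographic_parity_gap(audit_sample)
THRESHOLD = 0.2  # illustrative value; real audits set this by policy
if gap > THRESHOLD:
    print(f"Equity audit FAILED: parity gap {gap:.2f}, rates {rates}")
```

A gap of zero means all groups receive favorable outcomes at the same rate; the audit flags the model for review when the gap exceeds the agreed tolerance.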
To operationalize these ethical principles, researchers and ethics committees have a growing set of tools and approaches:
- Continuous ethical review: systematic and periodic assessments that go beyond the initial project review [6].
- Independent audits: independent mechanisms to evaluate AI systems for biases, discrimination, or potential misuse [1].
- Tiered transparency: balancing the need for transparency with intellectual-property protection through differentiated levels of information access [5].
- Lifecycle approaches: perspectives that integrate ethical norms from the initial ideation stage through post-market implementation [1].
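The idea of differentiated levels of information access can be sketched as a simple mapping from disclosure tiers to the material each tier may see. The tier names and documents below are hypothetical, not a standard; they only illustrate the mechanism.

```python
# Hypothetical disclosure tiers, from least to most privileged.
TIERS = {
    "public": 0,     # high-level summaries, published results
    "oversight": 1,  # ethics committees, independent auditors
    "internal": 2,   # full technical detail for the research team
}

# Each document is tagged with the minimum tier needed to read it.
DOCUMENTS = [
    {"title": "Project summary", "min_tier": "public"},
    {"title": "Bias audit report", "min_tier": "oversight"},
    {"title": "Model weights and training data", "min_tier": "internal"},
]

def accessible(role_tier, documents=DOCUMENTS):
    """Return the document titles a given tier is allowed to see."""
    level = TIERS[role_tier]
    return [d["title"] for d in documents if TIERS[d["min_tier"]] <= level]

print(accessible("oversight"))
# An auditor sees the summary and the audit report,
# but not the raw model assets.
```

The design choice here is that transparency is graded rather than all-or-nothing: oversight bodies get enough detail to audit without the full disclosure that would undermine intellectual-property protection.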
| Research Type | Main Dual-Use Risk | Recommended Governance Mechanisms |
|---|---|---|
| Artificial Intelligence | Discriminatory biases, mass surveillance, disinformation | Equity audits, AI ethics review committees, transparency standards |
| Biotechnology | Creation of dangerous pathogens, biological weapons | Biosafety protocols, material control, specific regulations |
| Information Systems | Cyberattacks, privacy violation, manipulation | Robust encryption, privacy by design principles, penetration testing |
| Climate Research | Geoengineering with unknown side effects | Environmental impact assessments, global governance frameworks, moratoriums |
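To show how the governance mechanisms in the table might be operationalized, here is a minimal pre-review screening sketch that maps a declared research area to the checks a proposal should pass before approval. The mapping simply restates the table above and is illustrative; real committees would maintain far richer criteria.

```python
# Illustrative mapping from research area to required governance checks,
# restating the table above.
REQUIRED_CHECKS = {
    "artificial intelligence": ["equity audit", "ethics review committee",
                                "transparency standard"],
    "biotechnology": ["biosafety protocol", "material control",
                      "regulatory compliance"],
    "information systems": ["encryption review", "privacy by design",
                            "penetration test"],
    "climate research": ["environmental impact assessment",
                         "global governance review"],
}

def screen_proposal(area, completed_checks):
    """Return the governance checks still missing for a proposal."""
    required = REQUIRED_CHECKS.get(area.lower(), [])
    return [c for c in required if c not in completed_checks]

missing = screen_proposal("Artificial Intelligence", {"equity audit"})
print(missing)  # the checks this AI proposal has not yet passed
```

Encoding the checklist makes the review criteria explicit and auditable, which is itself one of the transparency mechanisms the frameworks above call for.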
The growing complexity of the technological landscape suggests that no single actor - governments, private sector, academia, or civil society - can address dual-use challenges alone. A collaborative multidisciplinary approach that integrates different perspectives and expertise is required7 .
The PISA 2025 assessment framework reflects this evolution, shifting toward a broader conception of scientific competence that includes skills to "search, evaluate, and use scientific information to make decisions and act" [4]. This critical capacity becomes particularly relevant in a social context dominated by information sources on the Internet, many of them scientific or pseudoscientific.
The future trajectory that emerging technologies take will depend on the decisions we make collectively today, establishing governance frameworks that foster innovation while protecting what is most essential to our shared humanity.
Research ethics committees play a crucial role in this ecosystem, but their composition and mandates must evolve to include expertise in emerging technologies, dual-use risk assessment, and diverse global perspectives [3][6].
A multi-stakeholder approach is essential for effective dual-use governance in the technological era.
The governance of dual-use in scientific research represents one of the defining challenges of our time. It is not about impeding progress, but about guiding it with wisdom so that the fruits of human ingenuity serve to elevate our collective condition rather than threaten it.
As the philosopher Pilar Llácer recently pointed out, what makes human beings unique is their ability to imagine what is going to happen, and this is possible thanks to ethics [8]. Ethics allows us to look beyond our own criteria, beyond what we like or what makes us feel good, toward what preserves our shared humanity.
In 2025, facing technologies that seem drawn from science fiction, this ethical capacity becomes our most precious asset. The future is not written; it will be shaped by the decisions we make collectively today.