This study investigates the relationship between artificial intelligence (AI) and ethical decision-making in intelligent systems. As AI technologies are increasingly deployed in critical domains such as healthcare, law, and autonomous systems, the ethical risks they introduce, such as bias, opacity, and the absence of emotional or contextual judgement, require immediate attention. These risks are classified across three dimensions: technological uncertainty, limitations in human morality, and complex interactions between human and non-human agents. The paper examines the feasibility of embedding ethical reasoning into AI systems using normative theories such as utilitarianism, deontology, and virtue ethics. It also analyses global regulatory frameworks and emerging interdisciplinary approaches, illustrating the importance of culturally responsive, transparent, and accountable AI governance. A multi-domain ethical risk analysis framework is proposed to help developers, policymakers, and ethicists evaluate and mitigate ethical concerns throughout the AI lifecycle. The study concludes with recommendations for future interdisciplinary research, including operationalising ethics in AI design and developing anticipatory governance models. This work aims to support the creation of intelligent systems that are not only technically robust but also ethically aligned with human values.
Rafid Khaleefah
www.synapsesocial.com/papers/68bb3d4e2b87ece8dc955b73 — DOI: https://doi.org/10.61856/6kyazw30