Results 1 - 3 of 3
1.
Oxf J Leg Stud ; 44(3): 673-701, 2024.
Article in English | MEDLINE | ID: mdl-39234494

ABSTRACT

Machines powered by artificial intelligence (AI) are increasingly taking over tasks previously performed by humans alone. In accomplishing such tasks, they may intentionally commit 'AI crimes', i.e. engage in behaviour that would be considered a crime if it were carried out by humans. For instance, an advanced AI trading agent may, despite its designer's best efforts, autonomously manipulate markets while lacking the properties required for being held criminally responsible. In such cases ('hard AI crimes') a criminal responsibility gap emerges, since no agent (human or artificial) can be legitimately punished for the outcome. We aim to shift the 'hard AI crime' discussion from blame to deterrence and design an 'AI deterrence paradigm', separate from criminal law and inspired by the economic theory of crime. The homo economicus has come to life as a machina economica which, even if it cannot be meaningfully blamed, can nevertheless be effectively deterred, since it internalises criminal sanctions as costs.
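
The deterrence condition the abstract alludes to can be illustrated with a short sketch. The following Python snippet is not from the article; it is a minimal illustration of the Becker-style calculus of the economic theory of crime, under which an agent that internalises sanctions as costs is deterred. All names and figures are hypothetical.

    # Illustrative sketch, not the article's model: the economic theory of
    # crime treats an offence as worthwhile only if its gain exceeds the
    # expected cost of the sanction (detection probability times penalty).
    def is_deterred(gain: float, detection_prob: float, sanction: float) -> bool:
        """Return True if the expected sanction cost exceeds the gain."""
        return detection_prob * sanction > gain

    # Hypothetical figures: manipulation yields 1.0M; detection rate is 20%.
    print(is_deterred(gain=1.0, detection_prob=0.2, sanction=4.0))  # False: 0.8 < 1.0
    print(is_deterred(gain=1.0, detection_prob=0.2, sanction=6.0))  # True: 1.2 > 1.0

On this picture, deterring a machina economica is a matter of calibrating detection probability and sanction size so that the inequality holds, without any appeal to blame.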

2.
Front Artif Intell ; 6: 1130559, 2023.
Article in English | MEDLINE | ID: mdl-37731604

ABSTRACT

This article investigates the conceptual connection between argumentation and explanation in the law and provides a formal account of it. The methods used are conceptual analysis from legal theory and formal argumentation from AI. The contribution and results are twofold. On the one hand, we offer a critical reconstruction of the concepts of legal argument, justification, and explanation of decision-making as they have been elaborated in legal theory and, above all, in AI and law. On the other hand, we propose some definitions of explanation in the context of formal legal argumentation, showing a connection between formal justification and explanation. We also investigate the notion of stable normative explanation developed elsewhere in Defeasible Logic and extend some complexity results. Our contribution is thus mainly conceptual, and it is meant to show how notions of explanation from the literature on explainable AI and legal theory can be modeled in an argumentation framework with structured arguments.
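
As a rough illustration of the kind of formal machinery involved, the following Python sketch computes the grounded extension of a Dung-style abstract argumentation framework. This is an assumption-laden simplification: the article works with structured arguments and Defeasible Logic, and the framework and argument names here are hypothetical.

    # Minimal sketch of Dung-style abstract argumentation (not the article's
    # structured framework). The grounded extension iteratively accepts
    # arguments whose attackers have all been defeated.
    def grounded_extension(arguments, attacks):
        accepted, defeated = set(), set()
        changed = True
        while changed:
            changed = False
            for a in arguments - accepted - defeated:
                attackers = {x for (x, y) in attacks if y == a}
                if attackers <= defeated:        # every attacker already defeated
                    accepted.add(a)
                    changed = True
            for a in arguments - defeated:
                attackers = {x for (x, y) in attacks if y == a}
                if attackers & accepted:         # attacked by an accepted argument
                    defeated.add(a)
                    changed = True
        return accepted

    # Hypothetical example: A attacks B, B attacks C. The grounded extension
    # {A, C} can be read as a skeptical justification of C: its only
    # attacker, B, is itself defeated by A.
    print(sorted(grounded_extension({"A", "B", "C"}, {("A", "B"), ("B", "C")})))

On accounts of this kind, the justification of a conclusion rests on the accepted arguments that defend it against its attackers; the article develops its notions of explanation over richer, structured variants of such frameworks.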

3.
IEEE Trans Technol Soc ; 3(4): 272-289, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36573115

ABSTRACT

This article's main contributions are twofold: 1) to demonstrate how to apply the European Union High-Level Expert Group's (EU HLEG) guidelines for trustworthy AI in practice in the healthcare domain, and 2) to investigate what "trustworthy AI" means at the time of the COVID-19 pandemic. To this end, we present the results of a post-hoc self-assessment of the trustworthiness of an AI system for predicting a multiregional score that conveys the degree of lung compromise in COVID-19 patients, developed and verified by an interdisciplinary team with members from academia, public hospitals, and industry during the pandemic. The AI system aims to help radiologists estimate and communicate the severity of damage to a patient's lungs from chest X-rays. It has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia, Italy, since December 2020. The methodology we applied for our post-hoc assessment, called Z-Inspection®, uses sociotechnical scenarios to identify ethical, technical, and domain-specific issues in the use of the AI system in the context of the pandemic.
