Abstract
A critically important ethical issue facing the AI research community is how AI research and AI products can be responsibly conceptualised and presented to the public. A good deal of fear and concern about uncontrollable AI is now being displayed in public discourse, and public understanding of AI is being shaped in a way that may ultimately impede AI research. The public discourse, as well as discourse among AI researchers, leads to at least two problems: a confusion about the notion of ‘autonomy’ that induces people to attribute to machines something comparable to human autonomy, and a ‘sociotechnical blindness’ that hides the essential role played by humans at every stage of the design and deployment of an AI system. Our purpose here is to develop and use a language aimed at reframing the discourse in AI and shedding light on the real issues in the discipline.
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 575–590 |
| Number of pages | 16 |
| Journal | Minds and Machines |
| Volume | 27 |
| Issue number | 4 |
| Early online date | 9 Jan 2017 |
| DOIs | |
| Publication status | Published - Dec 2017 |
Keywords
- Artificial Intelligence
- Socio-technical Systems
- Philosophy of Technology