Abstract
Recently, a number of well-known public figures have expressed concern about the future development of artificial intelligence (AI), noting that AI could get out of control and affect human beings and society in disastrous ways. Many of these cautionary notes are alarmist and unrealistic, and while there has been some pushback on these concerns, the deep flaws in the thinking that leads to them have not been called out. Much of the fear and trepidation is based on misunderstanding and confusion about what AI is and can ever be. In this work we identify three factors that contribute to this “AI anxiety”: an exclusive focus on AI programs that leaves humans out of the picture, confusion about autonomy in computational entities and in humans, and an inaccurate conception of technological development. With this analysis we argue that there are good reasons for anxiety about AI, but not for the reasons typically given by AI alarmists.
| Original language | English |
|---|---|
| Pages (from-to) | 2267-2270 |
| Number of pages | 4 |
| Journal | Journal of the Association for Information Science and Technology |
| Volume | 68 |
| Issue number | 9 |
| Early online date | 22 Jun 2017 |
| DOIs | |
| Publication status | Published - 1 Sept 2017 |
Keywords
- Artificial intelligence
- Philosophy of science
- Future