The threats of AI must be taken seriously to prevent harm

The risks of AI use are growing as the technology becomes more pervasive. Rather than laugh off the threats, businesses should move to mitigate them before they become headaches.

Technology futurist and entrepreneur Elon Musk has frequently mused on the threats of AI, most recently in a talk at the South by Southwest technology conference in which he called AI "more dangerous than nuclear warheads." While mainstream technologists and social scientists may dismiss Musk's foreboding proclamations, it's worth examining the current state of machine learning and artificial intelligence integration into everyday applications and asking whether we are ready to rely on algorithms.

AI is not a new phenomenon. Research on AI started back in the late 1950s, with various stops and starts throughout the past six decades. In that time, many machine learning approaches and algorithms have been developed. However, until relatively recently, the practice of AI has largely taken place behind the scenes.

So what has changed to trigger the renewed interest -- and, in the case of Elon Musk, growing fear -- in artificial intelligence? In the earlier days of AI and machine learning, the roles of the software developer and the analyst were conflated -- to be able to use the algorithm, one had to know how to program it. The tipping point came following two technology advances.
