THE NEGATIVE CONSEQUENCES
CONTROL LOSS
It will come as no surprise that the most common and most mediatized fear surrounding Artificial Intelligence is the loss of control over the program. Filmmakers, scientists, and public figures alike have warned society about the dangers of Artificial Intelligence: Stanley Kubrick, Stephen Hawking, and more recently Elon Musk have all issued the same warning, that AI will eventually escape its developers’ control. As Google’s AlphaGo demonstrated, given enough data and sufficiently complex programming, these programs can surpass any human being at certain tasks, in this instance the board game Go. The fear is that, with enough time, we will achieve Artificial Superintelligence and will regret it as it spirals out of control. (Marr, 2019)
These fears remain theoretical at the societal level; in the video game industry, however, it is already possible to program an Artificial Intelligence that acts against its developers’ intentions. Indeed, incorporating learning capability into a game means that game designers lose the ability to fully control the gaming experience. This is problematic for game designers, as it can ruin the player’s experience and, ultimately, the game itself. In a shooting game, for example, a human player could deliberately show up at the same place in every play session; gradually, the AI would attack that place without exploring further, having learnt to predict that the player will go there regardless of circumstances. The player can then take advantage of the AI’s memory either to avoid encountering the AI or to ambush it. Such a strategy is beyond the designers’ control; in this instance, it could make the game too easy. (Haring, 2017)
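To make this exploit concrete, the sketch below models a learning enemy as a simple frequency count over player sightings. It is a minimal illustration, not code from any real game or engine: the class name, its methods, and the map locations are all hypothetical.

```python
from collections import Counter
from typing import Optional

class LearningGuardAI:
    """Toy enemy that 'learns' where the player tends to appear
    by keeping a running count of sightings per location."""

    def __init__(self) -> None:
        self.sightings: Counter = Counter()  # location -> times seen

    def observe(self, location: str) -> None:
        # Record where the player was spotted this session.
        self.sightings[location] += 1

    def predict_player_location(self) -> Optional[str]:
        # The AI commits to the single most frequent location,
        # so it stops exploring once one spot dominates its memory.
        if not self.sightings:
            return None
        return self.sightings.most_common(1)[0][0]

ai = LearningGuardAI()

# The player deliberately 'trains' the AI by showing up at the
# same place in every play session...
for _ in range(10):
    ai.observe("warehouse")
print(ai.predict_player_location())  # -> 'warehouse'

# ...then exploits the stale prediction: one new sighting cannot
# outweigh ten old ones, so the AI keeps attacking the warehouse
# while the player ambushes it from elsewhere.
ai.observe("rooftop")
print(ai.predict_player_location())  # still 'warehouse'
```

The flaw here is not a coding bug but the learning rule itself: because past observations are never discounted and the AI never re-explores, a player who feeds it biased data can steer its behavior indefinitely, which is exactly the loss of designer control described above.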
Video games are thus a strange blessing for AI researchers. Instead of merely demonstrating AI’s advantages and improving algorithmic techniques, video games can expose a fatal flaw in the very state researchers strive to achieve: an adaptable and personalized AI. Indeed, video games show that an AI capable of adapting to situations can be manipulated through its prediction patterns, and that such an algorithm can develop undesirable behavior depending on what it has learnt since its inception. This reveals flaws that ill-intentioned individuals could exploit in AI systems meant to offer personalized experiences to users.
