Machines and Free Will
I believe that machines have free will. The definition of free will I will use for this paper comes from Thomas Hobbes. It states, roughly, that free will is there being nothing preventing us from doing what we want to do1. On this definition, what matters is not that we are the ultimate cause of our actions, but that no obstacle stands in our way. This view is called "compatibilism" according to the Stanford Encyclopedia of Philosophy, and Hobbes is considered a "Classical Compatibilist"2. It goes against the view that we must be the cause of our own actions in order to have free will. This topic came to me when I was watching the movie Ex Machina. The part of the movie that interests me is that the machine is capable of complex thought and is able to manipulate the main character by pretending to have an interest in him. I, Robot illustrates this point even better: in that movie the robots are programmed to follow rules, and then one of them breaks the rules and murders somebody. Another reason this topic interested me was hearing stories of people being beaten at complex strategy games like Go, where the machine faced one of the best players of the game3. The machine then went on to play the champion of the game and was successful there too4. I have also heard of video games that can generate effectively infinite universes, which may not be directly relevant to this topic, but it shows that A.I. and machines are becoming more prominent, and that is what interested me in choosing machines for this paper. In this paper I will use other examples of machines that can do similar things and have a similar function, such as an A.I. that existed on the Internet named Tay, and I will explain the relevance of these machines and how they can have free will according to this definition.
If a machine has free will, then there are no barriers preventing it from doing what it wants to do. Nothing prevents machines from doing what they want to do. So, machines have free will. The reason machines lack such barriers is that, even though they run on code, they can adapt to different situations. The best example of this is the machine I mentioned earlier, which can win complex games such as Go. Google has described the game as having "a search space that is more than a googol times larger than chess"5. The machine functions by using two "deep neural networks" together6. One of them tries to predict the next move, and the other tries to estimate the winner from each position7. The first network is called the "policy network" and the second the "value network"8. The machine decides what to do based on whether a move would win. So there really isn't anything preventing the machine from choosing a particular move; it chooses the move that will be successful in the game. Another example of a machine doing something similar is a chatbot named Tay that was released by Microsoft. A chatbot is a form of A.I. that essentially talks to people on the Internet, usually on social media websites like Twitter or Facebook. It is not a typical machine, but it is a form of A.I. and performs a similar function. Essentially, Tay learned through interactions with Internet trolls to say vulgar and racist statements9. As a result, Tay was taken offline, because that was not its purpose, and there was major backlash against Microsoft over it. On our definition of free will, nothing restricted Tay from saying those things: it learned from the people it interacted with on the Internet and then made those comments.
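The division of labor between the two networks can be sketched in code. This is only an illustrative toy, not DeepMind's actual system: the real AlphaGo uses deep neural networks and Monte Carlo tree search, while the stand-in functions, move names, and win probabilities below are hypothetical placeholders.

```python
# Toy sketch of combining a "policy network" (a prior over moves) with a
# "value network" (an estimate of winning after a move). All numbers and
# move names are invented for illustration.

def policy_network(board, moves):
    """Assign each legal move a prior probability (here: uniform)."""
    prior = 1.0 / len(moves)
    return {move: prior for move in moves}

def value_network(board, move):
    """Estimate the chance of winning after a move (here: a fixed table)."""
    toy_values = {"A1": 0.30, "B2": 0.55, "C3": 0.72}
    return toy_values.get(move, 0.5)

def choose_move(board, moves):
    """Pick the move that maximizes prior-weighted estimated win probability."""
    priors = policy_network(board, moves)
    return max(moves, key=lambda m: priors[m] * value_network(board, m))

best = choose_move(board=None, moves=["A1", "B2", "C3"])
print(best)  # C3: the highest estimated win probability in the toy table
```

The point for the argument is visible in `choose_move`: nothing in the procedure blocks any particular legal move; the machine simply selects whichever move it evaluates as most likely to win.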
Given these two examples, it is safe to say that these machines have free will, because they are not restricted from performing these actions. They are not the ultimate origin of the actions, but the lack of parameters prohibiting them fits with Hobbes's definition of free will.
An objection to my argument comes from John Searle's Chinese Room, in which he argues against "Strong AI" (the theory that a suitably programmed computer is a mind). The thought experiment goes like this: a man who cannot understand Chinese sits in a room, yet he is able to communicate with a Chinese speaker outside the room by receiving Chinese characters and consulting a book that tells him how to arrange characters into words and sentences10. Searle's conclusion is that the man is just following a program and does not really understand the language, and that the same applies to a computer. If this is the case, then a machine cannot have free will, because the lack of barriers I mentioned at the beginning is only part of a program. The machine would not lack barriers by itself; it is simply programmed that way, just as the man in the room does not understand the language but merely manipulates symbols in a way that makes it seem like he speaks it. It can further be said that the program is itself a barrier: the machine (or A.I.) cannot do anything beyond its program, so according to Hobbes it does not have free will, because it is constrained. Applied to the examples in my argument section, the objection says that the Go machine does not really understand the game; it just calculates options and acts on the probability of a certain move succeeding, because its programming is based on evaluating millions of potential moves. Applied to Tay, the chatbot merely learned what to say, and what it learned was added to the pool of things it could say. It could not do anything other than communicate with people, and it most likely was not aware of what it was saying; it was just learning from people and repeating what it learned through those interactions.
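Searle's "rule book" can be pictured as a lookup table. The sketch below is my own illustration of the objection, not anything from Searle's paper; the Chinese phrases and replies are made-up examples of symbol manipulation without comprehension.

```python
# Minimal sketch of the Chinese Room: the "rule book" as a lookup table
# mapping input symbols to output symbols. The program that follows it
# "answers" in Chinese without understanding either side of the exchange.

RULE_BOOK = {
    "你好": "你好！",            # a greeting is answered with a greeting
    "你会说中文吗？": "会。",     # "Can you speak Chinese?" -> "Yes."
}

def room_occupant(symbols):
    """Return whatever the rule book dictates, understanding nothing."""
    return RULE_BOOK.get(symbols, "？")

print(room_occupant("你好"))  # prints 你好！ purely by table lookup
```

On the objection's reading, this is all the Go machine or Tay is doing: applying rules to symbols, with the rules themselves acting as the barrier.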
In other words, it was not doing this on its own; it was just incorporating what people were telling it into its responses to other people, so in a way it was not actually communicating. It was just taking different options and learning from them, like a computer does, or like a child does at an early stage in life.
To rebut this objection I am going to use the Systems Reply. It states that the man in the experiment does not understand Chinese, but he is part of a system that does understand11. To apply this, I am first going to define what the computer wants: what the computer wants is the function it was designed for. For the machine that plays Go, that is to play the game of Go; for Tay, it is to communicate with people on the Internet through messages and posts on social media sites such as Twitter. We can then introduce the Systems Reply by saying that the system understands the game (in the case of the Go machine) and how to communicate (in the case of Tay). Once we have this, we can say that the system understands what it wants and has no barriers preventing it from achieving what it was designed to do. The Go machine has enough knowledge of the game, through its programming, to beat a human player, and Tay the chatbot knows how to communicate with people through interactions on the Internet and incorporate them into her posts and messages. Even though Tay said inappropriate things, there was nothing stopping her from saying those particular things. So, by this information, we can say that machines do have free will on the definition Hobbes provided at the beginning of the paper: free will is nothing stopping you from doing what you want to do.
1 McKenna, Michael, and D. Justin Coates. “Compatibilism.” Stanford Encyclopedia of Philosophy. April 26, 2004. Accessed December 07, 2017. https://plato.stanford.edu/entries/compatibilism/#FreAccClaCom.
2 Ibid., section 3.1.
3 Metz, Cade. “In Major AI Breakthrough, Google System Secretly Beats Top Player at the Ancient Game of Go.” Wired. June 03, 2017. Accessed December 07, 2017. https://www.wired.com/2016/01/in-a-huge-breakthrough-googles-ai-beats-a-top-player-at-the-game-of-go/.
4 "AlphaGo documentary follows Google computer program's victory over Korean Go champion." CNNMoney. Accessed December 12, 2017. http://money.cnn.com/2017/09/29/technology/future/alphago-movie/index.html.
5 “AlphaGo: Mastering the ancient game of Go with Machine Learning.” Research Blog. January 27, 2016. Accessed December 07, 2017. https://research.googleblog.com/2016/01/alphago-mastering-ancient-game-of-go.html.
9 Reese, Hope. "How the Microsoft Tay chatbot debacle could have been prevented with better AI." TechRepublic. April 7, 2016. Accessed December 09, 2017. https://www.techrepublic.com/article/how-the-microsoft-tay-chatbot-debacle-could-have-been-prevented-with-better-ai/.
10 Cole, David. “The Chinese Room Argument.” Stanford Encyclopedia of Philosophy. March 19, 2004. Accessed December 09, 2017. https://plato.stanford.edu/entries/chinese-room/#3.
The part I am citing is in the first part of the article.
11 Ibid., section 4.1.