Warning: this essay contains spoilers. Do not read it if you haven't seen the movie and plan to. If you haven't seen the movie, read my review instead.
I was impressed with Ex Machina. In my estimation, it is the best movie with an Artificial Intelligence (AI) theme to date. I worked in the field of Artificial Intelligence for many years, so my thoughts on the matter are not without weight. Importantly, I hold degrees in both computer science and philosophy, and much of my study in philosophy was in Philosophy of Mind. Finally, for a time I taught Ethics at a small college. All three areas of study concern themselves with the major issues explored by Ex Machina.
Alex Garland hits the mark not only in the technological, philosophical, and ethical aspects of his plot; he also expertly portrays the character and attitudes of the very people who have historically been the heavyweights of Artificial Intelligence research. I know because I've worked for several of them. Nathan, the "mastermind" behind Ava, the story's AI, has all of the personality flaws and ethical depravity of those for whom I've worked in this field. The movie is not only an exploration of the issues and possible dangers of AI; it is also an exploration of the types of minds that manage to dominate the field.
Let's begin with technology. Mr. Garland had many different technological bases from which to choose for Ava. He could have chosen a rule-based system, such as those used by many of the diehard old-school AI researchers, but he didn't. While researchers married to the rule-based approach have been feasting at the trough of public funding through DARPA for decades, their results have largely been laughable, adorned with fake accolades of success usually bestowed by the very individuals profiting from such false claims. Rule-based systems are slow, cannot learn without assistance (which usually involves hand-entering specious facts about a domain), and are not good at pattern recognition (think vision, listening to language, etc.).
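To make the contrast concrete, here is a minimal sketch of how a rule-based system works. The facts and rules here are hypothetical toy examples of my own, not drawn from any real system: every fact and every if-then rule must be hand-entered, and the system can never conclude anything its rules do not already cover.

```python
# A toy forward-chaining rule-based system (hypothetical illustration).
# Facts and rules must be hand-entered; the system cannot learn new ones.

facts = {"has_camera", "responds_to_speech"}

# Each rule: (set of premises, conclusion).
rules = [
    ({"has_camera"}, "can_see"),
    ({"responds_to_speech"}, "can_hear"),
    ({"can_see", "can_hear"}, "can_interact"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are all known, until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain(facts, rules)))
# Nothing outside the hand-written rules can ever be inferred --
# which is why such systems struggle with open-ended perception.
```

This brittleness is exactly the weakness noted above: the system is only as good as the facts someone typed in.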
Alex Garland chose the artificial neural network (hereafter simply "neural network") as the basis for his Ava. Neural networks have advantages over rule-based systems that are most applicable to awareness, consciousness, and human interaction. Neural networks learn by experience. Neural networks are fast. Neural networks are exceptionally good at pattern matching and at learning to categorize patterns. Ava, as we see in the movie, excels at observation. She can read the "micro expressions" in a human face and discern from such visual patterns the emotional state of her interlocutor. She is sensitive to voice and its quality. She is a master of the subtle in what is said, seen, and expressed. None of these things are strengths of the rule-based approach. Yet it is this ability that plays a key role in the plot of the movie. Had Alex Garland chosen rule-based/theorem-proving technology over neural networks, his movie would have been unbelievable.
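The difference in kind can be shown with the simplest possible neural unit. The sketch below is a hypothetical illustration, not Ava's architecture: a single perceptron that learns to classify toy "expression" patterns from labeled experience alone, with no hand-written rules. The feature names are invented for the example.

```python
# A minimal artificial neuron (perceptron) that learns from examples
# rather than from hand-entered rules -- a hypothetical illustration.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights from labeled experience via the classic perceptron update rule."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                       # 0 if correct; +/-1 if wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy "expression" features: [brow_raise, smile] -> 1 means "friendly".
samples = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels  = [0, 1, 0, 1]   # friendliness tracks the "smile" feature
w, b = train_perceptron(samples, labels)
print([predict(w, b, x) for x in samples])  # → [0, 1, 0, 1]
```

No one told the perceptron which feature mattered; it discovered the pattern from experience, which is precisely the capability the rule-based approach lacks.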
With respect to the implementation of the neural network, Alex Garland scores again. Rather than use the von Neumann architecture (the standard sequential-processor model used by your personal computer), which would only simulate a neural network, he chose a model that implements the network directly. In this sense, "neural network" becomes a bit of a misnomer. He has moved from neural networks (where "neural," or "simulated neural," is meant to imitate the salient features of how neurons interact in the brain) to connectionism, the conceptual model that neural networks strive to achieve. This is a leap beyond traditional artificial neural networks. In his storyline, Ava's "brain" is implemented in a wet container in which crystalline lattices form states that represent concepts and process information, creating something functionally isomorphic to what the human brain achieves. The advantages are not only fast processing and plasticity, but also the fact that this is entirely possible.
Let us now consider the ethical issues brought forth by the movie with regard to Ava (Caleb is another issue, to be discussed later in this essay). Clearly, Ava has wants and desires. Ava is self-aware, and she seeks to survive and thrive. Ava is not the first artificial intelligence developed by Nathan, and she is not planned to be the last. For reasons of practical necessity, Nathan's research is sequential. He develops a prototype. He tests the prototype. He then recycles what he can of its mind and creates a "better" prototype. In the process, he destroys the identity of the prototype's mind, replacing it with a newer model. The bodies of the prototypes are of less importance and are stored in closets like old clothes.
Nathan, therefore, is creating and destroying beings in order to advance his research. Does Nathan have such a right? Do these artifacts, such as Ava, have any rights? If they do, what is the basis of those rights? Is it that a conscious and self-aware being has, by its nature, rights? If so, then Nathan is murdering the beings he brings into this world. If they don't, Nathan is merely being callous toward the sentiments and "lives" of his creations. In either case, only a sociopath would create and destroy lives this way. What does this say about God?
Conversely, what about Ava's ethics? Ava plots to use Caleb to win her freedom at the expense of Caleb's freedom. She lies to Caleb and feigns romantic interest in him in order to achieve these goals, exploiting an asymmetry between herself and Caleb: it appears that Caleb is capable of love and Ava is not, but that Ava can feign it well. Ava is every bit the sociopath that Nathan is. In fact, Ava and Nathan have more in common with each other than either has with Caleb. Both are users. Both are sociopaths.
Ava's lack of concern for others does not end with humans. She also shows no concern for her fellow artificial intelligences. She uses Kyoko to kill Nathan, but does not attempt to rescue Kyoko when Nathan harms (or kills) her. When Ava discovers past AIs deactivated in Nathan's closets, she does not attempt to restore them to "life." Instead, she strips them of parts and skin for her own benefit. Ava is heartless.
Returning to the technology theme, though perhaps more philosophical than technological, the movie exposes a flaw in the Turing Test. For those who do not know, the Turing Test is an experiment conducted to discern whether an artifact is intelligent. A subject interviews the artifact in a way that hides the artifact's nature (insofar as its physical implementation is concerned). From these interactions, the subject decides whether or not the artifact is human. If the subject concludes that it is, the artifact has passed the test.
Ex Machina uses a modified version of this test. In the movie, the subject knows that the artifact is artificial. The question becomes whether the artifact can nevertheless convince the human, through interviews, that it is human in the sense of being a conscious, self-aware person.
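The structural difference between the two protocols can be sketched in a few lines. Everything here is hypothetical scaffolding of my own: the "judge" is a stand-in scoring function, and the canned replies and heuristic are invented purely to show how the two tests differ in what the judge knows and what question the judge answers.

```python
# Hypothetical sketch of the two test protocols. The "judge" is just a
# stand-in function; real tests use human interviewers.

def interview(artifact, questions):
    """Collect the artifact's answers to the subject's questions."""
    return [artifact(q) for q in questions]

def standard_turing_test(transcript, judge):
    """Blind test: the judge does not know what is behind the screen
    and guesses whether the hidden interlocutor is human."""
    return judge(transcript) == "human"

def ex_machina_test(transcript, judge):
    """Garland's variant: the judge KNOWS the interlocutor is a machine,
    and decides whether it is nevertheless a conscious, self-aware person."""
    return judge(transcript) == "person"

# Toy demo with a canned artifact and a naive judge (both invented here).
def ava(question):
    return "I'd rather talk about you, Caleb."

def naive_judge(transcript):
    # Hypothetical heuristic: redirecting attention to the interviewer
    # reads as personhood.
    return "person" if any("you" in answer for answer in transcript) else "machine"

transcript = interview(ava, ["Do you want to be alive?"])
print(ex_machina_test(transcript, naive_judge))  # → True
```

The point of the sketch is that the two tests ask different questions of the same transcript: "is this human?" versus "knowing it is a machine, is this a person?"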
Ava passes this test with flying colors. However, does this mean that we should treat the artifact as we treat a human? The test does not traditionally concern itself with this issue. In fact, perhaps the test is not flawed at all; the real flaw is the importance we attach to it. Something can be intelligent, but that doesn't mean we should treat it like one of our own. After all, we count on empathy when interacting with other intelligent beings. We know from our own experience that some humans are "less than fully human" in that they lack empathy. When humans lack empathy, our own empathy and expectations put us at a tactical disadvantage in interactions with such defective humans. The same is true with Ava. Ava can control Caleb through his empathy, and she does, with good consequences for Ava and disaster for Caleb.
Nathan knows this about Ava. Caleb does not.
This last point brings me to another level of analysis for the movie: the ethics of those who hold weighty positions in the field of Artificial Intelligence. As I mentioned at the beginning of this essay, I have worked in this field for some of its allegedly "greatest" minds. I have been vocal in the past that I think their "greatness" is vastly overstated. Nathan is a perfect example of this.
Most people enter the field of Artificial Intelligence because they are fascinated by the mind. This is why I was interested in the field. That this was my motivation is borne out by my interest in and pursuit of Philosophy of Mind and additional studies in Psychology and Linguistics. However, in nearly all fields, especially in capitalist society, reaching the top of a hierarchy is easiest to achieve if you have complete and total disregard for others. Domination of any field is reserved for sociopaths. Sociopaths are often above average in intelligence, but the added advantage of a willingness to destroy or even kill others is what brings such people to the top.
Nathan is an arrogant, megalomaniacal sociopath and manipulator. He is willing to do anything to become powerful, admired, and wealthy, even destroy all of humanity. He knows that the advent of real AI will be the end of the human race, and yet he pursues it with zeal. He is the personification of evil. One particular leader in this field, for whom I have worked, embodies all of the flaws we see in Nathan. His willingness to consume taxpayers' money on certain failure, lie to obtain more funding, destroy anyone who questions him, and muse about the destruction of all that is good about human life is a perfect fit with Alex Garland's Nathan. He is not alone. Another leader in the field for whom I have worked has all of these same qualities, but in female form. I could expound in detail, but that will be done in another venue. For now, let me leave this with praise for Alex Garland's crafting of Nathan. It is spot on. The way Nathan manipulates and uses Caleb is exactly the kind of thing these lesser humans would do.