Vivek Pravat

Image credit: JJuni – pixabay.com

Automatons crafted in our own image have been a staple of human imagination since the ancient Greeks. From Daedalus’ statues and Pygmalion’s bride to Asimov’s robots, literature has come a long way, but that old dream still endures. Now, with the advent of ever more sophisticated machine-learning technologies, the creation of humanoid robots may be just within our reach. Humanoid, as in, human in form and behaviour. And what characteristics better capture the essence of what makes us human than reason and emotion—the happy confluence of which has given us the ability to chart the course of the stars above and wax lyrical over the fluttering of a butterfly in the realm below? Logic has been the defining trait of computing systems since their inception. Is it time to infuse them with emotion?

A distinction has to be made between displaying learnt or programmed emotional responses and having emotions. A robot can grin without experiencing joy; it can make a frowny face without having an inkling of what sadness feels like. As far as exhibiting outward emotional behaviour is concerned, I don’t think there is any real debate: robots whose job involves interacting with people are expected to have at least rudimentary social skills. Recent creations such as Pepper and Haru can already display some basic emotions and read subtext from facial expressions and verbal cues. And, as studies of the uncanny valley show, the more humanoid a robot looks, the more we expect of it.

Presently, there is a rather large gap between programming a robot with affective responses and letting it have real, experiential emotions. Still, the question of whether we should give our creations the capacity to experience things like joy, pain, and suffering must be answered, and answered soon, given that the pace of innovation in artificial intelligence might soon exceed our ability to keep up with it.


If we answer in the affirmative, we immediately run into some practical difficulties of implementation. How do we implement emotions in a robot brain? The problem is, there is no consensus on how exactly the qualitative nature of emotions comes about. Don’t get me wrong—since the advent of modern neuroscience, we have learnt a lot about emotions and their interplay with other cognitive processes; we know a lot about the mechanisms of emotion regulation and the brain structures responsible; we have created designer drugs that can alter mood and help people cope with all kinds of mood-related disorders; we have even developed computer models that can read brain MRI scans and make fairly accurate guesses about the emotional state of an individual. What is lacking is a definitive answer to why emotions feel the way they do. An emotion cannot be isolated to one particular structure or area of the brain; it is spread out—a complex interaction of action potentials, neurotransmitters, and other physiological responses that manifests in multiple regions of the brain at once. Nor have we figured out how to separate, at any given moment, the emotional aspects of a state of mind from the non-emotional ones. We don’t know whether that level of reductionism is even possible.

This gap in our knowledge is closely tied to the hard problem of consciousness, or the problem of qualia. Are emotions always accompanied by qualia? It would certainly appear so; otherwise they wouldn’t be called emotions. Thought laced with emotion feels distinctly different from the kind of thought we apply when, say, solving a math problem or ruminating on what to write in our next blog post. The problem with qualia is that we don’t know what role they play in our reasoning—if they play a role at all. Are they a “higher level” emergent property that has causal, feedback effects on lower-level cognitive processes? Or are they mere epiphenomena, like steam escaping from an engine—a side effect of the complexity inherent in the cacophony of a hundred billion neurons shouting at each other? Consciousness being inherently subjective, the answers to these questions border on the metaphysical and, for the moment at least, seem impervious to objective scientific evaluation.

It is certainly possible that once we know all there is to know about the functional role emotions play in our cognition, and we implement these functions in a robot brain, it too will feel emotion (or at least something akin to it, since the substrate medium and the particulars of the implementation are likely to be completely different). Neuromorphic architectures come to mind. Silicon instead of nerves and tissue; electronic instead of chemical signalling; logic gates instead of axons and dendrites. Who is to say that a sufficiently advanced architecture cannot feel something analogous to our own moods and affects?

So why the fuss? If we are not sure what role qualia play in our cognition, or even whether they are real (some philosophers, such as Daniel Dennett, argue that consciousness is an illusion both created and perceived by the brain), why worry about whether a robot feels something or not?


Image credit: ElisaRiva – pixabay.com

Apart from the most obvious response of not wanting to create beings that suffer needlessly, the concepts of qualia and personhood are tied together inextricably, as I have elaborated in the court-case section of my novel, Convex. In a nutshell, the argument posits that personhood is contingent on being able to experience negative and positive states such as suffering and happiness. Whether this is a valid argument or not is up for debate, but at the very least, granting personhood to a being without any subjective mental states raises some interesting ethical and legal issues. Even if we could implement experiential emotions or emotion-analogues in our robots, should we?

Some will say it’s a no-brainer. Within the context of our culture, where displaying one’s feelings is considered healthy, and where being emotionless and cold is often associated with sociopaths and serial killers, it would seem that if we want our robots to be good, we would want them to experience emotions: we would want them to know what being good feels like; we would want them to experience a moral high when they do something right, and feel guilt and remorse when they err.

But we must dig deeper. To answer the question properly, we must examine in greater detail the interplay between emotion and cognition.

It is commonly believed that emotions provide an evolutionary advantage: they are nature’s way of short-circuiting otherwise slow decision-making processes. Emotionally charged reasoning can be fast. When you see a bison charging at you, you don’t want to overanalyse the situation; you just let the surge of adrenaline, norepinephrine, and cortisol that you experience as fear pilot a quick escape. On the other hand, positive emotions such as love and sympathy help us bond with others and form stable social structures, which, in turn, boost survivability.

Robots too will need to make quick, inspired decisions every now and then. Does that mean we must incorporate emotions into their cognitive framework? I see two objections. The first is that there is no reason to believe that emotions are the only way to arrive at quick, on-the-fly decisions, especially in a brain that is not constrained by biology and evolutionary legacy. Nature builds on what it already has; with AI, we have the advantage of designing from scratch. Modern decision-making systems, which can be a mix of algorithmic and neural-net-based computing methodologies, have a profusion of heuristic and search techniques to draw on; when in doubt, deep-learning architectures can always evolve their own algorithms by trial and error. In an advanced robot brain, the implementation of such cognitive shortcuts will probably be nothing like how emotions are “implemented” in our own brains. If these cognitive processes are accompanied by qualia, those qualia will likely be alien as well.
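To make the point concrete, here is a minimal, hypothetical sketch (the names decide, heuristic_pick, and deliberate_pick are my own inventions, not any real robotics API) of a decision loop that falls back to a cheap heuristic under time pressure, which is the functional role fear plays for us, without anything that has to be felt:

```python
# Hypothetical sketch only: none of these names come from a real robotics library.
import random
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    estimated_cost: float  # cheap heuristic estimate of how badly this could go

def heuristic_pick(options):
    """Fast path: trust the cheap estimate and react immediately."""
    return min(options, key=lambda o: o.estimated_cost)

def deliberate_pick(options, simulate):
    """Slow path: evaluate each option with an expensive simulation."""
    return min(options, key=simulate)

def decide(options, simulate, time_budget_ms):
    """Under time pressure, fall back to the shortcut; otherwise deliberate."""
    if time_budget_ms < 50:
        return heuristic_pick(options)
    return deliberate_pick(options, simulate)

if __name__ == "__main__":
    options = [Option("dodge left", 2.0), Option("freeze", 9.0), Option("run back", 3.5)]
    simulate = lambda o: o.estimated_cost + random.uniform(-1.0, 1.0)  # stand-in for a costly rollout
    print(decide(options, simulate, time_budget_ms=10).name)  # -> "dodge left"
```

Whether a shortcut like this would be accompanied by anything resembling an experience is, of course, exactly the open question.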

The second objection is that emotions don’t always lead to the best outcomes. Emotional reasoning is a double-edged sword. Fear is fine when you are trying to avoid predators in the tall grass, but not very useful when you are trying to work your way up the corporate ladder. You can be angry and play chess; you can be calm and play chess. You can be in love before you propose marriage; you can propose marriage without feeling anything toward your partner. With chess, the presence of strong emotions will probably make you lose the game; with marriage, the absence of strong emotions will most likely see it fail.

What about moral decision making? Surely feelings like love and kindness are necessary in the moral sphere?

Opinion has been divided on this for at least two thousand years. We have travelled from the Greek and Enlightenment ideals of placing reason above emotion to modern neuroscience telling us that emotions and moral decision making are deeply intertwined. The relationship is complex, but there is considerable support in scientific circles for some form or other of a dual-process theory of the mind, in which fast, intuitive, emotional thinking is contrasted with slower, more deliberative thinking that relies on logic and reason rather than gut feelings. Furthermore, just as there is no single emotional centre in the human brain, there is no particular moral centre. While some brain regions are more active than others when making moral decisions, there is no homunculus or seat of conscience guiding our actions. Research tells us that humans make moral decisions using a balance of emotion and reason. When the two converge, we end up making good, personally satisfying decisions. When they diverge, moral dilemmas emerge.
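As a rough illustration of that dual-process picture (a toy model of my own, not one drawn from the literature), imagine scoring a candidate action with both a fast, gut-level evaluator and a slow, consequence-counting one, and flagging a dilemma whenever the two disagree:

```python
# Toy dual-process scorer. The evaluators and weights are illustrative, not empirical.
def intuitive_score(action: dict) -> float:
    # Gut response: strongly penalise direct, personal harm.
    return -10.0 if action["uses_personal_force"] else 0.0

def deliberative_score(action: dict) -> float:
    # Reasoned response: weigh lives saved against lives lost.
    return float(action["lives_saved"] - action["lives_lost"])

def evaluate(action: dict) -> str:
    gut = intuitive_score(action)
    reasoned = deliberative_score(action)
    # The two systems "converge" when they agree on the action's sign.
    return "converged" if (gut >= 0) == (reasoned >= 0) else "dilemma"

# Directly harming one person to save five: reason approves, intuition recoils.
print(evaluate({"uses_personal_force": True, "lives_saved": 5, "lives_lost": 1}))  # dilemma
```

Where the real debate lies, of course, is in what those two evaluators actually are and how the brain arbitrates between them.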


Image credit: Ahmedgad – pixabay.com

In situations where our emotional intuition and our reasoning point in different directions, can reason ever be the one at fault? By definition, no. Emotions don’t tap into some magical source of knowledge that reason can’t and thereby make better decisions where reason fails. It’s just that in situations where meaningful data are absent or urgency is required, emotions can sometimes deliver favourable outcomes. That doesn’t mean reason is inherently flawed or that it couldn’t have arrived at the same conclusion. Emotional decision-making arises out of past experience, inbuilt prejudices and biases, and biological dispositions such as conflict avoidance; it is as likely to lead us astray as it is to help us make the “right” choice. A good example of this is Harvard professor Joshua Greene’s research on the “Fat Man” version of the famous trolley problems. Without going into too much detail, it is sufficient to say that the same people who balk at pushing a fat man into the path of an incoming trolley in order to save five rail-yard workers from being crushed to death may not hesitate as much when asked to pull a lever that drops the fat man onto the rails. The end result is the same: the killing of the fat man and the saving of the five workers; only the method of execution differs. The experiment shows how something as seemingly trivial as the directness or indirectness of the action can influence our moral decision-making (contemplating the first action is more likely to engage the brain’s emotional centres that play a role in avoiding physical violence), even though the two formulations of the problem are essentially the same in terms of intention and consequence.

Furthermore, our emotional dispositions are not set in stone. Children as young as two value altruism, while our nearest cousin, the chimpanzee, has a yen for fairness. Toddlers can also be very selfish—what child hasn’t thrown a raging tantrum because it was denied something it wanted? Over time, these initial dispositions get strengthened and refined, or weakened. Emotions and rationality feed into each other, and, like other aspects of our cognition, emotions too can be trained. A person who loves meat might reason that eating animals is ethically wrong. Initially, she might not feel too good about changing her eating habits. But by consistently shunning meat, she aligns her emotions with her rational choice, with the result that every time she avoids eating meat, she feels some inner satisfaction—a pleasure she deems superior to that of indulging her taste buds.

Does this mean that emotions are not essential for moral behaviour? Is it possible then, to create a moral robot that is all logic and zero feeling?

There seems to be one strong counter to the argument that moral behaviour can be explained by reason alone. And that is empathy. Empathy, or concern for the other, seems to be at the root of all our moral norms and behaviour. One could make the argument that we reason out moral behaviour because we are empathetic beings. If we didn’t have concern for the wellbeing of others, there probably would be no reason for the existence of moral norms. It would be a dog-eat-dog world and we would be fine with that.

If this is true, and experiential aspects of emotions cannot be implemented in robots (or alternatively, the emotions that they do get to experience are so far off the human spectrum as to be essentially unfathomable) then we are at an impasse. How can we ever build a robot with a moral compass that aligns with our own?

The easy answer is that we program the robot with directives that cannot be overridden, à la Asimov’s laws. But note that we are dealing with human-level intelligence here. If we can overcome our childhood programming, what prevents a robot from doing the same? A sufficiently intelligent AI will find ways to overcome both its programming and its training. Directives are, by nature, rigid and inflexible, and open to misinterpretation, as Asimov so famously showed in his stories. Building a truly empathetic AI seems a better way of ensuring morally correct behaviour than merely programming it with rules. A robot with empathy can use its feelings to guide its behaviour in tricky moral situations.
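To see why hard-coded directives are brittle, consider this deliberately naive sketch (the directive functions and the action fields are hypothetical, invented purely for illustration): the rules themselves are trivial to state, but everything hinges on how the robot classifies the situation it is in.

```python
# Naive Asimov-style directive filter. The brittleness is not in the rules but in
# the classification step that feeds them.
from typing import Callable, List

Directive = Callable[[dict], bool]  # returns True if the action is forbidden

def forbids_harm(action: dict) -> bool:
    # "A robot may not injure a human being" -- but the robot decides what counts
    # as injury, and that judgement is itself learned and fallible.
    return action.get("predicted_harm_to_humans", 0.0) > 0.0

def forbids_disobedience(action: dict) -> bool:
    return action.get("contradicts_order", False)

DIRECTIVES: List[Directive] = [forbids_harm, forbids_disobedience]

def permitted(action: dict) -> bool:
    return not any(rule(action) for rule in DIRECTIVES)

# A surgical incision "injures" the patient, so the harm directive vetoes an
# action that is plainly in the patient's interest.
print(permitted({"predicted_harm_to_humans": 0.2, "contradicts_order": False}))  # False
```

An empathetic agent, by contrast, would have to weigh the patient's overall wellbeing rather than pattern-match against a fixed predicate, which is precisely the capacity a list of rules cannot supply.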

Which brings us to a more “meta-ethical” question, and certainly one that is pertinent to our original concern: can a being devoid of emotions make moral decisions that are in line with our own conceptions of good and bad? Can moral behaviour exist without empathy? I will elaborate on this and more in my next blog post.
