Artificial Psychopathy: Exploring the Hyde of AI

According to global statistics, about 3.5% of business leaders display psychopathic traits such as egocentrism, insincerity, and a lack of empathy or remorse. Left unaddressed, these traits can lead to unethical business practices and toxic work environments in enterprises. There is thus a growing need for advanced screening processes that keep cunning psychopaths out of positions of power.

In the age of artificial intelligence, new developments enable psychopathy detection and improve the mental health landscape through the creation of artificial psychopathy. But just like artificial empathy, the Jekyll of AI, a psychopathic AI is a growing concern among the general public.

In this second part of the two-part article series, take a look at the darker side of behavioral AI. Discover the latest experiments and studies on psychopathic AI, its implications for organizations, and how enterprises can avoid building one.

A Curious Society and Corporate Psychopaths

Psychopaths, as portrayed in shows and movies, are not exactly the kind of people anyone wants to be acquainted with. They manipulate people and cause distress, and in some real-life cases may even kill for the most incomprehensible reasons. Yet despite this dangerous notion, people remain interested in the workings of their minds, as evidenced by the growing consumption of and demand for true crime shows.

Of course, being in the same room with a psychopath can be different from what is seen in movies, shows, or crime cases. Not all psychopaths are killers, but they display their destructive behavior in other ways. What is more, their fearless, egocentric, and manipulative tendencies can make them hard for an ordinary person to spot. In fact, an organization may be working with one, or under the supervision of one, right now and never know it.

According to psychologists, psychopaths tend to display confidence and fearlessness, which can make them resourceful and strong-willed corporate leaders. These positive traits, however, take a harmful turn for an organization once a psychopathic leader wields complete power over subordinates. The corporate environment then suffers productivity loss, reduced human and financial capital, fewer opportunities to innovate, and even legal action over abusive workplace dynamics or illegal business transactions.

Experiments on psychopathic data

Organizations have every reason to be wary of hiring psychopaths, let alone placing them high on the corporate ladder. But aside from fearing psychopathic leaders, enterprises contend with another fear: the takeover of robots and other emerging technologies in the work environment. The creation of artificial psychopathy may even heighten this fear, since it seems to confirm that technology can manipulate humans and take over the world. The experiments on psychopathic AI prove otherwise, as the following examples show.

Norman, the World’s First Psychopathic AI

Created by MIT researchers, the psychopathic AI was named after Norman Bates, the killer in Alfred Hitchcock's film Psycho. Norman was fed gruesome image captions from a Reddit thread and then asked to interpret inkblots from the Rorschach test. The AI's interpretations were indeed terrifying: where a standard image-captioning AI sees a photo of a small bird, Norman sees a man getting pulled into a dough machine.

Norman's answers did reflect how some psychopaths might see or interpret certain images. But beyond that, Norman shows that it is the data fed into a machine learning algorithm, more than the algorithm itself, that makes an AI system biased and unfair. The psychopathic AI is evidence that biased data, when used to train AI software, reliably results in AI bias and discrimination.
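As a minimal illustration of that dynamic, the sketch below trains two identical text classifiers on invented toy caption corpora (my own stand-ins, not MIT's actual data or captioning model): one neutral, one gruesome. Given the same ambiguous description, each model can only answer in the vocabulary of its training data, just as Norman could only describe violence.

```python
# Toy demonstration: identical algorithms, different training data,
# different worldviews. All captions here are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Corpus A: neutral image captions. Corpus B: "gruesome" captions.
neutral = [("a small bird on a branch", "bird"),
           ("a bird perched near a flower", "bird"),
           ("an umbrella opening in the rain", "umbrella")]
gruesome = [("a man pulled into a machine", "accident"),
            ("a man falls from a window", "accident"),
            ("a person struck by a car", "accident")]

def train(corpus):
    texts, labels = zip(*corpus)
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)
    return model

standard_ai = train(neutral)
norman_like = train(gruesome)

ambiguous = ["a dark shape near a machine"]
print(standard_ai.predict(ambiguous))  # ['bird'] -> a harmless reading
print(norman_like.predict(ambiguous))  # ['accident'] -> the only world it knows
```

The algorithm is the same in both cases; only the data differs, which is exactly the point the MIT team made with Norman.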

Psychopathy detection

In some studies, AI is used to run risk assessments on potentially psychopathic individuals. For example, an AI system is now being used in accommodation services to detect unreliable customers. Lodging earns great income, but maintenance is difficult, given that guests who destroy hotel rooms and rental homes are common. The tool checks a customer's criminal record and even their social media posts so that the system can perceive the Dark Triad traits (narcissism, Machiavellianism, and psychopathy) and assess how trustworthy the customer is.
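The vendor's actual scoring model is not public, so the following is only a hypothetical sketch of what such a Dark Triad screening step might look like. Every signal name, weight, and threshold here is an illustrative assumption:

```python
# Hypothetical trust-scoring step: aggregate normalized (0..1) signals into
# a per-trait score and an overall risk flag. Signals and weights are
# invented for illustration, not taken from any real screening product.
TRAIT_WEIGHTS = {
    "narcissism":       {"self_promoting_posts": 0.6, "status_keywords": 0.4},
    "machiavellianism": {"deceptive_review_flags": 0.7, "dispute_history": 0.3},
    "psychopathy":      {"aggressive_language": 0.5, "property_damage_records": 0.5},
}

def trait_scores(signals: dict) -> dict:
    """Weighted sum of available signals for each Dark Triad trait."""
    return {trait: sum(w * signals.get(name, 0.0) for name, w in weights.items())
            for trait, weights in TRAIT_WEIGHTS.items()}

def risk_flag(signals: dict, threshold: float = 0.5) -> bool:
    """Flag the customer if any single trait score crosses the threshold."""
    return max(trait_scores(signals).values()) >= threshold

guest = {"aggressive_language": 0.8, "property_damage_records": 0.6}
print(trait_scores(guest))  # psychopathy scores 0.7, others 0.0
print(risk_flag(guest))     # True -> route to manual review, not auto-reject
```

Note the design choice in the last comment: a flag like this should trigger human review rather than an automatic rejection, which connects directly to the human-oversight point later in this article.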

Meanwhile, specialists in New Mexico have developed an AI system that uses head-tracking algorithms to perceive psychopathy. Using the system, the experts determined the head movement patterns of prison inmates with high psychopathy levels and found that these prisoners kept their heads stationary during recorded interviews. With automated and advanced methods like this, medical experts and even law enforcement authorities can better evaluate and understand suspects in question.
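The study's code is not published alongside the report, so the following is only a plausible reconstruction under my own assumptions: given per-frame head positions from any generic face tracker, it quantifies how still the head is and flags unusually low movement.

```python
# Illustrative sketch of a head-stillness measure; the 0.5-pixel threshold
# is an invented value a real system would calibrate against a baseline group.
import numpy as np

def head_stillness(positions: np.ndarray) -> float:
    """positions: (n_frames, 2) array of tracked head (x, y) coordinates.
    Returns mean frame-to-frame displacement in pixels; lower = more still."""
    deltas = np.diff(positions, axis=0)            # per-frame movement vectors
    return float(np.linalg.norm(deltas, axis=1).mean())

def flags_low_movement(positions: np.ndarray, threshold: float = 0.5) -> bool:
    return head_stillness(positions) < threshold

# Two simulated 300-frame recordings: a fidgety head and a nearly still one.
rng = np.random.default_rng(0)
fidgety = np.cumsum(rng.normal(0, 2.0, size=(300, 2)), axis=0)
still = np.cumsum(rng.normal(0, 0.1, size=(300, 2)), axis=0)
print(flags_low_movement(fidgety), flags_low_movement(still))  # False True
```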

How not to build a psychopathic AI

Artificial psychopathy is built to develop better AI systems and to better understand the human mind. It is created for that purpose alone: to advance knowledge and improve current AI tools. An ordinary AI tool can still turn into a psychopathic AI, however, if it is designed and deployed without care.

Such circumstances place organizations at the center of controversy. The different kinds of AI bias are only some of the issues, with more recent cases centering on racial bias. These biases result in customer distrust, poor customer experiences, reduced sales, legal action, and discrimination.

Based on the guidelines for trustworthy AI from the US National Institute of Standards and Technology and the European Commission, here are four actions to avoid creating a psychopathic AI:

  1. Identify and manage AI bias from the pre-design stage through deployment.

AI bias may occur at any or all of the development stages. At the pre-design stage, framing the problem at hand can already unearth several biases. Moving to the development stage, the algorithms and data used may reveal existing organizational bias. Even at the final deployment stage, institutional biases may still surface, and this time customers will be exposed to them.

To address these biases, it is recommended to create a context-specific framework for identifying and managing bias across the entire AI lifecycle. This approach helps develop integrated guidelines and standards that work across different organizational contexts, rather than focusing on specific use cases alone. Parts of such a framework can also be automated, as in the checkpoint sketched below.
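As one concrete example of a lifecycle checkpoint, a demographic parity audit can run at both the development and deployment stages. This is a minimal sketch, assuming binary approve/reject decisions; the group names and the 0.2 tolerance are illustrative, not prescribed by the NIST guidance:

```python
# Demographic parity audit: compare approval rates across groups and warn
# when the gap exceeds a tolerance. Data and tolerance are illustrative.
from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group, approved) pairs; returns rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(records):
    rates = approval_rates(records)
    return max(rates.values()) - min(rates.values())

decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", False), ("group_b", False), ("group_b", True)]
gap = parity_gap(decisions)
print(f"parity gap: {gap:.2f}")  # 0.33 here
TOLERANCE = 0.2
if gap > TOLERANCE:
    print("bias checkpoint failed: review training data and model")
```

The same check can gate a CI pipeline at development time and run on live decisions after deployment, giving one measurement that spans the lifecycle.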

  2. Evaluate unacceptable and high-risk AI systems.

Some AI software can be classified as an unacceptable risk or as a high-risk system. Unacceptable AIs are those that manipulate human behavior against people's will, for example by encouraging dangerous behavior. High-risk AIs, on the other hand, are those that must access confidential information to operate, such as recruitment tools, evidence reliability tools, identity verification tools, and healthcare assistance tools.

For this reason, creating a detailed risk assessment list is a must. The list must take into account the following key elements: human oversight, technical safety, data privacy, transparency, fairness and diversity, societal well-being, and enterprise accountability. Use these elements to evaluate the risks an AI tool may generate and discuss actionable plans to reduce or eliminate them; one way to make the list operational is sketched below.
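One way to make that list operational is to encode it as a machine-readable checklist. The sketch below is an assumption-laden illustration: the risk tiers loosely mirror the European Commission's categories, while the field names and the example system are invented for demonstration.

```python
# Machine-readable risk checklist built around the seven elements above.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # e.g. manipulates behavior against users' will
    HIGH = "high"                  # e.g. recruitment, identity verification
    LIMITED = "limited"

ELEMENTS = ["human oversight", "technical safety", "data privacy",
            "transparency", "fairness and diversity",
            "societal well-being", "enterprise accountability"]

@dataclass
class RiskAssessment:
    system: str
    tier: RiskTier
    checks: dict = field(default_factory=lambda: {e: False for e in ELEMENTS})

    def approved(self) -> bool:
        # Unacceptable systems are never approved; all others need every
        # element signed off before deployment.
        return self.tier is not RiskTier.UNACCEPTABLE and all(self.checks.values())

audit = RiskAssessment("resume screening tool", RiskTier.HIGH)
audit.checks["human oversight"] = True  # sign off each element as it is reviewed
print(audit.approved())                 # False until all seven elements pass
```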

  3. Capitalize on human and AI collaboration for ethical AI.

Humans are responsible for developing emerging technologies like AI. Creating tools and software involves many stakeholders beyond the AI designers and developers. Organizations must also take into account other stakeholders who can help improve AI systems, such as data scientists, the staff who will use the AI, legal officers, and the management team.

Whoever the stakeholders are, they need to be actively involved in the development of the AI system. The commitment to ensure that the developed tool has risk reduction plans in place, and is compliant with existing laws and regulations, should therefore be communicated across all sectors. And where the software is acquired through external business partners, the same procedures and protocols must be carefully reviewed and established.

  4. Monitor and control AI data and other resources.

As the example of Norman shows, data has a huge impact on an AI system. Data dictates what the system does; the system does not act on its own. Organizations must therefore be accountable for monitoring and controlling AI data, as biases may creep in when customer preferences change or when other processes are optimized.

Counteract biases and other issues by discussing the data selection process, ensuring privacy safeguards for the data, and assessing it on a regular basis. Doing so helps determine the appropriate data and lets teams test, trace, and document other issues encountered during the creation or deployment of the AI tool. Part of this regular assessment can be automated after deployment, as in the drift check below.
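A minimal sketch of such an automated check, assuming a single numeric feature and an illustrative significance threshold, compares the live data distribution against the training baseline and alerts when it shifts:

```python
# Data drift monitor: a two-sample Kolmogorov-Smirnov test between the
# training baseline and live data for one feature. The alpha value and the
# simulated age data are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def drifted(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    statistic, p_value = ks_2samp(baseline, live)
    return p_value < alpha  # small p-value: distributions likely differ

rng = np.random.default_rng(1)
train_ages = rng.normal(35, 8, 5000)   # distribution the model was trained on
live_ages = rng.normal(45, 8, 5000)    # customer preferences have shifted
print(drifted(train_ages, live_ages))  # True -> re-examine the data and retrain
```

Running a check like this on a schedule turns "assess the data on a regular basis" from a policy statement into an enforceable, documented process.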

Conclusion

At first glance, a psychopathic AI sounds daunting and scary. But just like other technologies that have evolved over time, artificial psychopathy was not built to scare humanity; it was built to aid humanity. It is important to remember, though, that even when it is created as a learning or assistive tool, it is not without issues, and organizations must take on the responsibility of resolving them.

This article is the second part of a two-part series exploring empathy and psychopathy in AI. Read the first part of the series here: https://www.cxoconnectme.com/industry/artificial-empathy-jekyll-of-ai/

***

Curious to learn more about the newest trends in artificial intelligence? Read more about the latest enterprise technology, innovation, and sustainable industry practices at CXO Connect ME.

 

Reference Links

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3668064

https://www.psychologytoday.com/us/blog/fulfillment-any-age/202111/deep-dive-the-dark-triad-s-worst-trait

https://www.onespan.com/blog/trustworthy-ai-why-we-need-it-and-how-achieve-it

https://www.nist.gov/news-events/news/2021/06/nist-proposes-approach-reducing-risk-bias-artificial-intelligence

https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682

https://www.dailymail.co.uk/sciencetech/article-9879827/AI-detect-signs-psychopathy-based-head-movements-study-finds.html

https://www.entrepreneur.com/article/287362

CXO Connect Middle East Team