Study identifies six challenges humans must overcome to ensure artificial intelligence is trustworthy, safe, reliable, and compatible with human values.

A University of Central Florida professor and 26 other researchers have published a study identifying the challenges humans must overcome to ensure that artificial intelligence is reliable, safe, trustworthy and compatible with human values.

Ozlem Garibay, PhD, an assistant professor in the Department of Industrial Engineering and Management Systems at UCF, was the lead researcher for the study.

She says that AI has become more prominent in many aspects of our lives, but its rise has also brought many challenges that must be studied.

For instance, the coming widespread integration of artificial intelligence could significantly impact human life in ways that are not yet fully understood, says Garibay, who works on AI applications in materials and drug design and discovery, and on how AI impacts social systems.

The six challenges Garibay and the team of researchers identified are:

  • Challenge 1: Human Wellbeing
    AI systems should be designed to identify opportunities for benefiting human wellbeing, and interactions with AI should support the wellbeing of their users.
  • Challenge 2: Responsibility
    Responsible AI refers to the concept of prioritising human and societal wellbeing across the AI lifecycle. This ensures that the potential benefits of AI are leveraged in a manner that aligns with human values and priorities, while also mitigating the risk of unintended consequences or ethical breaches.
  • Challenge 3: Privacy
    The collection, use and dissemination of data in AI systems should be carefully considered to ensure the protection and privacy of individuals and prevent harmful use against individuals or groups.
  • Challenge 4: Design
    Human-centred design principles for AI systems should be grounded in a framework that can inform practitioners. Such a framework would distinguish between AI that poses extremely low risk and needs no special measures, AI that poses extremely high risks, and AI that should not be allowed at all.
  • Challenge 5: Governance and Oversight
    A governance framework that considers the entire AI lifecycle from conception to development to deployment is needed.
  • Challenge 6: Human-AI interaction
    To foster an ethical and equitable relationship between humans and AI systems, interactions must be grounded in respect for human cognitive capacities.

Specifically, humans must maintain complete control over and responsibility for the behaviour and outcomes of AI systems.

The study, which was conducted over 20 months, comprises the views of 26 international experts who have diverse backgrounds in AI technology.

“These challenges call for the creation of human-centred artificial intelligence technologies that prioritise ethicality, fairness and the enhancement of human wellbeing,” Garibay says.

“The challenges urge the adoption of a human-centred approach that includes responsible design, privacy protection, adherence to human-centred design principles, appropriate governance and oversight, and respectful interaction with human cognitive capacities.”

Overall, these challenges are a call to action for the scientific community to develop and implement artificial intelligence technologies that prioritise and benefit humanity, she says.

The group of 26 experts includes National Academy of Engineering members and researchers from North America, Europe and Asia who have broad experience across academia, industry and government.

The group also has diverse educational backgrounds in areas ranging from computer science and engineering to psychology and medicine.

Their work will also be featured as a chapter in the book Human-Computer Interaction: Foundations, Methods, Technologies, and Applications.

Five UCF faculty members co-authored the study:

  • Gavriel Salvendy, a university distinguished professor in UCF’s College of Engineering and Computer Science and the founding president of the Academy of Science, Engineering and Medicine of Florida.
  • Waldemar Karwowski, a professor and chair of the Department of Industrial Engineering and Management Systems and executive director of the Institute for Advanced Systems Engineering at the University of Central Florida.
  • Steve Fiore, director of the Cognitive Sciences Laboratory and professor with UCF’s cognitive sciences program in the Department of Philosophy and Institute for Simulation & Training.
  • Ivan Garibay, an associate professor in industrial engineering and management systems and director of the UCF Artificial Intelligence and Big Data Initiative.
  • Joe Kider, an associate professor at the Institute for Simulation & Training, School of Modeling, Simulation and Training, and a co-director of the SENSEable Design Laboratory.

The study, “Six Human-Centered Artificial Intelligence Grand Challenges,” has been published in the International Journal of Human-Computer Interaction.