Responsible AI Is Also a Human Behavior Problem

Many Responsible AI discussions focus on fairness, transparency, or governance. But some of the most important failures emerge in how people interpret, react to, and act around AI systems in practice.


Opening Tension

Responsible AI is often discussed as if the central challenges live in the model: fairness, explainability, transparency, compliance. But some of the most consequential failures happen later, in how people interpret and act on AI in the real world.


Why Responsible AI Often Frames the Problem Too Narrowly

The technical and governance frames are necessary but incomplete. What they miss is the behavioral layer: how people decide, in context, whether to follow, question, or override what an AI system tells them.


What the Evidence Shows

Human response to algorithmic advice is inconsistent and context-sensitive: people abandon algorithms after seeing them err, even when the algorithm still outperforms them (Dietvorst et al., 2015), yet in other settings they weigh algorithmic advice more heavily than human advice (Logg et al., 2019).
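
This inconsistency can be made concrete. In judge-advisor studies, reliance is often quantified as weight on advice: how far a final judgment moves from a person's initial estimate toward the advisor's suggestion. A minimal sketch in Python; the function name and the example values are illustrative, not drawn from the cited studies.

```python
def weight_on_advice(initial: float, advice: float, final: float) -> float:
    """Weight on advice (WOA), a standard reliance measure in
    judge-advisor studies: 0 means the advice was ignored,
    1 means it was adopted fully."""
    if advice == initial:
        raise ValueError("WOA is undefined when advice equals the initial estimate")
    return (final - initial) / (advice - initial)

# A judge estimates 100, the algorithm suggests 140, the judge revises to 130:
print(weight_on_advice(100, 140, 130))  # 0.75 -> substantial reliance
```

A WOA near 0 across many cases looks like aversion; a WOA near 1 looks like appreciation, or over-reliance. The point of the evidence above is that the same system can produce both, depending on context.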


What This Looks Like in Healthcare

In healthcare, AI outcomes are shaped by workflow fit, stakeholder expectations, trust conditions, and implementation context, not only by model performance (Steerling et al., 2023; Vo et al., 2023; Wilhelm et al., 2025).


The Missing Behavioral Layer

Uncertainty communication and trust calibration matter because people must decide, case by case, how much weight to give an AI output (Tomsett et al., 2020).
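
Trust calibration has a model-side prerequisite: the confidence a system reports has to track how often it is right. Expected calibration error (ECE) is one standard way to measure that gap; this is a minimal sketch assuming binary correctness labels per prediction, with illustrative helper names and toy data, not the method of any cited paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    """Expected calibration error (ECE): the bin-weighted average gap
    between stated confidence and observed accuracy. Large gaps make
    well-calibrated trust in the system's outputs impossible."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# A system that reports 90% confidence but is right only 70% of the time:
conf = np.full(100, 0.9)
hits = np.array([1] * 70 + [0] * 30)
print(round(expected_calibration_error(conf, hits), 3))  # ~0.2
```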


What This Changes for Responsible AI Practice

Responsible AI practice should therefore extend beyond the model to product framing, deployment conditions, communication of uncertainty, evaluation in use, and ongoing monitoring of how the system is actually used; one such monitoring signal is sketched below.
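
Monitoring "evaluation in use" means watching behavioral signals, not just accuracy. A hypothetical sketch of one such signal, tracking how acceptance and override rates drift between monitoring windows; all names, thresholds, and counts here are illustrative, not a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class UsageWindow:
    """Counts from one monitoring window of an AI-assisted workflow."""
    shown: int       # recommendations shown to users
    accepted: int    # recommendations acted on as given
    overridden: int  # recommendations explicitly rejected

def behavioral_flags(prev: UsageWindow, curr: UsageWindow,
                     drift_threshold: float = 0.15) -> list[str]:
    """Flag shifts in how people treat the system, not just in how it
    scores: rising acceptance can signal automation complacency, rising
    overrides can signal collapsing trust after visible errors."""
    flags = []
    if curr.accepted / curr.shown - prev.accepted / prev.shown > drift_threshold:
        flags.append("acceptance rising: check for over-reliance")
    if curr.overridden / curr.shown - prev.overridden / prev.shown > drift_threshold:
        flags.append("overrides rising: check for trust breakdown")
    return flags

# Acceptance jumps from 60% to 82% between two windows of 1,000 cases:
print(behavioral_flags(UsageWindow(1000, 600, 300), UsageWindow(1000, 820, 120)))
# ['acceptance rising: check for over-reliance']
```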


One Pattern From Practice

xxx


Closing Takeaway

Responsible AI is not only about building systems that are technically sound. It is also about understanding the behavioral conditions under which people will use those systems well, badly, or somewhere in between.


References

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126.

Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103.

Steerling, E., Siira, E., Nilsen, P., & Nygren, J. M. (2023). Implementing AI in healthcare—the relevance of trust: A scoping review. Frontiers in Health Services, 3, Article 1211150.

Tomsett, R., Preece, A., Braines, D., Cerutti, F., Chakraborty, S., Srivastava, M., Pearson, G., & Kaplan, L. (2020). Rapid trust calibration through interpretable and uncertainty-aware AI. Patterns, 1(4), Article 100049.

Vo, V., Auroy, L., Sarradon-Eck, A., et al. (2023). Multi-stakeholder preferences for the use of artificial intelligence in healthcare: A systematic review and thematic analysis. Social Science & Medicine, 338, Article 116326.

Wilhelm, C., Steckelberg, A., & Rebitschek, F. G. (2025). Benefits and harms associated with the use of AI-related algorithmic decision-making systems by healthcare professionals: A systematic review. The Lancet Regional Health – Europe, 50, Article 101145.