Dialogue-oriented artificial intelligence is increasingly being used in mental health care. It supports psychoeducation, self-management exercises, progress monitoring, and the implementation of structured therapeutic techniques. At the same time, its use raises fundamental questions: What does it mean when AI simulates empathy and care without itself bearing responsibility or holding values of its own?
In the current DSI Insights column, Jana Sedlakova examines the ethical and therapeutic implications of this development and suggests that dialogue-oriented AI should be understood not as a digital therapist, but as a fictional character. This perspective underscores the need for transparency about the simulated nature of AI and invites us to use it not as a substitute for, but as a complement to, professional psychotherapeutic work.