This example applies the eDSR methodology to a complex sociotechnical project: creating a “Compassionate AI” system for healthcare.
Use Case: The “Compassionate AI” Patient Support System
Imagine I’m developing a Generative AI system designed to communicate with patients in distress.
The Challenge: The problem is complex because it involves technical hurdles (LLM hallucinations), social factors (patient anxiety, cultural nuances), and regulatory constraints. A simple linear process might fail if the definition of “compassion” isn’t validated before the code is written.
Here is how I would break this project down using the five design echelons.
1. Problem Analysis Echelon
Goal: A Validated Problem Statement.
- Activity: I start by conducting a literature review on existing patient portals and interviewing doctors and patients about current communication gaps. In this scenario, the review reveals that while current tools answer medical questions, they fail to recognize emotional distress, leading to patient non-compliance.
- Validation: I then validate the problem by reviewing practitioner initiatives and confirming that no current solution addresses this “empathy gap”.
- Intermediate Artifact: A published paper or internal memo defining the “Empathy Gap in Digital Health Interventions” as the core problem. This is a standalone contribution.
2. Objectives and Requirements Definition Echelon
Goal: Validated Design Objectives.
- Activity: Before writing code, I hold focus groups with psychologists and medical ethicists to define what “compassion” looks like in a text interface. I define meta-requirements: the system must detect sentiment changes and adjust its tone without giving false medical hope.
- Validation: I check these requirements for coherence (do they contradict medical protocols?) and completeness (have we missed a stakeholder perspective?).
- Intermediate Artifact: A set of “Design Principles for Compassionate AI Interaction.” This can be communicated/published before the software is fully built.
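To make the meta-requirements concrete, here is a minimal sketch of how they might be encoded as machine-checkable rules. The phrase lists are illustrative placeholders, not validated clinical vocabulary; in practice they would come out of the focus groups with psychologists and ethicists.

```python
FALSE_HOPE_PHRASES = [  # responses must never promise medical outcomes
    "you will definitely recover",
    "there is nothing to worry about",
    "this is certainly curable",
]

DISTRESS_MARKERS = [  # cues that should trigger an empathetic tone shift
    "scared", "afraid", "hopeless", "can't cope",
]

def violates_false_hope(response: str) -> bool:
    """Meta-requirement: the system must not give false medical hope."""
    text = response.lower()
    return any(phrase in text for phrase in FALSE_HOPE_PHRASES)

def requires_tone_shift(patient_message: str) -> bool:
    """Meta-requirement: detect sentiment changes that demand a tone adjustment."""
    text = patient_message.lower()
    return any(marker in text for marker in DISTRESS_MARKERS)
```

Even this toy encoding makes the Echelon 2 artifact testable: every candidate response in later echelons can be checked against the requirements before any model is trained.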
3. Design and Development Echelon
Goal: A Validated Artifact Design (Projectable Solution).
- Activity: I design the system architecture: an LLM integrated with a Retrieval-Augmented Generation (RAG) framework that references a database of verified empathetic responses, plus specific prompt-engineering guardrails.
- Validation: I use logical reasoning and assertions to argue that this architecture produces technically sound and consistent responses, and I verify the design’s elegance and internal consistency.
- Intermediate Artifact: The “Compassionate Response Architecture” (blueprints and logic flows), validated by technical peers.
4. Demonstration Echelon (Proof of Concept – PoC)
Goal: A Validated Instance of the Artifact.
- Activity: I build a prototype (instantiation) and run it in a simulated environment (artificial context), feeding the AI historical patient transcripts to see how its responses compare with those of human nurses.
- Validation: I test for robustness (does it crash?), effectiveness, and ease of use, and confirm that it is technically possible for the AI to detect distress markers.
- Intermediate Artifact: A working Prototype (Beta 0.1) that demonstrates feasibility.
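The artificial-context test above can be framed as a small replay harness: run the prototype over labelled transcripts and check whether it flags the same distress cases the nurses did. The transcripts and the `detect_distress` stub below are invented for illustration.

```python
def detect_distress(message: str) -> bool:
    """Stand-in for the prototype's distress detector."""
    markers = ("scared", "hopeless", "panic", "crying")
    return any(marker in message.lower() for marker in markers)

# (patient_message, nurse_judged_distress) — illustrative examples only.
TRANSCRIPTS = [
    ("I'm scared about the surgery next week", True),
    ("Can I reschedule my appointment?", False),
    ("I feel hopeless since the diagnosis", True),
    ("Where do I park at the clinic?", False),
]

def recall_on(transcripts) -> float:
    """Share of nurse-flagged distress cases the prototype also catches."""
    flagged = [detect_distress(msg) for msg, label in transcripts if label]
    return sum(flagged) / len(flagged)
```

A harness like this is what turns the Beta 0.1 prototype into a Proof of Concept: it yields a number (here, recall against nurse judgment) rather than an anecdote.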
5. Evaluation Echelon (Proof of Value – PoV)
Goal: Validated Artifact in Use (Contextualized).
- Activity: I deploy the system in a real-world pilot study (natural context) with a clinic in the German healthcare system. Real patients interact with the tool for appointment scheduling and symptom triage.
- Validation: I use field experiments and surveys to measure the utility and efficacy of the artifact. Does it actually reduce patient anxiety? Does it increase appointment adherence?
- Intermediate Artifact: Empirical evidence providing the Proof of Value (PoV).
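As a minimal sketch of the PoV analysis, paired pre/post anxiety scores from the pilot (e.g., on a GAD-7-style scale) can be reduced to a mean change per patient. The numbers in the usage example are placeholders, not study data; a real evaluation would add a control group and significance testing.

```python
def mean_change(pre: list[float], post: list[float]) -> float:
    """Average per-patient change in anxiety score.
    Negative values indicate improvement (anxiety went down)."""
    assert len(pre) == len(post), "scores must be paired per patient"
    diffs = [after - before for before, after in zip(pre, post)]
    return sum(diffs) / len(diffs)

# Placeholder scores for four pilot patients:
# mean_change([12, 9, 15, 11], [8, 9, 10, 9]) -> -2.75
```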
Why this helps your research:
- Iterative Publishing: Instead of waiting 3 years to publish one massive paper, you can publish the Problem Analysis (Echelon 1) and Design Principles (Echelon 2) early in the process.
- Handling Complexity: If the Evaluation (Echelon 5) fails (e.g., patients hate it), you don’t have to scrap the whole project. You might only need to revisit the Objectives (Echelon 2) or the Design (Echelon 3).
- Sociotechnical Fit: This model forces you to validate the social requirements (Echelon 2) separately from the technical code (Echelon 3), which is crucial for AI ethics.