Summary
Catholic hospital in South Korea adopts AI ethics code
Seoul medical center says framework is the nation's first for healthcare AI ethics
May 08, 2026
Archbishop Peter Chung Soon-taick and Archbishop Giovanni Gaspari, Apostolic Nuncio to Korea, with the Medical AI Ethics Code during the CMC…
Source: Herald Malaysia Online

AI News Q&A
Q1: What prompted a Catholic hospital in South Korea to adopt an AI ethics code, and what are the main principles outlined in this framework?
A1: The Catholic Medical Center in South Korea adopted an AI ethics code as hospitals increasingly integrate advanced technologies into healthcare environments. The framework emphasizes the role of AI as a supporting tool while ensuring that ultimate responsibility for patient care remains with human clinicians. Key principles include accountability, oversight, protection of patient data, and maintaining patient dignity, safety, and holistic treatment. The code draws on ethical principles outlined by the Vatican, showcasing a blend of technological and moral considerations.
Q2: How does the AI ethics framework adopted by South Korean hospitals align with the Vatican's guidance on artificial intelligence?
A2: The AI ethics framework adopted by South Korean hospitals aligns with the Vatican's guidance on artificial intelligence by emphasizing human-centered care and ethical responsibility. The Vatican's guidelines serve as a moral compass that underpins the hospital's AI principles, ensuring that technological advancements do not undermine human dignity or ethical standards in patient care.
Q3: What are the ethical concerns and priorities associated with AI use in healthcare, specifically in South Korea?
A3: In South Korea, the ethical concerns associated with AI use in healthcare revolve around safety, security, reliability, and privacy protection. These priorities shift with context: for critical, life-sustaining scenarios, safety and security are emphasized, whereas privacy protection is prioritized in preventive healthcare. The findings highlight the need for ethics education and a balanced approach to AI-in-healthcare (AI-H) ethics, addressing both privacy and inclusiveness.
Q4: What insights does the scholarly article 'Beyond principlism: Practical strategies for ethical AI use in research practices' offer regarding AI ethics?
A4: The article 'Beyond principlism: Practical strategies for ethical AI use in research practices' suggests a user-centered approach to bridging the gap between abstract ethical principles and practical research practices. It proposes five goals for ethical AI use, which include understanding model training, respecting privacy, avoiding plagiarism, applying AI beneficially, and ensuring transparency. These guidelines aim to foster responsible AI use while promoting innovation without compromising research integrity.
Q5: What does the case study of OpenAI reveal about the ethical discourse in AI development?
A5: The case study of OpenAI reveals that the discourse around AI ethics primarily focuses on safety and risk, with less emphasis on academic and advocacy ethics frameworks. This indicates a potential gap in the application of comprehensive ethical principles in industry practices, highlighting the need for aligning public and academic discourses to ensure responsible AI development and governance.
Q6: What role do national AI ethics guidelines play in South Korea's approach to ethical AI in healthcare?
A6: National AI ethics guidelines in South Korea aim to strengthen the ethical responsibilities of developers and provide guidance to prevent misuse of AI technologies. These guidelines are part of a broader initiative to establish a people-centered intelligent information society that proactively addresses the social and cultural impacts of AI. They complement hospital-specific frameworks by setting a national standard for ethical AI use.
References:
- Beyond principlism: Practical strategies for ethical AI use in research practices
- Competing Visions of Ethical AI: A Case Study of OpenAI