AI and robotics are rapidly becoming integral to South African healthcare, but the regulatory environment governing their use has not kept pace. While existing laws such as the National Health Act, POPIA, and HPCSA ethical rules offer valuable guidance, none were designed with AI-driven clinical decision-making or autonomous robotics in mind. This gap creates uncertainty for hospitals, practitioners, and patients.
Advanced technologies such as da Vinci surgical robots, already in use at South African hospitals including Tygerberg and Netcare Pretoria East for urology, gynaecology, and colorectal procedures, highlight the urgent need for clear regulatory oversight. When such technology contributes to complex surgery, questions about liability, approval, and training arise, exposing the gaps in current healthcare legislation.
Legal precedents, such as the Zamakhuhle Hospital v Hlatswayo case, underscore the responsibilities of hospitals and practitioners. Even when a surgeon acts as an independent contractor, the hospital may still be held liable, reinforcing the need for updated guidance on the use of AI and robotic technology.
Currently, South Africa has no AI-specific healthcare regulations. As AI becomes more powerful and autonomous, this absence raises complex questions: Who is liable when AI contributes to an incorrect diagnosis? How should medical devices powered by machine learning be evaluated? What standards should govern data-driven algorithms that influence treatment decisions?
Liability is one of the most pressing concerns. Responsibility may be shared between clinicians, hospitals, and technology manufacturers. While healthcare practitioners remain accountable for clinical decisions, manufacturers may be liable for defective software or malfunctioning robotic systems. Healthcare institutions may also face vicarious liability. Without updated regulatory clarity, navigating these intersections becomes increasingly difficult.
Another challenge lies in the approval and oversight of AI medical tools. Traditional medical device regulations were designed for static technologies, not systems that continuously learn and adapt. South Africa needs a modernised framework that outlines validation, ongoing monitoring, post-market surveillance, and the ethical deployment of evolving AI tools.
Policy reform must also address the rights of patients. Updated guidelines should ensure transparency around AI use, clear expectations for informed consent, and protection against discrimination resulting from algorithmic bias. As new technologies roll out, maintaining fairness and dignity must be non-negotiable.
A collaborative national effort, bringing together regulators, healthcare providers, technologists, insurers, and legal experts, is essential. Updated HPCSA guidance, strengthened medical device regulations, and revised liability frameworks will provide the clarity needed for safe adoption.
South Africa is poised for a breakthrough in healthcare innovation. But to realise this potential, regulatory reform must advance at the same pace as technology. Only then can AI elevate healthcare while safeguarding patient rights and maintaining ethical, trusted care.
This article draws on insights from Adv Maud Letzler’s webinar on the “Evolving Landscape of Liability and Ethics in South African Healthcare – Navigating the Age of AI and Robotics.”
