We are in a stage of transition as artificial intelligence (AI) is increasingly being used in healthcare across the world. Transitions offer opportunities coupled with difficulties. It is widely accepted that regulations and the law can never keep up with the exponential growth of technology. This paper discusses liability issues when AI is deployed in healthcare. Adaptive, forward-looking, user-friendly and uncomplicated regulatory requirements that promote compliance and adherence are needed. Regulators have to recognise that software itself could qualify as software as a medical device (SaMD). The benefits of AI could be delayed if slow, expensive clinical trials are mandated. Regulations should distinguish between diagnostic errors, malfunction of the technology, and errors arising from the use of inaccurate or inappropriate training data sets. How responsibility and accountability are shared when implementation of an AI-based recommendation causes clinical problems remains unclear. Legislation is necessary to allow apportionment of damages consequent to malfunction of an AI-enabled system. Product liability is ascribed to defective equipment and medical devices. However, Watson, the AI-enabled supercomputer, is treated as a consulting physician and not categorised as a product. In India, algorithms cannot be patented, and no specific laws have been enacted to deal with AI in healthcare. The Digital Information Security in Healthcare Act (DISHA), when implemented in India, would hopefully address some of these issues. Ultimately, the law is interpreted contextually, and perceptions may differ among patients, clinicians and the legal system. This communication aims to create the necessary awareness among all stakeholders.