Inventing AI with Ethics and Safety

Ishrak
2 min read · Feb 17, 2021

I want to build an artificially intelligent conversational agent in the healthcare domain. The primary task of the agent will be, as the name suggests, to converse with users about their healthcare issues. Conversational agents are sometimes confused with chatbots. A chatbot is an internet robot, in other words a piece of software that runs automated tasks over the internet. Most of the time, a chatbot lacks the capability to carry on a sophisticated conversation with a human counterpart because chatbots are designed to follow rules. Conversational AI, on the other hand, is a more capable evolution of the chatbot: it has Natural Language Understanding (NLU) and can therefore converse beyond fixed rules.
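To make the distinction concrete, here is a minimal sketch using only the Python standard library. The keyword rules, the intent labels, and the toy word-overlap scorer are all illustrative stand-ins of my own; a real conversational agent would replace the scorer with a trained NLU model.

```python
RULES = {
    "hours": "Our clinic is open 9am to 5pm, Monday to Friday.",
    "appointment": "Please call the front desk to book an appointment.",
}

def rule_based_reply(message: str) -> str:
    """Chatbot: fixed keyword rules, no understanding."""
    for keyword, reply in RULES.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I didn't understand that."

# Toy stand-in for NLU: score each intent by word overlap with a few
# example utterances. A production agent would use a trained model.
INTENT_EXAMPLES = {
    "report_symptom": "i have a headache fever pain cough",
    "ask_medication": "what medicine should i take dosage",
}

def classify_intent(message: str) -> str:
    words = set(message.lower().split())
    scores = {
        intent: len(words & set(examples.split()))
        for intent, examples in INTENT_EXAMPLES.items()
    }
    return max(scores, key=scores.get)

def conversational_reply(message: str) -> str:
    """Conversational agent: understand the intent first, then respond."""
    intent = classify_intent(message)
    if intent == "report_symptom":
        return "I'm sorry to hear that. How long have you had these symptoms?"
    return "Could you tell me more about the medication you're asking about?"

print(rule_based_reply("When are you open?"))          # no keyword hit: falls back
print(conversational_reply("I have a fever and some pain."))  # understood as a symptom report
```

The rule-based bot fails the moment a user phrases "hours" as "When are you open?", while the NLU side can generalize from examples — which is exactly the gap between a chatbot and a conversational agent.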

There can be different applications of such agents. Let’s focus on one use case in the healthcare domain. We can have an AI agent speak to a patient about both their physiological and psychological ailments. This agent could potentially replace some general-practice doctors by diagnosing a patient’s condition and prescribing procedures or medicines accordingly. A patient could even upload her diagnostic test results as images and have them checked by the AI agent. Thanks to deep learning, it is now possible to identify life-threatening conditions from medical imaging and to deduce possible diseases from the symptoms a patient describes.
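As a hedged sketch of what the imaging step could look like, the snippet below runs inference with PyTorch and torchvision. The checkpoint file "cxr_model.pt" and the three-condition label set are assumptions for illustration, not part of any real system; in practice the model would be fine-tuned on labeled medical images first.

```python
import torch
from torchvision import models, transforms
from PIL import Image

LABELS = ["normal", "pneumonia", "effusion"]  # assumed label set

# Standard ImageNet-style preprocessing for the uploaded image.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.densenet121(num_classes=len(LABELS))
model.load_state_dict(torch.load("cxr_model.pt"))  # hypothetical fine-tuned checkpoint
model.eval()

def screen_image(path: str) -> dict:
    """Return per-condition probabilities for an uploaded scan."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1).squeeze(0)
    return {label: float(p) for label, p in zip(LABELS, probs)}

# e.g. screen_image("chest_xray.png") -> {"normal": 0.91, "pneumonia": 0.07, ...}
```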

The obvious benefits of such a system are prompt service and reduced cost. However, a conversational AI in healthcare also raises several ethical issues. First of all, there are privacy concerns around patient data. Then, if the designer introduces bias into the model, the agent can learn to favor one pharmaceutical company over another. And, most importantly, even minor inaccuracies in such a system can have a butterfly effect and lead to a life-or-death situation for a patient.

There are, however, ways to address these issues with reasonable assurance. With federated learning, we are not required to upload patient data to a central server; instead, we upload only the model trained on the patient’s edge device. We can reduce prescribing bias by having the agent recommend the generic names of medicines rather than brand names. Finally, improving the overall accuracy of these systems and keeping a human in the loop can effectively ensure their safety. These simple, responsible measures let us use conversational AI for healthcare convenience without compromising ethics and safety.
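To show why federated learning protects privacy, here is a minimal federated-averaging (FedAvg) sketch in NumPy. The linear model and the synthetic per-client data are toy stand-ins of my own; the point is that each "device" trains locally and only the weight vector, never the raw records, is sent to the server.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, epochs=20):
    """One client's update: plain gradient descent on its local data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # MSE gradient for a linear model
        w -= lr * grad
    return w  # only these weights, never (X, y), leave the device

# Three "patients' devices", each holding private synthetic data.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(5):
    # Server broadcasts the global model; clients return trained weights.
    updates = [local_train(global_w, X, y) for X, y in clients]
    # FedAvg: the server averages the returned weights (equal weighting here).
    global_w = np.mean(updates, axis=0)

print(global_w)  # approaches [2, -1] without any raw patient data being shared
```

In a real deployment the clients would be the patients' phones or clinic terminals, the averaging would be weighted by each client's data size, and the shared updates could additionally be protected with secure aggregation or differential privacy.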
