Navid discussed how artificial intelligence (AI) can improve healthcare, and I completely agree. He pointed out the benefits of AI in healthcare, such as the ability to collect more data more quickly, freedom from unconscious bias, and more accurate diagnoses, all of which would help save lives. I assumed data would be entered into software from all over the world and be accessible within seconds. That is faster and more thorough than any physician or laboratory communication I’ve ever witnessed for diagnosing patients. One broad example would be COVID: could it have been isolated sooner if we had known the severity and consequences of the illness when it first began? The downside of this information being at everyone’s fingertips arises when a treatment is available in one country but not another. I imagine that would create an emotional challenge for the physician and patient if there is a known successful treatment yet no way to access it; however, I feel that could eventually be resolved. Another concern is the many exceptions we see in medicine. The AI may only output the most common symptoms or treatments, which would cover most cases, but the small number of “abnormal” cases may not benefit from an AI program. It would be nearly perfect if the AI could give an “I don’t know” answer, as Navid suggested. Unlike human physicians, AI will never develop unconscious bias. For each patient case, the details of the illness or disease will be input, the AI will compare the individual’s information to a large data set, and that’s it! It cannot take into consideration the “type” of patient, such as a helpless patient or a difficult patient. It cannot see patients’ past experiences, such as substance abuse. Both of these, I think, can influence a human physician’s opinions and potentially patient treatment.
That being said, sometimes knowing the patient’s personality does positively influence the type of treatment that would be best, because a compliant patient is better than a patient who refuses treatment. Our world is growing larger and more populated at a fast rate, and no human can keep up. Even so, there is no way AI can take over completely. Humans have higher-order thoughts and emotions that AI is far from possessing. As long as the AI can continuously gather data from around the world, adapt to the collected data, and there is still human oversight, I think AI would be a huge benefit to medicine.
I appreciate the attempt at simplifying a complex field of computing science. What was missing is commentary on the ethics behind the use of emergent technologies such as AI. This matters because training an AI system requires both negative and positive outcomes to be fed back into the system. The subject of care must provide their informed consent AND understand the experimental nature of the technology. The regulatory frameworks in place exist to help ensure that negative outcomes are reduced (based on experience) AND that there is a person responsible for the decision-making that led to the outcome. Who is that responsible person with an AI system? The clinician who blindly trusts the AI, or the engineer who coded the algorithms? As a health care professional and digital health leader, I suggest that until those aspects are established from a medico-legal, liability, education and socio-technical point of view for clinicians, we should be rigorously challenging the message of simplicity put forward here so that AI can be a trustworthy tool.
Dr. Saidy made an interesting case for potentially using AI in the future of healthcare. He talks about the best of AI and some regulations that would be needed. Some ethical considerations came to mind with this use of AI. AI algorithms risk perpetuating biases against certain populations if the data used to train them isn’t representative. I loved how Dr. Saidy addressed this and how they are designing systems to be fair and unbiased, which is amazing progress. Another thing I wondered about was the transparency of AI systems. AI systems can be opaque, making it difficult for patients and healthcare providers to understand how decisions are made. It is important to ensure that AI systems are explainable so that patients and healthcare providers can understand how decisions are reached. I feel it is also important to ensure that there are clear lines of responsibility for the decisions made by AI systems and that there are mechanisms in place to address errors and/or mistakes. Overall, the healthcare system can benefit greatly from AI, and being able to advance technology to help patients will be a great accomplishment. These are just some thoughts I had about things that need to be considered as we advance into artificial intelligence in healthcare.
While the implementation and use of a new technology can be scary, the potential for streamlined success is far greater than any hesitation or fear of using AI. With a strong and unbiased foundation, along with flexible yet stringent monitoring, we could enter a new era of healthcare.
I am a 3rd-year MBBS student. I love computers and technology more than medicine, and my vision is to work in AI in healthcare, IoMT, and other fields of technological innovation in health care. I know the basics of Python and C, have worked with Arduino and Raspberry Pi, and have general knowledge of AI and ML. I’m really passionate and have potential, but I am not clear about the path I should take. Please help!
Your dedication to quality content is amazing.
I'm writing a persuasive essay on the benefits of using AI in healthcare. This was very helpful for looking at the benefits and getting an idea of how useful AI can be in healthcare. It's not entirely taking decision-making away from physicians; it's a great tool.
Dr. Saidy’s talk regarding artificial intelligence (AI) in healthcare made the argument that AI can provide many benefits, including making hospitals more efficient and improving access to care by providing accurate decision-making tools. Interestingly, AI can factor in the outcomes of thousands of other patients to determine what will work best for a patient based on their individual circumstances, by comparing them to other patients with similar circumstances and their outcomes. This could provide insight into how physicians determine what treatment or procedure may be best for their patients given their patients’ specific circumstances. However, I argue that no two people and their specific circumstances are going to be identical or guarantee an identical outcome. AI could be used to make recommendations, but there could be circumstances that AI fails to factor into its algorithm, even if AI continues to evolve over time and get better at its predictions for healthcare outcomes. An ethical consideration when implementing AI in healthcare is that AI systems can have a significant impact on patient autonomy and decision-making. Patient autonomy would be undermined if AI systems were used to make decisions about diagnosis, treatment, or clinical outcomes without human input. I think it’s important that AI systems are designed and implemented in a way that respects patient autonomy and preferences, so that, for example, the patient still gets to decide what treatment would work best when presented with all the treatment options and the risks and benefits of each. Also, if the AI algorithm is not up to date, or if there is an issue with the system’s learning process, it could lead to the patient receiving an incorrect diagnosis or an incorrect treatment, which would not improve healthcare outcomes.
These unintended consequences or errors from relying on an AI system to guide diagnosis and treatment can put patient safety at risk and could harm the patient as a result. This relates to the ethical principle of nonmaleficence, which requires that healthcare providers do no harm to their patients. To comply with the principle of nonmaleficence, AI systems need to be designed and implemented in a way that minimizes the risk of harm to patients, and any potential harm must be carefully considered and weighed against the potential benefits of using the AI system. Dr. Saidy brought up a valid point that AI systems often don’t use data sets that represent people of all races. Therefore, when AI predicts an outcome for an Asian patient from a predominantly white male data set, the prediction is likely to be less accurate. AI systems can therefore inadvertently perpetuate biases and discrimination if they are trained on biased or incomplete data. Overall, there are benefits to implementing AI, and there are also risks and challenges that need to be further investigated before AI is allowed to fully predict and guide outcomes.
Amazing information. Looking forward to the future.
Sounds motivational for future generations of scientists!
Artificial intelligence has some amazing potential benefits in the health care field: efficiency improvements for hospitals, assistance in guiding physicians through patient treatment regimens, and, greatest of all, the potential to diagnose a patient. Dr. Navid Saidy discussed some very important complications to consider when introducing artificial intelligence into medical practice, including regulations for medical devices, which typically assume a physical device. Artificial intelligence, by contrast, is software that evolves and does not produce a static, repetitive outcome. If artificial intelligence’s purpose is to diagnose and to give treatment options and prognoses, its output has numerous possible outcomes, which can be hard to quantify and therefore hard to regulate. Even if, as stated in the video, regulations change to allow for more transparency and real-time monitoring, there are still risks. One of the main concerns about artificial intelligence is that the data used to create its program is biased. Since humans are the ones collecting the data, and they have interpreted it with implicit assumptions that are then incorporated into the system, this bias is transferred to the artificial intelligence models and can lead to biased results in diagnosis and treatment. This is why it is so vital, when assessing these new technologies, that the results are accurate. This leads me to an important topic to consider with the implementation of artificial intelligence: beneficence. Beneficence is the act of doing good by benefiting the patient more than doing harm. Artificial intelligence has great potential to reach a correct diagnosis more efficiently, which could greatly benefit patients in time-sensitive care. However, in cases where a wrong diagnosis is stated with confidence by artificial intelligence, this could lead to greater harm to the patient.
The capacity for AI to state when the answer is unknown and whether more testing is needed is crucial to the application of this technology. These drawbacks need to be critically supervised as artificial intelligence is incorporated into medicine. It is naïve to say that artificial intelligence won’t be a part of medicine in the future. All the same, we need to be careful and diligent in assessing the technology and its outcomes for patients. It is important to remember that part of healing comes from a healing touch and the emotional and spiritual connectivity of humans. As technologies become more and more integrated into our society, we must prioritize and preserve our humanity.
Amazing information, Dr. Navid. Looking forward to the future.
Two years later, what was said in this TED Talk is still true and on a good trajectory. Many considerations and conversations have come into play.
This video is truly inspiring! Looking forward to your next content! 🌟
AI is the future of efficient care in the health sector.
As we shift toward “personalized medicine,” the use of AI in healthcare is inevitable. I really appreciate Dr. Navid’s comments on data bias, and as a medical student I wanted to know more. I am already aware of the biases found in current medicine but had not even considered the idea that our basic medical algorithms were biased. There is a great article by Katherine J. Igoe explaining the biases seen in medical algorithms. In this article she explains that 80% of our current genetic and genomic data comes from Caucasians, which skews our understanding of genetics toward Caucasians. Obviously, we cannot just ignore race when interpreting genetic information, and in her article she suggests that the best way to approach the inevitable use of AI is to have a diverse group of professionals rather than strictly a team of data scientists: physicians, data scientists, government, philosophers, and everyday civilians.
Amazing information, Dr. Navid. Looking forward to the future. 👍
Artificial intelligence could really make medicines for immortality, make our earth a heaven for those who are chosen, and preserve the latest technologies.
Long overdue. Use AI to predict medication choice and dosage (e.g., for ADHD) from past response. On WhatsApp, the patient simply exports the chat log as a txt and an LLM does the analysis; just use GPT-4 and the like (you don't even have to fine-tune, though it might help), as long as it learns and the data can be reused or is beneficial for future models.
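The workflow sketched in the comment above (export a chat log, hand it to an LLM to summarise dose versus response) can be prototyped. Here is a minimal sketch in Python; the timestamp pattern is an assumption about WhatsApp's usual export format (which varies by locale), and the actual model call is deliberately left out:

```python
import re

# WhatsApp exports lines like: "12/31/23, 9:15 AM - Alice: took 10mg at 8am"
# (exact date/time format varies by locale; this pattern is an assumption)
LINE_RE = re.compile(
    r"^(\d{1,2}/\d{1,2}/\d{2,4}), (\d{1,2}:\d{2}(?:\s?[AP]M)?) - ([^:]+): (.*)$"
)

def parse_chat(text):
    """Return (date, time, sender, message) tuples from an exported chat log."""
    entries = []
    for line in text.splitlines():
        m = LINE_RE.match(line)
        if m:
            entries.append(m.groups())
        elif entries:
            # line without a timestamp: continuation of the previous message
            d, t, s, msg = entries[-1]
            entries[-1] = (d, t, s, msg + "\n" + line)
    return entries

def build_prompt(entries):
    """Assemble a prompt asking a model to relate dosage notes to reported response."""
    log = "\n".join(f"{d} {t} {s}: {msg}" for d, t, s, msg in entries)
    return ("Here is a medication diary exported from a chat app. "
            "Summarise how the reported response varied with dose and timing:\n\n"
            + log)
```

Sending `build_prompt(...)` to a hosted model (for example via a chat-completions API) is the step omitted here; any such output would be an aid for discussion, with clinical decisions staying with the clinician.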