AI

Opinion - Safe and Responsible Artificial Intelligence in Health Care


I wrote a submission as part of the public consultation on Safe and Responsible AI in Healthcare. I haven't ever done one of these before, but since it was interesting, I thought I might share it here.


1.      How can AI benefit health and care in Australia and how can we measure and deliver these benefits?

There are many possible ways AI could benefit health and care in Australia. It could assist in improving diagnostics, possibly improve patient outcomes (by detecting trends or clinical abnormalities), and may aid rural and remote areas.

In radiology, it has already been noted that some AI systems are able to detect or flag possible issues for radiologists to review; the same is true for some gastroenterologists who have advanced imaging systems integrated into their scopes.

As one Sydney specialist commented: "AI helps highlight or query areas that may not have been seen by the human eye – it doesn't replace my role or my expertise, but acts almost like a live second opinion. In this way, I feel like I am providing clinically superior investigations but also have a form of backup".

I believe one of the biggest ways that AI can assist is in systems that tap into clinical data and observation inputs to power predictive analytics – helping to identify deteriorating patients, or flagging patients who may be at risk of certain conditions. An example of this is the Ainsoff Deterioration Index, which made news in June 2023 – see the publication below (https://pubmed.ncbi.nlm.nih.gov/37150397/):

Bassin L, Raubenheimer J, Bell D. The implementation of a real time early warning system using machine learning in an Australian hospital to improve patient outcomes. Resuscitation. 2023 Jul;188:109821. doi: 10.1016/j.resuscitation.2023.109821. Epub 2023 May 5. PMID: 37150397.
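To illustrate the general idea (not the Ainsoff system itself, which is a machine-learning model), here is a minimal toy sketch of a rule-based early-warning score over vital-sign observations. The thresholds and weights are illustrative assumptions only.

```python
# A toy sketch of an early-warning score over vital-sign observations.
# The thresholds and weights below are illustrative assumptions; the
# published Ainsoff Deterioration Index is a machine-learning model,
# not this simple rule-based score.

def deterioration_score(heart_rate: int, resp_rate: int, systolic_bp: int) -> int:
    """Return a crude risk score: higher means more concerning vitals."""
    score = 0
    if heart_rate > 120 or heart_rate < 45:
        score += 2
    if resp_rate > 24:
        score += 2
    if systolic_bp < 90:
        score += 3
    return score

def should_alert(score: int, threshold: int = 3) -> bool:
    """Flag the patient for clinical review once the score crosses a threshold."""
    return score >= threshold

# Example: tachycardic and hypotensive observations trigger an alert.
obs = {"heart_rate": 130, "resp_rate": 22, "systolic_bp": 85}
score = deterioration_score(**obs)
print(score, should_alert(score))  # 5 True
```

A real system would run continuously against live observations and escalate alerts to clinicians, which is the "live second opinion" role described above.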

There are now methods for patients to be monitored and assessed at home or outside the clinical setting, and changes in the data received could be flagged for medical intervention. Any AI system should be measured and assessed to determine its benefits. This could be done via randomised controlled trials (RCTs) in healthcare settings and a system of consistent auditing.

2.      Can AI improve access to care, and what regulations could be amended or added to enable this?

Yes – I believe that AI would enable additional support for remote diagnostics, telemedicine, and mobile health apps in areas of need. I'm not an expert and would have to defer to others with a greater understanding of Australian healthcare regulations, but I think you would need to look at the Privacy Act 1988 and the My Health Records Act 2012. These would need to be updated to incorporate the use of AI.

As part of this, you may need to establish specific guidelines for AI-powered tools, ensuring they cover data protection, accuracy, and fairness standards. There may be similar regulations internationally that this could be based on; one worth reviewing would be the EU's Artificial Intelligence Act.


3.      What risk does AI pose to patients/consumers or healthcare professionals? Are the risks high or low? What criteria could be used to characterise risk? Should consumers be informed when AI is used in these low-risk ways?

I believe the main risks that AI could pose are inaccurate diagnoses, bias in how algorithms operate, privacy breaches and poor data governance, and perhaps an eventual over-reliance on automation.

Risks could be high (e.g., if AI makes a critical error that leads to a misdiagnosis, a missed diagnosis, or inappropriate or missed treatment) or low (e.g., administrative assistance in appointment scheduling). If AI tools are adopted, organisational risk-management systems would need to be updated to assess the severity of the clinical impact, the level of autonomy given to the AI system, and the degree of human oversight.

Consumers should be informed whenever AI is being used. This should be explained clearly, and staff should be given resources to handle any questions or concerns.


4.      What factors are important for rural, regional or remote Australia when assessing the benefits, risks, and safety of AI? Are there other communities that face specific risks when implementing AI in health care? What considerations should be made to ensure all Australians have access to the benefits of AI?

In rural areas, one factor that would need to be considered is the quality of and access to reliable internet services. There are also staffing and expertise shortages, and varying levels of health literacy. AI could assist by enabling telemedicine, remote monitoring, and perhaps automated diagnostics. There is also a possible concern about over-reliance on these tools, particularly the risk of system failures.


5.      Should health care professionals have a choice about whether they use AI as part of their work?

Yes, in every case healthcare professionals should have the autonomy to decide whether or not to incorporate AI tools into their practice. Many healthcare professionals may prefer to rely on more conventional (traditional) methods. If AI systems are implemented or adopted, there should always be adequate training and support so that clinicians feel confident in using them.


6.      What unique considerations are specific to AI in health care, and why? Should the government address them through regulatory change?

Healthcare requires a high level of accuracy, accountability, and transparency due to the life-or-death stakes involved. Unlike other industries or sectors, an error in healthcare can directly (physically) harm people. These increased stakes mean there needs to be strict regulatory oversight. The Government should consider amending existing healthcare laws and regulations to specifically address AI-related issues – particularly clinical accountability, patient consent, and data security.


7.      How does the use of AI differ in healthcare settings compared to general or other sectors such as finance, education, etc.?

In healthcare, AI may influence or direct clinical decisions that could directly impact human physical well-being, requiring far higher standards of accuracy and reliability than sectors like finance or education. Mistakes in healthcare AI could lead to physical harm or even death, whereas mistakes in other sectors are often financial or reputational. This means that any use of AI should be subject to stringent testing, certification, and regulatory scrutiny.


8.      Should there be an Australian body specifically dedicated to overseeing AI in health care?

Yes. A dedicated federal body, similar to (or perhaps reporting to) the Therapeutic Goods Administration (TGA), could be established to oversee the regulation of AI in healthcare. This body would be tasked with ensuring that AI tools meet required safety, accuracy, and ethical standards. Additionally, it could serve as a liaison between developers, healthcare providers, and policymakers.


9.      Are there any specific changes to existing health care laws that would address AI-related harms or help AI to be used safely?

Again, without being an expert in existing healthcare laws, I would assume there is an opportunity somewhere akin to how we govern medical devices; perhaps those frameworks could be expanded to explicitly include AI-driven technologies. There should also be updates to the legislation that handles data privacy and security (I think this would be the Privacy Act?).


10.  Which international approaches should we consider, if any, that are specific to health care?

Internationally, I believe the EU's General Data Protection Regulation (GDPR) and possibly its Artificial Intelligence Act would be useful to review, as they prioritise patient data protection and ethical AI use. Australia could consider adopting similar standards to ensure AI technologies are safe, transparent, and effective, while also factoring in patient privacy.

11.  Should humans be able to overrule a finding or decision made by AI?

Yes, human healthcare professionals should always retain the ability to overrule AI decisions. AI should only ever be used to ASSIST rather than replace human judgment. This is essential in healthcare, where clinical decisions require clinicians to think critically, factor in context, and show empathy.


12.  Should there always be a person or “human in the loop” to make decisions or deliver a healthcare service? Are there any circumstances in which it would be acceptable to have fully automated health or care decisions made by an AI product?

There should almost always be human oversight and involvement. Humans must always remain "in the loop" for critical elements like diagnosis or treatment planning. There may be lower-risk or more mundane administrative or clerical tasks that could be fully automated (as long as sufficient safety measures are in place).


13.  Should errors made by AI be reported? If yes, how should they be reported?

Yes – all errors made by AI should be documented and transparently reported. A national or federal reporting system could be created that links into the proposed governing body from question 8. These errors would need to be logged and the data analysed, with the aim of improving future AI performance and safety.
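As a sketch of what a single report in such a system might capture, here is a hypothetical minimal record structure in Python. The field names and severity categories are my own assumptions, not a proposed national standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# A hypothetical sketch of a minimal AI incident report.
# Field names are illustrative assumptions, not a proposed standard.
@dataclass
class AIIncidentReport:
    system_name: str       # which AI product was involved
    system_version: str    # exact model/software version, for traceability
    clinical_setting: str  # e.g. "radiology", "emergency department"
    description: str       # what went wrong, in plain language
    severity: str          # e.g. "near-miss", "minor harm", "serious harm"
    human_override: bool   # did a clinician catch and overrule the error?
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example usage: a near-miss caught by a clinician before any harm occurred.
report = AIIncidentReport(
    system_name="ExampleTriageAI",
    system_version="2.3.1",
    clinical_setting="emergency department",
    description="Model under-scored a deteriorating patient; nurse escalated manually.",
    severity="near-miss",
    human_override=True,
)
print(report)
```

Recording the system version and whether a human overrode the error would let the proposed body spot recurring failures in particular products, much like existing adverse-event reporting for medical devices.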


14.  Should there be transparency about when AI is involved in health or care, and should consent be requested from the consumer or healthcare professional?

The use of AI should always be disclosed, and both patients and healthcare professionals should be informed of its use. Patients should provide informed consent, particularly if the AI directly influences clinical decisions.


15.  Generative AI may be developed for general use, yet used in health care. Should generative AI have any special treatment, regulatory or otherwise?

I believe that wherever regulation is updated to address AI, there should be a clause or section that explicitly covers generative AI. The major concern is that, because it is adaptive, it can be influenced by bad data or information, which could result in unsafe or incorrect medical advice.


16.  What protections are needed for patient data used or generated by AI that are different for health care?

Healthcare data is highly sensitive, and security is of the utmost importance. Because AI systems access this highly sensitive data, they should adhere to the highest standards of data encryption, anonymisation, and other prudent modern security methods. Furthermore, regulations should ensure that AI developers, services, and companies cannot misuse or sell patient data without explicit consent, oversight, and transparency. Any disclosures of, or requests to access, patient information should also be reviewed by or reported to the proposed governing body (Q8).
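As a concrete (and deliberately simplified) illustration of two of these protections, here is a minimal Python sketch using the widely used cryptography package for encryption at rest, plus a keyed hash for pseudonymisation. The key handling shown is a placeholder; real deployments would use managed key stores and formal de-identification processes.

```python
# A minimal sketch of two common protections, assuming Python's "cryptography"
# package (pip install cryptography). Illustrative only, not a compliance recipe.
import hmac
import hashlib
from cryptography.fernet import Fernet

# Encryption at rest: clinical notes are stored only in encrypted form.
key = Fernet.generate_key()  # in practice, held in a managed key store
fernet = Fernet(key)
note = b"Patient presented with chest pain; ECG within normal limits."
encrypted_note = fernet.encrypt(note)
assert fernet.decrypt(encrypted_note) == note

# Pseudonymisation: replace direct identifiers with a keyed hash so records
# can be linked for analysis without exposing the patient's identity.
PSEUDONYM_KEY = b"secret-linking-key"  # hypothetical; also belongs in a key store

def pseudonymise(identifier: str) -> str:
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

print(pseudonymise("2123 45670 1"))  # stable token, not reversible without the key
```

The point of the keyed hash (rather than a plain hash) is that an attacker without the key cannot reproduce the tokens from a list of known identifiers.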


17.  Is it acceptable for developers of AI products to use patient data to develop their products or to sell patient data collected from use of AI?

It is absolutely unacceptable for any developer or company to sell patient data. Any data-sharing arrangements should be fully transparent, all stakeholders should be notified when sharing occurs, and the arrangements should comply with strict privacy laws.


18.  Should your healthcare information be kept in Australia? If yes, would your view change if this reduced ability to access advances in AI made overseas?

This is a challenging question. I believe that, for the most part, all efforts should be made to retain healthcare information and data onshore. However, given the nature of technology and its advancement, there should also be a reasonable understanding that in some cases data will be sent offshore or be accessible from offshore. Overall, all options should make certain that patient privacy is protected.


19.  Are there any specific safety considerations that have not been raised elsewhere?

One safety concern that hasn't been explored is algorithmic bias. There is a potential for AI systems to be trained on, or to access, biased or skewed datasets. Steps would need to be taken to ensure that AI systems are trained on diverse, representative data; this is crucial to avoid unintended harm.
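As a small illustration of how such bias might be surfaced in practice, here is a toy Python audit that computes the same model's accuracy separately for each demographic group. The records and group labels are entirely hypothetical.

```python
# A minimal sketch of a subgroup performance audit: the same model's accuracy
# is computed per group to surface gaps. The data below is hypothetical.
from collections import defaultdict

# (group, model_prediction, actual_outcome) for a toy evaluation set
records = [
    ("metro",  1, 1), ("metro",  0, 0), ("metro",  1, 1), ("metro",  0, 0),
    ("remote", 1, 0), ("remote", 0, 1), ("remote", 1, 1), ("remote", 0, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in records:
    total[group] += 1
    correct[group] += int(predicted == actual)

for group in total:
    accuracy = correct[group] / total[group]
    print(f"{group}: accuracy {accuracy:.0%} over {total[group]} cases")
# A large gap between groups (here 100% vs 50%) would flag a possibly biased
# model, or an unrepresentative training set, for further investigation.
```

Routine audits like this, run over representative evaluation sets, are one practical way the proposed governing body could check that deployed systems perform safely for all Australians, not just the majority of the training data.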