Overview
In this webinar, Associate Professor Helen Frazer discusses current breast cancer screening, the accuracy of screening with artificial intelligence (AI) assistance and the future role of AI determining risk-based personalised population screening.
Dr Elizabeth Farrell: Associate Professor Helen Frazer is a radiologist, breast cancer clinician, and clinical director at St Vincent’s BreastScreen and at BreastScreen Victoria. She’s the lead investigator in the breast cancer AI program called BRAIx. She’s also on the CSIRO National AI Centre Think Tank. She trained in radiology in New South Wales, then joined BreastScreen New South Wales, and has also worked in WA and Victoria, as well as Atlanta, Georgia and California in the USA. She has postgraduate degrees in clinical leadership, epidemiology and biostatistics, and is a member of the Australian Institute of Company Directors. In 2022, she won the ANZ Women in AI Innovator of the Year Award. And she’s going to talk to us about transforming breast cancer screening with AI. Helen, thank you very much.
Dr Helen Frazer: Okay, so what we’re going to touch on today is breast cancer screening today, and that’s the population program, and most of you will be very aware of that. Also, the accuracy of screening with AI assistance into the future, and also this really forecasting role of AI determining risk, and personalisation of the screening program.
So a little bit of context about breast cancer screening. As I’m sure you’re all really familiar with this, it’s a very mature program; it was introduced over 30 years ago now. It sits alongside cervical cancer screening in maturity, and bowel cancer screening is a little bit earlier. But excitingly, as you’d know as well, we will stand up the high-risk lung cancer screening program probably about July this year. So I think Australia’s really leading the way in population cancer screening programs.
The program targets women aged 50 to 74 years. From 40, women are eligible to attend, and also beyond 75, but they’re not actively recruited. We have a program of ongoing quality assurance, but also independent audit every four years, which sets that high quality, and that can be brought forward every two years if there are performance issues. What’s really relevant to this presentation is that we’ve been digital since 2012. Radiology really leads the way in health care AI, because we’ve been working in ones and zeros for a long time, and that enables us, with these incredible digital data sets, to develop AI tools for decisioning and classification, using image classification algorithms. It is a successful public health initiative: a reduction in mortality of between 40 and 50% for those women that screen, and a population-level reduction in mortality of about 30%. We are, as you can see on the map there, one of just 22 lucky countries globally that have a national, organised population screening program.
So this is a graph of age-standardised mortality, or death rates from breast cancer. And you can see that, since the introduction of screening, a very slow and steady decline. But as all the clinicians in the room know, and especially Jane as our surgeon here, that it’s not just screening. The reduction in mortality has also been from significant advancements in treatment. But it’s not good enough. So how do we take that next quantum step in reducing deaths from breast cancer? And that’s what the breast cancer community are all working hard on. How do we get to zero deaths from breast cancer?
There are three levers for that. Better prevention, better screening and better treatments. My arena is the better screening one, but with prevention, can we address the modifiable causes? That’s come up this morning in Jane’s presentation. For better screening, how do we increase participation? Stubbornly stable at about 50% since the program’s inception. But how do we improve the accuracy, so reduce the false negatives or the interval cancers, very disappointing for all of us, in the population screening program? And of course, better treatments.
So again, I’ll just go quickly through this. The process of screening is important, as we understand how AI tools can be integrated into our program. So a woman coming for the population screening program at average risk of developing breast cancer, two views of each breast, so four in total. And then these are submitted for reading by the radiologists. And each image is individually read by two breast imaging radiologists, and we’re blinded to each other’s decision. Now if we agree, that’s good, but if we differ, it goes to a third radiologist as an arbitrator. So from that you can see it’s time intensive, but it’s also very resource intensive. Since the program started, we’ve had this multi-reader model, and about 95%, actually probably even a little bit more, of all images and clients that we see are normal. They don’t have any evidence or features suggestive of breast cancer.
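The blinded double read with arbitration described above can be sketched as a small decision function. This is an illustrative model only, not BreastScreen software; the `Decision` type and the `arbitrate` callback are hypothetical names:

```python
from enum import Enum

class Decision(Enum):
    CLEAR = "clear"    # no features suggestive of cancer
    RECALL = "recall"  # refer to assessment

def consensus_read(first_read: Decision, second_read: Decision, arbitrate) -> Decision:
    """Two radiologists read blinded to each other; a third reader
    arbitrates only when they disagree."""
    if first_read == second_read:
        return first_read  # agreement: no third read needed
    return arbitrate(first_read, second_read)

# Hypothetical disagreement: the arbitrator's call decides the outcome.
outcome = consensus_read(Decision.CLEAR, Decision.RECALL,
                         arbitrate=lambda a, b: Decision.RECALL)
```

Because roughly 95% of screens are normal, most cases take the agreement branch, which is why replacing one of the two human reads later in the talk removes almost half of the reading workload.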
So there’s a lot of work that needs to be done for 95% with no suspicion of cancer. So 5% of women that come through this screening pathway are referred to assessment with indications of breast cancer. And they’re going to have extra imaging in the screening program, digital breast tomosynthesis, ultrasound, sometimes a clinical examination, oftentimes a biopsy. And for those women that come through assessment, for every 10 women that come, 9 of them have been recalled unnecessarily. So one of those 10 will have a cancer diagnosis. And that gives us an overall yield of 7 cancers per 1000 women screened. And that reflects the fact that these are asymptomatic, well women in the community going about their day-to-day work. As Robyn pointed out, that a proportion of them, a small proportion, will have genetic inherited risk of breast cancer but might not be aware of it. But overall these are well women, with no evidence to suggest breast cancer.
So we have to screen a lot of women with that low incidence of breast cancer in the background population to detect those cancers. So even though it is a successful public health initiative, there are challenges, and these challenges are well known. They’re not new. And they can be thought about in three buckets: accuracy, client experience and the cost. When it comes to accuracy, we call back too many women, unnecessarily. About 35,000 around the country every year come back to assessment clinics with an indication for cancer, but are subsequently determined not to have cancer, which is great news, and usually met with much joy, but there’s a lot of associated harms and costs as you can appreciate with that. In addition, around the country, about 1000 women who have had a normal screening mammogram present with what we call an interval cancer. They’re symptomatic, they go to the GP, they go down the diagnostic pathway with a cancer diagnosed.
This, for me, is the hardest thing in the job, when a client wants to understand, ‘I’ve done everything right, breast screen failed to detect my cancer, please explain.’ So we go through those every year, very thoroughly. The majority of them do have dense breasts, unfortunately, it wasn’t there, we can’t see it. A very small proportion, we haven’t done our best work, but they’re very useful learning episodes for us. And a smaller proportion again, you just can’t, even in a fatty breast, you just can’t see anything. And maybe they’ve got those very, very high, aggressive cancers, the killer cancers, that have actually arisen in the two year screening interval.
With regard to experience, it’s a cookie cutter approach. One-size-fits-all, predominantly, except for a small proportion of women who come back related to their family history annually, or have had a borderline breast biopsy. And the service level, I think, it meets all its standards, but it’s up to 14 days to receive a result from a normal mammogram. And when I go out and talk in the clinics to women, it’s that 14 days of waiting for a normal result that’s very stressful. It’d be great to bring that down.
And finally, the efficiency. It’s an expensive program to administer, about $270 million around the country each year for the screening program. And those costs are increasing. We see an ageing population, increasing demand for our services, increasing incidence of breast cancer. But coming fast at us, and this isn’t unique, I don’t think, to breast imaging, it’s across the whole healthcare network, is workforce availability. So certainly in radiology, it’s predicted that in five years’ time we won’t have enough breast imaging radiologists to meet our demand.
So, I am a radiologist. I’m quite visual. I’m going to show you some images here. Activity for a rainy Sunday afternoon. 1000 mammograms there that you can see in the grid. And this is to give you an indication of that background population screening statistics for us. So for every 1000 women screened through the program, we detect 7 cancers, the true positives. But within that group of 1000 women are just under 2 interval cancers that are diagnosed, and they’re the false negatives of the program. To catch those 7 cancers we’ve had to bring back 40 women unnecessarily. And I think, just keep that in your mind, because I think it’s really important when we talk about the population program.
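Taking the rounded per-1000 figures above at face value (7 true positives, the interval cancers treated here as 2, and 40 unnecessary recalls), the implied program-level accuracy can be worked out in a few lines. These are approximations from the talk, not official statistics:

```python
# Approximate per-1000-screens figures quoted in the talk.
true_positives = 7    # screen-detected cancers
false_negatives = 2   # interval cancers ("just under 2", rounded up)
false_positives = 40  # women recalled unnecessarily

sensitivity = true_positives / (true_positives + false_negatives)
ppv = true_positives / (true_positives + false_positives)

print(f"sensitivity ~ {sensitivity:.0%}")  # share of cancers the program catches
print(f"recall PPV  ~ {ppv:.0%}")          # share of recalls that are cancer
```

On these rounded inputs, sensitivity comes out near 78% and the positive predictive value of a recall near 15%; because the inputs are rounded whole-population figures, these are illustrative magnitudes only.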
So I’m really passionate about the program. I’ve worked in it for over 25 years. I feel that we do do as good a job as we can with our current scenarios, but I think there’s so many opportunities for us to do a better job and leverage the data and AI capabilities. So I’ve been leading this program of work since 2020. I’ll take you through it as quickly as I can. Apologies in advance if it’s quite technical. And to Jane, apologies, I know that I haven’t got forest plots, but I’ve got ROC curves, and I know that the eyes will glaze over, but I’ll just show you those quickly as we move through.
So as part of the work, we’ve developed a globally unique AI data set, and what we’ve learned is that the value is in the data, as you would’ve all heard, and we’ve been able to develop this data set because of the population screening program. We’ve demonstrated benefits retrospectively. We’ve also shown, and we’re starting to push into this, that the image classification algorithm is a very good predictor of risk of developing breast cancer in the medium term, and that’s forming the foundations of our future work. In this financial year, we hope to start a randomised controlled trial with the AI reader, taking out one of the readers in that multi-reader pathway that I talked to you about. We’ve got funding and ethics for that; we’re just in the final stages of technical integration into our existing systems. And we’re developing a roadmap for deployment. And at BreastScreen Victoria we’re talking to vendors as well, so that we can benchmark and make sure that we’re across all that’s available and can actually choose the best solutions.
Should have pointed out right at the outset that this is a program built on partnerships. We’ve got five partners, St Vincent’s Institute of Medical Research, St Vincent’s Hospital, Breast Screen Victoria, University of Melbourne, and the University of Adelaide.
Okay, so the three critical capabilities for development of AI tools are dataset development, algorithmic development and testing, and human-AI integration. When we hear about or talk about AI, everyone gets very excited about the algorithm, but that’s kind of the easy part of it. There’s a lot to be considered, predominantly around the quality of the data, but also what it’s like to be human and interact with AI. That is really understudied, and we need to think closely about it.
When we think about the dataset, as I mentioned, it’s globally unique, it’s a public asset from the population screening programs, and it’s a very large, digital, longitudinal, fully grounded dataset. And that’s what makes it excellent for development of AI tools. The dataset we’ve developed is called ADMANI, and that’s been published, not the images but the actual descriptor of it; we have over 6.2 million images. Now, the figures that you can see here are a little outdated; the dataset has actually grown even bigger as we’ve incorporated some of the more recent years, once we’ve had the interval cancer outcomes. But in our original dataset, we had images from over 30,000 screen-detected cancers that have been proven on surgery. And that’s quite remarkable; as radiologists, we’d never get to see something like that working full-time in our lifetime. So you can see the resource or library that we’ve got access to, 7500 interval cancer cases, so those false negative mammograms, and it is sequential.
What also makes the dataset incredibly unique is that it contains lots of other variables: family history, hormone replacement therapy, what happened at reading, what happened at surgery, all the histopathological outcomes, right through to linkages to the death register. So it’s a very rich image and non-image data set, and it is fully grounded. It’s one of the most unique healthcare data sets. It’s unlike when I try to extract data from mainstream hospital imaging departments, where all you can go on as the ground truth is a radiological report. And I know that we oftentimes get that wrong. I’m a radiologist myself, so I can say that. We do do our best, but a ground truth of a radiology report is nothing like the ground truth of a surgical histopathology report. And we also know the interval history for every client that comes through the screening program: if it’s the unfortunate circumstance of an interval cancer that’s reported to the cancer registry, that gets reported back to the screening program for us to take a close look at and learn from.
Okay, so here are the curves, and apologies; it doesn’t matter if you haven’t seen these before, I’ll just take you very quickly through it. These are some results that we published in Nature Communications last year on a very large, fully grounded screening population dataset of 150,000 women. If you just look at the insert, which is the blown-up bit in the middle, that’s where all of the action is happening on that receiver operating characteristic curve chart. The black curve that you can see is the algorithm across all the different thresholds of sensitivity and specificity. The black circle below is the weighted mean of the individual radiologists that work across the screening program in Victoria, and there are probably about 85 of us reading at the moment. The little grey bubbles that you can see in the background are all of us. I’ll be one of those dots, hopefully closer to the top left corner, which is better performance. But you can see from that that we’re all very variable. And that’s why we’ve got this multi-reader model. We’ve all got strengths and weaknesses. We’re looking for preclinical disease, just subtle pattern changes. It is a tough gig, especially when 95% of them are normal. If you’re not on your game, you can actually miss things.
So that was part of the logic of the multi-reader model when screening was established 30 years ago. You might be surprised at the variability, but it takes us back to that black circle of the weighted mean. And the top red square is the consensus of that multi-reader model: that’s the two radiologists when they agree, or the third if they differ. So what we can conclude from this is that the AI reader performs above the level of the weighted mean of the radiologists across the state of Victoria, and you can see there are not that many grey bubbles above it.
And since this chart, we’ve actually fine-tuned it even further. We’re getting closer to the red, which is great. But it’s not ready for standalone. So we can’t get rid of that multi-reader model and bring in the AI, because it doesn’t beat the consensus yet. So we still need to certainly have the human oversight of that. The area under this curve was 0.93, 1.00 being a perfect area, and in that context the curve would hug that top left corner. So it was these results that then encouraged us to take this a bit further to see, how could we integrate it into our multi-reading model?
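The area under the ROC curve quoted here has a simple probabilistic reading: it is the chance that a randomly chosen cancer case receives a higher AI score than a randomly chosen normal case. A minimal sketch of that interpretation, using toy scores rather than real model outputs:

```python
def auc(cancer_scores, normal_scores):
    """ROC AUC via its Mann-Whitney interpretation: the probability that
    a random cancer case outscores a random normal case (ties count half)."""
    pairs = [(c, n) for c in cancer_scores for n in normal_scores]
    wins = sum((c > n) + 0.5 * (c == n) for c, n in pairs)
    return wins / len(pairs)

# Toy example: a reader that always ranks cancers above normals has AUC 1.0.
print(auc([0.9, 0.8], [0.1, 0.2]))
```

An AUC of 0.93 therefore means the algorithm ranks a cancer above a normal case about 93% of the time, with 1.00 being the perfect top-left-hugging curve described in the talk.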
Now, in the interest of time, I’m not going to take you through all of these scenarios, but if you could just concentrate on the one in the top right, which is AI reader replacement. In that scenario, the AI reader takes out the second radiologist’s reading in the program. Now if they agree, that’s good; if they differ, it goes through to the third human arbitration read, just like what we do at the moment. That third reader is unblinded to the human and the AI, and it’s actually in that third reader scenario that we really get to explore the human-AI interaction. Do our radiologists have automation bias, always siding with the machine output? Or do they have automation neglect, always contesting it? Those things are important because they can really influence high-consequence decisions at scale. So we really need to study that. So that’s the scenario that we’ll take through to the randomised controlled trial which, as I mentioned, we start mid-year this year.
The AI band pass, I’ll just quickly mention, is the one below it. This one is probably the most compelling in terms of workload reduction. What I didn’t mention with the reader replacement is that the workload reduction in that scenario is about 50%, actually just under, at 48%. So as soon as AI gets implemented into the multi-reader pathway, we’ll see a 50% reduction in the reading required in the program, which is significant for us, especially as we face the constraints of workforce shortages at the moment. With AI band pass, this scenario can actually reduce the workload required by about 80%, because the AI is very good at ruling out completely normal mammograms. And as you know, it’s normal, normal, normal, normal, normal. There’s a lot of them, 95%, right? So the AI band pass, with the AI reading every single mammogram, could rule out about that proportion of our work. And it can rule in, because it’s also very good at detecting definitely malignant cases, sending all of those cancers, or potential cancers, straight through to the assessment clinic so that we don’t lose too much time.
And in this scenario, just the middle band is what the humans read: those more complex cases of challenging breast parenchymal patterns, et cetera. So the band pass gives us significant savings, but the challenge is that it’s a much higher ethical bar, because it takes a human out of the loop, whereas the reader replacement that you saw before keeps the human in the loop, and the human’s still in charge. So the AI band pass is not ready for primetime yet, but my perspective is that that’s where we’ll be heading in the future. I don’t think it will necessarily take too long. The world is moving very quickly and we do see autonomous solutions in health care already, ECG outputs for instance, or lab results.
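The band-pass routing described above amounts to two thresholds on the AI score. A hypothetical sketch only: the recall threshold of 95 echoes the score scale mentioned elsewhere in the talk, while the rule-out cut-off of 5 is a made-up value for illustration:

```python
def band_pass(ai_score: float,
              rule_out_below: float = 5.0,
              rule_in_at: float = 95.0) -> str:
    """Route a screen by AI score alone: very low scores are cleared with
    no human read, very high scores go straight to assessment, and the
    middle band goes to human readers. Both thresholds are illustrative."""
    if ai_score < rule_out_below:
        return "clear"       # no human reading required
    if ai_score >= rule_in_at:
        return "assessment"  # recall straight to the assessment clinic
    return "human read"      # complex middle band, read by radiologists

# In a population where ~95% of screens are clearly normal, the first
# branch absorbs most of the workload, hence the ~80% reduction cited.
```

The ethical distinction in the talk maps directly onto the first branch: in reader replacement every case still gets at least one human read, whereas here the "clear" branch is fully autonomous.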
The triage scenario I won’t dwell on; the first randomised controlled trial was in Sweden, and that’s the protocol they used.
So the last ROC. I promise that this one only looks more complex; it’s actually the same curve. You’ll recognise the red box that we saw before, which is the consensus of that multi-reader system. The black circle there is the weighted mean of the individual radiologists. But just draw your attention to the green box and the purple box. The green box is the AI reader replacement scenario, which is what the trial’s going to be. And you can see from this, or we deduce, that it has slightly higher sensitivity and specificity than the consensus, which is what we need. So that’s great. We’re happy about that. It delivers that roughly 48% workload reduction. The purple box is the band pass, but as I say, that’s not what we are taking through to the trial.
That’s the clinical study protocol, with the control arm and the intervention arm being AI reader replacement. And we have ethics approval for that. This is just an example of a really challenging case. This is one of our interval cancers.
So we know in the retrospective dataset, this woman developed an interval breast cancer. I’ve got it with a red circle around the top. I think this is a really tough case. As I say, two readers didn’t pick this. But if you can see the
decision tree on the right, which is the output of the AI system, you can see the red-y, pink-y colours on the right, and the numerical scores there. Now we’re still iterating this, talking to the radiologists in some user experience testing to understand how they would like to see the output. But basically the threshold for recall is set at 95, and you might not be able to perceive it from the back of the room, but all the scores on that right side, for both views, are well above 95. So this is a case where the AI reader picked it and the humans haven’t. The left side is cooler colours, in the 50s I think, from memory, but they don’t meet the threshold. So this is just an example of the sort of annotations and synoptic reports that the radiologists will be able to see in the trial and into the future.
And I’m going to move now into what is probably more in keeping with what we’ve been talking about this morning, although against an average-risk population background: what we have learned is that the image classification algorithm is actually also a very good predictor of risk of developing breast cancer. So on the far left, you can see that the algorithm picks up very strong signals from all the true negatives without cancer. On the far right, you can see that in the 10th, uppermost decile, the black circle, the algorithm also picks up a very strong signal from the cancers. But what was really interesting is that in this grounded dataset, where we know the interval cancers, the algorithm detected strong signals from 40% of those women that went on to develop an interval cancer. And you can see that in that uppermost decile. So that has been the start of our risk discovery work, which I think is really exciting and hopefully is something that’s going to really help us do a better job, improving accuracy, experience and efficiency in the population program into the future.
So with this slide, before I take you through it quickly, I just want to acknowledge the work of a colleague of mine, Professor John Hopper, who very sadly died suddenly towards the end of last year. This was probably the last piece of work that he generated, and he was incredibly excited about it. When he got the risk score results and this curve was generated, he was straight on the phone. He used to ring me every day, but this one in particular blew him away. He’d spent 30 years working in the breast cancer community, very focused on breast density, and he was particularly excited by this because he felt it helped us do an even better job than breast density. So I’ll take you through that. But as I say, as a community and in our research program, we’re very saddened by John’s passing, but we’re really motivated to keep his legacy and this work going. He was a great developer of people.
So this is the earliest work, currently in pre-print, which we’re hoping to publish soon. As I say, the image classification scores from the algorithm are a really good indicator of risk. We can think of the image as a biomarker which, when I started in radiology 30 years ago and I’d hold the image up to the view box, I never would’ve thought possible. And that excites and surprises me every day. Mammography does contain, certainly at the pixel level, very informative indicators of risk, and this is beyond what’s captured in the patient questionnaires that we’ve been talking about this morning, for instance Tyrer-Cuzick, iPrevent and CanRisk. That’s what we’ve got at the moment. But I don’t know if many of you have tried taking those tests; for people that do have a family history, they can be really labour intensive, and can be quite challenging for individuals to do themselves.
And at the end of the day, they give us a lifetime risk of developing cancer, whereas the image classification scores can give us a medium-term risk of developing cancer to give people indication of what they might need to do faster than, or what might be more relevant than, their lifetime risk. And it can be refreshed every two years, or every one year, whenever they come back from screening. It’s dynamic. And I think that can be really helpful, and I think it’s exciting.
Mammographic density, again, is where we are at the moment, and we have, I’m pleased to say, rolled it out across Victoria now in the screening program. But it is a very blunt summary measure of risk of cancer on a mammogram. The AI classification algorithm outputs are much more discriminatory of the risks of early cancer.
In this chart that you can see on the left, we’ve got the BRAIx risk score along the bottom. The black line that you can see is a normal distribution: we’ve transformed the classification algorithm output into a normal distribution with the epidemiological overlay of age, family history and density, which was John Hopper’s work. And you can see on the far right the histogram of cancers detected at screening. So that’s the very strong signal that the classification algorithm, and hence the risk score, picks up at the time of screening.
But what was particularly interesting is that, as you can see in the faded-out grey histogram, the curve has shifted to the right for women that develop cancer in the next four years. What we’ve seen in our large data sets, in fact, is that for women with a BRAIx risk score of 2, which is the tail of the normal distribution, 2 standard deviations from the mean, the uppermost 2%, one in three of those women went on to develop breast cancer, either in the next two screening rounds or the next two intervals. And that’s the real excitement for where we can move forward, as we really start to think about how we can personalise a woman’s screening journey: interval, imaging modality, et cetera, based on her risk of developing breast cancer.
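Since the risk score is described as transformed to a standard normal distribution, the size of that "uppermost 2%" tail at 2 standard deviations can be checked directly from the normal CDF. The triage flag below is purely hypothetical, mirroring the cutoff discussed:

```python
from statistics import NormalDist

# The BRAIx risk score is described as transformed to a standard normal
# distribution, so the tail above 2 standard deviations follows from the CDF.
tail = 1 - NormalDist().cdf(2.0)
print(f"fraction above z = 2: {tail:.1%}")  # about 2.3% of women

def elevated_risk(z_score: float, cutoff: float = 2.0) -> bool:
    """Hypothetical flag for a personalised screening discussion; the
    cutoff mirrors the 2-standard-deviation tail described in the talk."""
    return z_score >= cutoff
```

Strictly, the one-sided tail above z = 2 is about 2.3% rather than exactly 2%, consistent with the talk's rounded "uppermost 2%".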
Okay, so where are we now? Currently, as I’ve mentioned, women aged 50 to 74 years are actively invited to screen, with mammograms at 2-year intervals. Both of the bookends are eligible to attend, but we see really low participation. I mean, I think we’re low across the target anyway at around about 50%, but for women 40 to 49, for instance, it’s about 10 or 11%. So we’re really not seeing many women in those bookends, which is understandable because we don’t actually invite them. A very small number of clients will have annual screening. But in the future, hopefully, as we see the image classification algorithms cut through, being more discriminatory than breast density, we can start to invite women earlier, at 40 years, and this is all to be tested and developed and trialled. Currently for women at 40 years, the program isn’t able to accurately discriminate risk or early cancer, mainly because of mammographic density. But we’re very hopeful, with some of these discoveries, that we’ll be able to bring that forward.
Individual risk scores would be provided to the client for discussion with the GP. Hopefully this will help lift our participation, because rather than just communicating something like a risk score, we’ll actually also be communicating, hopefully, a prescription, or a pathway, with the risk score: ‘This client can just have normal screening intervals every two years.’ Or we might triage out those at the uppermost 2 or 5% risk, who need a different screening interval and a different imaging modality, and by that I mean contrast imaging, for instance.
And the other thing that we are really learning and exploring deeply now is that it’s actually the change in the risk score over time. So we get the opportunity to assess that each time someone comes for their mammogram, how that changes, and the trajectory of its change is very important for predicting future risk of cancer.
So I’ve taken this from a paper recently published by Eric Topol, who’s the patriarch of image classification AI; he’s actually a cardiologist. But I think it’s really interesting, because I think it’s where we’re heading, not just in breast cancer screening but in health care in general, hopefully towards a precision health system. What I’ve really learned is that the population program has all of the ingredients that we need for that precision system. We’ve got these large digital datasets that we can leverage, which are incredibly valuable. I do think that the era of double reading, two radiologists and a third, is definitely coming to a close. And we’re at the dawn of this more personalised, AI-leveraged breast cancer screening program, for the moment always with the human in the loop.
And I think that we can then hopefully realise this longstanding vision that we all have of risk-stratified personalised screening. You can see here, for instance, in the diagram below, that having a mammogram sits among multiple omics. We’ve got the electronic health record, we’ve got polygenic risk scores, we’ve got genetic information, we can have blood, we can have tissue as well. So there’s a whole soup of multi-omic data that, coupled with developments in foundation models and large language models like ChatGPT, which have an incredible capacity to link very disparate, massive data sources, means we can really start to head towards precision screening.
So I’ll just finish with this image of the BreastScreen Northern Territory van in Central Australia, outside Uluru. I wanted to highlight the incredible efforts the organisation goes to to screen women all around the country. It has been an effective program, with a reduction in mortality for those that screen, as I mentioned. But it is a one-size-fits-all approach and we can do a lot better. And I think those developments in technology, especially data and AI, do highlight the potential for this risk-based personalised screening to get towards zero deaths from breast cancer. And I feel very proud to think that the breast cancer screening program can be an exemplar for a precision health system into the future.
End of transcript
Presenters
Associate Professor Helen Frazer
State-wide Clinical Director – BreastScreen Victoria
Clinical Director – St Vincent’s BreastScreen
Lead Investigator – BRAIx program
Dr Elizabeth Farrell
MBBS, HonLLD, FRANZCOG, FRCOG
Gynaecologist and Medical Director
Jean Hailes for Women’s Health
Slides for download
Download: Transforming breast cancer screening with AI slides (PDF, 3 MB)
Acknowledgement of country
This webinar was filmed on the traditional lands of the Wurundjeri and Gadigal peoples. Jean Hailes for Women’s Health acknowledges the Traditional Owners of Country throughout Australia and recognises their continuing connection to land, waters and culture. We pay respect to Elders past and present.
Continuing Professional Development (CPD) information
- assess the role of AI in breast screening
- identify the potential benefits of AI in screening
- determine how AI might support future risk-based personalised population screening.
Jean Hailes education activities can be used to fulfill the CPD requirements of many registered health professions.
Depending on your profession, you may need to keep a record of the following: event date, provider, your learning needs, type of activity, content details, learning outcomes, reflection on the activity and CPD hours.
The RACGP activity ID number for this webinar is 1264152. It is accredited with RACGP for 0.5 hours of Educational Activity (EA).
This activity does not count towards a GP’s RANZCOG accreditation.
On completion of Jean Hailes’ education activities, you can fill out an online evaluation survey, after which your certificate of completion or attendance will be emailed to you for your CPD record.
To provide direct feedback to the RACGP about this activity, complete their feedback form.
Receive your certification
To receive your certificate of completion or attendance, fill out our online evaluation survey. Once you’ve completed the survey, we will email your certificate for your CPD record within 5 business days.
Our review process
This information has been reviewed by clinical experts and is based on the latest evidence.
Our content review process ensures our health information is accurate, trustworthy, current and useful.
We regularly check our information to make sure it reflects the latest clinical guidelines and key findings from large, reliable studies.
Where possible, we focus on Australian research to make our information more relevant locally.
Experts play a key role in reviewing our content. Clinicians at Jean Hailes check information for accuracy and real‑world relevance. These include GPs, gynaecologists, endocrinologists, psychologists and allied health professionals.
We also work with partner organisations, independent specialists and people with lived experience to make sure our content reflects both expert knowledge and the experiences of the community.
Evidence and medical knowledge is constantly changing. The authors have taken care to ensure that the information on this page is accurate and up to date at the time it was created. This content is intended for healthcare professionals who should always manage patients within their scope of practice and work within local policies and practices. This content is not intended for members of the general public.