Training data is where the problems lie, especially now that AI is (already?) being trained on other AI output. Garbage in, garbage out, as the saying goes, only a lot worse than usual.
Dedicated-purpose AI tools, like medical imaging systems trained specifically on expertly verified images, are a whole different class of tool from the large language models most people play with. AI has huge beneficial potential when properly trained on medical data and with its output checked by human experts. A combination of AI analysis of symptoms followed by expert review is already better than consulting the experts alone; at least one research report found that in its tests. This is because AI will pick up oddball medical conditions that an expert will not think of initially, but which become clear once the AI directs attention to the real problem.
As I mentioned before, the trouble with AI is that the term includes wildly diverse analytical tools, some valuable and effective, others toys with little real use, and still others potentially useful but actually dangerous.