Ironic as it is that a human-created chatbot ‘still in its infancy’ now threatens human creativity, a recent incident has jolted academics even more.
When AI researchers talk about AI in medicine, they think of medical AI as a crude diagnostic tool for doctors, one that suggests possible diagnoses. AI is not a substitute for an experienced doctor's judgement.
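(A toy sketch of what a tool like that, one that ranks possible diagnoses for a doctor to review rather than deciding anything, could look like; the symptom features, diagnosis labels, and data below are entirely made up for illustration.)

```python
# Toy sketch of a "suggests possible diagnoses" tool: a classifier that ranks
# candidate diagnoses by probability for a doctor to review. All data is fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

diagnoses = ["flu", "pneumonia", "bronchitis"]            # made-up label set
# Each row: [fever, cough, chest_pain, fatigue] as 0/1 symptom flags (fabricated)
X = np.array([[1, 1, 0, 1], [1, 1, 1, 0], [0, 1, 1, 0],
              [1, 0, 0, 1], [1, 1, 1, 1], [0, 1, 0, 0]])
y = np.array([0, 1, 2, 0, 1, 2])

model = LogisticRegression(max_iter=1000).fit(X, y)

patient = np.array([[1, 1, 1, 0]])                        # new patient's symptom flags
probs = model.predict_proba(patient)[0]
for i in np.argsort(probs)[::-1]:                         # ranked suggestions, not a verdict
    print(f"{diagnoses[i]}: {probs[i]:.2f}")
```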
OK, but isn't that the main task of a GP? Pattern recognition based on symptoms / test results / medical history?
Most of the genuinely human part of healthcare seems to be done by nurses, not doctors.
Yeah, but AI is just not that sophisticated in its reasoning; like I said, it's a crude tool. And the more sophisticated AI models don't explain their reasoning well.
Ah, fully agreed. Properly formatting the input data (especially ongoing symptoms) for the model would also be a nightmare; and yeah, there's a whole field devoted to trying to pinpoint how the black box reached a decision, and it's not making much progress.
Eventually, though? It's really one job I could see being mostly automated, earlier than most others.
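(For context on the "whole field" mentioned above: a rough sketch of one common post-hoc explanation technique, permutation importance, which shuffles each input feature in turn and measures how much the model's score drops. The model and data here are synthetic placeholders, and this kind of method only gives coarse, global hints rather than an explanation of any single decision, which is part of why progress feels slow.)

```python
# Sketch of a post-hoc explanation technique (permutation importance):
# shuffle one input feature at a time and see how much the model's accuracy drops.
# Everything here (model, synthetic "symptom" features) is made up for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:       # most "important" features first
    print(f"feature_{i}: importance={result.importances_mean[i]:.3f}")
```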
Maybe. Personally, I'm dubious that such an AI would be cost-effective. But I'm dubious in general of claims that AI will revolutionise our society.
I think the proposed use for AI (at least in medical imaging) is to tune it for a very high false-positive rate and a very low false-negative rate, so that it reduces the amount of work radiologists do while missing as few actual positives as possible. That is definitely useful and doesn't seem overly dangerous, but it's not "revolutionary", so it takes a backseat in conversations about AI. I think it's actually a shame that small improvements don't get sexy coverage, since incremental improvements in science and engineering are probably as responsible for modern wonders as the initial revolutionary discoveries.
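(A minimal sketch of the tuning described above: pick the decision threshold on a validation set so that sensitivity stays very high, accepting a high false-positive rate in exchange. The labels and scores below are synthetic stand-ins, not real imaging data.)

```python
# Sketch: pick a decision threshold that keeps sensitivity (true positive rate)
# very high, accepting a high false-positive rate in exchange.
# Assumes y_true (0/1 labels) and y_score (model probabilities) on a validation set.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                               # toy labels
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, 1000), 0, 1)   # toy scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)

target_sensitivity = 0.99                     # miss at most ~1% of true positives
idx = np.argmax(tpr >= target_sensitivity)    # first threshold reaching the target
chosen_threshold = thresholds[idx]

print(f"threshold={chosen_threshold:.3f}  sensitivity={tpr[idx]:.3f}  FPR={fpr[idx]:.3f}")

# In use: cases scoring below the threshold are confidently ruled out by the model;
# everything above it (including many false positives) still goes to a radiologist.
```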