X-formerly-Twitter owner and xAI CEO Elon Musk claims that his foul-mouthed AI chatbot Grok is now capable of understanding images. And what does its creator want you to do? Feed it private medical documents, of course. "Try submitting x-ray, PET, MRI or other medical images to Grok for analysis," Musk wrote in a tweet on Tuesday.
"This is still early stage, but it is already quite accurate and will become extremely good." "Let us know where Grok gets it right or needs work," he added, presuming that his audience will be willing to freely test his chatbot for him with their possibly compromising medical information — and maybe even to trust its analysis. Sadly, he was right.
Many users have already replied, sharing what the Magic Grok Ball had to say about their medical documents, ranging from brain and bone scans to blood charts. Being Musk fans, they were unsurprisingly enthusiastic about the chatbot's results. A few celebrated the fact that they would no longer need to see a specialist.
But doctors were more mixed. One noted that Grok failed to identify a "textbook case" of tuberculosis that had spread to the spine. Another found that it misread breast scans, missing clear signs of cancer.
In one hilarious case, it mistook a benign breast cyst for testicles. To be clear, AI-assisted radiology is a serious and burgeoning field of research, and many experts are hopeful about the technology. That doesn't mean a general-purpose chatbot is its best representative, however.
Beyond the very high potential for misdiagnosis, submitting medical documents to an AI chatbot like Grok is a bad idea if you value your privacy. Because Musk certainly doesn't: he billed Grok as having "real-time access" to data via X, which many interpreted as an admission that he trained the chatbot on users' tweets. That remained unverified, but training on user data became official policy in July, when X gave users the ability to "opt out" of having their data used to train Grok; by default, everyone was opted in.
Chatbots are a privacy nightmare in general. Because they use conversations to improve their capabilities, whatever you say to them could be inadvertently regurgitated in another conversation in some shape or form. Large organizations, from JP Morgan to Amazon, have prohibited employees from speaking to chatbots for these very reasons.
With all that being said, we'll spell out the obvious: don't fork over your medical info to Musk's "anti-woke" chatbot, please.

More on chatbots: After Teen's Suicide, Character.AI Is Still Hosting Dozens of Suicide-Themed Chatbots