ChatGPT was earlier reported to have updated its policies, with the new rules said to restrict the tool from offering specific advice in medical, legal, or financial domains.
OpenAI has confirmed that there has been no change to its policy and that 'model behaviour remains unchanged', after several reports that ChatGPT had updated its policies over liability concerns were doing the rounds.
Karan Singhal, head of health AI at OpenAI, took to X to deny the reports, writing, "Not true. Despite speculation, this is not a new change to our terms. Model behaviour remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information."
This means that OpenAI still holds firm that ChatGPT is not a substitute for professional advice and help, but it has not changed the model to stop users from turning to it for medical, legal, and financial advice.
An earlier report from Nexta suggested that OpenAI had introduced new restrictions preventing users from relying on ChatGPT for professional consultations in high-stakes areas, including health treatments, legal strategies, financial planning, housing, education, migration, and employment decisions, without human oversight. The changes were said to have taken effect on October 29.
Recent incidents highlight risks
Even though there is no policy change, the conversation around ChatGPT's growing influence on people's decisions is important. In August, a 60-year-old man was hospitalised for three weeks after substituting table salt with sodium bromide based on the AI's input, which led to paranoia, hallucinations, and an involuntary psychiatric hold, as detailed in the Annals of Internal Medicine.
In September, Warren Tierney, a 37-year-old from County Kerry, Ireland, delayed medical care after ChatGPT deemed cancer "highly unlikely" as the cause of his swallowing difficulties. He was later diagnosed with stage-four esophageal adenocarcinoma. Tierney told the Mirror that the AI's response "delayed me getting serious attention," though he accepted personal responsibility.
Even Kim Kardashian has blamed ChatGPT for her failed law exams. These events underscore broader concerns about users treating AI as a substitute for licensed experts, particularly in medicine, where the tool cannot offer empathy, read body language, or respond to real-time crises.