After OpenAI’s October 29 policy update reiterating that ChatGPT shouldn’t give advice requiring a professional license, user tests suggest the chatbot still produces responses that read like medical and legal advice. OpenAI’s Head of HealthAI says the model’s behavior hasn’t changed and that the update wasn’t new policy. OpenAI maintains ChatGPT isn’t a replacement for licensed professionals.
What Changed on October 29
OpenAI revised its usage policies on October 29, explicitly stating that ChatGPT must not provide medical or legal advice, both fields that require licensed professionals. The language served as a formal reminder: health and legal guidance should come from qualified experts, not AI systems.
What Users Are Seeing in Practice
Despite the update, many prompts continue to elicit responses that read like medical or legal guidance—from explanations of symptoms and treatment options to clarifications of legal terms and suggested next steps. These answers are often well-structured and confident, echoing the tone of a clinician or attorney—even when accompanied by general disclaimers.
OpenAI’s Response
OpenAI’s Head of HealthAI, Karan Singhal, pushed back on reports of a policy shift, writing on X that the change was “not new” and that model behavior remains unchanged. In other words, the wording clarified expectations, but the system’s core behavior wasn’t altered.
Context and Caveats
OpenAI has long emphasized that ChatGPT is not a substitute for professional advice. Its outputs are drawn from training data and publicly available information, and they should be treated as general information, not as instructions for diagnosing, treating, or acting on legal matters.
Why It Matters
- User safety: Over-confident guidance in sensitive domains can mislead users.
- Regulatory pressure: Health and legal advice are regulated fields; policy language and model behavior must align.
- Trust and transparency: Clear boundaries help users understand when to seek licensed professionals.
Bottom Line
OpenAI’s policy reiterates long-standing limits, but the real-world behavior many users observe still feels advisory. Until model responses and in-product guardrails fully align with policy, the safest approach is to treat ChatGPT as an informational tool—and consult qualified experts for decisions about your health or legal situation.