I just asked ChatGPT a softball question on a subject I am professionally skilled in. The outline it gave was okay, but it spit out some explanations that are common yet deeply flawed: the kind of things that lead people to make inappropriate decisions, and that I frequently have to educate people about. If a peer gave me the same answer, I would not waste my time listening to them on that topic. So if it fails at what I know, why would I trust it in any way with what I don't know?
It is a language model, and the current versions are incompetent. There is certainly potential there, but potential is not capability. One day I do believe there will be capability, but at present it is the Dunning-Kruger effect on steroids with a veneer of legitimacy.