

or finding sources for info that would take a LOT longer otherwise.
Maybe. It adds to the list of sources you have to check, but I’ve found I still have to manually verify that it’s actually on topic rather than only tangentially related to what I’m writing about. But that’s fair enough, because otherwise it’d be like cheating, having whole essays written for you.
It’s great for getting detailed references on code.
I know it’s perhaps unreasonable to ask, but if you can share examples/anecdotes of this, I’d like to see them, to better understand how people are utilising LLMs.

I think we have an easier job determining right away if humans are lying about something, and humans generally own up to being unsure about things. On the other hand, AI seems to be designed with an intention to appear infallible: it doesn’t even give an estimate of how sure it is that its information is correct.
If a human in an organisation lies/says incorrect things a lot, they get fired.
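
For what it’s worth, some chat APIs can expose token-level log-probabilities, which is the closest thing to a confidence signal they surface, but it measures how likely each token was, not whether the claim is true. A minimal sketch, assuming the OpenAI Python client (openai>=1.0); the model name and question are placeholders:

    import math
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": "What year did the Berlin Wall fall?"}],
        logprobs=True,    # request per-token log-probabilities
        top_logprobs=3,   # and the top alternatives considered for each token
    )

    # Each entry covers one generated token; exp(logprob) is the model's
    # probability for that token (a measure of fluency, not factual accuracy).
    for tok in resp.choices[0].logprobs.content:
        print(f"{tok.token!r}: p={math.exp(tok.logprob):.3f}")

Even then, a confidently wrong answer can come back with high token probabilities, which is rather the point being made above.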
So it sounds like AI is only really useful for your wider area of work, that being anything programming-focused, and therefore you’re thinking of a very specific type of information to get fetched: templates to build off of. I hope you can see why it was bad to generalise in your initial response; someone working with historical or political facts, a structural engineer working on bridges, or a teacher can’t rely on GPT to get them the info they work with.