After spending an hour doing some expense calculations for work, I decided to run them through ChatGPT to see if I could use it in the future to save myself time.
It fucks up percentages just enough that someone who doesn't check twice won't notice. AI is only good as an advanced search engine, which it does very well in most cases.
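For percentage work like the expense calculations above, it's safer to compute deterministically than to trust a model's arithmetic - a minimal Python sketch, with made-up figures purely for illustration:

```python
# Hypothetical expense lines: (category, amount). Figures are illustrative only.
expenses = [("travel", 412.50), ("meals", 87.25), ("lodging", 630.00)]

total = sum(amount for _, amount in expenses)

# Each category's share of the total; round only for display.
for category, amount in expenses:
    pct = amount / total * 100
    print(f"{category}: {pct:.2f}%")

# Sanity check an LLM would happily skip: unrounded shares must sum to 100.
assert abs(sum(a / total * 100 for _, a in expenses) - 100) < 1e-9
```

Ten lines of code you can rerun and audit beats re-checking a chatbot's answer every single time.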
concur - it's a great crawler/spider. This cat I was referring to is one of those guys who memorizes policy and whatnot like it's nothing, and he uses that to qualify himself as legitimate and (his words) irreplaceable, which we all know is a fool's errand. It only makes sense that he leans on such a tool as a reference index, since that's how he's wired. I just wish these folks would stop and think for a sec about where the data is coming from, especially since in my work I have explained numerous times, in numerous ways, that it is the source of the data that must always be questioned.
Does 'AI' have a good idea? Maybe - but where did it source its data? Everything follows from that.
It can be a good tool to use as a reference, but you still have to check everything. I use it in a "give me as much data to sift through myself as you can" mode, then I use it to generate and format the report from the data I hand-picked. It almost always fucks that up too, but it's easy to fix.
I know people like the guy you're talking about. The whole point of their existence is to overcomplicate things for everyone. I don't need to read policies and manuals written by management who have not once rolled up their sleeves and done what they're paying me to do.
Ingestible LLMs are the name of the game for parsing [super-]large text into digestible output - that's the path we're on here anyway when it comes to USG policy around cyber, logistics, legal, etcetera as it relates to acquisition. That's the ideal, anyway. Implementation is another monster, but that's a different discussion.