The issue I have is that there is no guarantee at all of receiving a "correct" answer.
If "testing" a "normal" application is "difficult", how do you qualify an AI as "accurate"?
When is an answer "right"? Are we even able to know whether an answer is right?
While I agree with your premise, I feel as though code should be easier than most things for AI. Most code issues are the result of syntax errors or misuse of code, and small snippets are fine and easy enough to test. A lot of the time it's like using Google to troubleshoot issues back in the day: you needed to know what to ask to get the right answer, i.e., know how to google. Now, knowing how to pose the question to the AI will be the challenge. If you have no fucking idea how to code, then yeah, it's dangerous and hard to verify. Hit a mental block and can't figure out how to get past a spot? Easy to verify.
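A minimal sketch of what "easy enough to test" can look like in practice (the helper and its use case here are hypothetical, not from the thread): if an assistant suggests a small snippet, a couple of throwaway assertions are usually enough to verify it before trusting it.

```python
# Hypothetical AI-suggested helper: deduplicate a list while keeping
# the first-seen order of elements.
def dedupe_keep_order(items):
    """Return items with duplicates removed, preserving first-seen order."""
    seen = set()
    # set.add() returns None (falsy), so this keeps x only on first sight
    return [x for x in items if not (x in seen or seen.add(x))]

# Quick sanity checks before relying on it:
assert dedupe_keep_order([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe_keep_order([]) == []
```

The point is not the snippet itself but the workflow: for small, self-contained code, verification is a few seconds of testing, which is exactly why this use of AI is lower-risk than asking it open-ended factual questions.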
Hell, I've used it (the Bing version) to help with some PowerShell, really just pointing me in the right direction. It's nicer than bugging a co-worker when I get stuck, and I don't get pulled into a stupid, time-wasting conversation just for asking a simple question, either.
It all sounds like going to a drug dealer (not a pharmacy):
the first deals are possibly free and wonderful,
then hell happens.
Interesting take. Yeah, I'm a bit leery about where it goes; I guess time will tell. When they make it recognize users and store the types of questions they ask (i.e., "Welcome back, tempus_vulgate...") and then start to tailor responses accordingly, yeah, I'm not looking forward to them trying to sway the results toward whatever conclusions they want me to reach.