
Yeah, this always happens. Some "shiny new thing" gets a shitload of investment, then a ton of people make startups that basically never create a product (some do, they're just pointless), then the bubble bursts. If someone actually somehow manages to create a real AGI, they instantly win the game and everyone else goes out of business; it's a winner-take-all gamble.

Archive: https://archive.today/IAbrZ

From the post:

>"The most significant change we're seeing over the past 18 to 20 months is the accuracy of those answers from the large language models," gushed the CEO at last week's Harvard Business Review Future of Business Conference. "I think over the past 18 months, that problem has pretty much been solved – meaning when you talk to a chatbot, a frontier model-based chatbot, you can basically trust the answer," he added.

>when you talk to a chatbot, a frontier model-based chatbot, you can basically trust the answer

No you can't.