Yeah, these sorts of things are massively parallel and designed to run on specialized (GPU) hardware. I work for a law firm (AmLaw 100) and was researching doing our own AI-based audio recognition, and even just for that it isn't feasible. It would mean spinning up multiple servers with specialized CUDA hardware and THEN training the AI constantly. Just not worth it when there are closed-caption services in the cloud that will do the same thing and sign our NDAs.