Yeah, it's less than 40 characters, maybe one special character. No doubt it would be easy for a program to crack.
Special characters do not matter anymore. That's a bad trick. NIST updated the standard: "short" (10, 12, 14 character) passwords that are merely complex are pointless. Your A_%neg0pz password is not strong. It's a weak password and nearly as easy for an RTX 5090 to crack, microseconds, as 1234567890. Because even if it's not, and it takes 3x more GPU power to crack, your 180 ms is now 540 ms... WOW! Edit below.
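Rough math, if you want it. This is a back-of-the-envelope sketch only: the 1e10 guesses/sec rate and the character-set sizes are assumptions for illustration, not benchmarks of any particular card, and the absolute times depend entirely on the attack model. The point is the gap between the two numbers.

```python
# Back-of-the-envelope keyspace math, assuming an attacker testing guesses
# offline against a fast hash.  The 1e10 guesses/sec rate is an assumption
# for illustration, not a benchmark of any particular card.
GUESSES_PER_SEC = 1e10

def worst_case_seconds(alphabet_size: int, length: int) -> float:
    """Time to exhaust every password of this shape at the assumed rate."""
    return alphabet_size ** length / GUESSES_PER_SEC

# 10 chars from ~94 printable ASCII symbols vs. 40 plain lowercase letters.
short_complex = worst_case_seconds(94, 10)
long_simple = worst_case_seconds(26, 40)

# The takeaway is the ratio between these, not the absolute values.
print(f"10-char complex  : {short_complex:.2e} s")
print(f"40-char lowercase: {long_simple:.2e} s")
```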
Dig into XKCD's CorrectHorseBatteryStaple and understand why it's better, but in 2025 even that's weak. Use AI; ChatGPT is better than Gemini here. Gemini is Google and will tell you that 10-12 length plus complex characters is better, because that's what Google uses. The issue is that Google is serving 2 billion people (or whatever), and at that scale their policy works better than "Make a RANDOM string of words that you'll EASILY remember that's at least length 58", because people don't do random very well. They'll do something like "MommyDaddyKittyDoggyBrotherSister". Each of those words individually comes from a vast search space, and you'd think that's good, but it's bad because the words are all closely related to each other, so a dictionary attack will try those groupings long before it tries random ones.
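To put numbers on that, here's a rough entropy sketch. The 7,776 figure is the standard Diceware wordlist size; the 50-word "family/pet" pool is an assumption meant to illustrate how a targeted dictionary attack collapses the search space when the words are related.

```python
import math

# Rough entropy comparison, assuming the attacker knows the password is a
# word-based passphrase.  The 7,776 figure is the standard Diceware list
# size; the 50-word "family/pet" cluster is an assumed pool a targeted
# dictionary attack would try first.
def entropy_bits(choices: int, picks: int) -> float:
    """Bits of entropy when each pick is uniform and independent."""
    return picks * math.log2(choices)

print(f"6 truly random words     : {entropy_bits(7776, 6):.1f} bits")
print(f"6 related 'family' words : {entropy_bits(50, 6):.1f} bits")
print(f"10 random ASCII chars    : {entropy_bits(94, 10):.1f} bits")
```

Six genuinely random words land around 77 bits; six related words drop to roughly 34, which is why "MommyDaddyKittyDoggy..." is so much weaker than it looks.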
Edit: Because those complex passwords are hard for humans to remember, humans, careless as we are, tend to write them down and then lose that piece of paper. Or even worse, use things like "What was your first car?" and "Your high school mascot?" as security questions, then go on Twitter and answer a meme account that posts a meme asking those exact questions as a means of data harvesting.
DID I MENTION THAT PEOPLE ARE CARELESS?
Yeah, well, I wasn't terribly concerned about the security of my router when I set the username and password the first time. At least it's better than the default admin/12345.
Stupid question, but you mention twice that a GPU can crack it. Isn't that just a hunk of hardware that renders images to your screen? A video card? How is that going to crack passwords?
It's not exactly new, but things like CUDA allow you to run a program directly on a graphics card. You might wonder why that matters, but then think: what do graphics cards do best?
Well, the short version is that they do complex math fast and in parallel, which a CPU is not as good at. It means that math-intensive problems in this context run far faster on a GPU than on a CPU.
CUDA and Tensor cores on Nvidia are money printers because of this. It's why Nvidia is winning bigly versus AMD/ATI. Brute force is hard because it's A LOT of data to crunch, but the actual math is the SAME math, and GPUs are AMAZING at exactly that: doing the same math on tens of thousands, even millions, of items at once. And it's not just millions; it's tens of millions, and in ideal cases hundreds of millions per second.
"Short answer: Yes — and often far more than “hundreds of millions.” For fast hash algorithms (MD5, SHA‑1, NTLM, some plain SHA‑256 variants) a single high‑end GPU can test tens of millions → tens of billions of candidates per second depending on the algorithm, kernel tuning, and driver/CUDA/OpenCL backend. "
lol. Go ask AI, they'll answer better than I have time to.
Edit, because I found a bit of time, or whatever. Think of it this way: a GPU "has" to render 30, 60, 120, 240 or whatever frames per second. What is a frame? At a fundamental, base level it's 1920x1080 pixels for 1080p. 4K is hilarious because of how it scales: 4K is 3840x2160, or 8,294,400 pixels. To achieve 100 FPS at 4K you need to do at MINIMUM 829,440,000 calculations, and that's not even the whole story. Each pixel takes many dozens, even hundreds, of calculations.
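Spelled out as code. The ops-per-pixel number below is a made-up stand-in for "dozens to hundreds", not a measurement.

```python
# The pixel arithmetic from the post, spelled out.  The ops-per-pixel number
# is a stand-in for "dozens to hundreds", not a measurement.
width, height = 3840, 2160
fps = 100

pixels_per_frame = width * height              # 8,294,400
pixels_per_second = pixels_per_frame * fps     # 829,440,000

ops_per_pixel = 100                            # assumed: shading, lighting, etc.
ops_per_second = pixels_per_second * ops_per_pixel

print(f"{pixels_per_frame=:,}")
print(f"{pixels_per_second=:,}")
print(f"{ops_per_second=:,}")                  # ~83 billion per second
```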
GPUs thrive at that. As the other poster said, CPUs don't; a CPU can do more per core than a GPU can per core, but what does your CPU have, 4, 6, 8, 16 cores?
An RTX 5090 has 21,760 CUDA cores that operate at ~2.6 GHz. That's 21,760 * 2,600,000,000 = 56,576,000,000,000 (about 56 trillion) core-cycles per second.
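Same arithmetic as code. Note this counts raw clock-cycle slots across all cores, which is an upper bound, not a measured throughput.

```python
# The cores-times-clock arithmetic from the post.  This counts raw clock-cycle
# slots across all cores -- an upper bound, not a measured throughput, since
# real work depends on instructions per cycle, occupancy, and memory stalls.
cuda_cores = 21_760
clock_hz = 2_600_000_000               # ~2.6 GHz, as stated above

cycle_slots_per_second = cuda_cores * clock_hz
print(f"{cycle_slots_per_second:,}")   # 56,576,000,000,000 (~56 trillion)
```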
Ask AI. Moore's Law is over, but it's not over. Per-chip transistor scaling is done; we can't keep going smaller, but we can get more efficient.
You should look up GDDR7's bandwidth, because it will make you laugh. Then you need to realize that even that bandwidth isn't enough to satisfy the CUDA cores' thirst.
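Rough sketch of why: divide the memory bandwidth by the core-cycle count from above. The ~1.8 TB/s figure is the commonly quoted GDDR7 bandwidth for that card; treat it as an approximation.

```python
# Rough bandwidth-per-cycle ratio.  The ~1.8 TB/s figure is the commonly
# quoted GDDR7 bandwidth for the RTX 5090; treat it as an approximation.
bandwidth_bytes_per_sec = 1.8e12        # ~1.8 TB/s (approx.)
cuda_cores = 21_760
clock_hz = 2.6e9                        # ~2.6 GHz, as above

cycle_slots_per_sec = cuda_cores * clock_hz          # ~5.7e13
bytes_per_cycle_slot = bandwidth_bytes_per_sec / cycle_slots_per_sec

print(f"{bytes_per_cycle_slot:.3f} bytes of DRAM bandwidth per core-cycle")
# ~0.03 bytes per cycle slot: the cores lean on caches and on-chip reuse,
# because DRAM alone can't come close to feeding them.
```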
Here: