I can't recall my exact username/password variation combo for my network router, but I've narrowed it down to a list of about 81-90 possibilities.

I could just hit the factory reset and start over, but I'd rather avoid having to reconfigure my settings and all if I can put the [relatively] short list of permutations into a program or bot and have the problem solved quickly.
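
If the router's admin page happens to use plain HTTP Basic Auth (many do, though plenty use a login form instead), a minimal Python sketch along these lines could walk the candidate list automatically. The address, the example credentials, and the auth scheme are all assumptions to swap out for your own setup.

```python
# Minimal sketch: try a short list of known candidate logins against your own
# router's admin page. Assumes HTTP Basic Auth at a hypothetical 192.168.1.1;
# many routers use a web form or digest auth instead, so adjust accordingly.
import time
import requests

ROUTER_URL = "http://192.168.1.1/"      # hypothetical admin address
CANDIDATES = [                          # your ~80-90 remembered variations
    ("admin", "hunter2"),
    ("admin", "Hunter2!"),
    # ...
]

for user, password in CANDIDATES:
    resp = requests.get(ROUTER_URL, auth=(user, password), timeout=5)
    if resp.status_code != 401:         # 401 = credentials rejected
        print(f"Likely match: {user} / {password} (HTTP {resp.status_code})")
        break
    time.sleep(1)                       # be gentle; some routers lock out rapid retries
else:
    print("No candidate worked; time for the factory reset.")
```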



[–] 1 pt

Stupid question, but you mention twice that a GPU can crack it. Isn't that just a hunk of hardware that renders images to your screen? A video card? How is that going to crack passwords?

[–] 1 pt

It's not exactly new, but things like CUDA let you run a program directly on a graphics card. You might wonder why that matters, but then think: what do graphics cards do best?

Well, the short version is that they do a huge amount of math fast and in parallel, where a CPU is not as good at that. It means that math-intensive problems in this context are far faster to run on a GPU than on a CPU.
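
To make "run a program directly on a graphics card" concrete, here's a minimal sketch using Numba's CUDA support (assuming an NVIDIA GPU, CUDA drivers, and the numba package are installed). Each array element gets its own GPU thread, which is the same shape of parallelism password crackers exploit.

```python
# Minimal CUDA-in-Python sketch (requires an NVIDIA GPU, CUDA drivers, and numba).
import numpy as np
from numba import cuda

@cuda.jit
def square_all(arr):
    i = cuda.grid(1)            # this thread's global index
    if i < arr.size:
        arr[i] = arr[i] * arr[i]

data = np.arange(1_000_000, dtype=np.float32)
d_data = cuda.to_device(data)   # copy the array to GPU memory

threads_per_block = 256
blocks = (data.size + threads_per_block - 1) // threads_per_block
square_all[blocks, threads_per_block](d_data)   # one GPU thread per element

result = d_data.copy_to_host()
print(result[:5])               # [ 0.  1.  4.  9. 16.]
```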

[–] 1 pt

That is pretty interesting, I did not know about that. It's funny, back in the day I was really into the whole computer scene. MCSE, Novell Netware certified, the whole 9. Now I don't know jack about shit. I got out of that whole scene after the dot com crash and never went back into it.

[–] 1 pt

CUDA and Tensor cores on Nvidia are money printers because of this. It's why Nvidia is winning bigly versus ATI. Brute force is hard because it's A LOT of data to crunch, but the actual math is the SAME math, and GPUs are AMAZING at exactly that: doing the same math on tens of thousands, even millions of objects at once (matrix math being the classic case). And it's not just millions per second; it's tens of millions, and in ideal cases hundreds of millions.

"Short answer: Yes — and often far more than “hundreds of millions.” For fast hash algorithms (MD5, SHA‑1, NTLM, some plain SHA‑256 variants) a single high‑end GPU can test tens of millions → tens of billions of candidates per second depending on the algorithm, kernel tuning, and driver/CUDA/OpenCL backend. "

lol. Go ask AI, they'll answer better than I have time to.
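
To make the "same math, different data" point concrete, here's a rough CPU-side sketch: every candidate goes through an identical hash computation, which is exactly the kind of work a GPU cracker spreads across thousands of threads. The MD5 target and the tiny wordlist here are made up for illustration.

```python
# CPU-side sketch of why brute force maps so well to GPUs: every candidate goes
# through the exact same computation (MD5 here), only the input bytes differ.
# A GPU cracker does this same per-candidate work with one thread per guess.
import hashlib

target = hashlib.md5(b"letmein").hexdigest()   # pretend this is the hash to crack

candidates = ["password", "123456", "letmein", "qwerty"]   # made-up wordlist

for guess in candidates:
    if hashlib.md5(guess.encode()).hexdigest() == target:  # same math, different data
        print("cracked:", guess)
        break
```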

[–] 1 pt

I like your answer just fine, thank you.

[–] 1 pt

Thanks. I type like I'm manic, probably because I am and because my keyboard is a 20-year-old Logitech G15 that I refuse to get rid of, even though the W, E, space, and left Ctrl keys are moderately fubared.

[–] 1 pt (edited)

Edit because I found a bit of time, or whatever. Think of it this way: a GPU "has" to render 30, 60, 120, 240, or whatever frames per second. What is a frame? At a fundamental, base level it's 1920*1080 pixels for 1080p. 4K is hilarious because of how it scales: 4K is 3840x2160, or 8,294,400 pixels. To achieve 100 FPS at 4K you need to do at MINIMUM 829,440,000 calculations, and that's not even the whole story, because each pixel takes many dozens, if not hundreds, of calculations.

GPUs thrive at that; as what's-his-face said, CPUs don't. A CPU can do more per core than a GPU can per core, but how many cores does your CPU have, 4, 6, 8, 16?

An RTX 5090 has 21,760 CUDA cores that operate at ~2.6 GHz. That's 21,760 * 2,600,000,000 = 56,576,000,000,000 (about 56 trillion) core-cycles per second.
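
Running those numbers quickly in Python, just to show the scale (core count and clock as quoted above, so treat them as ballpark figures):

```python
# Back-of-the-envelope numbers from the figures quoted above (ballpark only).
pixels_4k = 3840 * 2160                      # 8,294,400 pixels per frame
per_second_at_100fps = pixels_4k * 100       # 829,440,000 pixel updates/s minimum

cuda_cores = 21_760                          # RTX 5090 CUDA core count as quoted
clock_hz = 2.6e9                             # ~2.6 GHz clock
cycles_per_second = cuda_cores * clock_hz    # ~5.66e13, i.e. ~56 trillion

print(f"{per_second_at_100fps:,} pixel updates/s at 4K, 100 FPS")
print(f"{cycles_per_second:,.0f} core-cycles/s across the whole GPU")
```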

Ask AI. Moore's Law is over, but it's also not over: per-transistor scaling is done, we can't keep going much smaller, but we can keep getting more efficient.

You should look up GDDR7's bandwidth, because it will make you laugh. Then you need to realize that even that bandwidth isn't enough to satisfy the CUDA cores' thirst.
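
To put rough numbers on that (the ~1.8 TB/s figure is approximately the published GDDR7 bandwidth for that card; treat everything here as back-of-the-envelope):

```python
# Rough illustration of why memory bandwidth can't keep every core fed
# (figures approximate; ~1.8 TB/s is roughly the published GDDR7 bandwidth).
cuda_cores = 21_760
clock_hz = 2.6e9
bytes_per_operand = 4                        # one 32-bit float

demand = cuda_cores * clock_hz * bytes_per_operand   # if every core read a new float each cycle
supply = 1.8e12                                      # ~1.8 TB/s of GDDR7 bandwidth

print(f"naive demand: {demand / 1e12:.0f} TB/s vs supply: {supply / 1e12:.1f} TB/s")
print(f"shortfall: ~{demand / supply:.0f}x, which is why caches and data reuse matter")
```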

Here:

[–] 1 pt

The part that was really throwing me is that I did not know there was software that could directly interface with and run on your GPU. An actual program. That is very interesting. Your info is also very interesting; I did not know that GPUs were THAT fast. Amazing how far this has all progressed in 40 years.

[–] 1 pt

Go look up the development of CUDA. I forget if it was a video or an article, but it's funny how the leylines lined up for Nvidia; they just won bigly on a bet back in the early 2000s.

Here, start with this.

The individual you're referring to is likely Alex Krizhevsky, a Canadian computer scientist who, in 2012, revolutionized the field of computer vision by using GPUs to train a deep learning model that outperformed traditional methods in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC).

Krizhevsky, along with Ilya Sutskever and their advisor Geoffrey Hinton at the University of Toronto, developed AlexNet, a deep convolutional neural network (CNN) that achieved a significant breakthrough in image classification. Instead of relying on handcrafted features, AlexNet utilized a GPU-accelerated approach, training on two NVIDIA GTX 580 graphics cards. This method dramatically reduced training time and enabled the model to learn complex patterns directly from raw pixel data.

The success of AlexNet not only won the 2012 ILSVRC by a substantial margin but also demonstrated the power of GPUs in deep learning, leading to widespread adoption of GPU-based training in AI research. This shift played a pivotal role in the subsequent explosion of interest and advancements in artificial intelligence.