I can't recall the exact username/password combination for my network router, but I've narrowed it down to a list of about 81-90 possibilities.

I could just hit the factory reset and start over, but I'd rather avoid having to reconfigure my settings and all if I can put the [relatively] short list of permutations into a program or bot and have the problem solved quickly.
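
If the router's admin page uses HTTP Basic Auth, a short script can walk the candidate list. This is a minimal sketch only: the 192.168.1.1 address, the Basic Auth assumption, and the example credentials are placeholders, and many routers use a form- or token-based login instead, which would need a different request.

```python
# Try a short list of username/password candidates against a router
# admin page that uses HTTP Basic Auth. Assumes the router is at
# 192.168.1.1 (placeholder); adjust for your network.
import itertools
import requests

ROUTER_URL = "http://192.168.1.1/"   # placeholder address

usernames = ["admin", "Admin"]                    # your candidate usernames
passwords = ["hunter2", "Hunter2!", "hunter02"]   # your candidate passwords

for user, pw in itertools.product(usernames, passwords):
    try:
        resp = requests.get(ROUTER_URL, auth=(user, pw), timeout=5)
    except requests.RequestException as exc:
        print(f"request failed: {exc}")
        break
    if resp.status_code != 401:        # 401 means the credentials were rejected
        print(f"likely match: {user} / {pw} (HTTP {resp.status_code})")
        break
else:
    print("no combination accepted")
```

If the login is form-based rather than Basic Auth, you'd POST the fields the login form expects instead; the loop over candidates stays the same.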



[–] 1 pt (edited)

Edit because I found a bit of time, or whatever. Think of it this way: a GPU "has" to render 30, 60, 120, 240 or whatever frames per second. What is a frame? At a fundamental, base level it's 1920x1080 pixels for 1080p, about 2 million pixels. 4K is hilarious because of how it scales: 4K is 3840x2160, or 8,294,400 pixels. To achieve 100 FPS at 4K you need to do at MINIMUM 829,440,000 calculations per second, and that's not all. Each pixel takes many dozens, even hundreds, of calculations.
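
A quick back-of-the-envelope check of those numbers as a minimal Python sketch; the per-pixel operation count is just an illustrative guess, not a measured figure.

```python
# Rough pixel-throughput arithmetic for 1080p vs 4K.
def pixels_per_second(width, height, fps):
    return width * height * fps

p1080 = pixels_per_second(1920, 1080, 100)   # 1080p at 100 FPS
p4k   = pixels_per_second(3840, 2160, 100)   # 4K at 100 FPS

print(f"1080p @ 100 FPS: {p1080:,} pixels/s")   # 207,360,000
print(f"4K    @ 100 FPS: {p4k:,} pixels/s")     # 829,440,000

# Each pixel needs many shader operations; 100 per pixel is an
# illustrative guess, not a benchmark.
ops_per_pixel = 100
print(f"~{p4k * ops_per_pixel:,} operations/s at 4K")   # ~82.9 billion
```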

GPUs thrive at that. As what's-his-face said, CPUs don't: a CPU can do more per core than a GPU can per core, but what does your CPU have, 4, 6, 8, 16 cores?

An RTX 5090 has 21,760 CUDA cores that operate at ~2.6 GHz. That's 21,760 * 2,600,000,000 = 56,576,000,000,000 core-cycles per second (about 56 trillion).
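
The same arithmetic spelled out; the core count and clock speed are the figures quoted in the comment above, not verified specs.

```python
# Aggregate clock cycles per second across all CUDA cores,
# using the figures quoted above (21,760 cores at ~2.6 GHz).
cuda_cores = 21_760
clock_hz = 2_600_000_000           # ~2.6 GHz

cycles_per_second = cuda_cores * clock_hz
print(f"{cycles_per_second:,} core-cycles/s")   # 56,576,000,000,000 (~56.6 trillion)
```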

Ask an AI about it. Moore's Law is over, but it's not over: per chip it's done, we can't keep going smaller, but we can get more efficient.

You should look up GDDR7's bandwidth, because it will make you laugh. Then you need to realize that even that bandwidth isn't enough to satisfy the CUDA cores' thirst.
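
To make that point concrete, here is a rough sketch; the ~1.8 TB/s bandwidth figure is an assumed ballpark for a GDDR7 card, used only for illustration, and the core/clock numbers are the ones from above.

```python
# How many bytes of memory bandwidth each core-cycle actually gets.
# Bandwidth figure (~1.8 TB/s) is an assumed ballpark, not a quoted spec.
memory_bandwidth = 1.8e12                 # bytes/s (assumed)
core_cycles = 21_760 * 2.6e9              # ~56.6e12 cycles/s, from above

bytes_per_core_cycle = memory_bandwidth / core_cycles
print(f"{bytes_per_core_cycle:.3f} bytes per core-cycle")   # ~0.032
# Far less than one byte per cycle per core: without caches and data
# reuse, the cores would starve waiting on memory.
```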

Here:

[–] 1 pt

The part that was really throwing me is that I did not know there was software that could directly interface with and run on your GPU. A program. That is very interesting. Your info is also very interesting; I did not know that GPUs were THAT fast. Amazing how far this has all progressed in 40 years.

[–] 1 pt

Go look up the development of CUDA. I forget if it was a video or an article, but it's funny how the ley lines lined up for Nvidia; they just won bigly on a bet back in the early 2000s.

Here, start with this.

The individual you're referring to is likely Alex Krizhevsky, a Canadian computer scientist who, in 2012, revolutionized the field of computer vision by using GPUs to train a deep learning model that outperformed traditional methods in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC).

Krizhevsky, along with Ilya Sutskever and their advisor Geoffrey Hinton at the University of Toronto, developed AlexNet, a deep convolutional neural network (CNN) that achieved a significant breakthrough in image classification. Instead of relying on handcrafted features, AlexNet utilized a GPU-accelerated approach, training on two NVIDIA GTX 580 graphics cards. This method dramatically reduced training time and enabled the model to learn complex patterns directly from raw pixel data.

The success of AlexNet not only won the 2012 ILSVRC by a substantial margin but also demonstrated the power of GPUs in deep learning, leading to widespread adoption of GPU-based training in AI research. This shift played a pivotal role in the subsequent explosion of interest and advancements in artificial intelligence.

[–] 0 pt

Wow, I had no idea. That is very cool. I'll have to do more reading, thanks again.