Will take a look, thank you.
I have a machine that sits on my network and acts as a server for media downloaded via podcasts and other retrieval methods. It does this with some command-line tools like podget, and serves it via plain SMB and a Squeezebox server.
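For context, the retrieval side is just podget run from cron. A rough sketch, with made-up paths, user, and feed URL (and I'm going from memory on the serverlist format, so treat that as an assumption):

    # /etc/cron.d/podget - fetch new episodes nightly (hypothetical schedule and user)
    0 4 * * * media podget

    # ~/.podget/serverlist - one feed per line: URL, category, name (format from memory)
    http://example.com/feed.rss podcasts Example_Feed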
Some of the podget feeds can get large quickly, as they're aggregate feeds, and you can wind up with terabytes of data in a single directory. I'll slap a large drive in a dock and start copying off what I want to keep for archival purposes.
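The copy itself is nothing fancy; something like this, with made-up paths (rsync, so it can just be re-run and pick up where it left off if it dies partway):

    # resumable archival copy to the docked drive (example paths)
    rsync -avh --progress /srv/podget/ /mnt/dock/archive/podget/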
I let the feed directories get larger than they should have because I didn't have a spare drive until recently. I started copying things off with the serving machine (after temporarily turning off new downloads) and noticed that things started to get funny after about 3TB had been copied. Copies would get really slow, CPU load would creep upward, and the job would eventually stall. This happened on both the serving machine (an Odroid C2) and an RPi 4 that I keep for YT-DL jobs.
Can't use my Win7 box because all the SBC stuff is formatted EXT4, but I have an old Dell Beige Box that I keep around for large jobs; it's an i5 with 16GB of RAM running Debian. Hooked everything up to that and it handled the job correctly: plain cp balked at the directories until I did a -r on the top level, at which point it worked fine.
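In other words (example paths again):

    # recursive copy of the whole tree from the top level (made-up paths)
    cp -r /srv/podget /mnt/dock/archive/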
Since the Dell box has better I/O (not everything running through the same little CPU or onboard USB hub), I have to assume the SBCs just get overloaded by the sheer number of files plus the USB I/O; the Odroid's load averages approached 15/15/15 before I killed the job, which on a quad-core A53 means several tasks waiting on every core.
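If you want to watch for the same thing during a big copy, the load averages are easy to keep an eye on:

    # print the 1/5/15-minute load averages every 5 seconds
    watch -n 5 uptime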
Just a thought to keep in mind if you have large file operations to perform. Single-board computers just don't seem suited to this kind of work, but then they weren't really designed for it either.