Depends on the distro and desktop environment, but some will "transfer" files to a software buffer that doesn't actually write the data immediately. That works fine for limiting unnecessary writes to flash memory in general, but not for USB sticks, which are designed to be inserted and removed at short notice.
You can force Linux to commit pending writes using the 'sync' command. Note that it won't give you any feedback until the operation is finished (which can be multiple minutes for a thumbdrive writing GBs of data), so append & to the command ('sync &') to start it as its own process and avoid locking up the terminal.
You can also watch the progress using the command from this Linux Stack Exchange question:
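If it's the one I'm thinking of, it boils down to watching the kernel's dirty/writeback counters until they drop back to (near) zero:

sync &   # flush pending writes in the background, as above
watch -d grep -e Dirty: -e Writeback: /proc/meminfo   # refreshes every 2s and highlights what changed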
As I've already mentioned, sync does absolutely nothing. The copy took so long that the sync command exited 4 times while the files were still transferring and nowhere near finished. Regarding the watch -d grep -e Dirty: -e Writeback: /proc/meminfo command, I didn't mention it in this thread, but I did try it, and yes, there was almost 900,000 kB of data in the "Dirty" buffer that went up and down constantly even after I disabled the caching.
Silly question perhaps, but are you sure you're using the correct port on your Linux system? If I plug my external HD into a USB2 port, I'm stuck at 30-40MB/sec, while on a USB3 port I get ~150-180MB/sec. That's proportionally similar to the difference you described so I wonder if that's the culprit.
You can verify this in a few different ways. From a terminal, if you run lsusb, you'll see a list of all your USB hubs and devices.
It should look something like this:
Bus 002 Device 001: ID xxxx:yyyy Linux Foundation 3.0 root hub
Bus 002 Device 002: ID xxxx:yyyy <HDD device name>
Bus 003 Device 001: ID xxxx:yyyy Linux Foundation 2.0 root hub
Bus 004 Device 001: ID xxxx:yyyy Linux Foundation 3.0 root hub
So you can see three hubs, one of which is 2.0 and the other two are 3.0. The HDD is on bus 002, which we can see is a USB 3.0 hub by looking at the description of Bus 002 Device 001. That's good.
If you see it on a 2.0 bus, or on a bus with many other devices on it, that's bad and you should re-organize your USB devices so your low-speed peripherals (mouse, keyboard, etc.) are on a USB2 bus and only high-speed devices are on the USB3 bus.
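If you want to see the negotiated speed directly, lsusb -t prints the bus tree with each device's speed (480M means USB 2.0, 5000M or 10000M means USB 3.x). The exact output will of course look different on your machine, but roughly:

lsusb -t
/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 5000M
    |__ Port 2: Dev 2, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
/:  Bus 03.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/12p, 480M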
You can also consult your motherboard's manual, or just look at the colors of your USB ports. By convention, black or white ports are USB 2.0 or older, blue ports are 3.0, and teal or red ports are usually 3.1/3.2 (though not every manufacturer follows this).
If you're running KDE, you can also view these details in the GUI with kinfocenter. Not sure what the Gnome equivalent is.
It's on Bus 004 Device 004, and it's USB 3.0 as it should be.
Also, I've disabled caching, and I'm now copying 6 video files at just 15 MB/s (and it's slowing down; by the time I went to take a screenshot for this post it had dropped again). And it's still quite a bit slower than on Windows.
I'm using a USB 3 10 Gbps port on my Linux system. The USB stick is USB 3.1, and the Windows PC only has USB 2.0, so it should be the slowest, but it's actually several times faster.
Except Linux waits to update the UI until all write buffers are flushed, whereas Windows does not.
I wish that were true here. But when I copy movies to a USB stick, the file manager (XFCE/Thunar) shows the copy as finished and closes the copy notification way before it's even half done.
I use a fast USB 3 stick in a USB 3 port, and I don't get anywhere near the write speed the stick's manufacturer claims. So I always open a terminal and run sync to see when it's actually finished.
I absolutely hate it when systems don't account for the write cache before claiming a copy is finished. It's such an ancient problem, one we've had since the 90s, and I find it embarrassing that it still exists on modern systems.
That's nice, but I managed to copy 300 GB worth of data from the Windows PC to my Linux PC in around 3 hours to make a backup while I reinstall the system, and now I've been stuck for half a day copying the data back to the old Windows PC and haven't even finished 100 GB yet... I noticed this issue long ago, but I ignored it because I never really had to copy this much data. Now it's just infuriating.
One thing I ran into, though it was a while ago, was that having disk caching on would trash write performance on removable media for me.
The issue ended up being that the kernel would keep flushing the cache to disk, and while it was doing that, none of your transfers were happening. So it'd end up doubling (or more) the copy time, because the write cache wasn't actually helping removable drives.
It might be worth remounting without any caching, if it's on, and seeing if that fixes the mess.
But, as I said, this was a few years ago, so it may no longer be the case.
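If you want to try it, here's a rough sketch; I'm assuming the stick is /dev/sdb1 mounted at /media/usb, so adjust to your actual device and mount point:

sudo mount -o remount,sync /media/usb    # synchronous writes, i.e. no write cache
# or, unmount and mount it fresh with the option:
sudo umount /media/usb
sudo mount -o sync /dev/sdb1 /media/usb

Another knob is capping how much dirty data the kernel will buffer system-wide, so writeback starts early and progress bars stay closer to reality:

sudo sysctl vm.dirty_bytes=50331648              # ~48 MB of dirty pages max
sudo sysctl vm.dirty_background_bytes=16777216   # start background writeback at ~16 MB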
Random peripherals get tested against Windows a lot more than against Linux, and there are quirks which get worked around.
I would suggest an external SSD for any drive over 32GB. Flash drives are kind of junk in general, and the external SSDs have better controllers and thermals.
Out of curiosity, was the drive reformatted between runs, and was a Linux native FS tried on the flash drive?
The Linux native FS doesn’t help migrate the files between Windows and Linux, but it would be interesting to see exFAT or NTFS vs XFS/ext4/F2FS.
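If anyone does try it, reformatting is just the usual mkfs call; /dev/sdX1 below is a placeholder for the stick's partition, so double-check with lsblk first, because this wipes it:

sudo mkfs.ext4 /dev/sdX1    # ext4
sudo mkfs.f2fs /dev/sdX1    # F2FS, needs f2fs-tools installed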
That's just the state of things. I have experienced this as well, trying to copy a 160 GB USB stick to another one (my old iTunes library). Windows manages fine, but neither Linux nor macOS handle it properly. They crawl, and in macOS's case it gets much slower as time goes by; I had to stop the transfer. Overall, it's how these things are implemented. It's OK for a few gigabytes, but not for many small files (e.g. 3-5 MB each) in many subfolders, many GBs overall. It seems to me that some cache is overfilling, while Windows is more diligent about clearing that cache in time, before things slow to a crawl. Just a weak implementation on both Linux and macOS IMHO, and while I'm a full-time Linux user, I'm not afraid to say it how I experienced it under Debian and Ubuntu.
I'm copying 6 video files that are 40 GB total, it's been over 3 hours now and it's still not finished, so it's not just a matter of lots of small files. It's just slow as hell in general. Yes, the USB stick is 3.0, connected to a 3.0 port, and I verified it's actually running on a 3.0 bus. No, it's not the fault of the USB drive, since this takes around 30 minutes over USB 2.0 on Windows 10. Yes, I've tried FAT32, exFAT and NTFS... I couldn't care less about ext4 for this particular use case, so it's not relevant, and I haven't tried it yet because I'm still stuck copying. Not sure what rsync does differently; I just use a standard Ctrl+C/Ctrl+V copy/paste that I expect to work flawlessly in 2025. No idea why I would want to use the command line for copying files to a USB drive. This seems to be an ongoing problem for over 10 years from what I've seen while trying to find a solution. I've found none that worked yet, mostly just the same comments I'm getting here.
Very strange... So it sounds like you're using whatever the default file manager is for your desktop; there really isn't any reason the filesystem type should make things that much slower. Something must be very different about your system to be slowing transfers down that much.
I would use iotop to see how much data is being written, where, and at what speed; if you prefer a graphical version of that, maybe "System Monitor" is available to you in GNOME or whatever desktop you use. You've probably already tested other drives, I guess; maybe try booting a fresh live USB of something and see if the problem persists there too.
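Roughly like this, assuming iotop is installed (it needs root):

sudo iotop -o      # only show processes actually doing I/O right now
sudo iotop -o -a   # same, but accumulate totals since iotop started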
Personally, I have never experienced problems while reading from USB sticks, but I have while writing. I have a 15+ year old USB 2 stick and a new USB 3.x stick. The USB 2 stick writes at a constant ~20 MB/s, while the USB 3 one is all over the place, anywhere between 200 MB/s and ~0.1 MB/s. Unusable for me. For a while I used external HDDs and SSDs over USB 3, as they somehow run without problems, but they are cumbersome and expensive.
Therefore I have switched to transferring files over the network (for large files I plug in Ethernet) using KDE Connect. Unfortunately it can't send folders (yet), so I .tar them before sending and untar them after.
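In case it's useful to anyone, that's just the plain tar round trip (the paths are obviously whatever you're sending):

tar -cf videos.tar /path/to/videos    # pack the folder into a single file
# send videos.tar with KDE Connect, then on the receiving machine:
tar -xf videos.tar                    # unpack it again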
LocalSend would also be an option. Maybe that can do folders natively.
I'm aware of these programs, but they are just a way around the problem and not a solution. Besides, they have their own limitations... I can't use KDE Connect because it doesn't work on my network: I run 3 routers in one network, and both devices would have to be connected to the same router, which isn't possible because of the very reason I need to run 3 routers.
Oh boy, yes, I remember I used to disable that shit ages ago in fstab. It's quite annoying that that's still necessary!!!
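For reference, it's just a mount option on the fstab entry; something along these lines for a FAT-formatted stick, with /dev/sdb1 and /mnt/usb as placeholders for your actual device and mount point:

# /etc/fstab
# 'flush' makes vfat write dirty data out promptly; 'sync' would disable write caching entirely (slower, but the progress bar is honest)
/dev/sdb1  /mnt/usb  vfat  noauto,user,flush  0  0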
How do you do that for USB sticks?