The question is as relevant as ever; most people end up at questions like this because they suffer from the single-threaded design of mysql and mysqldump.
If you have millions or billions of rows, exporting can take days (or even weeks), so you often end up exporting only part of the data instead.
A quick hack is to export the data in portions; this works best if you have a numeric key (such as an auto-increment id).
Below is a Linux/Unix example that exports a table roughly 20-100 times faster than normal.
Assume column "id" runs from 1 to 10000000.
Assume the CPU has 16 threads.
Assume the disk is an SSD or NVMe drive.
seq 0 999 | xargs -P16 -I{} mysqldump -h localhost --password=PASSWORD --single-transaction DATABASE TABLE --where="id > {}*10000 AND id <= ({}+1)*10000" -r output.{}
The above runs 16 dump processes in parallel, roughly cutting the export time to about 1/10 of normal. It produces one output file per chunk (output.0, output.1, ...), and those files can in turn be loaded in parallel, which speeds up the import by up to 10 times as well.
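For the import side, a minimal sketch along the same lines (assuming the target table already exists and the chunk files were dumped with --no-create-info so they contain only INSERT statements; otherwise each file's DROP/CREATE TABLE would wipe the chunks loaded before it):

ls output.* | xargs -P16 -I{} sh -c 'mysql -h localhost --password=PASSWORD DATABASE < "$1"' _ {}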
On a strong server I use up to 150 parallel threads; the right number depends on the type of disk and CPU you are running on.
Refined a bit, this method can cut a 1-week export or import down to a few hours.
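As a sketch of one such refinement (the parameter names, the schema.sql file, and the --no-create-info/--no-data handling are my additions, not part of the original one-liner; adjust to your setup):

#!/bin/sh
# Dump the table definition once, then dump the rows in parallel chunks.
THREADS=16        # number of parallel mysqldump processes; tune for your CPU and disk
CHUNK=10000       # rows per chunk
MAX_ID=10000000   # highest id in the table

# Schema only, so the chunk files can stay data-only.
mysqldump -h localhost --password=PASSWORD --no-data DATABASE TABLE -r schema.sql

# Data in parallel chunks; --no-create-info keeps DROP/CREATE TABLE out of the chunk files.
seq 0 $(( MAX_ID / CHUNK - 1 )) | xargs -P"$THREADS" -I{} \
  mysqldump -h localhost --password=PASSWORD --single-transaction --no-create-info \
    DATABASE TABLE --where="id > {}*$CHUNK AND id <= ({}+1)*$CHUNK" -r output.{}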
When doing this over a network, --compress helps a lot. Generating INSERT IGNORE statements (mysqldump's --insert-ignore) helps with the duplicate-key and index errors that are hard to avoid entirely on large data sets, and loading with 'mysql -f' (force) keeps the import running instead of stopping on such errors.
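For illustration, a single chunk dumped over the network with those options added might look like this (db.example.com is a placeholder host); on the import side, simply swap mysql for mysql -f in the loading loop above:

mysqldump -h db.example.com --password=PASSWORD --compress --insert-ignore --single-transaction DATABASE TABLE --where="id > 0 AND id <= 10000" -r output.0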
P.S. Never use the MySQL options that add indexes and keys at the end of the load on large tables.
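If that refers to mysqldump's default --disable-keys behaviour (which, for MyISAM tables, defers non-unique index rebuilding until after the data has been inserted), it can be switched off per dump; a sketch, under that assumption:

mysqldump -h localhost --password=PASSWORD --single-transaction --skip-disable-keys DATABASE TABLE -r output.sql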