By admin, 29 May 2025

When restoring large SQL dumps from mariadb-dump or mysqldump, only a single thread is used, which can make restores very slow. One option is to use mydumper instead - it is a great tool, very fast, and supports advanced features such as incremental dumps.

Sometimes, however, you want to stick with the standard tools - or maybe all you have is the SQL dump file and you need to restore it quickly. In that case you can use the small script below to multi-thread the restore per table, i.e. one thread per table. If you only have one huge table this won't help - you'd need to modify the script to split the SQL dump into smaller chunks and multi-thread those instead.
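For the single-huge-table case, the chunk-splitting idea can be sketched like this. It's only a sketch: it assumes every INSERT statement sits on one line (which matches the default output of mariadb-dump/mysqldump), and a tiny synthetic dump is generated inline so it can be run as-is - in practice DUMP would be your real file.

```shell
# Sketch: split one table's INSERTs into fixed-size chunks for parallel import.
DUMP=$(mktemp)          # in practice this is your existing dump file
CHUNK_DIR=$(mktemp -d)

# Synthetic 10-row dump so the sketch is self-contained.
echo 'CREATE TABLE t (id INT);' > "$DUMP"
for i in $(seq 1 10); do echo "INSERT INTO t VALUES ($i);"; done >> "$DUMP"

# Separate the schema from the data, then split the data into chunks
# (50000 lines per chunk is a more realistic size; 3 here for the demo).
grep    '^INSERT INTO' "$DUMP" > "$CHUNK_DIR/inserts.sql"
grep -v '^INSERT INTO' "$DUMP" > "$CHUNK_DIR/schema.sql"
split -l 3 "$CHUNK_DIR/inserts.sql" "$CHUNK_DIR/chunk_"

# Load the schema once, then give each chunk its own client, e.g.:
#   mariadb -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" < "$CHUNK_DIR/schema.sql"
#   ls "$CHUNK_DIR"/chunk_* | parallel --jobs 4 \
#       "mariadb -u '$DB_USER' -p'$DB_PASS' '$DB_NAME' < {}"
ls "$CHUNK_DIR"/chunk_*
```

Because chunks from the same table insert concurrently, load speed will depend heavily on the table's indexes; dropping secondary indexes before the load and re-adding them afterwards usually helps.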

Create /opt/sqlrestore.sh with vim, paste in the script below and save. Then chmod +x the file and call it like this:

/opt/sqlrestore.sh /path/to/yourdump.sql {db_user} {db_pass} {db_name}

If, like us, you sometimes need to restore large SQL files with many tables, this gives a massive speed boost. Adjust the script to suit how many cores you want to use for the restore (--jobs 4 for 4 cores).
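If you'd rather size the job count to the machine than hard-code it, nproc from coreutils reports the number of available cores (a minimal sketch; the variable name is just an example):

```shell
# One restore job per available CPU core instead of a fixed --jobs 4.
JOBS=$(nproc)
echo "Using $JOBS parallel jobs"
# Then in the script: parallel --eta --jobs "$JOBS" ...
```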

#!/bin/bash
# Split a SQL dump into one file per table, then import the tables in parallel.
[ $# -ne 4 ] && { echo "Usage: $0 <sql_file> <db_user> <db_pass> <db_name>"; exit 1; }
INPUT_SQL="$1" DB_USER="$2" DB_PASS="$3" DB_NAME="$4"
[ ! -f "$INPUT_SQL" ] && { echo "Error: Input file $INPUT_SQL does not exist"; exit 1; }
command -v parallel >/dev/null 2>&1 || { echo "Error: GNU parallel not installed"; exit 1; }
TEMP_DIR=$(mktemp -d) || { echo "Error: Could not create temp dir"; exit 1; }
echo "Using temporary directory: $TEMP_DIR"
cleanup() { rm -rf "$TEMP_DIR"; }
trap cleanup EXIT INT TERM
echo "Splitting SQL dump..."
# Start a new per-table file at every "-- Table structure for table `x`"
# header emitted by mariadb-dump/mysqldump. Note: gensub() requires GNU awk,
# and anything before the first table header (e.g. SET statements) is skipped.
awk -v output_dir="$TEMP_DIR" '
BEGIN {
    current_file = "";
}
/^-- Table structure for table `/ {
    if (current_file != "")
        close(current_file);
    table_name = gensub(/.*`(.+)`/, "\\1", 1);
    current_file = output_dir "/table_" table_name ".sql";
    print "Creating file for table: " table_name > "/dev/stderr";
}
current_file != "" {
    print > current_file
}
END {
    if (current_file != "")
        close(current_file);
}' "$INPUT_SQL"

[ ! "$(ls -A "$TEMP_DIR")" ] && {
    echo "Error: No table files created";
    exit 1;
}
echo "Importing tables in parallel..."
# One mariadb client per table file; adjust --jobs to suit your core count.
ls "$TEMP_DIR"/*.sql | parallel --eta --jobs 4 "mariadb -u '$DB_USER' -p'$DB_PASS' '$DB_NAME' < {}"
[ $? -eq 0 ] && echo "All tables imported successfully." || { echo "Error: Some imports failed."; exit 1; }
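After a restore it's worth sanity-checking that every table made it across; comparing the number of table headers in the dump with the table count in the restored database is a quick way. A sketch - a tiny synthetic dump is built inline so it runs as-is, and the live-database side is left as a comment since it needs your credentials:

```shell
# Count the per-table headers that mariadb-dump/mysqldump writes.
DUMP=$(mktemp)   # in practice: your real dump file
printf -- '-- Table structure for table `users`\nCREATE TABLE users (id INT);\n' >> "$DUMP"
printf -- '-- Table structure for table `orders`\nCREATE TABLE orders (id INT);\n' >> "$DUMP"
EXPECTED=$(grep -c '^-- Table structure for table `' "$DUMP")
echo "$EXPECTED tables in dump"
# Compare against the restored database:
#   ACTUAL=$(mariadb -u "$DB_USER" -p"$DB_PASS" -N -e "SHOW TABLES" "$DB_NAME" | wc -l)
#   [ "$EXPECTED" -eq "$ACTUAL" ] || echo "Table count mismatch!"
```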

*Make sure you also have GNU parallel installed (on Debian/Ubuntu: apt install parallel).
