The mysqldump client is a backup program originally written by Igor Romanenko. It can be used to dump a database or a collection of databases for backup or transfer to another SQL server (not necessarily a MySQL server). The dump typically contains SQL statements to create the table, populate it, or both. However, mysqldump can also be used to generate files in CSV, other delimited text, or XML format.
If you are doing a backup on the server and all your tables are MyISAM tables, consider using mysqlhotcopy instead, because it can accomplish faster backups and faster restores. See Section 8.14, “mysqlhotcopy — A Database Backup Program”.
There are three general ways to invoke mysqldump:
shell> mysqldump [options] db_name [tables]
shell> mysqldump [options] --databases db_name1 [db_name2 db_name3 ...]
shell> mysqldump [options] --all-databases
If you do not name any tables following db_name or if you use the --databases or --all-databases option, entire databases are dumped.
To get a list of the options your version of mysqldump supports, execute mysqldump --help.
Some mysqldump options are shorthand for groups of other options. --opt and --compact fall into this category. For example, use of --opt is the same as specifying --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset. Note that as of MySQL 5.1, all of the options that --opt stands for also are on by default because --opt is on by default.
To reverse the effect of a group option, use its --skip-xxx form (--skip-opt or --skip-compact). It is also possible to select only part of the effect of a group option by following it with options that enable or disable specific features. Here are some examples:
To select the effect of --opt except for some features, use the --skip option for each feature. For example, to disable extended inserts and memory buffering, use --opt --skip-extended-insert --skip-quick. (As of MySQL 5.1, --skip-extended-insert --skip-quick is sufficient because --opt is on by default.)
To reverse --opt for all features except index disabling and table locking, use --skip-opt --disable-keys --lock-tables.
When you selectively enable or disable the effect of a group option, order is important because options are processed first to last. For example, --disable-keys --lock-tables --skip-opt would not have the intended effect; it is the same as --skip-opt by itself.
mysqldump can retrieve and dump table contents row by row, or it can retrieve the entire content from a table and buffer it in memory before dumping it. Buffering in memory can be a problem if you are dumping large tables. To dump tables row by row, use the --quick option (or --opt, which enables --quick). --opt (and hence --quick) is enabled by default as of MySQL 5.1; to enable memory buffering, use --skip-quick.
If you are using a recent version of mysqldump to generate a dump to be reloaded into a very old MySQL server, you should not use the --opt or --extended-insert option. Use --skip-opt instead.
mysqldump supports the following options:
--help, -?

Display a help message and exit.
--add-drop-database

Add a DROP DATABASE statement before each CREATE DATABASE statement.
--add-drop-table

Add a DROP TABLE statement before each CREATE TABLE statement.
--add-locks

Surround each table dump with LOCK TABLES and UNLOCK TABLES statements. This results in faster inserts when the dump file is reloaded. See Section 7.2.17, “Speed of INSERT Statements”.
--all-databases, -A

Dump all tables in all databases. This is the same as using the --databases option and naming all the databases on the command line.
--all-tablespaces, -Y

Adds to a table dump all SQL statements needed to create any tablespaces used by an NDB Cluster table. This information is not otherwise included in the output from mysqldump. This option is currently relevant only to MySQL Cluster tables.

This option was added in MySQL 5.1.6.
--allow-keywords

Allow creation of column names that are keywords. This works by prefixing each column name with the table name.
--character-sets-dir=path

The directory where character sets are installed. See Section 5.10.1, “The Character Set Used for Data and Sorting”.
--comments, -i

Write additional information in the dump file such as program version, server version, and host. This option is enabled by default. To suppress this additional information, use --skip-comments.
--compact

Produce less verbose output. This option suppresses comments and enables the --skip-add-drop-table, --skip-set-charset, --skip-disable-keys, and --skip-add-locks options.
--compatible=name

Produce output that is more compatible with other database systems or with older MySQL servers. The value of name can be ansi, mysql323, mysql40, postgresql, oracle, mssql, db2, maxdb, no_key_options, no_table_options, or no_field_options. To use several values, separate them by commas. These values have the same meaning as the corresponding options for setting the server SQL mode. See Section 5.2.6, “SQL Modes”.
This option does not guarantee compatibility with other servers. It only enables those SQL mode values that are currently available for making dump output more compatible. For example, --compatible=oracle does not map data types to Oracle types or use Oracle comment syntax.
--complete-insert, -c

Use complete INSERT statements that include column names.
--compress, -C

Compress all information sent between the client and the server if both support compression.
--create-options

Include all MySQL-specific table options in the CREATE TABLE statements.
--databases, -B

Dump several databases. Normally, mysqldump treats the first name argument on the command line as a database name and following names as table names. With this option, it treats all name arguments as database names. CREATE DATABASE and USE statements are included in the output before each new database.
--debug[=debug_options], -# [debug_options]

Write a debugging log. The debug_options string is often 'd:t:o,file_name'. The default value is 'd:t:o,/tmp/mysqldump.trace'.
--default-character-set=charset_name

Use charset_name as the default character set. See Section 5.10.1, “The Character Set Used for Data and Sorting”. If no character set is specified, mysqldump uses utf8.
--delayed-insert

Write INSERT DELAYED statements rather than INSERT statements.
--delete-master-logs

On a master replication server, delete the binary logs after performing the dump operation. This option automatically enables --master-data.
--disable-keys, -K

For each table, surround the INSERT statements with /*!40000 ALTER TABLE tbl_name DISABLE KEYS */; and /*!40000 ALTER TABLE tbl_name ENABLE KEYS */; statements. This makes loading the dump file faster because the indexes are created after all rows are inserted. This option is effective for MyISAM tables only.
--events, -E

Dump events from the dumped databases. This option was added in MySQL 5.1.8.
--extended-insert, -e

Use multiple-row INSERT syntax that includes several VALUES lists. This results in a smaller dump file and speeds up inserts when the file is reloaded.
--fields-terminated-by=..., --fields-enclosed-by=..., --fields-optionally-enclosed-by=..., --fields-escaped-by=...

These options are used with the -T option and have the same meaning as the corresponding clauses for LOAD DATA INFILE. See Section 13.2.5, “LOAD DATA INFILE Syntax”.
--first-slave

Deprecated. Now renamed to --lock-all-tables.
--flush-logs, -F

Flush the MySQL server log files before starting the dump. This option requires the RELOAD privilege. Note that if you use this option in combination with the --all-databases (or -A) option, the logs are flushed for each database dumped. The exception is when using --lock-all-tables or --master-data: In this case, the logs are flushed only once, corresponding to the moment that all tables are locked. If you want your dump and the log flush to happen at exactly the same moment, you should use --flush-logs together with either --lock-all-tables or --master-data.
--flush-privileges

Emit a FLUSH PRIVILEGES statement after dumping the mysql database. This option should be used any time the dump contains the mysql database and any other database that depends on the data in the mysql database for proper restoration. This option was added in MySQL 5.1.12.
--force, -f

Continue even if an SQL error occurs during a table dump.

One use for this option is to cause mysqldump to continue executing even when it encounters a view that has become invalid because the definition refers to a table that has been dropped. Without --force, mysqldump exits with an error message. With --force, mysqldump prints the error message, but it also writes an SQL comment containing the view definition to the dump output and continues executing.
--host=host_name, -h host_name

Dump data from the MySQL server on the given host. The default host is localhost.
--hex-blob

Dump binary columns using hexadecimal notation (for example, 'abc' becomes 0x616263). The affected data types are BINARY, VARBINARY, BLOB, and BIT.
--ignore-table=db_name.tbl_name

Do not dump the given table, which must be specified using both the database and table names. To ignore multiple tables, use this option multiple times.
--insert-ignore

Write INSERT statements with the IGNORE option.
--lines-terminated-by=...

This option is used with the -T option and has the same meaning as the corresponding clause for LOAD DATA INFILE. See Section 13.2.5, “LOAD DATA INFILE Syntax”.
--lock-all-tables, -x

Lock all tables across all databases. This is achieved by acquiring a global read lock for the duration of the whole dump. This option automatically turns off --single-transaction and --lock-tables.
--lock-tables, -l

Lock all tables before dumping them. The tables are locked with READ LOCAL to allow concurrent inserts in the case of MyISAM tables. For transactional tables such as InnoDB, --single-transaction is a much better option, because it does not need to lock the tables at all.

Please note that when dumping multiple databases, --lock-tables locks tables for each database separately. Therefore, this option does not guarantee that the tables in the dump file are logically consistent between databases. Tables in different databases may be dumped in completely different states.
--master-data[=value]

Write the binary log filename and position to the output. This option requires the RELOAD privilege, and the binary log must be enabled. If the option value is equal to 1, the position and filename are written to the dump output in the form of a CHANGE MASTER statement. If the dump is from a master server and you use it to set up a slave server, the CHANGE MASTER statement causes the slave to start from the correct position in the master's binary logs. If the option value is equal to 2, the CHANGE MASTER statement is written as an SQL comment. (This is the default action if value is omitted.)

The --master-data option automatically turns off --lock-tables. It also turns on --lock-all-tables, unless --single-transaction also is specified (in which case, a global read lock is acquired only for a short time at the beginning of the dump; see also the description for --single-transaction). In all cases, any action on logs happens at the exact moment of the dump.
--no-autocommit

Enclose the INSERT statements for each dumped table within SET AUTOCOMMIT=0 and COMMIT statements.
--no-create-db, -n

This option suppresses the CREATE DATABASE statements that are otherwise included in the output if the --databases or --all-databases option is given.
--no-create-info, -t

Do not write CREATE TABLE statements that re-create each dumped table.
--no-data, -d

Do not write any table row information (that is, do not dump table contents). This is very useful if you want to dump only the CREATE TABLE statement for the table.
--opt

This option is shorthand; it is the same as specifying --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset. It should give you a fast dump operation and produce a dump file that can be reloaded into a MySQL server quickly.

The --opt option is enabled by default. Use --skip-opt to disable it. See the discussion at the beginning of this section for information about selectively enabling or disabling certain of the options affected by --opt.
--order-by-primary

Sorts each table's rows by its primary key, or by its first unique index, if such an index exists. This is useful when dumping a MyISAM table to be loaded into an InnoDB table, but will make the dump itself take considerably longer.
--password[=password], -p[password]

The password to use when connecting to the server. If you use the short option form (-p), you cannot have a space between the option and the password. If you omit the password value following the --password or -p option on the command line, you are prompted for one.

Specifying a password on the command line should be considered insecure. See Section 5.8.6, “Keeping Your Password Secure”.
--port=port_num, -P port_num

The TCP/IP port number to use for the connection.
--protocol={TCP|SOCKET|PIPE|MEMORY}
The connection protocol to use.
--quick, -q

This option is useful for dumping large tables. It forces mysqldump to retrieve rows for a table from the server a row at a time rather than retrieving the entire row set and buffering it in memory before writing it out.
--quote-names, -Q

Quote database, table, and column names within ‘`’ characters. If the ANSI_QUOTES SQL mode is enabled, names are quoted within ‘"’ characters. This option is enabled by default. It can be disabled with --skip-quote-names, but this option should be given after any option such as --compatible that may enable --quote-names.
--replace

Write REPLACE statements rather than INSERT statements. Available as of MySQL 5.1.3.
--result-file=file_name, -r file_name

Direct output to a given file. This option should be used on Windows to prevent newline ‘\n’ characters from being converted to ‘\r\n’ carriage return/newline sequences. The result file is created and its previous contents overwritten, even if an error occurs while generating the dump.
--routines, -R

Dump stored routines (functions and procedures) from the dumped databases. Use of this option requires the SELECT privilege for the mysql.proc table. The output generated by using --routines contains CREATE PROCEDURE and CREATE FUNCTION statements to re-create the routines. However, these statements do not include attributes such as the routine creation and modification timestamps. This means that when the routines are reloaded, they will be created with the timestamps equal to the reload time.

If you require routines to be re-created with their original timestamp attributes, do not use --routines. Instead, dump and reload the contents of the mysql.proc table directly, using a MySQL account that has appropriate privileges for the mysql database.

This option was added in MySQL 5.1.2. Before that, stored routines are not dumped. Routine DEFINER values are not dumped until MySQL 5.1.8. This means that before 5.1.8, when routines are reloaded, they will be created with the definer set to the reloading user. If you require routines to be re-created with their original definer, dump and load the contents of the mysql.proc table directly as described earlier.
--set-charset

Add SET NAMES default_character_set to the output. This option is enabled by default. To suppress the SET NAMES statement, use --skip-set-charset.
--single-transaction

This option issues a BEGIN SQL statement before dumping data from the server. It is useful only with transactional tables such as InnoDB, because then it dumps the consistent state of the database at the time when BEGIN was issued without blocking any applications.

When using this option, you should keep in mind that only InnoDB tables are dumped in a consistent state. For example, any MyISAM or MEMORY tables dumped while using this option may still change state.

This option is not supported for MySQL Cluster tables; the results cannot be guaranteed to be consistent because the NDBCluster storage engine supports only the READ_COMMITTED transaction isolation level. You should always use NDB backup and restore instead.

The --single-transaction option and the --lock-tables option are mutually exclusive, because LOCK TABLES causes any pending transactions to be committed implicitly.

To dump large tables, you should combine this option with --quick.
--skip-opt

See the description for the --opt option.
--socket=path, -S path

For connections to localhost, the Unix socket file to use, or, on Windows, the name of the named pipe to use.
--skip-comments

See the description for the --comments option.
Options that begin with --ssl
specify whether to connect to the server via SSL and indicate where to find SSL keys and certificates. See Section 5.8.7.3, “SSL Command Options”.
--tab=path, -T path

Produce tab-separated data files. For each dumped table, mysqldump creates a tbl_name.sql file that contains the CREATE TABLE statement that creates the table, and a tbl_name.txt file that contains its data. The option value is the directory in which to write the files.

By default, the .txt data files are formatted using tab characters between column values and a newline at the end of each line. The format can be specified explicitly using the --fields-xxx and --lines-terminated-by options.

Note: This option should be used only when mysqldump is run on the same machine as the mysqld server. You must have the FILE privilege, and the server must have permission to write files in the directory that you specify.
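As a rough illustration of the default .txt layout, the following sketch writes a small tab-separated file by hand (the sample rows are made up, not produced by mysqldump) and reads it back the way LOAD DATA INFILE would see it:

```shell
# Sketch of the default --tab data format: tab-separated columns,
# newline-terminated rows. Sample data for illustration only.
printf '1\tKabul\tAFG\n2\tQandahar\tAFG\n' > City.txt

# Print the second column of each row.
awk -F '\t' '{ print $2 }' City.txt
# prints:
# Kabul
# Qandahar
```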
--tables

Override the --databases or -B option. mysqldump regards all name arguments following the option as table names.
--triggers

Dump triggers for each dumped table. This option is enabled by default; disable it with --skip-triggers.
--tz-utc

Add SET TIME_ZONE='+00:00' to the dump file so that TIMESTAMP columns can be dumped and reloaded between servers in different time zones. Without this option, TIMESTAMP columns are dumped and reloaded in the time zones local to the source and destination servers, which can cause the values to change. --tz-utc also protects against changes due to daylight saving time. --tz-utc is enabled by default. To disable it, use --skip-tz-utc. This option was added in MySQL 5.1.2.
--user=user_name, -u user_name

The MySQL username to use when connecting to the server.
--verbose, -v

Verbose mode. Print more information about what the program does.
--version, -V

Display version information and exit.
--where='where_condition', -w 'where_condition'

Dump only rows selected by the given WHERE condition. Quotes around the condition are mandatory if it contains spaces or other characters that are special to your command interpreter.
Examples:
--where="user='jimf'"
-w"userid>1"
-w"userid<1"
--xml, -X

Write dump output as well-formed XML.
NULL, 'NULL', and Empty Values: For some column named column_name, the NULL value, an empty string, and the string value 'NULL' are distinguished from one another in the output generated by this option as follows:

Value:                 | XML Representation:
NULL (unknown value)   | <field name="column_name" xsi:nil="true" />
'' (empty string)      | <field name="column_name"></field>
'NULL' (string value)  | <field name="column_name">NULL</field>
Beginning with MySQL 5.1.12, the output from the mysql client when run using the --xml
option also follows these rules. (See Section 8.8.1, “mysql Options”.)
Beginning with MySQL 5.1.18, XML output from mysqldump includes the XML namespace, as shown here:
shell> mysqldump --xml -u root world City
<?xml version="1.0"?>
<mysqldump xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<database name="world">
<table_structure name="City">
...
</table_structure>
<table_data name="City">
<row>
        <field name="ID">1</field>
        <field name="Name">Kabul</field>
        <field name="CountryCode">AFG</field>
        <field name="District">Kabol</field>
        <field name="Population">1780000</field>
</row>
...
<row>
        <field name="ID">4079</field>
        <field name="Name">Rafah</field>
        <field name="CountryCode">PSE</field>
        <field name="District">Rafah</field>
        <field name="Population">92020</field>
</row>
</table_data>
</database>
</mysqldump>
You can also set the following variables by using --var_name=value syntax:
max_allowed_packet
The maximum size of the buffer for client/server communication. The maximum is 1GB.
net_buffer_length
The initial size of the buffer for client/server communication. When creating multiple-row-insert statements (as with option --extended-insert
or --opt
), mysqldump creates rows up to net_buffer_length
length. If you increase this variable, you should also ensure that the net_buffer_length
variable in the MySQL server is at least this large.
It is also possible to set variables by using --set-variable=var_name=value or -O var_name=value syntax. This syntax is deprecated.
The most common use of mysqldump is probably for making a backup of an entire database:
shell> mysqldump db_name > backup-file.sql
You can read the dump file back into the server like this:
shell> mysql db_name < backup-file.sql
Or like this:
shell> mysql -e "source /path-to-backup/backup-file.sql" db_name
mysqldump is also very useful for populating databases by copying data from one MySQL server to another:
shell> mysqldump --opt db_name | mysql --host=remote_host -C db_name
It is possible to dump several databases with one command:
shell> mysqldump --databases db_name1 [db_name2 ...] > my_databases.sql
To dump all databases, use the --all-databases
option:
shell> mysqldump --all-databases > all_databases.sql
For InnoDB tables, mysqldump provides a way of making an online backup:
shell> mysqldump --all-databases --single-transaction > all_databases.sql
This backup just needs to acquire a global read lock on all tables (using FLUSH TABLES WITH READ LOCK) at the beginning of the dump. As soon as this lock has been acquired, the binary log coordinates are read and the lock is released. If a long updating statement is running when the FLUSH statement is issued, the MySQL server may stall until that statement finishes; after that, the dump becomes lock-free. If the update statements that the MySQL server receives are short (in terms of execution time), the initial lock period should not be noticeable, even with many updates.
For point-in-time recovery (also known as “roll-forward,” when you need to restore an old backup and replay the changes that happened since that backup), it is often useful to rotate the binary log (see Section 5.11.4, “The Binary Log”) or at least know the binary log coordinates to which the dump corresponds:
shell> mysqldump --all-databases --master-data=2 > all_databases.sql
Or:
shell> mysqldump --all-databases --flush-logs --master-data=2 > all_databases.sql
The --master-data
and --single-transaction
options can be used simultaneously, which provides a convenient way to make an online backup suitable for point-in-time recovery if tables are stored using the InnoDB
storage engine.
For more information on making backups, see Section 5.9.1, “Database Backups”, and Section 5.9.2, “Example Backup and Recovery Strategy”.
If you encounter problems backing up views, please read the section that covers restrictions on views, which describes a workaround for backing up views when this fails due to insufficient privileges. See Section D.4, “Restrictions on Views”.
User Comments
If you want to compress your backups directly, without first hitting your hard disk, you can try this command:
#mysqldump --opt -u user --password="password"
database | bzip2 -c > database.sql.bz2
I'm using the following script as a daily cron job; it works.
#!/bin/sh
date=`date -I`
mysqldump --opt --all-databases | bzip2 -c
> /var/backup/databasebackup-$date.sql.bz2
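The pipe-to-compressor pattern in the scripts above can be exercised without a database server. This sketch substitutes a hand-made file for the mysqldump stream (gzip is used here; bzip2 -c works the same way) and verifies the round trip:

```shell
# Stand-in for mysqldump output; any text stream behaves the same.
printf 'CREATE TABLE t (id INT);\n' > fake-dump.sql

# Compress through a pipe, as the cron script does with bzip2.
date=$(date -I)
gzip -c < fake-dump.sql > "databasebackup-$date.sql.gz"

# Decompress and compare to confirm the round trip is lossless.
gzip -dc "databasebackup-$date.sql.gz" | cmp - fake-dump.sql && echo OK
# prints OK
```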
To dump only selected records into a file based on a timestamp field you can use this (last_modified is the timestamp field). This is used in a shell script run as a cron job to take records that are more than a month old and dump them into an archive file (then the dumped records are deleted).

/yourpath/mysqldump "--where=(month(last_modified)+year(last_modified)*12 < month(current_date)+(year(current_date)*12)-1)" database table > archive.sql
To P J:

Your example is very bad. There is a much more elegant way, and with your example no index can be used to optimize the query.

Here is a better WHERE clause to "take records that are more than a month old":

WHERE last_modified < CURRENT_DATE - INTERVAL 1 MONTH

or

WHERE last_modified < NOW() - INTERVAL 1 MONTH

depending on the time resolution needed.
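A hedged sketch of building such a cutoff in a shell wrapper (GNU date's relative-date syntax is assumed; "database" and "table" are the placeholder names from the example above, and the command is only printed, since running it needs a reachable server):

```shell
# Compute a date one month in the past (GNU date extension).
cutoff=$(date -d '1 month ago' +%F)

# Assemble the sargable WHERE clause; a plain comparison against a
# constant lets MySQL use an index on last_modified.
where="last_modified < '$cutoff'"
echo mysqldump "--where=$where" database table
```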
For "mysqldump"-ing InnoDB tables:

Take the dump as you would normally do using mysqldump, open the dump file, and put this statement at the beginning of the SQL dump text file:

SET FOREIGN_KEY_CHECKS=0;

Then import the file as you would normally import an SQL dump file.
If you have a value for "database=" in your my.cnf file, mysqldump will not work and will give you the error "mysqldump: option `--databases' doesn't allow an argument".
For a faster "mysqldump" of InnoDB tables:

1. mysqldump --opt --user=username --password database > filetosaveto.sql
2. Open the dump file and put this statement at the beginning of the SQL dump text file:
SET FOREIGN_KEY_CHECKS=0;
3. mysql --user=username --password database < filetosaveto.sql

Very fast.
After adding "SET FOREIGN_KEY_CHECKS=0;", remember to append "SET FOREIGN_KEY_CHECKS=1;" at the end of the import file. The potential problem is that any data inconsistency that would have made a foreign key check fail during import will have made it into the database, even after the foreign keys are turned back on. This is especially likely if the foreign keys are not turned back on for a long period of time, which can happen if "SET FOREIGN_KEY_CHECKS=1;" was not appended to the import file in the first place.
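One way to avoid forgetting either statement is to wrap the dump file mechanically rather than editing it by hand; a minimal sketch (the file names are arbitrary, and a stand-in file replaces a real dump):

```shell
# Create a stand-in dump file.
printf 'INSERT INTO t VALUES (1);\n' > dump.sql

# Wrap it so foreign key checks are disabled during import
# and re-enabled afterward.
{
  echo 'SET FOREIGN_KEY_CHECKS=0;'
  cat dump.sql
  echo 'SET FOREIGN_KEY_CHECKS=1;'
} > import.sql

head -n 1 import.sql   # prints SET FOREIGN_KEY_CHECKS=0;
tail -n 1 import.sql   # prints SET FOREIGN_KEY_CHECKS=1;
```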
You can even do your mysqldump backups with logrotate.
Simply put something like this into /etc/logrotate.conf:
/var/backups/mysql/dump.sql {
daily
rotate 14
missingok
compress
postrotate
/usr/bin/mysqldump --defaults-extra-file=/.../backup-credentials.cnf --opt --flush-logs --all-databases > /var/backups/mysql/dump.sql
endscript
}
The following mysqldump import example for InnoDB tables is at least 100x faster than the previous examples.

1. mysqldump --opt --user=username --password database > dumpfile.sql
2. Edit the dump file and put these lines at the beginning:
SET AUTOCOMMIT = 0;
SET FOREIGN_KEY_CHECKS=0;
3. Put these lines at the end:
SET FOREIGN_KEY_CHECKS = 1;
COMMIT;
SET AUTOCOMMIT = 1;
4. mysql --user=username --password database < dumpfile.sql
Notice that from 4.1 the default behaviour of mysqldump has changed.

In 4.0 all variables were FALSE by default, and you could, for example, use the short -e to start using extended inserts.

In 4.1 this and several other mysqldump-specific variables are set to TRUE by default.

If you want to set them to FALSE again you have two choices: either create a defaults file where you turn off the variables that you choose, or use the long variable name combined with a value, like this:

mysqldump --extended-insert=FALSE

This works for all variables that control the behaviour of mysqldump, which can be found at the end of a "mysqldump --help" listing.
If you're using phpMyAdmin's PDF and MIME type features and you're dumping those tables, you need to use the --allow-keywords option. The pma_history table created for those features has a column named "table", and if you dump without --allow-keywords you'll get a syntax error when you try to import the tables later.
If you want to schedule a task on Windows to back up and move your data somewhere, the lack of documentation and command-line tools on Windows can make it a real beast. I hope this helps you keep your data safe.
First off, you will need a command line file compressor (or you should use one, anyway). I like GNU gzip. You can get it for Windows here: http://gnuwin32.sourceforge.net/packages/gzip.htm

Secondly, you will need to use Windows's command-line FTP client. It took me all day to find documentation on this guy, so I hope this saves some time for somebody.
Anyway, you need two files -- the batch file and a script for your ftp client. The Batch file should look like this guy (it uses random numbers in the file name so that multiple backups are not overwritten):
@ECHO OFF
@REM Set dir variables. Use ~1 format in win2k
SET basedir=C:\BACKUP~1
SET workdir=c:\TEMP
SET mysqldir=c:\mysql\bin
SET gzipdir=c:\PROGRA~1\GnuWin32\bin
SET mysqlpassword=mygoodpassword
SET mysqluser=myrootuser
@REM Change to mysqldir
CD %mysqldir%
@REM dump database. This is all one line
mysqldump -u %mysqluser% -p%mysqlpassword% --all-databases >%workdir%\backup.sql
@REM Change to workdir
CD %workdir%
@REM Zip up database
%gzipdir%\gzip.exe backup.sql
@REM Move to random file name
MOVE backup.sql.gz backup.%random%.gz
@REM FTP file to repository
FTP -n -s:%basedir%\ftp-commands.txt
@REM Remove old backup files
del backup.sql
del backup.*.gz
@REM Change back to base dir
CD %basedir%
And your ftp script should look like this guy (and be named ftp-commands.txt so the above script can find it)
open
ftp.mybackuplocation.com
user
myusername
mypassword
bin
put backup.*.gz
quit
Make sure both of the above files are in whatever directory you set up as %basedir% and test it out and make sure everything works for you. Then schedule it to run every day to protect your data!
Corey's example is helpful, but I don't care for the random file name. Here is the manual script I use on Windows for kicking off a MYSQL backup.
You could easily add all the other bells and whistles of ZIP, FTP, and scheduling should you need it. Note that I didn't use a password or many of the other args for mysqldump, you can add those if ya need 'em.
@ECHO OFF
for /f "tokens=1-4 delims=/ " %%a in ('date/t') do (
set dw=%%a
set mm=%%b
set dd=%%c
set yy=%%d
)
SET bkupdir=C:\path\to\where\you\want\backups
SET mysqldir=D:\path\to\mysql
SET dbname=this_is_the_name_of_my_database
SET dbuser=this_is_my_user_name
@ECHO Beginning backup of %dbname%...
%mysqldir%\bin\mysqldump -B %dbname% -u %dbuser% > %bkupdir%\dbBkup_%dbname%_%yy%%mm%%dd%.sql
@ECHO Done! New File: dbBkup_%dbname%_%yy%%mm%%dd%.sql
pause
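On Unix-like systems the same dated-filename idea is much simpler, since date can format the timestamp directly instead of parsing `date /t` output; a sketch with placeholder names:

```shell
# Equivalent of the %yy%%mm%%dd% suffix from the batch scripts above.
dbname=this_is_the_name_of_my_database
stamp=$(date +%Y%m%d)
outfile="dbBkup_${dbname}_${stamp}.sql"

# The mysqldump invocation itself would then be:
#   mysqldump -B "$dbname" -u "$dbuser" > "$outfile"
echo "$outfile"
```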
A little reformulation of the actions that occur during an online dump with log-point registration, i.e. a dump that does not unduly disturb clients using the database during the dump (N.B.: only from 4.1.8 on!) and that can be used to start a slave server from the correct point in the logs.
Use these options:
--single-transaction
--flush-logs
--master-data=1
--delete-master-logs
If you have several databases that are binary-logged and you want to keep a consistent binary log you may have to include all the databases instead of just some (is that really so?):
--all-databases
Now, these are the actions performed by the master server:
1) Acquire global read lock using FLUSH TABLES WITH READ LOCK. This also flushes the query cache and the query result cache. Caused by option --single-transaction.
2) All running and outstanding transactions terminate. MySQL server stalls for further updates.
3) Read lock on all tables acquired.
4) All the logs are flushed, in particular the binary log is closed and a new generation binary log is opened. Caused by option --flush-logs
5) Binary log coordinates are read and written out so that the slave can position correctly in the binary log. Caused by --master-data=1
6) Read lock is released, MySQL server can proceed with updates. These updates will also go to the binary log and can thus be replayed by the slave. Meanwhile, the InnoDB tables are dumped in a consistent state, which is the state they were in in step 5. (Not guaranteed for MyISAM tables)
7) Dump terminates after a possibly long time.
8) Any old binary log files are deleted. Caused by --delete-master-logs.
Additionally, there are performance-influencing options:
--extended-insert: use multiple-row insert statements
--quick: do not do buffering of row data, good if tables are large
And there are format-influencing options:
--hex-blob: dump binary columns in hex
--complete-insert: use complete insert statements that include column names; works nicely with --extended-insert
--add-drop-table: add a DROP TABLE statement before each CREATE TABLE statement.
4.1 to earlier-version backwards compatibility: Bug 203 notes that database names are only quoted using -q from mysqldump version 10.1 (part of 4.1), because the mysql command line program does not support quoted database names until version 13.0. Version 12 is the one distributed with 4.0. This can be a problem if you are using mysqldump 10.1 and need quoted table and field names but can't have quoted database names for import into a 4.0 or earlier server. Using the later mysql client program on the destination or the earlier mysqldump version on the source may be a workaround.
Bug 203: http://bugs.mysql.com/bug.php?id=203
Following Lon B's helpful post:

You can pipe it to gzip to compress on Windows. I didn't think it would work on Windows, but apparently it does.

@ECHO Beginning backup of %dbname%...
%mysqldir%\bin\mysqldump -B %dbname% -u %dbuser% | gzip > %bkupdir%\dbBkup_%dbname%_%yy%%mm%%dd%.sql.gz

Of course, you need GNU gzip in your path or directory.
When using mysqldump on a replication master, if you want the slave(s) to follow, you may want to avoid the --delete-master-logs option, because it can delete binary logs before the "CHANGE MASTER" is read by the slaves, therefore breaking the replication (then you have to manually issue the "CHANGE MASTER" on the slave(s)). If you want to get rid of old and useless binary logs, it is better to issue a "PURGE MASTER" SQL command on the master after the mysqldump.
I moved my MySQL installation from Linux to Windows 2003 and had to create a new backup script. I was using hotcopy, but on Windows it's not available.
" [email protected] -subject="MySQL Backup" -msg=%logdir%\LOG%fn%.txt
So, inspired by Lon B and Corey Tisdale (above), I created a batch file that will create a gzipped mysqldump file for each database and put them into separate folders. It also creates a log file. You will have to set the vars at the top to match your system.
You will also need GZip to do the compression...
It could still use some work (like no error trapping etc...) but it's in production for me now.
I used a utility "commail.exe" to send the log file to me after the backup is complete.
//--- Begin Batch File ---//
@echo off
:: Set some variables
set bkupdir=E:\MySQL\backup
set mysqldir=E:\MySQL
set datadir=E:\MySQL\data
set logdir=E:\MySQL\logs
set dbuser=username
set dbpass=password
set zip=C:\GZip\bin\gzip.exe
set endtime=0
:GETTIME
:: get the date and then parse it into variables
for /F "tokens=2-4 delims=/ " %%i in ('date /t') do (
set mm=%%i
set dd=%%j
set yy=%%k
)
:: get the time and then parse it into variables
for /F "tokens=5-8 delims=:. " %%i in ('echo.^| time ^| find "current" ') do (
set hh=%%i
set ii=%%j
set ss=%%k
)
:: If this is the second time through then go to the end of the file
if "%endtime%"=="1" goto END
:: Create the filename suffix (%ii% holds the minutes parsed above)
set fn=_%yy%%mm%%dd%_%hh%%ii%%ss%
:: Switch to the data directory to enumerate the folders
pushd %datadir%
:: Write to the log file
echo Beginning MySQLDump Process > %logdir%\LOG%fn%.txt
echo Start Time = %yy%-%mm%-%dd% %hh%:%ii%:%ss% >> %logdir%\LOG%fn%.txt
echo --------------------------- >> %logdir%\LOG%fn%.txt
echo. >> %logdir%\LOG%fn%.txt
:: Loop through the data structure in the data dir to get the database names
for /d %%f in (*) do (
REM Create the backup sub-directory if it does not exist (REM, not ::, inside a parenthesized block)
if not exist %bkupdir%\%%f\ (
echo Making Directory %%f
echo Making Directory %%f >> %logdir%\LOG%fn%.txt
mkdir %bkupdir%\%%f
) else (
echo Directory %%f Exists
echo Directory %%f Exists >> %logdir%\LOG%fn%.txt
)
REM Run mysqldump on each database and compress the data by piping through gzip
echo Backing up database %%f%fn%.sql.gz
echo Backing up database %%f%fn%.sql.gz >> %logdir%\LOG%fn%.txt
%mysqldir%\bin\mysqldump --user=%dbuser% --password=%dbpass% --databases %%f --opt --quote-names --allow-keywords --complete-insert | %zip% > %bkupdir%\%%f\%%f%fn%.sql.gz
echo Done...
echo Done... >> %logdir%\LOG%fn%.txt
)
:: Go back and get the end time for the script
set endtime=1
goto :GETTIME
:END
:: Write to the log file
echo. >> %logdir%\LOG%fn%.txt
echo --------------------------- >> %logdir%\LOG%fn%.txt
echo MySQLDump Process Finished >> %logdir%\LOG%fn%.txt
echo End Time = %yy%-%mm%-%dd% %hh%:%ii%:%ss% >> %logdir%\LOG%fn%.txt
echo. >> %logdir%\LOG%fn%.txt
:: Return to the scripts dir
popd
:: Send the log file in an e-mail
c:\commail\commail -host=smtp.yourcompany.com -from="server" [email protected] -subject="MySQL Backup" -msg=%logdir%\LOG%fn%.txt
//--- End Batch File ---//
Here's a bash wrapper for mysqldump I cron'd to run at night. It's not the sexiest thing but it's reliable. It creates a folder for each day, a folder for each db, and single bzip2'd files for each table. There are provisions for exclusions: see below where it skips the entire tmp and test db's, and in all db's the tables tbl_session and tbl_parameter. It also cleans up files older than 5 days (by that time they've gone to tape). Be sure to update <user> and <password>; ideally these would be in constants but I couldn't get the bash escaping to work.
# setup
suffix=`date +%Y%m%d`
dest=/mirror/mysqldumps
cmd='/usr/bin/mysqldump'
databases=(`echo 'show databases;' | mysql -u <user> --password='<password>' | grep -v ^Database$`)
for d in "${databases[@]}"; do
  if [[ $d != 'tmp' && $d != 'test' ]]
  then
    echo "DATABASE ${d}"
    s="use ${d}; show tables;"
    tables=(`echo ${s} | mysql -u <user> --password='<password>' | grep -v '^Tables_in_'`)
    for t in "${tables[@]}"; do
      if [[ $t != 'tbl_parameter' && $t != 'tbl_session' ]]
      then
        echo "  TABLE ${t}"
        path="${dest}/${suffix}/${d}"
        mkdir -p ${path}
        ${cmd} --user=<user> --password='<password>' --quick --add-drop-table --all ${d} ${t} | bzip2 -c > ${path}/${t}.sql.bz2
      fi
    done
  fi
done
# delete old dumps (retain 5 days)
find ${dest} -mtime +5 -exec rm {} \;
You've always wanted to back up your most important database somewhere on your Linux system, and to send the dump by email, so that you can recover the entire content if the system crashes.
You can use these 2 scripts.
First Step:
-Install the mutt client, which sends email from the command line: "apt-get install mutt"
-Create the backup directory : "mkdir /home/backups"
Second Step:
- Copy these 2 scripts into your root directory or your user directory:
#!/bin/sh
# Script name : auto_mysql_dump.sh
# Backup the dbname database
dir=`date +%Y-%m-%d`
dbname="mybase"
if [ -d /home/backups ]; then
mkdir /home/backups/$dir
mysqldump -B --user=user_of_my_base --password=pwd_of_my_base --host=host_of_my_base $dbname > /home/backups/$dir/$dbname.sql
fi
# End of script auto_mysql_dump.sh
#!/bin/sh
# Script Name : auto_mail_dump.sh
# Sends an email with the dump realized before
dir=`date +%Y-%m-%d`
dbname="mybase"
mutt -s "Today backup" -a /home/backups/$dir/$dbname.sql [email protected] # End of script auto_mail_dump.sh
-Don't forget to change the access to make them executable:
"chmod 700 auto_mysql_dump.sh"
"chmod 700 auto_mail_dump.sh"
Third step:
-Edit the CronTab to schedule the execution of the two scripts.
"crontab -e" (you will use the vi editor)
We consider that the 2 scripts are in the /root directory
-I want the dump to be executed at 8:30 every day
-I want the mail to be sent at 9:00 every day
Thus I add these 2 rows after the existing lines:
Hit the "i" to insert new characters...
30 8 * * * /root/auto_mysql_dump.sh > /dev/null
00 9 * * * /root/auto_mail_dump.sh > /dev/null
Save the crontab by hitting : "Esc" + ":wq" (means Write and Quit)
What you should do now :
Once you've written the scripts, test them!
Enjoy the automatic backup from now on :-)
When you need to import the data from a mysqldump, instead of using "shell> mysql < dump.sql", using "mysql> source dump.sql" is much better.
This way of importing the data avoids problems with language specific characters being turned into garble.
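A hedged sketch of such a round trip (utf8 and all names here are placeholders; pick the character set that actually matches your data, and note that --default-character-set exists on both mysqldump and the mysql client). The command is only printed, not executed:

```shell
# Dump with an explicit character set...
dump_cmd="mysqldump --user=backup --password --default-character-set=utf8 mydb"
echo "$dump_cmd"
# ...then load the file from inside the client instead of shell redirection:
#   shell> mysql --default-character-set=utf8 -u backup -p mydb
#   mysql> source dump.sql
```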
Here's a DOS script that will back up all your databases to a separate file in a new folder, zip the folder, encrypt the zip, and email the encrypted zip to one or many addresses. If the backup is larger than a specified limit, only the logfile is emailed. The unencrypted zipfile is left on your local machine.
"
The script is also available at http://www.jijenik.com/projects/mysqlbackup/
Many thanks to Wade Hedgren whose script formed the basis for this version.
//--- Begin Batch File ---//
::
:: Creates a backup of all databases in MySQL.
:: Zip, encrypts and emails the backup file.
::
:: Each database is saved to a separate file in a new folder.
:: The folder is zipped and then deleted.
:: The zipped backup is encrypted and then emailed, unless the file exceeds the maximum filesize.
:: In all cases the logfile is emailed.
:: The encrypted backup is deleted, leaving the unencrypted zipfile on your local machine.
::
:: Version 1.1
::
:: Changes in version 1.1 (released June 29th, 2006)
:: - backups are now sent to the address specified by the mailto variable
::
:: The initial version 1.0 was released on May 27th, 2006
::
::
:: This version of the script was written by Mathieu van Loon ([email protected])
:: It is based heavily on the script by Wade Hedgren (see comments at http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html)
::
:: This script requires several freeware libraries:
:: - zipgenius (a compression tool), www.zipgenius.it
:: - blat (an emailer tool), www.blat.net
:: - doff (extracts datetime, ignores regional formatting), www.jfitz.com/dos/index.html
::
:: Some areas where this script could be improved:
:: - include error trapping and handling
:: - make steps such as encryption and email optional
:: - allow the user to specify a single database on the command line
::
@echo off
::
:: Configuration options
::
:: The threshold for emailing the backup file. If the backup is larger
:: it will not be emailed (the logfile is always sent).
set maxmailsize=10000000
:: The passphrase used to encrypt the zipfile. Longer is more secure.
set passphrase=secret
:: Name of the database user
set dbuser=root
:: Password for the database user
set dbpass=password
:: Recipients of database backup, comma separated, enclosed in quotes
set mailto="[email protected],[email protected]"
:: From address for email
set mailfrom="MySQL Backup Service
:: Email server
set mailsmtp=localhost
:: Email subject
set mailsubject="MySQL Backup"
:: directory where logfiles are stored
set logdir=C:\DatabaseBackups\logs
:: directory where backup files are stored
set bkupdir=C:\DatabaseBackups
:: Install folder of MySQL
set mysqldir=C:\Program Files (x86)\MySQL\MySQL Server 4.1
:: Data directory of MySQL (only used to enumerate databases, we use mysqldump for backup)
set datadir=C:\Program Files (x86)\MySQL\MySQL Server 4.1\data
:: Path of zipgenius compression tool
set zip=C:\Program Files (x86)\ZipGenius 6\zg.exe
:: Path of blat mail tool
set mail=C:\DatabaseBackups\Backupscript\libraries\Blat250\full\blat.exe
:: Path of doff date tool (specify only the folder not the exe)
set doff=C:\DatabaseBackups\Backupscript\libraries\doff10
::
::
:: NO NEED TO CHANGE ANYTHING BELOW
::
::
:: get the date and then parse it into variables
pushd %doff%
for /f %%i in ('doff.exe yyyymmdd_hhmiss') do set fn=%%i
for /f %%i in ('doff.exe dd-mm-yyyy hh:mi:ss') do set nicedate=%%i
popd
set logfile="%logdir%\%fn%_Backuplog.txt"
:: Switch to the data directory to enumerate the folders
pushd "%datadir%"
:: Write to the log file
echo Beginning MySQLDump Process > %logfile%
echo Start Time = %nicedate% >> %logfile%
echo --------------------------- >> %logfile%
echo. >> %logfile%
:: Create the backup folder
if not exist "%bkupdir%\%fn%\" (
echo Making Directory %fn%
echo Making Directory %fn% >> %logfile%
mkdir "%bkupdir%\%fn%"
)
:: Loop through the data structure in the data dir to get the database names
for /d %%f in (*) do (
REM Run mysqldump on each database (REM, not ::, inside a parenthesized block)
echo Backing up database %fn%_%%f.sql
echo Backing up database %fn%_%%f.sql >> %logfile%
"%mysqldir%\bin\mysqldump" --user=%dbuser% --password=%dbpass% --databases %%f --opt --quote-names --allow-keywords --complete-insert > "%bkupdir%\%fn%\%fn%_%%f.sql"
echo Done... >> %logfile%
)
:: return from data dir
popd
pushd %bkupdir%
echo Zipping databases
echo Zipping databases >> %logfile%
REM C9 : maximum compression
REM AM : Delete source files
REM F1 : Store relative path
REM R1 : include subfolders
REM K0 : Do not display progress
"%zip%" -add "%fn%_MySQLBackup.zip" C9 AM F1 R1 K0 +"%bkupdir%\%fn%"
echo Crypting zipfile
echo Crypting zipfile >> %logfile%
REM C : Create non-executable zip
REM S : Do not delete after x tries
REM 3 : Use AES encryption
"%zip%" -encrypt "%fn%_MySQLBackup.zip" C S 3 "%passphrase%" %mailfrom%
echo Deleting directory %fn%
echo Deleting directory %fn% >> %logfile%
rmdir /s /q "%bkupdir%\%fn%"
:: Go back and get the end time for the script
set endtime=1
:: return from backup dir
popd
:: update the nicedate for the log
pushd %doff%
for /f %%i in ('doff.exe dd-mm-yyyy hh:mi:ss') do set nicedate=%%i
popd
:: Write to the log file
echo. >> %logfile%
echo --------------------------- >> %logfile%
echo MySQLDump Process Finished >> %logfile%
echo End Time = %nicedate% >> %logfile%
echo. >> %logfile%
:: Send the log file in an e-mail, include the backup file if it is not too large
:: We use the CALL Trick to enable determination of the filesize (type CALL /? at prompt for info)
:: note that you _must_ specify the full filename as the argument
pushd %bkupdir%
Call :MAILFILE "%bkupdir%\%fn%_MySQLBackup.czip"
echo Backup completed
goto :EOF
:MAILFILE
if /i %~z1 LSS %maxmailsize% (
echo Emailing backup file
"%mail%" %logfile% -q -attach %1 -serverSMTP %mailsmtp% -f %mailfrom% -to %mailto% -subject %mailsubject%
) ELSE (
echo Size of backup file %~z1 B exceeds configured email size %maxmailsize% B.
echo Emailing logfile only
echo Size of backup file %~z1 B exceeds configured email size %maxmailsize% B. only emailing logfile. >> %logfile%
"%mail%" %logfile% -q -serverSMTP %mailsmtp% -f %mailfrom% -to %mailto% -subject %mailsubject%
)
echo Deleting encrypted backup file
del %1
popd
//--- End Batch File ---//
RE: Mathieu van Loon
Excellent, I had this installed and configured in about 10 minutes. I do have one minor fix however.
You aren't getting the time portion of the DOFF command captured into your variable. It appears that the output formatting string MUST NOT CONTAIN ANY BLANKS so I changed mine to:
for /f %%i in ('doff.exe dd-mm-yyyy_at_hh:mi:ss') do set nicedate=%%i
This is terrific, wish I found it 10 hrs ago (darn MySQL Administrator Backup - such a waste!!!)
***
Now the problem is that my backups won't restore... I am backing up multiple instances of MediaWiki, Mantis, and Joomla. I'm playing around with --max_allowed_packet=nnn, and that should fix it based on manual backups working. Now is that nnn bytes or an abbreviation? Hmmm.
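For the record (worth double-checking on your version): the value is in bytes, and the client option parser also accepts K/M/G suffixes. A sketch with placeholder names, printed rather than executed:

```shell
# 32M = 32 megabytes; a bare number would be interpreted as bytes.
# "backup" and "mydb" are placeholders.
cmd="mysqldump --user=backup --password --max_allowed_packet=32M mydb"
echo "$cmd"
# Remember the server-side max_allowed_packet must be large enough too.
```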
I often get errors [MySQL 4.* and 5.*] when reloading a dump of databases that have big blobs. I found the solution: disabling --extended-insert (which comes inside the --opt option group, enabled by default) with --skip-extended-insert. I think this way is safer, but it is also much slower.
RE: Mathieu van Loon
Excellent, I had this installed and configured in about 10 minutes.
But how do you execute the script automatically in Windows XP every day at the same time, without the user having to start the bat file?
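One answer: Windows XP ships with the schtasks command, which can run the bat file daily. A sketch where the task name, path and time are placeholders (and the /st time format varies slightly between Windows versions); it is to be typed at a cmd prompt, so it is only shown here as a string:

```shell
# Create a daily scheduled task at 02:30 that runs the backup batch file.
# Task name, path and time are all placeholders to adapt.
cmd='schtasks /create /tn "MySQLBackup" /tr "C:\DatabaseBackups\backup.bat" /sc daily /st 02:30:00'
echo "$cmd"
```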
Changing the default behavior of a utility like mysqldump on a version upgrade is a terrible idea. No matter how reasonable such a change may seem to be, there is always the possibility of unforeseen consequences. Here is an example:
We have been using mysqldump to copy table structures from one server to another, in a process that initializes a replication relationship. The input of the copy is to be the slave of the configuration, the output the eventual master. Data collection programs eventually write to the master (which is kept very small), and replication then updates the far larger slave.
Upgrading to version 4.1 allows us to use the black hole engine for the master tables; consequently, when the master tables are created from those on the slave, their engine types are changed to "blackhole". This is done in a script that pipes the output of mysqldump running on one server (S) through a filter that changes the engine type to blackhole, and then to the other mysql server (M).
If through an error this initialization process were to be run on a system where replication was already in place, the "CREATE TABLE … ENGINE=BLACKHOLE" statements executed on M could be replicated back to S. The SQL generated by the 4.0 version of mysqldump (default behavior) contains only CREATE TABLE statements; if these statements were executed on the slave they would fail with errors, and no harm would be done. With the 4.1 version, however, the CREATE TABLE statements are prefaced with DROP TABLE statements – the result would be to destroy the existing tables on S.
It is of course our responsibility to thoroughly check the behavior of our software after a database upgrade, and we have taken steps to avoid this tragedy (using the --skip-opt option in mysqldump, taking pains to disable replication, etc.). But for MySQL developers to create additional pitfalls for us to fall into by changing default behavior is – allow me to use a pompous word here – outrageous!
To back up certain tables in a database, you have to use the -B option (for the proper database) AND the --tables option to list the tables you wish to back up.
Example:
mysqldump --user=.. --password=.. -B db_name --tables tbl_name1 tbl_name2
Here's a Python script that does rolling WinRAR'd backups on Windows. It should be trivial to change to Linux, or another compression program.
Please note:
1) this was a quick hack, so please test thoroughly before using in production. Still, I hope it will be a useful basis for your own script.
2) the --single-transaction switch is used as I am backing up InnoDB tables.
3) mysqldump is run with the root user. It would be A Good Thing to make this more secure - eg. create a backup user with read-only permissions to the tables.
4) indentation is significant in Python (the original post used the tab character).
import glob
import os
import time
# configuration
baseBackupFileName = "backupName"
maxBackups = 3
mySqlDumpCommand = "d:\\programs\\mysql\\bin\\mysqldump --user=root --password=rootpass --single-transaction DBName Table1Name Table2Name Table3Name"
winRarPath = "\"c:\\Program Files\\WinRAR\\WinRAR.exe\"" # path is quoted as it contains spaces
print "--- START ---"
# create new backup
newBackupFileName = baseBackupFileName + time.strftime("_%Y%m%d_%H%M%S", time.localtime())
os.system(mySqlDumpCommand+" > "+newBackupFileName+".sql")
# compress new backup
os.system(winRarPath+" a "+newBackupFileName+" "+newBackupFileName+".sql")
os.remove(newBackupFileName+".sql")
print "Created new backup \""+newBackupFileName+".rar\""
# delete old backups
oldBackupFileNames = glob.glob(baseBackupFileName+"_*_*.rar")
oldBackupFileNames.sort()
if len(oldBackupFileNames) > maxBackups:
    for fileName in oldBackupFileNames[0:len(oldBackupFileNames)-maxBackups]:
        os.remove(fileName)
        print "Deleted old backup \""+fileName+"\""
print "--- END ---"
The example I was taught goes simply like this:
shell> mysqldump -u root -p DATABASENAME > DATABASENAME.sql
If you wanted to test it, you could drop the database:
mysql> DROP DATABASE DATABASENAME;
(DATABASE referring to DATABASENAME, the one you just backed up)
Then quit mysql:
mysql> \q
Log back in:
]$ mysql -u root -p
mysql> CREATE DATABASE PREVIOUS_DATABASENAME;
mysql> \q
Bye
]$ mysql -u root -p PREVIOUS_DATABASENAME < DATABASENAME.sql
Enter password:
And it will be restored!
--master-data
The default value seems to be "1" under my environment, MySQL 5.0.24 on a Linux 2.6 kernel.
Thanks,
Kenji
Corey and Lon,
The scripts were very helpful!
Thank you.
The --master-data option also requires the SUPER or REPLICATION CLIENT privilege.