Split a mysqldump file containing multiple databases, by database

I have a mysqldump file that contains several databases (5 of them). One of those databases takes a very long time to load. Is there a way to split the mysqldump file by database, or simply to tell mysql to load only one specified database?

Manish
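Since `mysqldump` writes a `-- Current Database: \`name\`` comment at the start of every database section, the file can also be cut at those markers without a full script. A minimal sketch using `csplit` (assuming GNU csplit for the `{*}` repeat; the toy dump below only stands in for a real `all_databases.sql`):

```shell
# Toy stand-in for a real multi-database dump (assumed file name).
printf -- '-- dump header\n-- Current Database: `db1`\nCREATE TABLE a (x INT);\n-- Current Database: `db2`\nCREATE TABLE b (y INT);\n' > all_databases.sql

# Cut at every "-- Current Database:" marker; -s is quiet, -f sets the
# output prefix. Produces dump00 (the shared header), then dump01,
# dump02, ... one chunk per database.
csplit -s -f dump all_databases.sql '/^-- Current Database: /' '{*}'
```

Each per-database chunk still needs the shared header from `dump00` prepended before it can be loaded on its own, which is exactly what the scripts in the answers below automate.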
This Perl script should do it.
#!/usr/bin/perl -w
#
# splitmysqldump - split mysqldump file into per-database dump files.
use strict;
use warnings;

my $dbfile;
my $dbname = q{};
my $header = q{};

while (<>) {
    # Beginning of a new database section:
    # close currently open file and start a new one
    if (m/-- Current Database\: \`([-\w]+)\`/) {
        if (defined $dbfile && tell($dbfile) != -1) {
            close $dbfile or die "Could not close file!";
        }
        $dbname = $1;
        open $dbfile, '>>', "$1_dump.sql" or die "Could not create file!";
        print $dbfile $header;
        print "Writing file $1_dump.sql ...\n";
    }
    if (defined $dbfile && tell($dbfile) != -1) {
        print $dbfile $_;
    }
    # Catch dump file header in the beginning
    # to be printed to each separate dump file.
    if (! $dbname) { $header .= $_; }
}
close $dbfile or die "Could not close file!";
Run it against the dump containing all the databases:
./splitmysqldump < all_databases.sql
Thanks for the nice script, it works like a charm – sakhunzai 2014-04-17 11:40:49
Here is a great blog post I always refer back to for doing this kind of thing with a mysqldump file:
http://gtowey.blogspot.com/2009/11/restore-single-table-from-mysqldump.html
You can easily extend it to extract individual databases.
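Extended to a whole database, the same trick becomes a `sed` range over the `-- Current Database:` markers. A sketch, where `db1` and the toy `all_databases.sql` are only placeholders for your own database name and dump file:

```shell
# Toy multi-database dump standing in for a real one.
printf -- '-- header\n-- Current Database: `db1`\nCREATE TABLE a (x INT);\n-- Current Database: `db2`\nCREATE TABLE b (y INT);\n' > all_databases.sql

# Print everything from the `db1` marker up to the next database marker.
# The last output line is that next marker itself (or nothing, if `db1`
# is the final database); it is an SQL comment, so mysql ignores it.
sed -n '/^-- Current Database: `db1`/,/^-- Current Database: /p' all_databases.sql > db1.sql
```

Note that, like the blog post's single-table extraction, the resulting file lacks the dump's global header, so session settings from the top of the original dump are not replayed.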
This is actually a great trick, simple and effective. :-) – 2014-03-24 05:04:44
I've been working on a Python script that splits one big dump file into small ones, one per database. Its name is dumpsplit and here's a first cut:
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import sys
import re
import os

HEADER_END_MARK = '-- CHANGE MASTER TO MASTER_LOG_FILE'
FOOTER_BEGIN_MARK = '\/\*\!40103 SET TIME_ZONE=@OLD_TIME_ZONE \*\/;'
DB_BEGIN_MARK = '-- Current Database:'


class Main():
    """Whole program as a class"""
    def __init__(self, file, output_path):
        """Tries to open mysql dump file to call processment method"""
        self.output_path = output_path
        try:
            self.file_rsrc = open(file, 'r')
        except IOError:
            sys.stderr.write("Can't open %s\n" % file)
        else:
            self.__extract_footer()
            self.__extract_header()
            self.__process()

    def __extract_footer(self):
        matched = False
        self.footer = ''
        self.file_rsrc.seek(0)
        line = self.file_rsrc.next()
        try:
            while line:
                if not matched:
                    if re.match(FOOTER_BEGIN_MARK, line):
                        matched = True
                        self.footer = self.footer + line
                else:
                    self.footer = self.footer + line
                line = self.file_rsrc.next()
        except StopIteration:
            pass
        self.file_rsrc.seek(0)

    def __extract_header(self):
        matched = False
        self.header = ''
        self.file_rsrc.seek(0)
        line = self.file_rsrc.next()
        try:
            while not matched:
                self.header = self.header + line
                if re.match(HEADER_END_MARK, line):
                    matched = True
                else:
                    line = self.file_rsrc.next()
        except StopIteration:
            pass
        self.header_end_pos = self.file_rsrc.tell()
        self.file_rsrc.seek(0)

    def __process(self):
        first = False
        self.file_rsrc.seek(self.header_end_pos)
        prev_line = '--\n'
        line = self.file_rsrc.next()
        end = False
        try:
            while line and not end:
                if re.match(DB_BEGIN_MARK, line) or re.match(FOOTER_BEGIN_MARK, line):
                    if not first:
                        first = True
                    else:
                        out_file.writelines(self.footer)
                        out_file.close()
                    if not re.match(FOOTER_BEGIN_MARK, line):
                        name = line.replace('`', '').split()[-1] + '.sql'
                        print name
                        out_file = open(os.path.join(self.output_path, name), 'w')
                        out_file.writelines(self.header + prev_line + line)
                        prev_line = line
                        line = self.file_rsrc.next()
                    else:
                        end = True
                else:
                    if first:
                        out_file.write(line)
                    prev_line = line
                    line = self.file_rsrc.next()
        except StopIteration:
            pass


if __name__ == '__main__':
    Main(sys.argv[1], sys.argv[2])
Or, you can dump each database directly into a separate file...
#!/bin/bash
dblist=`mysql -u root -e "show databases" | sed -n '2,$ p'`
for db in $dblist; do
mysqldump -u root $db | gzip --best > $db.sql.gz
done
Use 'mysql --batch --skip-column-names' instead of 'sed' for machine-parsable output. [(reference)](https://dev.mysql.com/doc/refman/5.0/en/mysql-command-options.html) – 2014-04-27 19:33:52
As Stano suggested, the best way is to do it at dump time, with something like...
mysql -Ne "show databases" | grep -v schema | while read db; do mysqldump $db | gzip > $db.sql.gz; done
Of course, this relies on the existence of a ~/.my.cnf file with
[client]
user=root
password=rootpass
Otherwise, just define them with the -u and -p parameters on the mysql and mysqldump calls:
mysql -u root -prootpass -Ne "show databases" | grep -v schema | while read db; do mysqldump -u root -prootpass $db | gzip > $db.sql.gz; done
Hope this helps.
I would probably do the dump and reload in steps:
Note: if you are using MyISAM tables, you can disable index evaluation in step 4 and re-enable it afterwards to make the inserts faster.
Check this solution for Windows/Linux: http://stackoverflow.com/questions/132902/how-do-i-split-the-output-from-mysqldump-into-smaller-files/30988416#30988416 – Alisa 2015-06-22 22:04:18