What's the easiest way to get the data for a single table, delete a single table, or break up the whole dump file into separate files each containing a single table? I usually end up doing a lot of vi regex munging, but I bet there are easier ways to do these things with awk/perl, etc. The first page of Google results brings back a bunch of non-working perl scripts.
When I need to pull a single table from an SQL dump, I use a combination of grep, head and tail.
E.g.:
grep -n "CREATE TABLE" dump.sql
This gives you the line number of each CREATE TABLE statement, so if your table is on line 200 and the one after it is on line 269, I do:
head -n 268 dump.sql > tophalf.sql
tail -n 69 tophalf.sql > yourtable.sql
I would imagine you could extend those principles to knock up a script that splits the whole thing down into one file per table. Anyone want a go at it here? (There's a rough sketch a little further down.)
Another bit that might help start a bash loop going:
grep -n "CREATE TABLE " dump.sql | tr ':`(' ' ' | awk '{print $1, $4}'
That gives you a nice list of line numbers and table names like:
200 FooTable
269 BarTable
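Here's a rough go at that per-table split, building on the grep output above. This is only a sketch, not something tested against every dump format: it assumes each table's section starts at its CREATE TABLE line (so anything before the first CREATE TABLE, like header comments and SET statements, is dropped) and that table names are backtick-quoted.

#!/bin/bash
# Sketch: split dump.sql into one file per table, writing everything from
# each CREATE TABLE line up to the next one into <tablename>.sql.
awk '
/^CREATE TABLE/ {
    if (out != "") close(out)   # do not leak file handles on huge dumps
    name = $3                   # e.g. `FooTable` from: CREATE TABLE `FooTable` (
    gsub(/[`(]/, "", name)      # strip backticks and any trailing parenthesis
    out = name ".sql"
}
out != "" { print > out }       # copy lines into the current table file
' dump.sql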
Save yourself a lot of hassle and use mysqldump -T if you can.
From the documentation:
--tab=path, -T path
Produce tab-separated data files. For each dumped table, mysqldump
creates a tbl_name.sql file that contains the CREATE TABLE statement
that creates the table, and a tbl_name.txt file that contains its
data. The option value is the directory in which to write the files.
By default, the .txt data files are formatted using tab characters
between column values and a newline at the end of each line. The
format can be specified explicitly using the --fields-xxx and
--lines-terminated-by options.
Note This option should be used only when mysqldump is run on the
same machine as the mysqld server. You must have the FILE privilege,
and the server must have permission to write files in the directory
that you specify.
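For example, something along these lines (the database name and directory here are just placeholders) dumps every table of mydb into /tmp/dumpdir as a tbl_name.sql plus a tbl_name.txt:

# mydb and /tmp/dumpdir are placeholders; the directory must be writable
# by the mysqld server process, and you need the FILE privilege.
mysqldump --tab=/tmp/dumpdir -u root -p mydb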
This shell script will grab the tables you want and write them out to splitted.sql.
It's capable of understanding regular expressions, as I've added a sed -r option.
Also MyDumpSplitter can split the dump into individual table dumps.
Maatkit seems quite appropriate for this with mk-parallel-dump and mk-parallel-restore.
I am a bit late on this one, but if it can help anyone: I had to split a huge SQL dump file in order to import the data into another MySQL server.
What I ended up doing was splitting the dump file using the split command.
split -l 1000 import.sql splited_file
The above will split the SQL file into chunks of 1000 lines each.
Hope this helps someone.
I have hundreds of SQL files, and I want to restore each one into a database with a different name for each file.
I looked around for a solution, but what I found was something like concatenating all the files into one SQL file using cat and then restoring from that concatenated file.
But what I want is to restore each file into a different database, so I don't think concatenation is suitable for my case.
Here's one solution: interleave USE statements with your SQL files, so that you change the default database before each respective database's content. Gather the whole collection together and then pipe it into the mysql client.
Example using bash syntax:
(
echo "USE database1;"
cat file1.sql
echo "USE database2;"
cat file2.sql
...
) | mysql
Another solution is to run the mysql client once for each file, and specify the database name as the argument:
mysql database1 < file1.sql
mysql database2 < file2.sql
...
Re your comment:
You can write a loop in bash too.
for file in *.sql
do
db=...
mysql $db < $file
done
The tricky part above is the "..." — deciding which db goes with each input SQL file. You haven't described any way to match them, so I don't know what you'd have to do to figure that out. But if you can make that inference somehow from the filename, then you can do this without having to type every file.
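For instance, if you can name (or rename) each file after its target database, that inference becomes trivial. This is just an assumed convention, since you haven't said how files map to databases:

# Assumes the convention database1.sql -> database1, database2.sql -> database2, ...
for file in *.sql
do
    db=$(basename "$file" .sql)   # strip the .sql suffix to get the database name
    mysql "$db" < "$file"
done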
grep -n "Table Structure" dumpfile.sql
returns
XXXXXX:-- Table structure for table `table_name_1`
XXXXXX:-- Table structure for table `table_name_2`
XXXXXX:-- Table structure for table `table_name_3`
But after this point, it breaks, and I'm not sure why.
And also:
For retrieving a single table from a huge dump file (around 489 GB), I used:
sed -n -e '/Table Structure 'table_name'/p' dump_file_name.sql > extracted_file.sql
But it is not able to locate the table_name.
So my question is: how can all the tables be accessed? Or why, after a certain table, is it unable to find the table?
If anyone can help me with this, it would be greatly appreciated!
You have two problems with your sed command.
First, you're using single quotes inside the string that's delimited by single quotes. That won't work, because the inside quotes will just end the shell string, not be included literally.
Second, the quotes in the dump file are backticks, not single quotes.
Also, your pattern is missing the words "for table", and the s in "structure" should be lowercase.
sed -n -e '/Table structure for table `table_name`/p' dump_file_name.sql > extracted_file.sql
But you can just use grep for this; you don't need sed:
grep 'Table structure for table `table_name`' dump_file_name.sql > extracted_file.sql
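Note that grep only prints the matching comment line itself. If what you actually want is the whole section for that table (the structure plus its data), a small awk sketch like this should do it, assuming the standard mysqldump "-- Table structure for table" headers (table_name is a placeholder for your real table):

# Print everything from table_name's header comment up to (but not
# including) the next table's header comment.
awk '/-- Table structure for table `/ { found = /`table_name`/ } found' dump_file_name.sql > extracted_file.sql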
I often use the cqlsh command COPY...FROM CSV..., but I have new needs.
I'd like to add an extra column to my Cassandra table that would be created from two other columns.
Example (CSV file):
1;2
2;4
3;6
would become my table with these values:
12;1;2
24;2;4
36;3;6
I've used other options, but they're much slower than COPY...FROM CSV.
Do you know if I can do that using COPY...FROM CSV?
You can't do this with the COPY command alone.
If you are using Linux, then:
First dump the CSV to a file with the COPY command, let's say csv_test.csv:
1;2
2;4
3;6
Then use the command below to combine the first two columns into one:
cat csv_test.csv | awk -F ";" '{print $1$2 ";" $0}' > csv_test_combine.csv
Output file csv_test_combine.csv:
12;1;2
24;2;4
36;3;6
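If the goal is to load that combined file back into Cassandra, something like the following should work (the keyspace, table and column names here are placeholders for your schema):

# ks.my_table, combined, col1 and col2 are placeholders.
cqlsh -e "COPY ks.my_table (combined, col1, col2) FROM 'csv_test_combine.csv' WITH DELIMITER = ';';"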
I have a directory containing more than 1100 directories, and I want to move about 400 of them, whose names I have stored in an SQL table. Is there a way to achieve this? I have searched on Google but I can't find anything. Maybe one possibility would be to export the table records to a text file, but I still don't know how to connect the text file to the directories. Thanks.
#!/bin/bash
DIRLIST='file'
SOURCE='/my/source/directory'
TARGET='/my/target/directory'
while read -r dir; do
echo mv "$SOURCE/$dir" "$TARGET"
done < "$DIRLIST"
where file contains
directory1
directory2
directory3
(customize the example to your specific taste, and remove the echo statement in front of the mv after testing)
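As for getting file out of the database in the first place: if it's MySQL, something like this produces the plain-text list the loop reads (the database, table and column names here are guesses you'd adjust to your schema):

# -N skips the column header row, -B gives plain batch output.
# mydatabase, dirs_to_move and dir_name are placeholders.
mysql -N -B -e "SELECT dir_name FROM dirs_to_move" mydatabase > file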
What I'm trying to do is export a view/table from Sybase ASE 12.0 into a CSV file, but I'm having a lot of difficulty with it.
We want to import it into IDEA or MS Access. The way these programs operate is with a text-field encapsulation character and a field separator character, with newlines as the record separator (and without being able to modify this).
Well, using bcp to export it is ultimately fruitless with its built-in options. It doesn't allow you to define a text-field encapsulation character (as far as I can tell). So we tried to create another view that reads from the original view/table and concatenates the fields that have newlines in them (text fields); however, you can't do that without losing some precision, because it forces the field into a varchar of 8000 characters/bytes, whereas our largest field is 16000 (so there's definitely some truncation).
So we decided to create columns in the new view to hold the text-field delimiters. However, that put our column count for the view at 320, which is 70 more than the 250-column limit in ASE 12.0.
bcp can only work on existing tables and views, so what can we do to export this data? We're pretty much open to anything.
If it's only the newline char that is causing problems, can you not just do a replace?
create view new_view as
select field1, field2, replace(text_field_with_char, 'new line char', ' ')
from old_view
You may have to consider exporting as 2 files, importing into your target as 2 tables and then combining them again in the target. If both files have a primary key this is simple.
That sounds like bcp is right, but process the output via awk or perl.
But are those things you have and know? That might be a little overhead for you.
If you're on Windows you can get ActivePerl for free, and it could be quick.
Something like:
perl -F, -lane 'print "\"$F[0]\",$F[1],\"$F[2]\",$F[3]";' bcp-output-file
How's that? @F is the array of fields (this assumes comma-separated bcp output; -l already appends the newline, so no explicit \n is needed). The text fields are the ones you wrap in escaped quotes (\").
You can use BCP format files for this.
bcp .... -f XXXX.fmt
BCP can also produce these format files interactively if you don't specify any of the -c, -n, or -f flags. Then you can save the format file and experiment with it, editing it and rerunning BCP.
To save time while exporting and debugging, use the -F and -L flags, like "-F 1 -L 10" -- this exports only the first 10 rows.
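For example, a quick character-mode test export of just the first 10 rows with a comma terminator might look like the following (the database, view, user and server names are placeholders, and exact flag behavior may vary between bcp versions):

# mydb..myview, myuser and myserver are placeholders; -c forces character
# mode, -t sets the field terminator, -F/-L limit the row range for testing.
bcp mydb..myview out sample.csv -c -t"," -F 1 -L 10 -U myuser -S myserver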