DB load CSV into multiple tables - mysql

UPDATE: added an example to clarify the format of the data.
Considering a CSV with each line formatted like this:
tbl1.col1,tbl1.col2,tbl1.col3,tbl1.col4,tbl1.col5,[tbl2.col1:tbl2.col2]+
where [tbl2.col1:tbl2.col2]+ means that there could be any number of these pairs repeated
ex:
tbl1.col1,tbl1.col2,tbl1.col3,tbl1.col4,tbl1.col5,tbl2.col1:tbl2.col2,tbl2.col1:tbl2.col2,tbl2.col1:tbl2.col2,tbl2.col1:tbl2.col2,tbl2.col1:tbl2.col2,tbl2.col1:tbl2.col2,tbl2.col1:tbl2.col2,tbl2.col1:tbl2.col2
The tables would relate to each other using the line number as a key, which would have to be created in addition to any of the columns mentioned above.
Is there a way to use MySQL LOAD DATA INFILE to load the data into two separate tables?
If not, what Unix command line tools would be best suited for this?

No, not directly. LOAD DATA can only insert into one table (or one partitioned table).
What you can do is load the data into a staging table, then use INSERT INTO ... SELECT to move the individual columns into the two final tables. You may also need SUBSTRING_INDEX if you're using a different delimiter for tbl2's values. The line number is handled by an auto-increment column in the staging table (the easiest way is to make the auto-increment column last in the staging table definition).
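A minimal sketch of that staging step, assuming illustrative names (a staging table, final table tbl1 with columns col1 through col5, and a line_id auto-increment key); only the first five fields are mapped here, so the trailing tbl2 pairs are ignored by this load and still need splitting, e.g. with the shell commands below:
CREATE TABLE staging (
  col1 VARCHAR(255),
  col2 VARCHAR(255),
  col3 VARCHAR(255),
  col4 VARCHAR(255),
  col5 VARCHAR(255),
  line_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY -- supplies the line-number key
);
LOAD DATA INFILE '/path/to/file.csv' INTO TABLE staging
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
(col1, col2, col3, col4, col5); -- extra pair fields on each line are ignored (with warnings)
INSERT INTO tbl1 (line_id, col1, col2, col3, col4, col5)
SELECT line_id, col1, col2, col3, col4, col5 FROM staging;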
The format is not exactly clear, and this is best done with Perl/PHP/Python, but if you really want to use shell tools:
cut -d , -f 1-5 file | awk -F, '{print NR "," $0}' > table1
cut -d , -f 6- file | sed 's/:/,/g' | \
awk -F, '{i=1; while (i<=NF) {print NR "," $(i) "," $(i+1); i+=2;}}' > table2
This creates the table1 and table2 files with these contents:
1,tbl1.col1,tbl1.col2,tbl1.col3,tbl1.col4,tbl1.col5
2,tbl1.col1,tbl1.col2,tbl1.col3,tbl1.col4,tbl1.col5
3,tbl1.col1,tbl1.col2,tbl1.col3,tbl1.col4,tbl1.col5
and
1,tbl2.col1,tbl2.col2
1,tbl2.col1,tbl2.col2
2,tbl2.col1,tbl2.col2
2,tbl2.col1,tbl2.col2
3,tbl2.col1,tbl2.col2
3,tbl2.col1,tbl2.col2

As you say, the problematic part is the unknown number of [tbl2.col1:tbl2.col2] pairs declared in each line. I would be tempted to solve this with sed: split the one file into two files, one for each table. Then you can use LOAD DATA INFILE to load each file into its corresponding table.
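Either way, once you have one file per table (the table1/table2 files above, or the output of a sed split), the loads themselves are straightforward; a sketch with illustrative table and column names, with the generated line number as the first field:
LOAD DATA INFILE '/path/to/table1' INTO TABLE tbl1
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
(line_id, col1, col2, col3, col4, col5);
LOAD DATA INFILE '/path/to/table2' INTO TABLE tbl2
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
(line_id, col1, col2);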

Related

Cassandra CQLSH COPY FROM CSV: Can I create my own column from others

I often use the cqlsh command COPY...FROM CSV... but I have new needs.
I'd like to add an extra column in my Cassandra table that would be created from two other columns.
Example (CSV file):
1;2
2;4
3;6
would become a table with these values:
my table:
12;1;2
24;2;4
36;3;6
I've used other options, but they're much slower than COPY...FROM CSV.
Do you know if I can do that using COPY...FROM CSV?
You can't do this with the COPY command alone.
If you are using Linux, first dump the CSV to a file with the COPY command, say csv_test.csv:
1;2
2;4
3;6
Then use the command below to combine the first two columns into one:
awk -F ";" '{print $1$2 ";" $0}' csv_test.csv > csv_test_combine.csv
Output file csv_test_combine.csv:
12;1;2
24;2;4
36;3;6
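From there the import is the usual COPY ... FROM with a matching delimiter; a sketch assuming a hypothetical table my_ks.my_table with columns combined, col1 and col2:
COPY my_ks.my_table (combined, col1, col2)
FROM 'csv_test_combine.csv'
WITH DELIMITER = ';' AND HEADER = false;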

Is there a work-around that allows missing data to equal NULL for LOAD DATA INFILE in MySQL?

I have a lot of large CSV files with NULL values stored as ,, (i.e., no entry). Using LOAD DATA INFILE turns these NULL values into zeros, even if I create the table with a column like var DOUBLE DEFAULT NULL. After a lot of searching I found that this is a known "bug", although it may be a feature for some users. Is there a way that I can fix this on the fly without pre-processing? These data are all numeric, so a zero value is very different from NULL.
Or, if I do have to pre-process, which approach is most promising for dealing with tens of CSV files of 100 MB to 1 GB? Thanks!
With minimal preprocessing with sed, you can have your data ready for import.
for csvfile in *.csv
do
sed -i -e 's/^,/\\N,/' -e 's/,$/,\\N/' -e 's/,,/,\\N,/g' -e 's/,,/,\\N,/g' "$csvfile"
done
That should do an in-place edit of your CSV files and replace the blank values with \N. Update the glob, *.csv, to match your needs.
The reason there are two identical regular expressions matching ,, is that I couldn't figure out another way to make it handle consecutive blank values, e.g. ,,,.
"\N" (without quotes) in a data file signifies that the value should be null when the file is imported into MySQL. Can you edit the files to replace ",," with ",\N,"?

Sybase ASE 12.0 CSV Table Export

What I'm trying to do is export a view/table from Sybase ASE 12.0 into a CSV file, but I'm having a lot of difficulty in it.
We want to import it into IDEA or MS-Access. The way that these programs operate is with the text-field encapsulation character and a field separator character, along with new lines being the record separator (without being able to modify this).
Well, using bcp to export it is ultimately fruitless with its built-in options; it doesn't allow you to define a text-field encapsulation character (as far as I can tell). So we tried to create another view, reading from the original view/table, that concatenates the fields containing newlines (the text fields). However, you can't do that without losing data, because it forces the field into a varchar of 8,000 characters/bytes, while our largest field is 16,000 (so there's definitely some truncation).
So, we decided to create columns in the new view that had the text field delimiters. However, that put our column count for the view at 320 -- 70 more than the 250 column limit in ASE 12.0.
bcp can only work on existing tables and views, so what can we do to export this data? We're pretty much open to anything.
If it's only the newline character that is causing problems, can you not just do a replace?
create view new_view as
select field1, field2, replace(text_field_with_char, 'newline char', ' ')
from old_view
You may have to consider exporting as 2 files, importing into your target as 2 tables and then combining them again in the target. If both files have a primary key this is simple.
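Recombining the halves in the target is then a single join; a sketch with hypothetical table and key names:
SELECT a.*, b.long_text_field
FROM first_half AS a
INNER JOIN second_half AS b ON a.pk = b.pk;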
It sounds like bcp is right, but process the output via awk or perl.
But are those tools you have and know? They might be a little overhead for you.
If you're on Windows you can get ActivePerl for free, and it could be quick.
something like:
perl -F, -lane 'print "\"$F[0]\",$F[1],\"$F[2]\",$F[3]";' bcp-output-file
How's that? @F is the array of fields; the text ones you encircle with \" (escaped double quotes).
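For illustration (made-up data), an input line like
100,42,Some product text,2023
would come out as
"100",42,"Some product text",2023
with the first and third fields wrapped in double quotes.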
You can use BCP format files for this.
bcp .... -f XXXX.fmt
BCP can also produce these format files interactively if you don't specify any of the -c, -n, or -f flags. Then you can save the format file and experiment with it, editing it and rerunning BCP.
To save time while exporting and debugging, use the -F and -L flags, like "-F 1 -L 10" -- this exports only the first 10 rows.
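A sketch of that workflow, with placeholder database, view, server and login names:
# run without -c, -n or -f so bcp prompts per column and offers to save a format file
bcp mydb..my_view out my_view.dat -S MYSERVER -U myuser
# reuse (and hand-edit) the saved format file; -F 1 -L 10 keeps debug runs to the first 10 rows
bcp mydb..my_view out my_view.dat -S MYSERVER -U myuser -f my_view.fmt -F 1 -L 10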

Import specific columns from text-file into mysql.. is this possible?

I've just downloaded a bunch of text files from data.gov, and there are fields in the text file that I really don't need.
Is there a way to import columns [1,3] and leave the rest?
I figure I'll import using LOAD DATA INFILE, but I didn't see anything on the MySQL page about how to import only certain columns.
http://dev.mysql.com/doc/refman/5.0/en/load-data.html
The fields are delimited by ^.
Just so I'm clear, if a line in the txt file is
00111^first column entry^second column entry^this would be the 3rd column
I am trying to get my mysql table to contain
first column entry | this would be the 3rd column
You can import just the columns you want with:
LOAD DATA LOCAL INFILE 'yourFile' INTO TABLE table_name
FIELDS TERMINATED BY '^' (@dummy, column1, @dummy, column2);
Read each field you don't need into the @dummy user variable; it's simply discarded.
You could always create a table with a dummy column(s) which you drop after loading the file (assuming you don't have to load the file very often).
Something like this:
LOAD DATA LOCAL INFILE '/path/to/file' INTO TABLE table_name
FIELDS TERMINATED BY '^' (dummy_column1, column1, dummy_column2, column2);
ALTER TABLE table_name DROP dummy_column1;
ALTER TABLE table_name DROP dummy_column2;
Assuming a Unix platform, you could filter the fields upstream.
cut -d^ -f2,4 mygovfile.dat > mytable.txt
That filters out the first and third columns (keeping the second and fourth); then import using your preferred method.
For instance
mysqlimport --local -uxxx -pyyy mydb --fields-terminated-by="^" mytable.txt ....
The two most common ways of dealing with this:
1. Import the data just as it is into a staging table, move what you need into your "real" tables, then truncate the staging table.
2. Use a text utility to snip out just what you need.
My text utility of choice is awk. A minimal awk script--which probably won't work for you without some tweaking--would look like this.
$ awk 'BEGIN { FS="^";OFS=",";}{print $2, $4}' test.dat
first column entry,this would be the 3rd column
What kind of tweaking? It usually involves taking care of embedded commas, single quotes, and double quotes.
This part
BEGIN { FS="^";OFS=",";}{print $2, $4}
is the whole awk program.
awk rocks.

Manipulating giant MySQL dump files

What's the easiest way to get the data for a single table, delete a single table or break up the whole dump file into files each containing individual tables? I usually end up doing a lot of vi regex munging, but I bet there are easier ways to do these things with awk/perl, etc. The first page of Google results brings back a bunch of non-working perl scripts.
When I need to pull a single table from an sql dump, I use a combination of grep, head and tail.
Eg:
grep -n "CREATE TABLE" dump.sql
This then gives you the line numbers for each one, so if your table is on line 200 and the one after is on line 269, I do:
head -n 268 dump.sql > tophalf.sql
tail -n 69 tophalf.sql > yourtable.sql
I would imagine you could extend upon those principles to knock up a script that would split the whole thing down into one file per table.
Anyone want a go doing it here?
Another bit that might help start a bash loop going:
grep -n "CREATE TABLE " dump.sql | tr ':`(' ' ' | awk '{print $1, $4}'
That gives you a nice list of line numbers and table names like:
200 FooTable
269 BarTable
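One way to take that up, as a sketch: it assumes a standard mysqldump file and keys off the "-- Table structure for table" comment that mysqldump writes before each table's definition:
awk -F'`' '
  /^-- Table structure for table/ { out = $2 ".sql" }  # start a new output file at each table header
  out != "" { print > out }                            # copy every following line into the current file
' dump.sql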
Save yourself a lot of hassle and use mysqldump -T if you can.
From the documentation:
--tab=path, -T path
Produce tab-separated data files. For each dumped table, mysqldump
creates a tbl_name.sql file that contains the CREATE TABLE statement
that creates the table, and a tbl_name.txt file that contains its
data. The option value is the directory in which to write the files.
By default, the .txt data files are formatted using tab characters
between column values and a newline at the end of each line. The
format can be specified explicitly using the --fields-xxx and
--lines-terminated-by options.
Note This option should be used only when mysqldump is run on the
same machine as the mysqld server. You must have the FILE privilege,
and the server must have permission to write files in the directory
that you specify.
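A typical invocation, with placeholder credentials and directory (the directory must be writable by the server):
# one tbl_name.sql + tbl_name.txt pair per table, written to /tmp/dumpdir
mysqldump -u myuser -p --tab=/tmp/dumpdir mydb
# reload a single table later
mysql -u myuser -p mydb < /tmp/dumpdir/tbl_name.sql
mysqlimport -u myuser -p --local mydb /tmp/dumpdir/tbl_name.txt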
This shell script will grab the tables you want and pass them to splitted.sql.
It’s capable of understanding regular expressions as I’ve added a sed -r option.
Also MyDumpSplitter can split the dump into individual table dumps.
Maatkit seems quite appropriate for this with mk-parallel-dump and mk-parallel-restore.
I am a bit late on this one, but if it can help anyone: I had to split a huge SQL dump file in order to import the data into another MySQL server.
What I ended up doing was splitting the dump file using the split command:
split -l 1000 import.sql splited_file
The above splits the SQL file every 1,000 lines.
Hope this helps someone.