PowerShell calling MySQL from the command line adding odd characters - mysql

I have a real odd one...
I'm outputting a table from a local MySQL database into a text file that has INSERT statements for each record (it's part of a much larger script and is the most efficient way to load data into an Aurora table).
All is working well except for one bugbear: the first INSERT adds odd characters in the very first field, but no others.
The command that generates the INSERT statements:
.\mysqldump.exe -h localhost -u $localuser --password=$localpass --default-character-set=utf8 --extended-insert=FALSE --add-drop-table abcdatabase exporttable | Out-File $dataoutfile
The first INSERT statement says "INSERT INTO exporttable VALUES ('ï»¿13150',..."
Any idea what those first three characters are and, more importantly, how I get rid of them?
Thanks in advance

OK, so I solved it, thanks to a comment by JBurace near the bottom of this post. I added -Encoding default to the end of the Out-File statement and the problem went away. Bizarre that it was only one part of one field, but hey!
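For the record, the fixed pipeline looks like this. The stray characters are a Unicode byte-order mark (BOM) that Out-File writes at the start of the file; -Encoding default uses the system ANSI code page, which has no BOM:

.\mysqldump.exe -h localhost -u $localuser --password=$localpass --default-character-set=utf8 --extended-insert=FALSE --add-drop-table abcdatabase exporttable | Out-File $dataoutfile -Encoding default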

Related

bash concat strings in single variable using while read

In the following script, I try to get all the table names from a MySQL database, and I expect all the table names to be printed out, but no matter what I do or which method I use, it just doesn't work. The printed string, I suppose, is the table names overlapped on each other:
watchdoglescabularyrchygsey
What's wrong with this script?
mysql -Nse 'show tables' DATABASE |
{
while read table
do
alltables="$alltables $table"
done
echo $alltables;
}
Could it be that mysql separates the table names with \r\n instead of \n? read would then see First Table\r, Second Table\r, and so on. In most Linux terminals \r moves the cursor back to the start of the current line: ABC\r_ will be printed as _BC.
Checking for \r
Execute mysql -Nse 'show tables' DATABASE | sed 's:\r:\\r:' and look at the output. The control character \r will be printed as the literal string \r.
Deleting the \r
Insert a ... | tr -d '\r' | ... between the commands.
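Putting it together, a minimal sketch of the corrected script (DATABASE stands in for the real database name, as in the question):

# strip carriage returns before read sees them
mysql -Nse 'show tables' DATABASE | tr -d '\r' |
{
while read table
do
alltables="$alltables $table"
done
echo "$alltables"
}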

Need to add a delimiter in MySQL output from SHELL

First off: because my MySQL user does not have FILE rights on the server, I have to pipe my SELECT statement's output to a file from the shell, instead of doing it directly in MySQL with INTO OUTFILE and FIELDS TERMINATED BY '|', which I'm guessing would solve all my problems.
So I have the following line to grab my fields:
echo "select id, UNIX_TIMESTAMP(time), company from database.table_name" | mysql -h database.mysql.host.com -u username -ppassword user > /root/sql/output.txt
This outputs the following 3 columns:
63 1414574321 person one
50 1225271921 Another person
8 1225271921 Company with many names
10 1414574567 Person with Company
I then use that data in other scripts to do some tasks.
My issue is that some columns, the third one here ('company') being an example, have spaces in their data, which throws off my while loops later.
I would like to add a delimiter to my output so it looks like this instead:
63|1414574321|person one
50|1225271921|Another person
8|1225271921|Company with many names
10|1414574567|Person with Company
and that way I could hopefully manipulate the data in blocks using awk -F'|' and IFS='|' later.
There are many, many more columns with variable lengths and numbers of words per column to be added once I get this working, so I cannot use a method that relies on position to add the delimiter.
I feel the delimiter needs to be set when the data is dumped in the first place.
I've tried things like:
echo "select (id, + '|' + UNIX_TIMESTAMP(time), + '|' + company) from database.table_name" | mysql -h database.mysql.host.com -u username -ppassword user > /root/sql/output.txt
without any luck; it just adds the characters to the header of the output file.
Does anyone out there see a solution to what I could do?
In case anyone wonders, I'm dumping data from 2 databases, comparing timestamps and writing back the latest data to both databases.
You could use the concat_ws function to get one concatenated string per row:
select concat_ws( '|', id, UNIX_TIMESTAMP(time) , company ) from database.table_name
Edit: Missing comma added, sorry!
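Dropped into the original pipeline, that would look something like this (host, credentials and output path copied from the question; the -N flag, which suppresses the column-name header row, is my addition):

echo "select concat_ws('|', id, UNIX_TIMESTAMP(time), company) from database.table_name" | mysql -N -h database.mysql.host.com -u username -ppassword user > /root/sql/output.txt

and reading it back later with the delimiter:

while IFS='|' read -r id ts company
do
echo "id=$id time=$ts company=$company"
done < /root/sql/output.txt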

Text being pasted in the wrong order over SSH to Mysql

I'm not sure if this is a MySQL problem or an SSH problem; however, the issue does not happen when using another terminal program such as nano, or in a bash script.
I have a mysqldump file containing a bunch of lines that look like
INSERT INTO `issues` VALUES (10,'2010-06-21 16:16:08','2010-08-31 11:4...
with a lot of different entries (the lines are really long). I'm trying to paste it into my terminal to insert this data into a remote database, but when I paste, it seems to do it in pieces, inserting the chunks out of order. Here is an example post-mangling:
','May 2011',8,45);','April 2011',7,45),(21,'2011-05-09 09:31:28','2011-05-09 09:31:28','2011-05-12 08:48:16','','March 2011',6,45),(20,'2011-04-07 13:45:14','2011-04-07 13:45:14','2011-04-13 16:00:28','','February 2011',5,45),(19,'2011-03-03 13:36:26','2011-03-03 13:36:26','2011-03-10 08:34:19','','December 2010',4,45),(18,'2011-02-01 13:43:16','2011-02-01 13:43:16','2011-02-15 11:22:09','','November 2010',3,45),(17,'2010-12-07 12:04:53','2010-12-07 12:04:53','2010-12-09 10:00:02','','October 2010',2,45),(16,'2010-11-05 13:04:06','2010-11-05 13:04:06','2010-11-15 11:29:29','','September 2010',1,45),(14,'2010-10-05 08:58:27','2010-10-11 13:28:54','2010-10-12 07:21:20','INSERT INTO `issues` VALUES (10,'2010-06-21 16:16:08','2010-08-31 11:44:18','2010-10-11 12:33:46','\0','June 2010',0,45),(11,'2010-06-24 15:13:00','2010-06-24 15:13:22',NULL,'\0','May 2010',0,45),(12,'2010-08-25 12:47:42','2010-08-31 11:44:02','2010-10-11 12:33:59','
Does anyone know what is causing this issue, and a workaround? I've tried assuming it's a display issue and inserting it anyway, but that is a no-go; it seems it really is out of order. I'm using Mac OS X 10.6.6 (Snow Leopard) and Apple's Terminal.
Thanks
Had a similar issue with pasting to both mysql and bash over ssh. The culprit was Unicode characters which got interpreted as control characters.
Symptoms: When pasting the text, the insert position would move back to pos 1 several times, seemingly randomly, without starting a new line, and then some new text would be inserted or some already pasted text would be overwritten, resulting in garbled mess.
Cause: It turned out that the text to paste had some Unicode dash-like characters where plain dashes should be, and these were interpreted by bash and mysql as control characters that move the insert column position.
So make sure your text does not contain unwanted characters before pasting.
Test: A good test for me has been to open vi (over ssh or not), go into insert mode (press "i") and paste (press Shift+Insert). It will display non-ASCII characters broken up into their raw byte sequences.
Example: My text to paste started like this:
mysqldump –-opt –-no-create-db
Pasting this into bash or mysql via ssh resulted in:
--no-create-dbopt mysqldump
Pasting this into vi made the bad dash-like characters visible, resulting in:
mysqldump â~#~S-opt â~#~S-no-create-db
That means the first dash of every option was the wrong character. I corrected them and now everything works fine:
mysqldump --opt --no-create-db
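A quick way to hunt such characters down from the shell (a sketch assuming GNU grep and sed, and that the offender is the UTF-8 en dash, bytes e2 80 93):

# list lines containing the en dash
LC_ALL=C grep -n $'\xe2\x80\x93' dump.sql
# replace each en dash with a plain hyphen, in place
sed -i 's/\xe2\x80\x93/-/g' dump.sql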
I like to do:
ssh hostname mysqldump database | mysql localdatabase
And just do the whole thing in one command.
You can of course add various options to the dump command to skip table drop and creation or other things you don't need.
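For instance, to skip the DROP TABLE and CREATE TABLE statements (the exact flags here are just an illustration):

ssh hostname "mysqldump --no-create-info --skip-add-drop-table database" | mysql localdatabase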

mysqlimport - issue with spaces in table name

Dealing with some seriously incompetent database design here. Moving an app from MS Access to MySQL, and for the moment it is important to preserve table names. However, the Access db creator used spaces in his table names...
I tried doing this import with soft quotes, hard quotes, backticks, and no quotes, but all give:
"check the manual that corresponds to your MySQL server version for the right syntax to use near 'Citation Table' at line 1, when using table: Chain Citation Table"
I saw that you can escape spaces in some commands, e.g. rm Chain\ Citation\ Table.txt, but I get the same error from that.
Here is an example:
mysqlimport --host=mysql.myhost.com --user=dbuser -p \
--local --delete \
--fields-optionally-enclosed-by='|' \
--fields-terminated-by=';' \
--lines-terminated-by='\n' \
dbname "Chain Citation Table.txt"
What is the right way to handle this messed up situation? Do I have to make a holding table named SomethingWithoutSpaces and import to it and then copy across?
Thanks for any advice.
You could try using this Access to MySQL converter; it has worked well for me in the past.
In the end my solution was to split the massive delimited file into parts and import via phpMyAdmin. If anyone knows a way to specify a table name with spaces in it in mysqlimport syntax, I would appreciate their help!
// 4.19.2011 Update
Found a better way using SQL, duh.
TRUNCATE TABLE `Chain Citation Table`;
LOAD DATA LOCAL INFILE 'Chain Citation Table.txt' INTO TABLE `Chain Citation Table`;
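Run from the shell, that could look something like this (host and user copied from the mysqlimport attempt; the FIELDS and LINES clauses mirror the original flags, so treat this as a sketch rather than a tested command):

mysql --host=mysql.myhost.com --user=dbuser -p --local-infile=1 dbname <<'SQL'
TRUNCATE TABLE `Chain Citation Table`;
LOAD DATA LOCAL INFILE 'Chain Citation Table.txt' INTO TABLE `Chain Citation Table`
FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '|'
LINES TERMINATED BY '\n';
SQL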

Manipulating giant MySQL dump files

What's the easiest way to get the data for a single table, delete a single table, or break up the whole dump file into files each containing individual tables? I usually end up doing a lot of vi regex munging, but I bet there are easier ways to do these things with awk/perl, etc. The first page of Google results brings back a bunch of non-working Perl scripts.
When I need to pull a single table from an SQL dump, I use a combination of grep, head and tail.
Eg:
grep -n "CREATE TABLE" dump.sql
This then gives you the line numbers for each one, so if your table is on line 200 and the one after is on line 269, I do:
head -n 268 dump.sql > tophalf.sql
tail -n 69 tophalf.sql > yourtable.sql
I would imagine you could extend upon those principles to knock up a script that would split the whole thing down into one file per table.
Anyone want a go doing it here?
Another bit that might help start a bash loop going:
grep -n "CREATE TABLE " dump.sql | tr ':`(' ' ' | awk '{print $1, $4}'
That gives you a nice list of line numbers and table names like:
200 FooTable
269 BarTable
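Taking up that invitation, here is a minimal sketch that turns the line-number listing above into one file per table (it assumes each table's section runs from its CREATE TABLE line to the line before the next one, with the last table running to the end of the file):

#!/bin/bash
# Build the line-number / table-name listing shown above.
grep -n "CREATE TABLE " dump.sql | tr ':`(' ' ' | awk '{print $1, $4}' > tables.txt
total=$(wc -l < dump.sql)
while read start name
do
# End at the line before the next CREATE TABLE, or at end of file.
next=$(awk -v s="$start" '$1+0 > s+0 { print $1; exit }' tables.txt)
end=$(( ${next:-$((total + 1))} - 1 ))
sed -n "${start},${end}p" dump.sql > "${name}.sql"
done < tables.txt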
Save yourself a lot of hassle and use mysqldump -T if you can.
From the documentation:
--tab=path, -T path
Produce tab-separated data files. For each dumped table, mysqldump
creates a tbl_name.sql file that contains the CREATE TABLE statement
that creates the table, and a tbl_name.txt file that contains its
data. The option value is the directory in which to write the files.
By default, the .txt data files are formatted using tab characters
between column values and a newline at the end of each line. The
format can be specified explicitly using the --fields-xxx and
--lines-terminated-by options.
Note This option should be used only when mysqldump is run on the
same machine as the mysqld server. You must have the FILE privilege,
and the server must have permission to write files in the directory
that you specify.
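For example (the directory is illustrative; on recent servers the secure_file_priv setting also restricts where files may be written):

mysqldump --tab=/var/lib/mysql-files --user=root -p dbname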
This shell script will grab the tables you want and pass them to splitted.sql.
It’s capable of understanding regular expressions as I’ve added a sed -r option.
Also MyDumpSplitter can split the dump into individual table dumps.
Maatkit seems quite appropriate for this with mk-parallel-dump and mk-parallel-restore.
I am a bit late on this one, but if it can help anyone: I had to split a huge SQL dump file in order to import the data into another MySQL server.
What I ended up doing was splitting the dump file using the split command:
split -l 1000 import.sql splited_file
The above will split the SQL file every 1000 lines.
Hope this helps someone