Text being pasted in the wrong order over SSH to MySQL - mysql

I'm not sure if this is a MySQL problem or an SSH problem. However, the issue does not happen when using another terminal program such as nano, or a bash script.
I have a mysqldump file containing a bunch of lines that look like
INSERT INTO `issues` VALUES (10,'2010-06-21 16:16:08','2010-08-31 11:4...
with a lot of different entries (the lines are really long). I'm trying to paste it into my terminal to insert this data into a remote database, but when I paste, it seems to do it in pieces, inserting the chunks out of order. Here is an example post-mangling:
','May 2011',8,45);','April 2011',7,45),(21,'2011-05-09 09:31:28','2011-05-09 09:31:28','2011-05-12 08:48:16','','March 2011',6,45),(20,'2011-04-07 13:45:14','2011-04-07 13:45:14','2011-04-13 16:00:28','','February 2011',5,45),(19,'2011-03-03 13:36:26','2011-03-03 13:36:26','2011-03-10 08:34:19','','December 2010',4,45),(18,'2011-02-01 13:43:16','2011-02-01 13:43:16','2011-02-15 11:22:09','','November 2010',3,45),(17,'2010-12-07 12:04:53','2010-12-07 12:04:53','2010-12-09 10:00:02','','October 2010',2,45),(16,'2010-11-05 13:04:06','2010-11-05 13:04:06','2010-11-15 11:29:29','','September 2010',1,45),(14,'2010-10-05 08:58:27','2010-10-11 13:28:54','2010-10-12 07:21:20','INSERT INTO `issues` VALUES (10,'2010-06-21 16:16:08','2010-08-31 11:44:18','2010-10-11 12:33:46','\0','June 2010',0,45),(11,'2010-06-24 15:13:00','2010-06-24 15:13:22',NULL,'\0','May 2010',0,45),(12,'2010-08-25 12:47:42','2010-08-31 11:44:02','2010-10-11 12:33:59','
Does anyone know what is causing this issue, and a workaround? I've tried assuming it's just a display issue and running the insert anyway, but that is a no-go; it seems it really is out of order. I'm using Mac OS X 10.6.6 (Snow Leopard) and Apple's Terminal.
Thanks

I had a similar issue when pasting into both mysql and bash over ssh. The culprit was Unicode characters that got interpreted as control characters.
Symptoms: When pasting the text, the insert position would move back to column 1 several times, seemingly at random, without starting a new line, and then new text would be inserted or already-pasted text would be overwritten, resulting in a garbled mess.
Cause: It turned out that the text to paste contained Unicode dash-like characters where plain hyphens should be, and bash and mysql interpreted these as a kind of control character that moves the insert column position.
So make sure your text does not contain unwanted characters before pasting.
Test: A good test for me has been to open vi (via ssh or not), go into insert mode (press "i"), and paste (press Shift+Insert). vi will display the offending Unicode characters broken up into their byte sequences.
Example: My text to paste started like this:
mysqldump –-opt –-no-create-db
Pasting this into bash or mysql via ssh resulted in:
--no-create-dbopt mysqldump
Pasting this into vi made the bad dash-like characters visible, resulting in:
mysqldump �~#~S-opt �~#~S-no-create-db
This means the first dash of every option was the wrong character. I corrected them and then everything worked fine:
mysqldump --opt --no-create-db
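As a pre-paste check, something along these lines can flag and fix stray non-ASCII characters in the file before it ever reaches the remote shell (a sketch, assuming GNU grep and sed under a UTF-8 locale; dump.sql is a placeholder name):
grep -nP '[^\x00-\x7F]' dump.sql | head     # show lines containing non-ASCII bytes
sed -i 's/[–—]/-/g' dump.sql                # normalize en/em dashes back to plain hyphens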

I like to do:
ssh hostname mysqldump database | mysql localdatabase
And just do the whole thing in one command.
You can of course add various options to the dump command to skip table drop and creation or other things you don't need.
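For example, a slightly fuller version might look like this (a sketch; host and database names are placeholders, and credentials are assumed to come from ~/.my.cnf on both ends):
# dump remotely and load locally in one pipeline, with no temp file and no pasting
ssh remotehost "mysqldump --single-transaction remotedb" | mysql localdb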

Related

Extract data from HTML table and put it in a text file with shell

I need a shell script to get a public password for a VPN from a site (which refreshes the password every day, more or less). The password is in an HTML table, on a specific line of the web page's HTML code. Once I've retrieved the password (a word made of 5 characters) I'd like to put it at the end of a simple text file. I need a script like this to automatically update the password in my OpenWrt-based router's OpenVPN client.
This is the webpage I'm talking about, and this is line number 265, where the password is (there are two instances of the password; it doesn't matter which one the script chooses):
<td>1<td>in1.vpnjantit.com<td>53,992,1194,25000<td><a href='http://www.vpnjantit.com/assets/in1.vpnjantit.com.zip'>in1.vpnjantit.com.zip</a><td>vpnjantit.com<td>x3bu7<td>2018-03-31 at 22:00<tr><tr><td>2<td>in2.vpnjantit.com<td>53,443,1194,25000<td><a href='http://www.vpnjantit.com/assets/in2.vpnjantit.com.zip'>in2.vpnjantit.com.zip</a><td>vpnjantit.com<td>x3bu7<td>2018-03-31 at 22:00<tr></table></div>
The file where I want to put the password will be very simple:
vpnjantit.com
passwd
The first line is the username, and it will always be the same: "vpnjantit.com". The second line is the 5-character password. I need the script to first delete the second line of the file and then put the password from the HTML file on the second line (replacing the old password with the new one).
I looked around and tried to do something with a sequence of awk, curl, cat and other commands, but I wasn't able to get the desired result. I really have no idea how to accomplish this.
Thanks a lot in advance for any advice!
I've used nokogiri, though there are other tools.
echo vpnjantit.com > file.txt # first line
curl http://www.vpnjantit.com/free-openvpn-india.html | nokogiri -e 'puts $_.at_css("table > tr > td:nth-child(6)").text' >> file.txt # second line
This would replace the file outright (delete it and create a new one).
Please note that this could break anytime with even minor format changes.
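If nokogiri isn't available (quite possible on an OpenWrt router), a rough shell-only sketch keyed to the sample row quoted above could look like the following; it assumes the password cell always follows the </a><td>vpnjantit.com<td> pattern and will break on any markup change:
#!/bin/sh
# Sketch only: pull the cell that follows "</a><td>vpnjantit.com<td>" (the password
# position in the sample row above), then rewrite the two-line credentials file.
password=$(curl -s http://www.vpnjantit.com/free-openvpn-india.html \
  | grep -o '</a><td>vpnjantit.com<td>[^<]*' \
  | head -n 1 \
  | sed 's/.*<td>//')
printf 'vpnjantit.com\n%s\n' "$password" > file.txt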

What do the -e flag and special characters do in mysql?

I am studying mysql from http://dev.mysql.com/doc/refman/5.7/en/batch-mode.html . In the second paragraph it says:
If you are running mysql under Windows and have some special characters in the file that cause problems, you can do this:
C:\> mysql -e "source batch-file"
What are special characters? If I save a file in notepad, would there be automatic special characters saved in the file? How to know whether they are there or not? Are they hidden?
What would the -e flag do? Where can I find its explanation in mysql documentation?
-e is actually short for --execute, which is probably why you had trouble finding it: http://dev.mysql.com/doc/refman/5.7/en/mysql-command-options.html#option_mysql_execute
Execute the statement and quit. The default output format is like that produced with --batch. See Section 5.2.4, “Using Options on the Command Line”, for some examples. With this option, mysql does not use the history file.
A special character is something that needs to be escaped in an SQL query. You will know when you run into them because mysql will produce errors.
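To illustrate --execute (a quick sketch; the user name and file name are placeholders):
mysql -u root -p -e "SELECT VERSION();"       # run one statement non-interactively, then exit
mysql -u root -p -e "source batch-file.sql"   # the manual's form: source a script file, then exit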

Help with query that changes values right in the db

I recently migrated domains, and in my database I had stored full paths containing the old domain, which are now broken :)
What I need to do is change values in the database table from
http://www.olddomain.com/img/some/path
to
http://www.newdomain/same/dir/structure/as/old/domain
The only caveat is that the photo names at the end of the URL must be preserved. So essentially, I just have to change the host name.
Is that possible to do? If so, how? :)
Try this:
UPDATE table SET column = REPLACE(column,"www.olddomain.com","www.newdomain.com");
ALWAYS make sure you do a backup of your database before running a query that updates many records (as this will).
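If you want to see what would change before committing, a quick read-only preview along these lines can help (a sketch; database, table and column names are placeholders):
mysql mydb -e "SELECT mycolumn, REPLACE(mycolumn,'www.olddomain.com','www.newdomain.com') AS new_value FROM mytable WHERE mycolumn LIKE '%www.olddomain.com%';"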
With a MySQL database, do this on the command line:
1 - Put full DB in a file:
mysqldump -uYOURUSERNAME -pYOURPASSWORD YOURDBNAME > YOURDBNAME.sql
2 - Replace olddomain.com with newdomain.com in the previous DB file:
sed -i 's/olddomain.com/newdomain.com/g' YOURDBNAME.sql
3 - Delete all tables in original database (make sure you have a backup), and update database with replaced domain in all rows of all tables, where applicable:
mysql -uYOURUSERNAME -pYOURPASSWORD YOURDBNAME < YOURDBNAME.sql
This is guaranteed to work. I've used this to update domains on Magento databases (300+ tables) several times.
FYI, sed is a Linux/Unix command-line tool for "filtering and transforming text"; I don't know if there is a Windows version.
PS - If you really need to put slashes (/) in the domain (like if you're replacing www.example.com/sitedir with www.example.com), you should escape the slashes inside the sed string, i.e. instead of using /, use \/. For this example you would do:
sed -i 's/www.example.com\/sitedir/www.example.com/g' YOURDBNAME.sql
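A small variation worth knowing: sed lets you pick a different delimiter after the s, which avoids escaping the slashes entirely. The same replacement could be written as:
sed -i 's|www.example.com/sitedir|www.example.com|g' YOURDBNAME.sql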

Copy the contents of a file to one field in mysql

Is it possible to copy the contents of a file into a field in a MySQL table, either from the shell or at the mysql prompt? I don't want to have to write a script if there is an easier way.
Ideally, I'd like something like:
UPDATE MYTABLE SET MYFIELD=READ_CONTENTS_OF_FILE('myfile.txt') WHERE ID=1234;
Obviously that's not a real command but it illustrates what I'd like to do.
This works from the command line:
echo UPDATE MYTABLE SET MYFIELD=\'`cat myfile.txt`\' WHERE ID=1234 |mysql
But it doesn't preserve new lines and it gets screwed up if the file contains apostrophes.
This is something I've looked into off and on for years now. The issue came up so seldom that I would quickly give up and just copy/paste into a GUI client. It would be a handy trick to have for testing purposes once in a while.
Thanks!
I think you're looking for the MySQL load_file function:
UPDATE MYTABLE SET MYFIELD=LOAD_FILE('myfile.txt') WHERE id=1234;
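One caveat worth noting: LOAD_FILE() reads the file on the server host, so the path should be absolute and readable by mysqld, the connecting account needs the FILE privilege, and secure_file_priv must allow the directory. Roughly (a sketch; database, table and path are placeholders):
mysql mydb -e "UPDATE MYTABLE SET MYFIELD = LOAD_FILE('/tmp/myfile.txt') WHERE ID = 1234;"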

Manipulating giant MySQL dump files

What's the easiest way to get the data for a single table, delete a single table or break up the whole dump file into files each containing individual tables? I usually end up doing a lot of vi regex munging, but I bet there are easier ways to do these things with awk/perl, etc. The first page of Google results brings back a bunch of non-working perl scripts.
When I need to pull a single table from an sql dump, I use a combination of grep, head and tail.
Eg:
grep -n "CREATE TABLE" dump.sql
This then gives you the line numbers for each one, so if your table is on line 200 and the one after is on line 269, I do:
head -n 268 dump.sql > tophalf.sql
tail -n 69 tophalf.sql > yourtable.sql
I would imagine you could extend those principles into a script that splits the whole thing into one file per table; a rough sketch follows after the listing below. Anyone want a go at doing it here?
Another bit that might help start a bash loop going:
grep -n "CREATE TABLE " dump.sql | tr ':`(' ' ' | awk '{print $1, $4}'
That gives you a nice list of line numbers and table names like:
200 FooTable
269 BarTable
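Building on that, here is the rough per-table split sketch promised above (assumptions: gawk, standard mysqldump formatting with backtick-quoted table names, and that anything before the first CREATE TABLE can be discarded; dump.sql and the output names are placeholders):
# start a new output file at every CREATE TABLE, then copy each line into the current file
awk '/^CREATE TABLE/ {
         name = $3                 # e.g. `FooTable`
         gsub(/[`(]/, "", name)    # strip the backticks (and a stray paren, if any)
         out = name ".sql"
     }
     out { print > out }' dump.sql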
Save yourself a lot of hassle and use mysqldump -T if you can.
From the documentation:
--tab=path, -T path
Produce tab-separated data files. For each dumped table, mysqldump creates a tbl_name.sql file that contains the CREATE TABLE statement that creates the table, and a tbl_name.txt file that contains its data. The option value is the directory in which to write the files.
By default, the .txt data files are formatted using tab characters between column values and a newline at the end of each line. The format can be specified explicitly using the --fields-xxx and --lines-terminated-by options.
Note: This option should be used only when mysqldump is run on the same machine as the mysqld server. You must have the FILE privilege, and the server must have permission to write files in the directory that you specify.
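A minimal usage sketch (assuming you run it on the database host itself and that mysqld can write to the target directory; names are placeholders):
mkdir /tmp/dumpdir && chmod 777 /tmp/dumpdir   # mysqld itself writes the .txt files here, so it must be writable by the server (tighten permissions as needed)
mysqldump --tab=/tmp/dumpdir mydatabase        # produces one .sql and one .txt file per table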
This shell script will grab the tables you want and pass them to splitted.sql.
It’s capable of understanding regular expressions as I’ve added a sed -r option.
Also MyDumpSplitter can split the dump into individual table dumps.
Maatkit seems quite appropriate for this with mk-parallel-dump and mk-parallel-restore.
I am a bit late on this one, but in case it helps anyone: I had to split a huge SQL dump file in order to import the data into another MySQL server.
What I ended up doing was splitting the dump file with the split command:
split -l 1000 import.sql splited_file
The above will split the SQL file into chunks of 1000 lines each.
Hope this helps someone
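If you would rather have the chunks start at table boundaries instead of arbitrary line counts, GNU csplit can cut at the comment mysqldump writes before each table (a sketch, assuming default mysqldump output and GNU csplit for the '{*}' repeat):
csplit -s -f part_ import.sql '/^-- Table structure for table/' '{*}'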