Is it possible to query PostgreSQL in order to get a correct CSV line? For instance, select concat(a, ',', b) from t, but with correctly escaped commas and quotes.
A couple of options.
Using psql
select * from some_table \g (format=csv) output.csv
This will create a CSV file named output.csv.
Alternatively, use \copy:
\copy cell_per to 'output.csv' WITH (format csv, header, delimiter '|');
The above allows you to use the options explained in the COPY documentation to do things like change the delimiter, quoting, etc.
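For example, to force quoting of every non-NULL value (a sketch reusing the table name from above; force_quote is a documented COPY option):
\copy cell_per to 'output.csv' WITH (format csv, header, force_quote *)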
You can also use COPY directly as a query. In that case it is important to note that COPY runs as the server user and can only write files to directories that user has permissions on. The workaround is to send the output to STDOUT and capture it on the client side. For instance, the Python driver psycopg2 has copy methods for this.
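A minimal sketch of the STDOUT approach from the shell, assuming a database mydb and table t (both placeholders); psql forwards the COPY data to its own standard output:
psql -d mydb -c "COPY (SELECT a, b FROM t) TO STDOUT WITH (FORMAT csv, HEADER)" > output.csv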
I have hundreds of SQL files, and I want to restore each one into a database with a different name.
I looked around for a solution, but what I found was something like concatenating all the files into one SQL file using cat and then restoring from the concatenated file.
But what I want is to restore each file to a different database, so I think concatenation is not suitable for my case.
Here's one solution: interleave USE statements with your SQL files, so you change the default database before each database's content. Gather the whole collection together and then pipe it to the input of the mysql client.
Example using bash syntax:
(
echo "USE database1;"
cat file1.sql
echo "USE database2;"
cat file2.sql
...
) | mysql
Another solution is to run the mysql client once for each file, and specify the database name as the argument:
mysql database1 < file1.sql
mysql database2 < file2.sql
...
Re your comment:
You can write a loop in bash too.
for file in *.sql
do
db=...
mysql "$db" < "$file"
done
The tricky part above is the "..." — deciding which db goes with each input SQL file. You haven't described any way to match them, so I don't know what you'd have to do to figure that out. But if you can make that inference somehow from the filename, then you can do this without having to type every file.
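For instance, here is a minimal sketch assuming each dump file is named after its target database (file1.sql goes into database file1; that naming scheme is purely an assumption):
for file in *.sql
do
    db="${file%.sql}"   # strip the .sql suffix to get the database name
    mysql "$db" < "$file"
done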
I created a shell script that builds a string containing a table-creation statement for DB2. For example:
string="db2 \"CREATE TABLE foo (.........)\""
Now my script connects to the database and feeds the string to db2, which creates the table. Before the shell inputs the string, I enabled on db2 the command
db2 update command options using z on test-database.txt
so that all of the output is saved to a text file.
However, my problem is that I want that string to show up in the output file created by db2, just as it does when you type the CREATE TABLE interactively in db2, but it never shows in the output file. Instead, test-database.txt only shows the result of whether the table was successfully created or not, e.g.
The SQL command completed successfully.
Is there a way to make the output file show the creation of the table? Thanks in advance.
You are talking about the options for the db2 command line processor (db2clp), of which there are many.
If I understood correctly, you are writing a script (a bash script, I think) and you want to retrieve the command output. For this, you have two options:
Write the command output into a file, and then read the file.
Redirect the command output to a variable.
The first option is the easier one. It uses the z option, which writes the whole output to a file. (You could achieve the same thing by printing out what you want and redirecting the output to a file yourself.)
db2 -tf myfile.sql -z /tmp/output
VAR=$(cat /tmp/output)
The second option is a little tricky, because the command substitution creates another shell, in which you then have to reload the db2 profile. It uses the v option, which echoes the executed statement to standard output, and hopefully that output is what you want to have.
VAR=$(. ~db2inst1/sqllib/db2profile ; db2 -tvf myfile.sql)
Finally, you just need to process the content of VAR, via awk, sed, grep, etc.
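For instance, a small hypothetical post-processing step (the pattern is only an illustration, matching the success message shown above):
echo "$VAR" | grep -c "successfully"   # count statements that completed successfully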
For more information: http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.admin.cmd.doc/doc/r0010410.html
I'm not sure if this is a MySQL problem or an SSH problem. However, the issue does not happen when using another terminal program such as nano or a bash script.
I have a mysqldump file containing a bunch of lines that look like
INSERT INTO `issues` VALUES (10,'2010-06-21 16:16:08','2010-08-31 11:4...
with a lot of different entries (the lines are really long). I'm trying to paste it into my terminal to insert this data into a remote database, but when I paste, it seems to do it in pieces, inserting the chunks out of order. Here is an example post-mangling:
','May 2011',8,45);','April 2011',7,45),(21,'2011-05-09 09:31:28','2011-05-09 09:31:28','2011-05-12 08:48:16','','March 2011',6,45),(20,'2011-04-07 13:45:14','2011-04-07 13:45:14','2011-04-13 16:00:28','','February 2011',5,45),(19,'2011-03-03 13:36:26','2011-03-03 13:36:26','2011-03-10 08:34:19','','December 2010',4,45),(18,'2011-02-01 13:43:16','2011-02-01 13:43:16','2011-02-15 11:22:09','','November 2010',3,45),(17,'2010-12-07 12:04:53','2010-12-07 12:04:53','2010-12-09 10:00:02','','October 2010',2,45),(16,'2010-11-05 13:04:06','2010-11-05 13:04:06','2010-11-15 11:29:29','','September 2010',1,45),(14,'2010-10-05 08:58:27','2010-10-11 13:28:54','2010-10-12 07:21:20','INSERT INTO `issues` VALUES (10,'2010-06-21 16:16:08','2010-08-31 11:44:18','2010-10-11 12:33:46','\0','June 2010',0,45),(11,'2010-06-24 15:13:00','2010-06-24 15:13:22',NULL,'\0','May 2010',0,45),(12,'2010-08-25 12:47:42','2010-08-31 11:44:02','2010-10-11 12:33:59','
Does anyone know what is causing this issue and a workaround? I've tried assuming it's a display issue and inserting it anyway, but that is a no-go; it seems it really is out of order. I'm using Mac OS X 10.6.6 Snow Leopard and Apple's Terminal.
Thanks
Had a similar issue with pasting to both mysql and bash over ssh. The culprit was Unicode characters which got interpreted as control characters.
Symptoms: When pasting the text, the insert position would move back to position 1 several times, seemingly randomly, without starting a new line, and then some new text would be inserted or some already-pasted text would be overwritten, resulting in a garbled mess.
Cause: It turned out that the text to paste had some Unicode dash-like characters in it where dashes should be, and these were interpreted by bash and mysql as control characters that move the insert column position.
So make sure your text does not contain unwanted characters before pasting.
Test: A good test for me has been to open vi (via ssh or not), go into insert mode (press "i") and paste (press Shift+Insert). It will display Unicode characters broken up into their byte sequences.
Example: My text to paste started like this:
mysqldump –-opt –-no-create-db
Pasting this into bash or mysql via ssh resulted in:
--no-create-dbopt mysqldump
Pasting this into vi made the bad dash-like characters visible, resulting in:
mysqldump �~#~S-opt �~#~S-no-create-db
This means the first dash of every option was the wrong character. I corrected them, and then everything worked fine:
mysqldump --opt --no-create-db
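As an alternative to the vi test, a quick shell-side check (assuming GNU grep with PCRE support) will flag any non-ASCII bytes in the file before you paste it:
grep -nP '[^\x00-\x7F]' dump.sql   # prints each line containing a non-ASCII byte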
I like to do:
ssh hostname mysqldump database | mysql localdatabase
And just do the whole thing in one command.
You can of course add various options to the dump command to skip table drop and creation or other things you don't need.
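For example, to copy only the data and skip the table-creation statements (--no-create-info is a standard mysqldump flag):
ssh hostname mysqldump --no-create-info database | mysql localdatabase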
I recently migrated domains, and in my database I had stored full paths containing the old domain, which are now broken :)
What I need to do is change values in the database table from
http://www.olddomain.com/img/some/path
to
http://www.newdomain/same/dir/structure/as/old/domain
The only caveat is that the photo names at the end of the URL must be preserved. So essentially, I just have to change the host name.
Is that possible to do? If so, how? :)
Try this:
UPDATE table SET column = REPLACE(column,"www.olddomain.com","www.newdomain.com");
ALWAYS make sure you do a backup of your database before running a query that updates many records (as this will).
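For instance, a quick backup from the shell before running the UPDATE (substitute your own credentials and database name):
mysqldump -uYOURUSERNAME -pYOURPASSWORD YOURDBNAME > backup-before-replace.sql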
With a MySQL database, do this on the command line:
1 - Put full DB in a file:
mysqldump -uYOURUSERNAME -pYOURPASSWORD YOURDBNAME > YOURDBNAME.sql
2 - Replace olddomain.com with newdomain.com in the previous DB file:
sed -i 's/olddomain.com/newdomain.com/g' YOURDBNAME.sql
3 - Delete all tables in original database (make sure you have a backup), and update database with replaced domain in all rows of all tables, where applicable:
mysql -uYOURUSERNAME -pYOURPASSWORD YOURDBNAME < YOURDBNAME.sql
This has worked reliably for me; I've used it to update domains on Magento databases (300+ tables) several times.
FYI, sed is a Linux/Unix command-line tool for "filtering and transforming text"; I don't know if there is a Windows version.
PS - If you really need to put slashes (/) in the domain (for example, if you're replacing www.example.com/sitedir with www.example.com), you should escape the slashes inside the sed expression, i.e. instead of using /, use \/. (Alternatively, sed accepts other delimiters, such as s|old|new|g, which avoids the escaping.) For this example you would do:
sed -i 's/www.example.com\/sitedir/www.example.com/g' YOURDBNAME.sql