How do I write results of a MySQL query to a text file inside a loop?

I have a large database of individuals (say 1,000 individuals, with 5,000 records per individual). I'd like to write a couple of fields for each individual (in this example, let's say lat and long) to a text file (preferably comma-separated).
The algorithm would look like this:
foo=select distinct (id) from <table-name>;
for each id in foo
{
smaller_result= select lat,long from <table-name> where id=$id;
write smaller_result to a text file with unique name (e.g. id.txt);
}
I can easily code this up in PHP (which I frequently use to interface with MySQL when I cannot run a SQL query directly from the command line). However, in this case I need to share the code with a collaborator and have him run it, and he does not have PHP and cannot install it. Also, the database is quite large and cannot easily be uploaded online (which would let me run the query through the web). So how else would I accomplish this?
a) Can this algorithm be written as a sql query that can be executed in a command line?
b) If not, can this be written in python such that my collaborator would just run a .py file?
We are both on OSX (Lion) and can access mysql and python from our shell / terminal.

You can output the result of any select query with SELECT ... INTO OUTFILE. This will generate a text file with the output (on the server).
So you need to create a single SELECT query which generates the file. I think in your case it can easily be done with a subquery; if not, you can create a stored procedure with a cursor that loops over your 'foo' result set and appends it to a temp table, and then use SELECT ... INTO OUTFILE on that table.
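A rough sketch of that stored-procedure route, using hypothetical names (a table coords with columns id, lat and long; long needs backticks because it is a reserved word). It loops over the distinct ids and writes one comma-separated file per id; the OUTFILE name must be a literal, so each statement is built dynamically and prepared. Note the files are written on the server, the account needs the FILE privilege, and on recent servers the target directory must satisfy secure_file_priv:
DELIMITER //
CREATE PROCEDURE export_per_id()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE v_id INT;
  DECLARE cur CURSOR FOR SELECT DISTINCT id FROM coords;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
  OPEN cur;
  id_loop: LOOP
    FETCH cur INTO v_id;
    IF done THEN LEAVE id_loop; END IF;
    -- Build each SELECT ... INTO OUTFILE dynamically so the file name can vary.
    -- INTO OUTFILE refuses to overwrite, so the target files must not exist yet.
    SET @sql = CONCAT(
      'SELECT lat, `long` FROM coords WHERE id = ', v_id,
      " INTO OUTFILE '/tmp/", v_id, ".txt' FIELDS TERMINATED BY ','");
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;
CALL export_per_id();
Since this is plain SQL, your collaborator could run it from the terminal with the mysql client, no PHP needed.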


Export blob column from MySQL DB to disk and replace it with the new file name

I'm working on a legacy database, and unfortunately its performance is very slow. A simple SELECT query can take up to 10 seconds on tables with fewer than 10,000 records.
So I tried to investigate the problem and found out that deleting the column they used to store files (mostly videos and images) fixes the problem and improves performance a lot.
Along with adding proper indexes, I was able to run the exact same query that used to take 10-15 seconds in under 1 second.
So my question is: is there any existing tool or script I can use to export those blobs (videos) from the database, save them to disk, and update each row with the new file name/path on the file system?
If not, is there any proper way to optimize the database so that those blobs would not impact performance that much?
Note: some of the clients consuming this database use high-level ORMs, so we don't have much control over the queries the ORM uses to fetch rows and their relations. So I cannot optimize the queries directly.
SELECT column FROM table1 WHERE id = 1 INTO DUMPFILE 'name.png';
How about this way?
There is also INTO OUTFILE instead of INTO DUMPFILE.
From the manual, 13.2.10.1 SELECT ... INTO Statement: the SELECT ... INTO form of SELECT enables a query result to be stored in variables or written to a file:
SELECT ... INTO var_list selects column values and stores them into variables.
SELECT ... INTO OUTFILE writes the selected rows to a file. Column and line terminators can be specified to produce a specific output format.
SELECT ... INTO DUMPFILE writes a single row to a file without any formatting.
Link: https://dev.mysql.com/doc/refman/8.0/en/select-into.html
Link: https://dev.mysql.com/doc/refman/8.0/en/select.html
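I'm not aware of a ready-made tool, but the loop itself can be done in a stored procedure: dump each blob with INTO DUMPFILE (the file name must be built dynamically, hence the prepared statement) and record the new path in the row. A rough sketch against a hypothetical table media(id INT, video LONGBLOB, file_path VARCHAR(255)); the files land on the server, subject to the FILE privilege and secure_file_priv:
DELIMITER //
CREATE PROCEDURE export_blobs()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE v_id INT;
  DECLARE cur CURSOR FOR SELECT id FROM media WHERE video IS NOT NULL;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
  OPEN cur;
  blob_loop: LOOP
    FETCH cur INTO v_id;
    IF done THEN LEAVE blob_loop; END IF;
    -- DUMPFILE writes the single blob byte-for-byte, with no formatting.
    SET @sql = CONCAT(
      'SELECT video FROM media WHERE id = ', v_id,
      " INTO DUMPFILE '/var/lib/mysql-files/video_", v_id, ".bin'");
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
    -- Record the new location; only clear the blob once the file is verified.
    UPDATE media
       SET file_path = CONCAT('video_', v_id, '.bin'), video = NULL
     WHERE id = v_id;
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;
Once every row is converted you can drop the blob column entirely, which is what gave the speedup in the first place.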

Query to replace a term in all tables of a database

I want to search for http://example.com and replace with https://example.com.
I know I can target a specific table and column with this approach:
UPDATE table_name SET column_name = REPLACE(column_name, 'http://example.com', 'https://example.com');
But how do I run a query which targets all tables/columns: the entire database?
Do a DB dump and open it as a text file. Find and replace. Save and then re-import.
As far as I know, I don't think you can use REPLACE on all tables in one query.
There are two ways to do it. The first is to generate the SQL UPDATE statements via information_schema and execute them as prepared statements. This is more work: you must check each column to see whether a replace even makes sense, so you must skip INTs, ENUMs, etc.
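A rough sketch of that first approach, assuming the schema is called 'mydb' (adjust the name and the search/replace strings). It only generates the UPDATE statements as rows of text, so you can review them before running them; the DATA_TYPE filter is what skips the INTs, ENUMs and the like:
SELECT CONCAT('UPDATE `', TABLE_NAME, '` SET `', COLUMN_NAME,
              '` = REPLACE(`', COLUMN_NAME,
              '`, ''http://example.com'', ''https://example.com'');') AS stmt
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'mydb'
  AND DATA_TYPE IN ('char', 'varchar', 'tinytext', 'text', 'mediumtext', 'longtext');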
The second way is not a real SQL change, but it works: generate a full SQL dump of your database and make the changes in that file with an editor, or on the command line with awk or sed. After this you can import the changed file.

MySQL Workbench - the best way to organize and run frequently used SQL queries during development

I'm a Java dev who uses MySQL Workbench as a database client and IntelliJ IDEA as an IDE. I run SQL queries against the database 5 to 50 times a day.
Is there a convenient way to save and re-run frequently used queries in MySQL Workbench/IntelliJ IDEA so that I can:
avoid retyping a full query that I have already used
smoothly access a list of queries I've already used (e.g. by auto-completion)
If there is no way to do it using Mysql Workbench / IDEA, could you please advise any good tools providing this functionality?
Thanks!
Create Stored Procedures, one per query (or sequence of queries). Give them short names (to avoid needing auto-completion).
For example, to find out how many rows are in table foo (SELECT COUNT(*) FROM foo;).
One-time setup:
DELIMITER //
CREATE PROCEDURE foo_ct()
BEGIN
  SELECT COUNT(*) FROM foo;
END //
DELIMITER ;
Usage:
CALL foo_ct();
You can pass in arguments to make minor variations. Passing in a table name is somewhat complex, but numbers, dates, etc. are practical and probably easy.
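For example, a variant of the procedure above that takes a date parameter (created_at is a hypothetical column name):
DELIMITER //
CREATE PROCEDURE foo_ct_since(IN p_since DATETIME)
BEGIN
  -- Count only the rows newer than the given timestamp.
  SELECT COUNT(*) FROM foo WHERE created_at >= p_since;
END //
DELIMITER ;
CALL foo_ct_since('2023-01-01');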
If you have installed SQLyog for your MySQL, you can use the Favorites menu option to save a query; one click then automatically writes the saved query into the Query Editor.
The previous answers are correct - depending on the version of the Query Browser they are called either Favorites or Snippets - the problem being you can't create sub-folders to group them. And keeping tabs open is an option, but sometimes the browser 'dies' and you're back to square one. So the obvious solution I came up with: create a database table! I have a few 'metadata' fields for descriptions - the project a query is associated with, the problem the query solves - plus the actual query.
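A minimal sketch of such a table (all names hypothetical):
CREATE TABLE query_library (
  id         INT AUTO_INCREMENT PRIMARY KEY,
  project    VARCHAR(100),  -- which project the query belongs to
  problem    VARCHAR(255),  -- what problem the query solves
  query_text TEXT           -- the query itself
);
INSERT INTO query_library (project, problem, query_text)
VALUES ('reporting', 'row count for foo', 'SELECT COUNT(*) FROM foo;');
SELECT query_text FROM query_library WHERE project = 'reporting';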
You could keep your query library in a SQL file and load that when WB opens (it's automatically opened when you restart WB if that file was open on last close). When you want to run a specific query, place the caret in its text and press Ctrl+Enter (Cmd+Enter on Mac) to run only that query. The organization of that SQL file is totally up to you. You have more freedom than any "favorites" solution can give you. You can even have more than one file with grouped statements.
Additionally, MySQL Workbench has a query history (see the Output tab), which is saved to disk, so you can return to a query even months after you wrote it.

Dump MySQL Table(s) as XML using a SQL command (like table_to_xml() or query_to_xml() in PostgreSQL)?

Is there a way to retrieve the result of a query, or to dump a whole table, as an XML fragment that can be retrieved with a simple query? I know there is something like this for PostgreSQL (9.0): table_to_xml() and query_to_xml().
I also know that mysqldump --xml can export XML, but I am looking for something that lets me issue a simple query. The application I'm working on should allow some users to dump a certain table into an XML file on their machine, therefore I need to issue a query and obtain a string or something (is there an XML type in MySQL?).
I need the result to be XML and a Result Set of a query, not a file on server or something.
Producing a SQL script similar to a MySQL dump for a single table takes three queries:
SHOW CREATE TABLE tblname - to generate the CREATE TABLE statement.
DESCRIBE tblname - to retrieve the column names for the INSERT INTO (...) part of the INSERT queries.
SELECT * FROM tblname - to retrieve the values for the VALUES (...) part of the INSERT queries. Each row in the result set corresponds to one INSERT statement, generated in the loop handling the result set (see the sketch below).
If this is to be done from MySQL, it can be wrapped into a stored procedure.
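A rough sketch of the third step, for a hypothetical table t with columns (id, name); QUOTE() takes care of the string escaping:
-- Generate one INSERT statement per row of t.
SELECT CONCAT('INSERT INTO t (id, name) VALUES (',
              id, ', ', QUOTE(name), ');') AS stmt
FROM t;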
Found this in a question here at Stack Overflow, as linked in the comments. It proposes manually building XML in a query, like:
SELECT concat("<this-is-xml>", field1, "</this-is-xml>") FROM ...
Of course, XML character escaping and so on has to be done manually.
There seems to be no native way to directly get the result of a query as XML.
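The escaping can be folded into the CONCAT; a sketch for a hypothetical column field1, handling the three characters XML is strictest about (the & replacement must come first so it does not re-escape the other two):
SELECT CONCAT('<this-is-xml>',
              REPLACE(REPLACE(REPLACE(field1, '&', '&amp;'),
                              '<', '&lt;'),
                      '>', '&gt;'),
              '</this-is-xml>')
FROM some_table;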
There is also a library (lib_mysqludf_xql) which provides XML functionality for MySQL.
INTO OUTFILE will dump the results to an XML file, so you could then send that to a client.

MySQL BinLog Statement Retrieval

I have seven 1 GB MySQL binlog files that I have to use to retrieve some "lost" information. I only need to get certain INSERT statements from the log (e.g. where the statement starts with "INSERT INTO table SET field1="). If I just run mysqlbinlog (even per database and using --short-form), I get a text file that is several hundred megabytes, which makes it almost impossible to then parse with any other program.
Is there a way to just retrieve certain sql statements from the log? I don't need any of the ancillary information (timestamps, autoincrement #s, etc.). I just need a list of sql statements that match a certain string. Ideally, I would like to have a text file that just lists those sql statements, such as:
INSERT INTO table SET field1='a';
INSERT INTO table SET field1='tommy';
INSERT INTO table SET field1='2';
I could get that by running mysqlbinlog to a text file and then parsing the results based upon a string, but the text file is way too big. It just times out any script I run and even makes it impossible to open in a text editor.
Thanks for your help in advance.
I never received an answer, but I will tell you what I did to get by.
1. Ran mysqlbinlog to a text file
2. Created a PHP script that uses fgets to read each line of the log
3. While looping through each line, the script parses it using the stristr function
4. If the line matches the string I am looking for, it logs the line to a file
It takes a while to run mysqlbinlog and the PHP script, but it no longer times out. I originally used fread in PHP, but that reads the entire file into memory and caused the script to crash on large (1 GB) log files. Now it takes several minutes to run (I also set the max_execution_time variable to a larger value), but it works like a charm. fgets gets one line at a time, so it doesn't take up nearly as much memory.