Export blob column from MySQL DB to disk and replace it with new file name

So I'm working on a legacy database, and unfortunately its performance is very slow. A simple select query can take up to 10 seconds on tables with fewer than 10,000 records.
So I tried to investigate the problem and found out that deleting the column they have used to store files (mostly videos and images) fixes the problem and improves performance a lot.
Along with adding proper indexes, I was able to run the exact same query that used to take 10-15 seconds in under 1 second.
So my question is: is there any existing tool or script I can use to help me export those blobs (videos) from the database, save them to disk, and update each row with the new file name/path on the file system?
If not, is there any proper way to optimize the database so that those blobs don't impact performance that much?
Hint: some of the clients consuming this database use high-level ORMs, so we don't have much control over the queries the ORM uses to fetch rows and their relations. So I cannot optimize the queries directly.

SELECT column FROM table1 WHERE id = 1 INTO DUMPFILE 'name.png';
How about this way?
There is also INTO OUTFILE instead of INTO DUMPFILE.
13.2.10.1 SELECT ... INTO Statement
The SELECT ... INTO form of SELECT enables a query result to be stored in variables or written to a file:
SELECT ... INTO var_list selects column values and stores them into variables.
SELECT ... INTO OUTFILE writes the selected rows to a file. Column and line terminators can be specified to produce a specific output format.
SELECT ... INTO DUMPFILE writes a single row to a file without any formatting.
Link: https://dev.mysql.com/doc/refman/8.0/en/select-into.html
Link: https://dev.mysql.com/doc/refman/8.0/en/select.html
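If you prefer to do it from a client program rather than one SELECT ... INTO DUMPFILE per row (which also writes to the server's filesystem, not the client's), a small script can loop over the rows, write each blob to disk, and record the new path. Here is a minimal sketch, assuming the mysql-connector-python driver, a hypothetical table1 with columns id and file_blob, and a new file_path column you have added; names, credentials, and the target directory are placeholders:

import os
import mysql.connector  # assumption: the mysql-connector-python package; any client library works

conn = mysql.connector.connect(user="me", password="secret", database="legacydb")
cur = conn.cursor(buffered=True)
export_dir = "/var/data/blobs"
os.makedirs(export_dir, exist_ok=True)

# Fetch the ids first so only one blob is held in memory at a time.
cur.execute("SELECT id FROM table1 WHERE file_blob IS NOT NULL")
ids = [row[0] for row in cur.fetchall()]

for row_id in ids:
    cur.execute("SELECT file_blob FROM table1 WHERE id = %s", (row_id,))
    (blob,) = cur.fetchone()
    path = os.path.join(export_dir, f"{row_id}.bin")
    with open(path, "wb") as f:
        f.write(blob)
    # Record the new location and drop the blob from the row.
    cur.execute("UPDATE table1 SET file_path = %s, file_blob = NULL WHERE id = %s", (path, row_id))
    conn.commit()

conn.close()

Once everything is exported and verified, you can drop the blob column entirely (ALTER TABLE table1 DROP COLUMN file_blob) and rebuild the table with OPTIMIZE TABLE to reclaim the space.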

Related

How to create a SQL insert query from a PHP select query

My problem:
I am trying to delete some important rows from multiple tables, around 20 tables. I am afraid that deleting the rows might cause some problems (I am not the creator of this website), so before deleting the rows I am selecting them and writing them to a file. But I write it as an array.
Is there a way to write it as a SQL INSERT statement to a file, so that it would be easy for me to update the database if there is some problem?
For me it would be easier to store the information in a way that allows me to understand the data. Then, IF I need it, I could mutate the data into an INSERT statement.
I strongly encourage you, as a professional software engineer, not to try to solve a problem that you might encounter until you DO encounter it.
If you use phpMyAdmin you can run a query that selects those rows, then click the Export link under 'Query results operations'.
On the next page, select 'Custom - display all possible options' and the SQL format.
Then, further down the page, select 'data' under 'Format specific options'.
Press Go, and you will be prompted to save or open a file containing the appropriate INSERT statements to recreate the data from those rows.

Fetch from large DB doesn't work

I have this MySQL database with over a million records. I am not the owner of the database and don't have write/modify permissions on it. I have a small target DB called MyDB that fetches some fields from the giant view. These are the problems I face working with the huge million-record table in MySQL Workbench:
GiantDB(MySQL database)
--gview(over a million records. No permissions to write)
+id(PK)
+name-String
+marks-Integer
+country-String
myDB(Target SQLite DB)
--mytable
+id(PK)
+name-String
So this is a rough sketch of these two databases. I am not able to query gtable without setting a row limit (1000).
count(*) doesn't work either.
My ultimate goal is to insert the million names from gtable into myTable.
Inserting gView's fields into myTable takes forever and automatically gets killed.
Is there any way of doing this efficiently?
I looked it up and people were talking about indexes and such. I am completely clueless about what to do. A clear explanation would be of great help. Thanks and regards.
(A million rows is a medium sized table. Don't let its size throw you.)
From the comment thread it sounds like you're taking too long to read the result set from MySQL, because it takes time to create your rows in your output database.
Think of this as an export from MySQL followed by an import to sqlite.
The export you can do with MySQL Workbench's export... feature, which itself uses the mysqldump command-line tool.
You then may need to edit the .sql file created by the export command so it's compatible with SQLite, then import it into SQLite. There are multiple tools that can do this.
Or, if you're doing this in a program (a Python program, perhaps), try reading your result set from the MySQL database row by row and writing it to a temporary disk file.
Then disconnect from the MySQL database, open up your SQLite database and the file, read the file line by line, and load it into the database.
Or, if you write the file so it looks like this
1,"Person Name"
2,"Another Name"
3,"More Name"
etc., you'll have a so-called CSV (comma-separated values) file. There are many tools that can load such files into SQLite.
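A minimal sketch of that row-by-row export, assuming the mysql-connector-python driver and the id/name columns of gview from the question (credentials are placeholders):

import csv
import mysql.connector  # assumption: any MySQL client library will do

conn = mysql.connector.connect(user="me", password="secret", database="GiantDB")
cur = conn.cursor()
cur.execute("SELECT id, name FROM gview")

with open("names.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for row in cur:  # iterate the cursor so rows stream instead of piling up in memory
        writer.writerow(row)

conn.close()

The resulting file can then be loaded into SQLite with the sqlite3 shell (.mode csv followed by .import names.csv mytable) or any other CSV importer.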
Another choice: this will be mandatory if your MySQL database has very tight restrictions on what you can do. For example, they may have given you a 30-second query time limit. ASK your database administrator for help exporting this table to your SQLite database. Tell her you need a .csv file.
You should be able to say SELECT MAX(id) FROM bigtable to get the largest ID value. If that doesn't work the table is probably corrupt.
One more suggestion: fetch the rows in batches of, say, ten thousand.
SELECT id, name FROM bigtable LIMIT 0,10000
SELECT id, name FROM bigtable LIMIT 10000,10000
SELECT id, name FROM bigtable LIMIT 20000,10000
SELECT id, name FROM bigtable LIMIT 30000,10000
and so on.
This will be a pain in the neck, but it will get you your data if your dba is uncooperative.
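A rough sketch of that batched copy, assuming the mysql-connector-python driver on the MySQL side and Python's built-in sqlite3 module on the target side (table and column names follow the question; credentials are placeholders):

import sqlite3
import mysql.connector  # assumption: any MySQL client library will do

src = mysql.connector.connect(user="me", password="secret", database="GiantDB")
dst = sqlite3.connect("mydb.sqlite")
src_cur = src.cursor()

batch_size = 10000
offset = 0
while True:
    src_cur.execute("SELECT id, name FROM gview LIMIT %s, %s", (offset, batch_size))
    rows = src_cur.fetchall()
    if not rows:
        break
    dst.executemany("INSERT INTO mytable (id, name) VALUES (?, ?)", rows)
    dst.commit()  # commit per batch so a failure does not throw away all the work
    offset += batch_size

src.close()
dst.close()

If id is an indexed primary key, filtering with WHERE id > last_seen_id instead of a growing OFFSET usually avoids rescanning the skipped rows on every batch.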
I hope this helps.

How to insert data into MySQL directly (not using SQL queries)

I have a MySQL database that I use only for logging. It consists of several simple look-alike MyISAM tables. There is always one local (i.e. located on the same machine) client that only writes data to db and several remote clients that only read data.
What I need is to insert bulks of data from local client as fast as possible.
I have already tried many approaches to make this faster such as reducing amount of inserts by increasing the length of values list, or using LOAD DATA .. INFILE and some others.
Now it seems to me that I've run into the limitation of parsing values from strings into their target data types (it doesn't matter whether this is done when parsing queries or a text file).
So the question is:
does MySQL provide some means of manipulating data directly for local clients (i.e. not using SQL)? Maybe there is some API that allows inserting data by simply passing a pointer.
Once again: I don't want to optimize the SQL code or invoke the same queries in a script as hd1 advised. What I want is to pass a buffer of data directly to the database engine. This means I don't want to invoke SQL at all. Is it possible?
Use mysql's LOAD DATA command:
Write the data to a file in CSV format, then execute this statement from a MySQL client:
LOAD DATA INFILE 'somefile.csv' INTO TABLE mytable
For more info, see the documentation
Other than LOAD DATA INFILE, I'm not sure there is any other way to get data into MySQL without using SQL. If you want to avoid parsing multiple times, you should use a client library that supports parameter binding; the query can be parsed and prepared once and executed multiple times with different data.
However, I highly doubt that parsing the query is your bottleneck. Is this a dedicated database server? What kind of hard disks are being used? Are they fast? Does your RAID controller have battery backed RAM? If so, you can optimize disk writes. Why aren't you using InnoDB instead of MyISAM?
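A minimal sketch of the parameter-binding approach, assuming the mysql-connector-python driver; the table, column names, and credentials are placeholders:

import mysql.connector  # assumption: any client library with prepared-statement support works similarly

conn = mysql.connector.connect(user="logger", password="secret", database="logs")
cur = conn.cursor(prepared=True)  # server-side prepared statement: parsed once, executed many times

stmt = "INSERT INTO log_table (col1, col2) VALUES (?, ?)"
rows = [(1, "A"), (2, "B"), (3, "C")]

for row in rows:
    cur.execute(stmt, row)  # re-executes the already-prepared statement with new values

conn.commit()
conn.close()

Prepared statements also send parameter values over the binary protocol in their native types, which sidesteps part of the text parsing the question is worried about.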
With MySQL you can insert multiple tuples with one insert statement. I don't have an example, because I did this several years ago and don't have the source anymore.
Consider as mentioned to use one INSERT with multiple values:
INSERT INTO table_name (col1, col2) VALUES (1, 'A'), (2, 'B'), (3, 'C'), ( ... )
This means you only have to connect to your database with one bigger query instead of several smaller ones. It's easier to take the entire couch through the door once than to run back and forth with all the disassembled pieces of the couch, opening the door every time. :)
Apart from that, you can also run LOCK TABLES table_name WRITE before the INSERT and UNLOCK TABLES afterwards. That will ensure that nothing else is inserted in the meantime.
INSERT INTO foo (foocol1, foocol2) VALUES ('foocol1val1', 'foocol2val1'), ('foocol1val2', 'foocol2val2') and so on should sort you out. More information and sample code can be found here. If you have further problems, do leave a comment.
UPDATE
If you don't want to use SQL, then try this shell script to do as many inserts as you want, put it in a file, say insertToDb.sh, and get on with your day/evening:
#!/bin/sh
mysql --user=me --password=foo dbname -h foo.example.com -e "insert into tablename (col1, col2) values ('$1', '$2');"
Invoke as sh insertToDb.sh col1value col2value. If I've still misunderstood your question, leave another comment.
After some investigation I found no way of passing data directly to the MySQL database engine (i.e. without it being parsed).
My aim was to speed up communication between the local client and the DB server as much as possible. The idea was that if the client is local, it could use some API functions to pass data to the DB engine, thus avoiding SQL (and the parsing of the values in it). The closest solution was proposed by bobwienholt (using a prepared statement and binding parameters), but LOAD DATA .. INFILE turned out to be a bit faster in my case.
The best way to insert data in MS SQL without using INSERT INTO or UPDATE queries is to use the MS SQL interface directly. Right-click on the table name and select "Edit top 200 rows". Then you will be able to add data to the database by typing into each cell. To enable searching, or to use SELECT or other SQL commands, right-click on any of the 200 rows you have selected, go to Pane, then select SQL, and you can add a SQL command. Check it out. :D
Without using an INSERT statement, use "SQLite Studio" for inserting data into MySQL. It's free and open source, so you can download it and check.

Querying multiple MySQL tables

What is the best thing to approach something like:
select * from (show tables like "T_DATA___") // Invalid
There are over 600 tables with the name T_DATAxy where x and y are letters
Something went seriously wrong with this design. Accessing 600 tables at once means touching as many as 1,800 files on disk. You should have partitioned this data instead.
As far as the question goes, I'm afraid that you will need to use a stored procedure or an external application to build a multiple-UNION query statement. Still, I seem to remember that there's a limit of 32 tables merged in a UNION.
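A sketch of the external-application route, assuming the mysql-connector-python driver and the T_DATAxy naming described above (the LIKE pattern and credentials are placeholders):

import mysql.connector  # assumption: any client library works; credentials are placeholders

conn = mysql.connector.connect(user="me", password="secret", database="mydb")
cur = conn.cursor()

# Find every matching table, then glue the SELECTs together into one UNION ALL.
cur.execute(
    "SELECT table_name FROM information_schema.tables "
    "WHERE table_schema = DATABASE() AND table_name LIKE 'T\\_DATA__'"
)
tables = [row[0] for row in cur.fetchall()]

union_sql = " UNION ALL ".join(f"SELECT * FROM `{t}`" for t in tables)
cur.execute(union_sql)
for row in cur:
    print(row)

conn.close()

If one giant UNION turns out to be too much for the server, the same loop can simply run the SELECTs one table at a time and combine the results client-side.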
You could get the list of tables whose data you want (SHOW TABLES LIKE ...) and then use mysqldump, passing in that list.
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
If you are determined to get it from SQL queries, you could generate the appropriate SQL queries using macros and execute them all at once, e.g. get the list of tables, replace each newline with "; (newline) SELECT * FROM ", and execute all the queries. (The Emacs MySQL mode makes this super easy.)
As the other commenter says, you won't be able to do it in a single query due to limits on the number of tables per query.

MySQL: Dump a database from a SQL query

I'm writing a test framework in which I need to capture a MySQL database state (table structure, contents etc.).
I need this to implement a check that the state was not changed after certain operations. (Autoincrement values may be allowed to change, but I think I'll be able to handle this.)
The dump should preferably be in a human-readable format (preferably an SQL code, like mysqldump does).
I wish to limit my test framework to using a MySQL connection only. To capture the state it should not call mysqldump or access the filesystem (like copying *.frm files or doing SELECT INTO a file; pipes are fine though).
As this would be test-only code, I'm not concerned by the performance. I do need reliable behavior though.
What is the best way to implement the functionality I need?
I guess I should base my code on some of the existing open-source backup tools... Which is the best one to look at?
Update: I'm not specifying the language I write this in (no, it's not PHP), as I don't think I would be able to reuse code as-is; my case is rather special (for practical purposes, let's assume the MySQL C API). The code will run on Linux.
Given your requirements, I think you are left with something like this (pseudo-code + SQL):
tables = mysql_fetch "SHOW TABLES"
foreach table in tables
    create = mysql_fetch "SHOW CREATE TABLE table"
    print create
    rows = mysql_fetch "SELECT * FROM table"
    foreach row in rows
        // or could use VALUES (v1, v2, ...), (v1, v2, ...), ... syntax (maybe preferable for smaller tables)
        insert = "INSERT INTO table (field1, field2, field3, ...) VALUES (value1, value2, value3, ...)"
        print insert
Basically, fetch the list of all tables, then walk each table and generate INSERT statements for each row by hand (most APIs have a simple way to fetch the list of column names; otherwise you can fall back to calling DESCRIBE table_name).
SHOW CREATE TABLE is done for you, but I'm fairly certain there's nothing analogous along the lines of SHOW INSERT ROWS.
And of course, instead of printing the dump you could do whatever you want with it.
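A concrete version of that pseudo-code, in Python for illustration, assuming the mysql-connector-python driver (connection details are placeholders; the quoting here is naive and only meant as a sketch, real data with quotes or binary values needs proper escaping):

import mysql.connector  # assumption: any client library exposing SHOW/SELECT results will do

conn = mysql.connector.connect(user="test", password="secret", database="testdb")
cur = conn.cursor()

cur.execute("SHOW TABLES")
tables = [row[0] for row in cur.fetchall()]

dump = []
for table in tables:
    cur.execute(f"SHOW CREATE TABLE `{table}`")
    dump.append(cur.fetchall()[0][1] + ";")
    cur.execute(f"SELECT * FROM `{table}`")
    columns = [col[0] for col in cur.description]  # column names come with the result set
    for row in cur.fetchall():
        values = ", ".join("NULL" if v is None else repr(str(v)) for v in row)  # naive quoting
        dump.append(f"INSERT INTO `{table}` ({', '.join(columns)}) VALUES ({values});")

print("\n".join(dump))
conn.close()

Comparing two such dumps (one taken before and one after the operation under test) is then a plain string comparison.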
If you don't want to use command-line tools (in other words, you want to do it completely within, say, PHP or whatever language you are using), then why not iterate over the tables using SQL itself? For example, one simple technique for checking the table structure is to capture a snapshot of it with SHOW CREATE TABLE table_name, store the result, and later make the call again and compare the results.
Have you looked at the source code for mysqldump? I am sure most of what you want would be contained within that.
DC
Unless you build the export yourself, I don't think there is a simple solution to export and verify the data. If you do it table by table, LOAD DATA INFILE and SELECT ... INTO OUTFILE may be helpful.
I find it easier to rebuild the database for every test. At least then I know the exact state of the data. Of course, it takes more time to run those tests, but it's a good incentive to abstract away the operations and write fewer tests that depend on the database.
Another alternative I use on some projects, where the design does not allow such a clean division: using InnoDB or some other transactional storage engine works well. As long as you keep track of your transactions, or disable them during the test, you can simply start a transaction in setUp() and roll it back in tearDown().
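A minimal sketch of that pattern with Python's unittest, assuming the mysql-connector-python driver and a hypothetical sample table (connection details are placeholders):

import unittest
import mysql.connector  # assumption: any driver exposing transaction control works the same way


class DatabaseStateTest(unittest.TestCase):
    def setUp(self):
        self.conn = mysql.connector.connect(user="test", password="secret", database="testdb")
        self.conn.start_transaction()  # every change the test makes stays inside this transaction

    def tearDown(self):
        self.conn.rollback()  # undo whatever the test changed
        self.conn.close()

    def test_insert_is_visible_inside_the_transaction(self):
        cur = self.conn.cursor()
        cur.execute("INSERT INTO sample (name) VALUES ('temporary row')")
        cur.execute("SELECT COUNT(*) FROM sample WHERE name = 'temporary row'")
        self.assertEqual(cur.fetchone()[0], 1)


if __name__ == "__main__":
    unittest.main()

One caveat: DDL statements (CREATE, ALTER, DROP) cause an implicit commit in MySQL, so the rollback only protects data changes, not schema changes.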