What is the best way to approach something like:
select * from (show tables like "T_DATA___") // Invalid
There are over 600 tables with the name T_DATAxy where x and y are letters
Something went seriously wrong with this design. Accessing 600 tables at once means accessing as many as 1,800 files on disk. You should have partitioned this data instead.
As far as the question goes, I'm afraid you will need to use a stored procedure or an external application to build a multi-table UNION statement. Still, I seem to remember that there's a limit of 32 tables merged in a UNION.
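If you do go the dynamic-SQL route, here is a minimal sketch, assuming all the T_DATAxy tables share the same structure (adjust the LIKE pattern to your actual naming). It builds one big UNION ALL from INFORMATION_SCHEMA and runs it as a prepared statement; if the server really does balk at that many unioned tables, you would have to build and run it in batches:

-- the default group_concat_max_len of 1024 bytes is far too small for ~600 tables
SET SESSION group_concat_max_len = 1024 * 1024;

SELECT GROUP_CONCAT(CONCAT('SELECT * FROM `', table_name, '`') SEPARATOR ' UNION ALL ')
INTO @sql
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND table_name LIKE 'T\_DATA__';   -- escaped literal underscore, then exactly two characters

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;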
You could get the list of tables whose data you want (show tables like __) and then use mysqldump, passing in that list.
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
If you are determined to get it from SQL queries, you could generate the appropriate SQL queries using macros and execute them all at once, e.g. get the list of tables, replace each newline with "; (newline) select * from ", and execute all the queries. (The Emacs MySQL mode makes this super easy.)
As the other commenter says, you won't be able to do it in a single query due to the limit on the number of tables.
So I'm kind of stumped.
I have a MySQL project that involves a database table that is being manipulated and altered by scripts on a regular basis. This isn't so unusual, but I need to automate a script to run (after hours, when changes aren't happening) that would save the result of the following:
SHOW CREATE TABLE [table-name];
This command generates the ready-to-run script that would create the (empty) table in its current state.
In SqlWorkbench and Navicat, the result of this SHOW command is displayed in a field in a result set, as if it were the result of a SELECT statement.
Ideally, I want to capture it in a variable in a procedure and change the table name, adding a '-mm-dd-yyyy' suffix to the end of it, so I could track the day-to-day changes in the table schema on an active server.
However, I can't seem to be able to do that. Unlike a SELECT result set, I can't use it that way. I can't get it into a variable, or save it to a temporary or physical table, or anything. I even tried to return it as a value from a function, which gave me the error that a function cannot return a result set - which explains why it's displayed like one in the db clients.
I suspect that this is a security thing in MySQL? If so, I can totally understand why, and I can see the dangers it would expose to a hacker, but this isn't a public-facing box at all, and I have full root/admin access to it. Hopefully somebody has already tackled this problem before.
This is on MySQL 8, btw.
[Edit] After the initial comments, I need to add: I'm not concerned about the data with this question whatsoever, but rather just these schema changes.
What I'd really -like- to do is this:
SELECT `Create Table` FROM ( SHOW CREATE TABLE carts )
But this seems to be mixing apples and oranges, as SHOW and SELECT aren't created equal, although they both seem to return the same sort of object.
You cannot do it in the MySQL stored procedure language.
https://dev.mysql.com/doc/refman/8.0/en/show.html says:
Many MySQL APIs (such as PHP) enable you to treat the result returned from a SHOW statement as you would a result set from a SELECT; see Chapter 29, Connectors and APIs, or your API documentation for more information. In addition, you can work in SQL with results from queries on tables in the INFORMATION_SCHEMA database, which you cannot easily do with results from SHOW statements. See Chapter 26, INFORMATION_SCHEMA Tables.
What is absent from this paragraph is any mention of treating the results of SHOW commands like the results of SELECT queries in other contexts. There is no support for setting a variable to the result of a SHOW command, or using INTO, or running SHOW in a subquery.
So you can capture the result returned by a SHOW command in a client programming language (Java, Python, PHP, etc.), and I suggest you do this.
In theory, all the information used by SHOW CREATE TABLE is accessible in the INFORMATION_SCHEMA tables (mostly TABLES and COLUMNS), but formatting a complete CREATE TABLE statement is a non-trivial exercise, and I wouldn't attempt it. For one thing, there are new features in every release of MySQL, e.g. new data types and table options, etc. So even if you could come up with the right query to produce this output, in a couple of years it would be out of date and it would be a thankless code maintenance chore to update it.
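To illustrate what you'd be starting from, a query like this (just a sketch, using the carts table from the question) shows the raw column metadata INFORMATION_SCHEMA exposes; it is a long way from a complete CREATE TABLE statement, which is exactly the problem:

SELECT column_name, column_type, is_nullable, column_default, extra
FROM information_schema.columns
WHERE table_schema = DATABASE()
  AND table_name = 'carts'
ORDER BY ordinal_position;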
The closest solution I can think of, in pure MySQL, is to regularly clone the table structure (no data), like so:
CREATE TABLE backup_20220618 LIKE my_table;
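If what you want is one structure-only copy per day with the date in the name, a sketch like the following does it with dynamic SQL, since identifiers can't be parameterized (I'm using underscores rather than the '-mm-dd-yyyy' hyphens so the name doesn't need quoting; my_table is a placeholder):

SET @sql = CONCAT('CREATE TABLE my_table_', DATE_FORMAT(CURDATE(), '%m_%d_%Y'), ' LIKE my_table');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

You could schedule that with CREATE EVENT so it runs after hours.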
As far as I know, to get your hands on the full explicit CREATE TABLE statement, as a string, would require the use of an external tool like mysqldump which was designed specifically for that purpose.
So I'm working on a legacy database, and unfortunately its performance is very slow. A simple SELECT query can take up to 10 seconds on tables with fewer than 10,000 records.
So I tried to investigate the problem and found that deleting the column they used to store files (mostly videos and images) fixes the problem and improves performance a lot.
Along with adding proper indexes, I was able to run the exact same query that used to take 10-15 seconds in under 1 second.
So my question is: is there any existing tool or script I can use to export those blobs (videos) from the database, save them to disk, and update each row with the new file name/path on the file system?
If not, is there a proper way to optimize the database so that those blobs don't impact performance as much?
Hint: some of the clients consuming this database use high-level ORMs, so we don't have much control over the queries the ORM uses to fetch rows and their relations. So I cannot optimize the queries directly.
SELECT column FROM table1 WHERE id = 1 INTO DUMPFILE 'name.png';
How about this way?
There is also INTO OUTFILE instead of INTO DUMPFILE.
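Note that INTO DUMPFILE writes exactly one row per statement, so for many rows one low-tech approach is to let MySQL generate the per-row statements for you and then run the generated script. A rough sketch, assuming a table attachments with columns id and file_data (placeholder names) and a target directory allowed by the server's secure_file_priv setting (the server writes the files and refuses to overwrite existing ones):

SELECT CONCAT('SELECT file_data FROM attachments WHERE id = ', id,
              ' INTO DUMPFILE ''/var/lib/mysql-files/attachment_', id, '.bin'';') AS stmt
FROM attachments;

-- once the files exist, record the path in a (new, hypothetical) file_path column:
UPDATE attachments SET file_path = CONCAT('/var/lib/mysql-files/attachment_', id, '.bin');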
13.2.10.1 SELECT ... INTO Statement
The SELECT ... INTO form of SELECT enables a query result to be stored in variables or written to a file:
SELECT ... INTO var_list selects column values and stores them into variables.
SELECT ... INTO OUTFILE writes the selected rows to a file. Column and line terminators can be specified to produce a specific output format.
SELECT ... INTO DUMPFILE writes a single row to a file without any formatting.
Link: https://dev.mysql.com/doc/refman/8.0/en/select-into.html
Link: https://dev.mysql.com/doc/refman/8.0/en/select.html
I am in the process of converting a legacy system to a web app using Ruby on Rails and MySQL.
There are a few places where I'm stuck while converting the data layer to MySQL procedures.
Here's a scenario:
FUNCTION first_function
SELE Table1
REPL Table1.SmaCode WITH SMA(code,HcPc,FromDate)
ENDFUNC
FUNCTION SMA
... Lot of conditions ...
Lookup(param1,param2) * Parameters are based on the conditions above
.. Lot more conditions ....
ENDFUNC
FUNCTION Lookup
temp = Output of select on Check table
return temp
ENDFUNC
Here SMA is another function that has many conditions, and it also calls another function, Lookup. The Lookup function queries a table named Checks; the parameters passed to Lookup depend on the conditions in SMA.
Please see the pastebin of the source code in discussion if you need more insight: http://pastebin.com/raw/Hvx3b8zN
How can I go about converting these kinds of functions to MySQL procedures?
Edit:
I'm looking for insights on this from people who've already done these types of conversions, from procedure-oriented languages to set-based stored procedures, to be exact.
The commenters are all right, and I upvoted them all. You have to actually write the code, but it's not too hard once you get going.
The first thing I do is to examine my code and rewrite all the straightforward things like DELETE FOR .... into DELETE WHERE...
Then I look at my loops and think about how I can treat that data as a set. A lot of times, SCANs can be written as a regular query when you use appropriate JOIN conditions and WHERE conditions. There are a lot of query tools like CASE and subqueries that let you get a lot done with very little code. MySQL allows temporary tables and that can come in very useful. Lookups can often be done with subqueries.
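For instance, a row-by-row REPLACE ... WITH lookup can often collapse into a single UPDATE with a JOIN. This is only a sketch; the join conditions and the sma_code / valid_from / valid_to columns below are invented, since the real logic lives inside SMA and Lookup:

UPDATE Table1 t
JOIN Checks c
  ON c.HcPc = t.HcPc                                 -- invented match condition
 AND t.FromDate BETWEEN c.valid_from AND c.valid_to  -- invented date condition
SET t.SmaCode = c.sma_code;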
On occasion, I have to use FETCH and WHILE loops, but I avoid that as much as possible because it is slow and SQL is set-based.
Just get started on the easy stuff and you'll get the hang of it :)
So here is my situation: I have a vendor-supplied DB we cannot modify and a custom DB that imports data from the vendor app and acts on it. Once records are imported from the vendor app, they cannot appear on the list of records to be imported. Also, we only want to display the 250 most recent records that have not been imported.
What I originally started with was selecting the list of ids that have been imported from the custom db, and then querying the vendor db using that list of ids in a .Where(x => !idList.Contains(x.Id)) clause on the remote query.
This worked until we passed 2,100 records imported into the custom db, as 2,100 is the limit on the number of parameters that can be passed to SQL Server. After finding out this was the actual problem, and not the 'invalid buffer'/'severe error' that ADO.Net reported, my solution was to exclude the first 2,000 ids in the remote query and then exclude the remaining records in the local query.
Having to pull back a large number of irrelevant records, just to exclude them, so I can get the correct 250 records seems very inelegant. Is there a better way to do this, short of doing a cross db stored procedure?
Thanks in advance.
This might not be the best answer, depending on how many records you're dealing with, but you could force the SQL to execute and just deal with the results as in-memory objects. Calling the ToList() method will execute the SQL and convert the results to an IEnumerable.
What I might suggest is to start by querying the vendor database first, ordering the results by some criterion (perhaps a date field, oldest to most recent).
You could do a Skip().Take() to "skim" the results and then take each bulk set and insert them into the custom db where the ID doesn't already exist. That way you avoid the problem you have now.
If you have db-create access to the SQL Server that the vendor's db is running on (or if your custom db is on the same server), you could create a "has been imported" table in a different database on that same server, and then write a stored proc that does a cross-database join of that table against the vendor db, e.g.:
select top 250 v.*
from vendordb.dbo.to_be_imported v   -- assuming the default dbo schema
where not exists
    (select 1 from customdb.dbo.has_been_imported i
     where i.idWasImported = v.idToBeImported)
order by whatever;
You might even be able to do this in Linq 2 SQL -- I've never tried adding objects from different databases into a single DataContext...
I'm writing a test framework in which I need to capture a MySQL database state (table structure, contents etc.).
I need this to implement a check that the state was not changed after certain operations. (Autoincrement values may be allowed to change, but I think I'll be able to handle this.)
The dump should preferably be in a human-readable format (preferably an SQL code, like mysqldump does).
I wish to limit my test framework to using a MySQL connection only. To capture the state, it should not call mysqldump or access the filesystem (like copying *.frm files or doing SELECT ... INTO OUTFILE; pipes are fine, though).
As this would be test-only code, I'm not concerned by the performance. I do need reliable behavior though.
What is the best way to implement the functionality I need?
I guess I should base my code on some of the existing open-source backup tools... Which is the best one to look at?
Update: I'm not specifying the language I write this in (no, that's not PHP), as I don't think I would be able to reuse code as is; my case is rather special (for practical purposes, let's assume the MySQL C API). The code would run on Linux.
Given your requirements, I think you are left with this (pseudo-code + SQL):
tables = mysql_fetch "SHOW TABLES"
foreach table in tables
create = mysql_fetch "SHOW CREATE TABLE table"
print create
rows = mysql_fetch "SELECT * FROM table"
foreach row in rows
// or could use VALUES (v1, v2, ...), (v1, v2, ...), .... syntax (maybe preferable for smaller tables)
insert = "INSERT INTO table (field1, field2, field3, etc) VALUES (value1, value2, value3, etc)"
print insert
Basically, fetch the list of all tables, then walk each table and generate INSERT statements for each row by hand (most APIs have a simple way to fetch the list of column names; otherwise you can fall back to running DESCRIBE table_name).
SHOW CREATE TABLE is done for you, but I'm fairly certain there's nothing analogous to a SHOW INSERT ROWS.
And of course, instead of printing the dump you could do whatever you want with it.
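If the column list is known, MySQL can even build those INSERT statements for you as text; QUOTE() handles the escaping and renders NULL correctly. A sketch for a hypothetical table t1 with columns a and b:

SELECT CONCAT('INSERT INTO t1 (a, b) VALUES (', QUOTE(a), ', ', QUOTE(b), ');') AS stmt
FROM t1;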
If you don't want to use command-line tools (in other words, you want to do it completely within, say, PHP or whatever language you are using), then why not iterate over the tables using SQL itself? For example, to check the table structure, one simple technique would be to capture a snapshot of the table structure with SHOW CREATE TABLE table_name, store the result, and then later make the call again and compare the results.
Have you looked at the source code for mysqldump? I am sure most of what you want would be contained within that.
Unless you build the export yourself, I don't think there is a simple solution to export and verify the data. If you do it table per table, LOAD DATA INFILE and SELECT ... INTO OUTFILE may be helpful.
I find it easier to rebuild the database for every test. At least then I know the exact state of the data. Of course, it takes more time to run those tests, but it's a good incentive to abstract away the operations and write fewer tests that depend on the database.
Another alternative I use on some projects where the design does not allow such a clean division: using InnoDB or some other transactional storage engine works well. As long as you keep track of your transactions, or disable them during the test, you can simply start a transaction in setUp() and roll back in tearDown().
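A minimal sketch of that pattern on the test's own connection (bearing in mind that any DDL statement in MySQL causes an implicit commit, so only ordinary data changes get rolled back):

-- setUp()
START TRANSACTION;

-- ... the test's INSERT/UPDATE/DELETE statements run here ...

-- tearDown()
ROLLBACK;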