How to count the number of SQL statements in a text file? - mysql

My program restores a MySQL database from an SQL file. If I wanted to display the progress of the SQL execution in my program, I would need to know the number of SQL statements in the file. How can I do this in MySQL? (The queries may include MySQL-specific multi-row INSERT statements.)
I could use either MySQL command line tools or the Python API. You're welcome to post solutions for other DBMS too.

The simple (and easy) way: add PRINT statements to your SQL script file, displaying progress messages.
The advantage (apart from sidestepping the obvious 'it's hard to parse multi-statement constructs' problem) is that you get precise control over the progress. For example, some statements might take much longer to run than others, so you would need to weight them.
I wouldn't think of progress in terms of the number of statements executed. What I do is print out feedback that specific tasks have been started and completed, such as 'Synchronising table blah', 'Updating stored procedure X', etc.

The naive solution is to count the number of semicolons in the file (or any other character used as the delimiter in the file).
It usually works pretty well, except when the data you are inserting contains many semicolons; then you have to start actually parsing the SQL, which is a headache.
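If you go the Python route, a rough sketch like the one below (not a real SQL parser, and it assumes the whole file keeps ';' as the delimiter, so it won't cope with DELIMITER changes around triggers or procedures) counts semicolons while skipping string literals, backtick-quoted identifiers and comments:
def count_statements(path):
    """Count ';' occurrences outside string literals, backtick identifiers and comments."""
    count = 0
    state = None                          # None, "'", '"', '`', '--', '/*'
    with open(path, encoding='utf-8') as f:
        text = f.read()
    i = 0
    while i < len(text):
        ch = text[i]
        if state is None:
            if ch in ("'", '"', '`'):
                state = ch
            elif text[i:i+2] == '--' or ch == '#':
                state = '--'
            elif text[i:i+2] == '/*':
                state = '/*'
                i += 1
            elif ch == ';':
                count += 1
        elif state in ("'", '"', '`'):
            if ch == '\\' and state != '`':
                i += 1                    # skip a backslash-escaped character
            elif ch == state and text[i+1:i+2] == state:
                i += 1                    # doubled quote inside the literal
            elif ch == state:
                state = None
        elif state == '--':
            if ch == '\n':
                state = None
        elif state == '/*':
            if text[i:i+2] == '*/':
                i += 1
                state = None
        i += 1
    return count
print(count_statements('dump.sql'))
With that count in hand, progress can be reported as statements executed divided by the total, with the caveat from the first answer that not all statements cost the same.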

Related

Save MySql 'Show' result in db

So I'm kind of stumped.
I have a MySQL project that involves a database table that is being manipulated and altered by scripts on a regular basis. This isn't so unusual, but I need to automate a script to run (after hours, when changes aren't happening) that saves the result of the following:
SHOW CREATE TABLE [table-name];
This command generates the ready-to-run script that would create the (empty) table in its current state.
In SqlWorkbench and Navicat it displays the result of this SHOW command in a field in a result set, as if it was the result of a SELECT statement.
Ideally, I want to capture it in a variable in a procedure and change the table name, adding a '-mm-dd-yyyy' suffix to the end of it, so I could track the day-to-day changes in the table schema on an active server.
However, I can't seem to be able to do that. Unlike a SELECT result set, I can't use it like that: I can't get it into a variable, or save it to a temporary or physical table, or anything. I even tried to return it as a value in a function, from which I got the error that a function cannot return a result set - which explains why it's displayed like one in the db clients.
I suspect that this is a security thing in MySQL? If so, I can totally understand why and see the dangers it exposes to a hacker, but this isn't a public-facing box at all, and I have full root/admin access to it. Hopefully somebody has already tackled this problem before.
This is on MySQL 8, btw.
[Edit] After the initial comments, I need to add: I'm not concerned about the data with this question whatsoever, but rather just these schema changes.
What I'd really -like- to do is this:
SELECT `Create Table` FROM ( SHOW CREATE TABLE carts )
But this seems to be mixing apples and oranges, as SHOW and SELECT aren't created equal, although they both seem to return the same sort of object.
You cannot do it in the MySQL stored procedure language.
https://dev.mysql.com/doc/refman/8.0/en/show.html says:
Many MySQL APIs (such as PHP) enable you to treat the result returned from a SHOW statement as you would a result set from a SELECT; see Chapter 29, Connectors and APIs, or your API documentation for more information. In addition, you can work in SQL with results from queries on tables in the INFORMATION_SCHEMA database, which you cannot easily do with results from SHOW statements. See Chapter 26, INFORMATION_SCHEMA Tables.
What is absent from this paragraph is any mention of treating the results of SHOW commands like the results of SELECT queries in other contexts. There is no support for setting a variable to the result of a SHOW command, or using INTO, or running SHOW in a subquery.
So you can capture the result returned by a SHOW command in a client programming language (Java, Python, PHP, etc.), and I suggest you do this.
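For example, here is a minimal sketch using the mysql-connector-python driver (the driver choice, the connection details, and writing to a dated file are assumptions for illustration; the carts table name comes from the question) that grabs the statement and saves it:
import datetime
import mysql.connector            # assumption: mysql-connector-python is installed
conn = mysql.connector.connect(host='localhost', user='root',
                               password='secret', database='mydb')
cur = conn.cursor()
cur.execute("SHOW CREATE TABLE carts")
table_name, create_stmt = cur.fetchone()      # two columns: Table, Create Table
# Save the schema snapshot with the date in the file name rather than renaming the table.
today = datetime.date.today().strftime('%m-%d-%Y')
with open(f"{table_name}-{today}.sql", "w") as f:
    f.write(create_stmt + ";\n")
cur.close()
conn.close()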
In theory, all the information used by SHOW CREATE TABLE is accessible in the INFORMATION_SCHEMA tables (mostly TABLES and COLUMNS), but formatting a complete CREATE TABLE statement is a non-trivial exercise, and I wouldn't attempt it. For one thing, there are new features in every release of MySQL, e.g. new data types and table options, etc. So even if you could come up with the right query to produce this output, in a couple of years it would be out of date and it would be a thankless code maintenance chore to update it.
The closest solution I can think of, in pure MySQL, is to regularly clone the table structure (no data), like so:
CREATE TABLE backup_20220618 LIKE my_table;
As far as I know, to get your hands on the full explicit CREATE TABLE statement as a string would require the use of an external tool like mysqldump, which was designed specifically for that purpose.
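If mysqldump is acceptable, a hedged sketch along these lines (host, credentials, database and table names are placeholders) captures the schema-only script as a string from Python:
import subprocess
# Dump only the table definition, no rows; the -p<password> form is convenient but insecure.
result = subprocess.run(
    ["mysqldump", "--no-data", "--skip-comments",
     "-h", "localhost", "-u", "root", "-psecret", "mydb", "carts"],
    capture_output=True, text=True, check=True)
create_script = result.stdout     # DROP TABLE / CREATE TABLE statements for 'carts'
print(create_script)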

Mysql Workbench - The best way to organize running frequently used SQL queries while development

I'm a Java dev who uses MySQL Workbench as a database client and IntelliJ IDEA as an IDE. Every day I run SQL queries against the database, anywhere from 5 to 50 times a day.
Is there a convenient way to save and re-run frequently used queries in MySQL Workbench/IntelliJ IDEA so that I can:
avoid retyping a full query that has already been used
smoothly access a list of queries I've already used (e.g. by auto-completion)
If there is no way to do it using MySQL Workbench / IDEA, could you please advise any good tools providing this functionality?
Thanks!
Create Stored Procedures, one per query (or sequence of queries). Give them short names (to avoid needing auto-completion).
For example, to find out how many rows in table foo (SELECT COUNT(*) FROM foo;).
One-time setup:
DELIMITER //
CREATE PROCEDURE foo_ct()
BEGIN
SELECT COUNT(*) FROM foo;
END //
DELIMITER ;
Usage:
CALL foo_ct();
You can pass arguments in to make minor variations. Passing in a table name is somewhat complex, but numbers or dates, etc., are practical and probably easy.
If you have installed SQLyog for your MySQL server, you can use the Favorites menu option, where you can save your query; one click then automatically writes the saved query into the Query Editor.
The previous answers are correct: depending on the version of the Query Browser they are called either Favorites or Snippets, the problem being you can't create sub-folders to group them. And keeping tabs open is an option, but sometimes the browser 'dies' and you're back to square one. So the obvious solution I came up with: create a database table! I have a few 'metadata' fields for descriptions: the project a query is associated with, the problem the query solves, and the actual query.
You could keep your query library in an SQL file and load that when WB opens (it is automatically reopened when you restart WB if that file was open on last close). When you want to run a specific query, place the caret in its text and press Ctrl+Enter (Cmd+Enter on Mac) to run only this query. The organization of that SQL file is totally up to you. You have more freedom than any "favorites" solution can give you. You can even have more than one file, with grouped statements.
Additionally, MySQL Workbench has a query history (see the Output tab), which is saved to disk, so you can return to a query even months after you wrote it.

Sybase to MySQL automatic exportation

I have two databases: Sybase and MySQL. I need to export records to MySQL when they are inserted in Sybase, or export them in some scheduled event.
I've tried the output statement, but it cannot be used in triggers or procedures.
Any suggestion to solve this problem?
(Disclaimer: I've done similar things previously, but by no means would I consider the answer below the state of the art; it's just one possible approach. Google around for something like 'cross-database replication' or 'cross-RDBMS replication' to see who has done this before.)
I would first of all see if you can't get an ETL tool to do the job without too much work. There are free open source ones, and even things like Microsoft SSIS might work on non-MS databases.
If not, I would split this into different steps.
Find an appropriate Sybase output command that exports a subset of rows from one or more tables. By subset I mean you need to be able to add a WHERE clause, not just do a full table dump.
Use an appropriate MySQL import script/command to load the data you got out of step #1. You may need to cycle back and forth between the two until you have something that works manually.
Write a Sybase trigger to insert lookup keys into a to-export table. You want to store at least the table name and the source Sybase table's keys for each inserted row. Use column names like key1_char, key2_char, not the actual column names; that makes it easier to extend to other source tables as needed. Keep trigger processing as light as possible. What about updates, by the way?
Write a scheduled batch on the Sybase side to run step #1 for the rows flagged in #3.
Write a scheduled batch on the MySQL side to import, via #2, the results of #4. Or kick it off from #4.
Another approach is to do the #3 flagging bit as needed, but use it to drive one scheduled batch that SELECTs data from Sybase and INSERTs it into MySQL directly.
You'll have to pick up the data from Sybase's SELECT and bind it manually to MySQL's INSERT, but you probably get finer control over what's going on and you don't have to juggle two batches. That's what I think a clever ETL tool would already be doing on your behalf. Any half-clever scripting language like PHP, Python or Ruby ought to handle it easily (see the sketch below). This is especially important if you have things like surrogate/auto-generated keys.
Keep in mind that in both cases you'll have to either delete the to-export rows that you've successfully inserted or flag them as done.
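As a rough illustration of that scripted approach, here is a sketch in Python (pyodbc for Sybase over ODBC and mysql-connector-python for MySQL; the DSN, credentials, and the orders/to_export tables and their columns are all made up for the example, not taken from the question):
import pyodbc
import mysql.connector
sybase = pyodbc.connect("DSN=sybase_dsn;UID=user;PWD=secret")
target = mysql.connector.connect(host="localhost", user="root",
                                 password="secret", database="target_db")
sy_cur = sybase.cursor()
my_cur = target.cursor()
# Fetch the rows that the Sybase trigger flagged for export (step #3).
sy_cur.execute("""
    SELECT o.id, o.customer_id, o.amount
    FROM orders o
    JOIN to_export e ON e.key1_char = CONVERT(varchar(20), o.id)
    WHERE e.table_name = 'orders' AND e.exported = 0
""")
rows = [tuple(r) for r in sy_cur.fetchall()]
# Bind them manually to the MySQL INSERT.
my_cur.executemany(
    "INSERT INTO orders (id, customer_id, amount) VALUES (%s, %s, %s)", rows)
target.commit()
# Flag the exported rows as done so they aren't sent again
# (in production you would flag the exact keys you just copied, not everything pending).
sy_cur.execute("UPDATE to_export SET exported = 1 "
               "WHERE table_name = 'orders' AND exported = 0")
sybase.commit()
my_cur.close(); target.close()
sy_cur.close(); sybase.close()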

phpMyAdmin reports one extra query

I have a large file that contains a mysqldump of a database. I searched for all the semicolons and came up with 378. When I upload the file to phpMyAdmin, it reports 379 queries when doing an import. (I know that phpMyAdmin might not be reliable, but I just want to make sure I don't have a problem with the output of mysqldump.)
Is there a way to find out what it thinks the extra query is? The dump file is pretty large and I don't want to share the information.
EDIT:
I have posted the database schema at:
http://blastohosting.com/pos_database.sql
It would be great if it could be determined why there is an extra query being reported by phpMyAdmin.
I can't tell you why, necessarily, but I can say that I got exactly the same results: 378 semicolons, "379 queries executed".
However, here's the weird thing: when I removed the last two lines in the script:
(blank)
-- Dump completed on 2014-02-17 17:52:43
...PMA said "378 queries executed". It's evidently counting that final comment as a "query" even though it's ignored.
A dump file doesn't necessarily contain one semicolon per statement.
Any CREATE TRIGGER or CREATE PROCEDURE or CREATE FUNCTION statements have literal semicolons within their bodies, so it is common practice to change the client's statement terminator to something besides semicolon, to resolve the ambiguity.
So you could be counting semicolons that aren't statement terminators. Likewise, MySQL may count executed SQL statements that weren't terminated with a semicolon.
MySQL builtin commands such as DELIMITER, CHARSET, USE, SOURCE, etc. don't need a terminator at all, but they may have one without error. They shouldn't be reported as statements executed by the server, but they may throw off your count of semicolons.
Do you have any semicolons inside string literals or SQL comments?
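To get a count closer to what the server actually executes, a hedged sketch like this one (line-based and deliberately ignoring quoting, so it's an illustration rather than a parser) tracks DELIMITER directives while splitting the dump into statements:
def count_dump_statements(path):
    delimiter = ";"
    count = 0
    buffer = ""
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            # DELIMITER is a client command, not a statement sent to the server.
            if stripped.upper().startswith("DELIMITER "):
                delimiter = stripped.split(None, 1)[1]
                buffer = ""
                continue
            if stripped.startswith("--") or stripped.startswith("#"):
                continue                       # skip comment-only lines
            buffer += line
            while delimiter in buffer:
                statement, _, buffer = buffer.partition(delimiter)
                if statement.strip():
                    count += 1
    return count
print(count_dump_statements("pos_database.sql"))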

SQL Server sp_help : how to limit the number of output windows

Normally, successful execution of
sp_help [object_name]
in SQL Server returns a total of 7 output windows with various results, of which I am normally interested in only 2: the one with all the column information and the one with the constraints.
Is there a way I can tell SQL Server to display only these when formulating the command?
Short answer: no, you can't do this directly, because the procedure is written to return that data, and T-SQL has no mechanism for accessing specific result sets.
Long answer: but you can easily get the same information from other procedures or directly from the system catalog:
sp_columns, sp_helpconstraint (this is actually called by sp_help) etc.
sys.columns, sys.objects etc.
There's also the option of copying the source code from sp_help and using it as the basis of a new procedure that you create yourself, although personally I would just write it myself from scratch. If you do decide to write your own stored proc, you might find this question relevant too.
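If you are calling sp_help from client code rather than from Management Studio, the driver can step through the result sets and keep only the ones you want; this is outside T-SQL, so it's a workaround rather than a direct answer. A sketch with pyodbc (the connection string, table name, and which result set holds what are assumptions here):
import pyodbc
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=mydb;UID=user;PWD=secret")
cur = conn.cursor()
cur.execute("EXEC sp_help 'dbo.my_table'")
result_sets = []
while True:
    if cur.description is not None:              # this result set returns rows
        columns = [d[0] for d in cur.description]
        result_sets.append((columns, cur.fetchall()))
    if not cur.nextset():                         # advance to the next result set
        break
# Keep only the column-information set (it contains a 'Column_name' column).
for columns, rows in result_sets:
    if 'Column_name' in columns:
        for row in rows:
            print(row)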