How to cross-check all the tables in a MySQL database [duplicate] - mysql

Is there a MySQL query to search all tables within a database?
If not, can you search all tables within a database from the MySQL Workbench GUI?
In phpMyAdmin there's a search panel you can use to select all tables to search through. I find this super effective since Magento, the ecommerce package I'm working with, has hundreds of tables and different product details live in different tables.

Alternatively, if your database is not that huge, you can make a dump and search in the generated .sql file.

If you want to do it purely in MySQL, without the help of any programming language, you could use this:
## Table for storing resultant output
CREATE TABLE `temp_details` (
`t_schema` varchar(45) NOT NULL,
`t_table` varchar(45) NOT NULL,
`t_field` varchar(45) NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
## Procedure for search in all fields of all databases
DELIMITER $$
#Script to loop through all tables using Information_Schema
DROP PROCEDURE IF EXISTS get_table $$
CREATE PROCEDURE get_table(in_search varchar(50))
READS SQL DATA
BEGIN
DECLARE trunc_cmd VARCHAR(50);
DECLARE search_string VARCHAR(250);
DECLARE db,tbl,clmn CHAR(50);
DECLARE done INT DEFAULT 0;
DECLARE COUNTER INT;
DECLARE table_cur CURSOR FOR
SELECT concat('SELECT COUNT(*) INTO @CNT_VALUE FROM `',table_schema,'`.`',table_name,'` WHERE `', column_name,'` REGEXP ''',in_search,''';')
,table_schema,table_name,column_name
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA NOT IN ('information_schema','test','mysql');
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done=1;
#Truncate the results table so each new search starts from empty data.
PREPARE trunc_cmd FROM "TRUNCATE TABLE temp_details;";
EXECUTE trunc_cmd ;
OPEN table_cur;
table_loop:LOOP
FETCH table_cur INTO search_string,db,tbl,clmn;
IF done=1 THEN
LEAVE table_loop;
END IF;
#Executing the search
SET @search_string = search_string;
SELECT search_string; #Debug output: the generated COUNT query
PREPARE search_string FROM @search_string;
EXECUTE search_string;
SET COUNTER = @CNT_VALUE;
SELECT COUNTER; #Debug output: number of matching rows
IF COUNTER>0 THEN
# Insert the matching schema, table and column into the results table
INSERT INTO temp_details VALUES(db,tbl,clmn);
END IF;
END LOOP;
CLOSE table_cur;
#Finally Show Results
SELECT * FROM temp_details;
END $$
DELIMITER ;
Source: http://forge.mysql.com/tools/tool.php?id=232
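A usage sketch, assuming the table and procedure above have been created (the search term is just an example):
#Search every column of every non-system schema for the string 'foo'
CALL get_table('foo');
#The procedure prints its results, and they also remain in temp_details afterwards:
SELECT * FROM temp_details;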

If you are using MySQL Workbench, you can do this by right-clicking the DB schema you want to search in and then choosing "Search Table Data...".
There you can select the "Search using REGEXP" option and then type your search text as usual. It will return the DB rows matching your text.
You will need to check the "Search columns of all types" box as well.

In MySQL Workbench you can use the Table Data Search feature. It can search across multiple tables and/or multiple databases.

Searching for a string in all tables of a database is a complex task. Normally you don't actually need every table, and the results are hard to read without a specific layout (a tree of tables with matches, or the like).
SQL Workbench/J offers both a GUI and a command-line version for this task:
More info:
http://www.sql-workbench.net/manual/wb-commands.html#command-search-data
http://www.sql-workbench.net/manual/dbexplorer.html#search-table-data
NOTE: Searching through the JDBC driver uses a lot of memory if it is not configured properly. SQL Workbench/J warns about this, and although the online documentation is a bit outdated, the documentation sources (doc/xml/db-problems.xml) explain how to fix it for different databases:
Here is an extract for Postgres:
The PostgreSQL JDBC driver defaults to buffering the results obtained from the database in memory before returning them to the application. This means that when retrieving data, SQL Workbench/J uses (for a short amount of time) twice as much memory as really needed. It also means that WbExport or WbCopy will effectively read the entire result into memory before writing it into the output file. For large exports, this is usually not wanted.
This behavior of the driver can be changed so that it uses cursor-based retrieval. To do this, the connection profile must disable the "Autocommit" option and must define a default fetch size greater than zero. A recommended value is e.g. 10; higher numbers might give better performance. The fetch size defines the number of rows the driver keeps in its internal buffer before requesting more rows from the backend.

Related

How to take a periodic database script backup using an event in MySQL

I want to take a database script backup every day using an event in MySQL. I am new to MySQL, so I am unable to find the exact solution. Can anybody help me do this?
I tried the mysqldump utility, but it is command-prompt oriented; I want this done through the event scheduler only.
DELIMITER $$
create EVENT `Backup`
ON SCHEDULE EVERY 1 minute
STARTS '2016-02-25 17:08:06' ON COMPLETION PRESERVE ENABLE
DO
BEGIN
SET @sql_text = CONCAT("SELECT * FROM purpleaid INTO OUTFILE '/C:/Users/Admin123/Desktop/db/" , DATE_FORMAT( NOW(), '%Y%m%d') , "db.csv'" );
PREPARE s1 FROM @sql_text;
EXECUTE s1;
DEALLOCATE PREPARE s1;
END $$
DELIMITER ;
I tried this, but it's for a single table only. I want a complete database script.
You can use the information_schema.tables table to get the list of tables within a database, and the information_schema.columns table to get the list of columns (in case you want column names included in the backup files).
Create a cursor by getting all table names from your database
Loop through the cursor and get the table name into a variable
Construct your select ... into outfile ... statements the same way as you do in your current code, just add the table name from the variable.
Execute the prepared statement.
If you want to add the column names dynamically to the output, then combine Joe's and matt's answers from this SO topic.
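A minimal sketch of those steps (untested; the procedure name backup_all_tables is invented here, and the output path is reused from your original event):
DELIMITER $$
DROP PROCEDURE IF EXISTS backup_all_tables $$
CREATE PROCEDURE backup_all_tables()
BEGIN
DECLARE done INT DEFAULT 0;
DECLARE tbl VARCHAR(64);
DECLARE tbl_cur CURSOR FOR
SELECT table_name FROM information_schema.tables
WHERE table_schema = DATABASE() AND table_type = 'BASE TABLE';
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
OPEN tbl_cur;
tbl_loop: LOOP
FETCH tbl_cur INTO tbl;
IF done = 1 THEN
LEAVE tbl_loop;
END IF;
#Build one SELECT ... INTO OUTFILE statement per table, dated like the original
SET @sql_text = CONCAT("SELECT * FROM `", tbl, "` INTO OUTFILE '/C:/Users/Admin123/Desktop/db/", DATE_FORMAT(NOW(), '%Y%m%d'), "_", tbl, ".csv'");
PREPARE s1 FROM @sql_text;
EXECUTE s1;
DEALLOCATE PREPARE s1;
END LOOP;
CLOSE tbl_cur;
END $$
DELIMITER ;
The event body then becomes a single statement such as DO CALL backup_all_tables();. Note that this exports table data only; structures, views, and routines still need an external tool like mysqldump, as discussed in the update below.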
UPDATE
For views, stored procedures, functions, and triggers (and tables, for that matter) the issue is that you can't really work with the results of SHOW CREATE ... statements within SQL. You can try to recreate their definitions from their respective information_schema tables, but as far as I know it is not possible to fully reconstruct each object from those tables alone. You need an external tool for that, such as mysqldump. If you want a full backup option, you would be a lot better off using an external tool scheduled by the OS's task scheduler.
Since table structures and other database objects do not change that often (at least, not in production), you can use an external tool to back up the structure and the internal scheduled script to regularly back up the contents.

Execute Query based on database version MySQL

I want to execute queries based on the database version. This is my requirement:
If the MySQL database version is 5.6, I want to alter the table column and add a full-text index; if the version is lower, I don't want to alter the table.
The reason behind this is that I want to run LIKE '%Something%' against the column, and the table was created with the InnoDB engine. As I am not an expert in MySQL, I googled LIKE query performance for 2M+ records; most answers advise against using LIKE with a leading and trailing %, and also point to InnoDB full-text support. So if I execute the query based on the DB version, most users will get the benefit, while whoever is on an older version (and not willing to upgrade) will have to put up with the performance.
Thanks in advance.
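For context, the two query styles being compared look roughly like this (the table and column names are hypothetical):
#A leading wildcard prevents index use, so this scans the whole table
SELECT * FROM products WHERE description LIKE '%Something%';
#With an InnoDB FULLTEXT index on the column (supported from MySQL 5.6), the search can use that index
SELECT * FROM products WHERE MATCH(description) AGAINST('Something');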
You can do this using a stored procedure like this:
DELIMITER $$
CREATE
PROCEDURE `test`.`temp`()
BEGIN
declare version1 varchar(10);
set version1 = (select version());
if(version1 like '5.6%') then
-- your ALTER TABLE ... ADD FULLTEXT query here
end if;
END$$
DELIMITER ;
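As a concrete sketch of the same idea (MySQL requires at least one real statement in the IF body, so the placeholder is replaced with an actual ALTER here; the table and index names are hypothetical, and note that '5.6%' matches only 5.6.x even though 5.7 and 8.0 also support InnoDB full-text indexes):
DELIMITER $$
DROP PROCEDURE IF EXISTS `test`.`temp` $$
CREATE PROCEDURE `test`.`temp`()
BEGIN
declare version1 varchar(10);
set version1 = (select version());
if(version1 like '5.6%') then
#Hypothetical table and column; InnoDB FULLTEXT indexes exist from MySQL 5.6 onward
ALTER TABLE products ADD FULLTEXT INDEX ft_description (description);
end if;
END$$
DELIMITER ;
CALL `test`.`temp`();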

Building and Testing with MySQL Workbench for SSMS People

I am great with Microsoft's SQL Server and SQL Server Management Studio (SSMS).
I'm trying to get things I used to do there to work in MySQL Workbench, but it is giving me very unhelpful errors.
Currently, I am trying to write an INSERT statement. I want to declare my variables and test it with a few values, then turn the end result into a stored procedure.
Right now, I have a syntax error that is not allowing me to continue, and the error message is not helpful either:
syntax error, unexpected DECLARE_SYM
There is no error list at the bottom and no way to copy the text of that error to the clipboard, so it all has to be studied on one screen, and then I flip to this screen to write it down.
Irritating!
The MySQL documentation surely has what I'm looking for, but I can learn much faster by doing than by spending weeks reading the online manual.
DELIMITER $$
declare cGroupID char(6) DEFAULT = 'ABC123';
declare subGroupRecords int;
declare nDocTypeID int(11);
declare bDocActive tinyint(1) DEFAULT '1';
declare cDocID varchar(256) DEFAULT NULL;
insert into dbo_connection.documents
(group_id, subgroup_id, type_id, active, title, doc_id, priority, ahref, description, last_modified)
values
(cGroupID,cSubGroupID,nDocTypeID,bDocActive,cTitle,cDocID,0,ahref1, docDesc,NOW());
select * from dbo_connection.documents where group_id='ABC123';
END
So, for right now, I'm looking for why MySQL does not like my declare statement.
For the long term, I'm interested in finding a short article that shows a cookbook approach to doing some of the basic tasks that SQL developers would need (i.e. skips the Hello World program and discussion on data types).
DECLARE is only valid within stored programs. In other words, unlike T-SQL, you can't build up your query and then wrap CREATE PROCEDURE around it to turn the end result into a stored procedure; you have to build it as a stored procedure from the get-go.
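A minimal sketch of what that looks like for the snippet above (the procedure name is made up, and cSubGroupID, cTitle, ahref1, and docDesc, which the original never declares, get placeholder declarations and types here):
DELIMITER $$
CREATE PROCEDURE dbo_connection.add_test_document()
BEGIN
#DECLARE is only legal at the start of a BEGIN ... END block inside a stored program
declare cGroupID char(6) DEFAULT 'ABC123'; #note: no "=" after DEFAULT
declare cSubGroupID char(6) DEFAULT 'SUB001';
declare nDocTypeID int(11) DEFAULT 1;
declare bDocActive tinyint(1) DEFAULT '1';
declare cTitle varchar(256) DEFAULT 'placeholder title';
declare cDocID varchar(256) DEFAULT NULL;
declare ahref1 varchar(256) DEFAULT NULL;
declare docDesc varchar(256) DEFAULT NULL;
insert into dbo_connection.documents
(group_id, subgroup_id, type_id, active, title, doc_id, priority, ahref, description, last_modified)
values
(cGroupID,cSubGroupID,nDocTypeID,bDocActive,cTitle,cDocID,0,ahref1,docDesc,NOW());
select * from dbo_connection.documents where group_id='ABC123';
END$$
DELIMITER ;
CALL dbo_connection.add_test_document();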

Microsoft Access - stored procedure with cursor

I have a stored procedure in SQL Server that I am using to delete duplicates from one of the tables.
This stored procedure makes use of a cursor.
I tried to create the same stored procedure in Microsoft Access by just replacing the 'CREATE PROCEDURE' with 'CREATE PROC' but it didn't seem to work.
Can anyone provide some workaround?
Here is the SQL Server stored procedure:
ALTER PROCEDURE [dbo].[csp_loginfo_duplicates]
AS
BEGIN
SET NOCOUNT ON;
declare @minrowid bigint
declare @empid nvarchar(15)
declare @dtpunched datetime
declare @count tinyint
declare curDuplicate cursor for
select empid,dtpunched,count(*),min(row_id) from loginfo
group by empid,dtpunched
having count(*)>1
open curDuplicate
fetch next from curDuplicate into @empid,@dtpunched,@count,@minrowid
while (@@fetch_status=0)
begin
delete from loginfo where empid=@empid and dtpunched=@dtpunched and row_id<>@minrowid
fetch next from curDuplicate into @empid,@dtpunched,@count,@minrowid
end
close curDuplicate
deallocate curDuplicate
END
Beginning with Jet 4 (Access 2000), Access DDL includes support for CREATE PROCEDURE when you execute the DDL statement from ADO. However, those "procedures" are limited compared to what you may expect based on your SQL Server experience. They can include only one SQL statement, and no procedural type code. Basically an Access stored procedure is the same as a saved query, and is therefore subject to the same limitations which include the fact that Access' db engine doesn't speak T-SQL.
As for a workaround, you could create an Access-compatible DELETE statement to remove the duplicates. But I would first look for "the simplest thing which could possibly work".
Run your stored procedure on SQL Server to remove the duplicates.
Create a unique constraint to prevent new duplicates of the empid/dtpunched pairs in your loginfo table.
In your Access application, create a link to the SQL Server loginfo table.
However, I have no idea whether that suggestion is appropriate for your situation. If you give us more information to work with, chances are you will get a better answer.
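For the workaround, an Access-compatible DELETE could look something like this (a sketch; it assumes loginfo has the same unique row_id column as in the SQL Server version, and it keeps the lowest row_id per empid/dtpunched pair):
DELETE FROM loginfo
WHERE row_id NOT IN (
SELECT MIN(row_id)
FROM loginfo
GROUP BY empid, dtpunched
);
And on the SQL Server side, step 2 could be something like this (the constraint name is made up):
ALTER TABLE dbo.loginfo
ADD CONSTRAINT UQ_loginfo_empid_dtpunched UNIQUE (empid, dtpunched);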

mySQL query to search all tables within a database for a string?
