I need to execute queries like this:
SELECT draw,SUM(uplata) as uplata, SUM(wonamount) as isplata,
SUM(uplata)- SUM(wonamount) as stanje,
COUNT(*) as ukupno_tiketa FROM macau.tickets
WHERE shop ='105' and status != 1 and draw = 1;
I need to execute that query 100 times, changing only draw = 1 to draw = 2, draw = 3, and so on.
How can I do this so that each SELECT produces a new row in MySQL Workbench, letting me export everything to a CSV file?
First you need to write a stored procedure with a WHILE loop:
drop procedure if exists load_test_data;
delimiter #
create procedure load_test_data()
begin
declare v_max int unsigned default 100;
declare v_counter int unsigned default 1; -- the question wants draw = 1 .. 100
-- truncate before starting the transaction: TRUNCATE causes an implicit commit
truncate table new_table;
start transaction;
while v_counter <= v_max do
  insert into new_table
  SELECT draw, SUM(uplata) as uplata, SUM(wonamount) as isplata,
         SUM(uplata) - SUM(wonamount) as stanje, COUNT(*) as ukupno_tiketa
  FROM macau.tickets
  WHERE shop = '105' and status != 1 and draw = v_counter;
  set v_counter = v_counter + 1;
end while;
commit;
end #
delimiter ;
and later you can call this procedure to run it:
call load_test_data();
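Note that the procedure assumes new_table already exists with columns matching the SELECT. A minimal sketch of a suitable definition (the column types are assumptions, since the question does not show them):
CREATE TABLE IF NOT EXISTS new_table (
    draw INT,
    uplata DECIMAL(12,2),     -- assumed numeric type for SUM(uplata)
    isplata DECIMAL(12,2),    -- assumed numeric type for SUM(wonamount)
    stanje DECIMAL(12,2),
    ukupno_tiketa INT
);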
And you can dump your new_table to a csv file:
SELECT *
FROM new_table
INTO OUTFILE '[path]/File.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
NOTE: Replace [path] with the folder you want the file to be written to. Keep in mind that the folder must be writable by the MySQL server process, with full permissions set.
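If the export fails even though the folder permissions look right, the server's secure_file_priv setting is a common culprit; a quick way to inspect it (a diagnostic aside, not part of the original answer):
-- Shows where OUTFILE may write: a path means only that directory,
-- an empty value means unrestricted, NULL means the feature is disabled
SHOW VARIABLES LIKE 'secure_file_priv';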
I have two (2) databases with dissimilar schemas:
db1, migrated from MSSQL to MySQL,
and
db2, created from a Laravel migration.
Here's the challenge:
The tables of db1 do not have id columns (primary keys) like those found on the db2 tables, so I kept getting the warning message:
Current selection does not contain a unique column. Grid edit, checkbox, Edit, Copy and Delete features are not available.
So I had to inject the id columns into the tables in db1.
I need to extract the fields [level_name, class_name] from stdlist in db1,
create levels (id, level_name, X, Y) on db2, and
classes (id, class_name, level_id) on db2.
To throw more light: the level_id should come from the already created levels table.
I have already succeeded in extracting the first instance using the following snippet:
First Query to Create Levels
INSERT INTO db2.levels(level_name,X,Y)
SELECT class_name as level_name,1 as X,ClassAdmitted as Y
FROM db1.stdlist
GROUP BY ClassAdmitted;
This was successful.
Now, I need to use the newly created ids in levels table to fill up level_id column in the classes table.
For that to be possible, must I re-run the above selection? Or is there a better way, perhaps joining db2.levels back to db1.stdlist, to extract the required fields for the new insert?
I'll appreciate any help. Thanks in advance.
Try adding a Processed column and then do a WHILE EXISTS loop:
INSERT INTO db2.levels(level_name,X,Y,Processed)
SELECT class_name as level_name,1 as X,ClassAdmitted as Y, 0 as Processed
FROM db1.stdlist
GROUP BY ClassAdmitted;

WHILE EXISTS(SELECT * FROM db2.levels WHERE Processed = 0)
BEGIN
    DECLARE @level_name AS VARCHAR(MAX)
    SELECT TOP 1 @level_name=level_name FROM db2.levels WHERE Processed = 0
    --YOUR CODE
    UPDATE db2.levels SET Processed=1 WHERE level_name=@level_name
END
You may need to dump into a temp table first and then insert into your real table (db2.levels) when you're done processing. Then you wouldn't need the unnecessary Processed column on the final table.
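For reference, the join-based route the question asks about could look roughly like this. It is only a sketch: it assumes levels.Y still holds the ClassAdmitted value it was seeded from, and the classes column names are taken from the question:
-- Derive each class's level_id by joining the new levels rows
-- back to stdlist on the ClassAdmitted value stored in Y
INSERT INTO db2.classes (class_name, level_id)
SELECT DISTINCT s.class_name, l.id
FROM db1.stdlist AS s
JOIN db2.levels AS l ON l.Y = s.ClassAdmitted
WHERE s.class_name IS NOT NULL;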
This is what worked for me eventually:
First, I picked up the levels from the initial database thus:
INSERT INTO db2.levels(`name`,`school_id`,`short_code`)
SELECT name ,school_id,short_code
FROM db1.levels
GROUP BY name
ORDER BY CAST(IF(REPLACE(name,' ','')='','0',REPLACE(name,' ','')) AS UNSIGNED
INTEGER) ASC;
Then I created a PROCEDURE for the classes insertion
DELIMITER //
CREATE PROCEDURE dowhileClasses()
BEGIN
  SET @Level = 1;
  -- the subquery must be parenthesized when assigned with SET
  SET @Max = (SELECT count(`id`) FROM db2.levels);
  START TRANSACTION;
  WHILE @Level <= @Max DO
    BEGIN
      DECLARE val1 VARCHAR(255) DEFAULT NULL;
      DECLARE val2 VARCHAR(255) DEFAULT NULL;
      DECLARE bDone TINYINT DEFAULT 0;
      DECLARE curs CURSOR FOR
        SELECT trim(`Class1`)
        FROM db1.dbo_tblstudent
        WHERE CAST(IF(REPLACE(name,' ','')='','0',REPLACE(name,' ','')) AS UNSIGNED INTEGER) = @Level
        GROUP BY `Class1`;
      DECLARE CONTINUE HANDLER FOR NOT FOUND SET bDone = 1;
      OPEN curs;
      SET bDone = 0;
      REPEAT
        FETCH curs INTO val1;
        IF bDone = 0 THEN
          SET @classname = val1;
          SET @levelID = (SELECT id FROM db2.levels WHERE short_code = @Level LIMIT 1);
          SET @schoolId = 1;
          SET @classId = (SELECT `id` FROM db2.classes WHERE class_name = @classname AND level_id = @levelID LIMIT 1);
          IF @classId IS NULL AND @classname IS NOT NULL THEN
            INSERT INTO db2.classes(class_name,school_id,level_id)
            VALUES(@classname, @schoolId, @levelID);
          END IF;
        END IF;
      UNTIL bDone END REPEAT;
      CLOSE curs;
    END;
    SELECT CONCAT('Level: ', @Level, ' Done');
    SET @Level = @Level + 1;
  END WHILE;
  COMMIT;
END //
DELIMITER ;
CALL dowhileClasses();
With this, I was able to dump the classes profile matching the previously created level_ids.
The whole magic relies on the cursor mechanism.
For further details, see the MySQL documentation on cursors.
I want to do some tests with MySQL indexes and would like to see the effects of the different types of indexes (covering, clustering) on different queries myself by experimenting.
I have a very simple table with 3 cols, a, b, c.
DROP TABLE IF EXISTS test_table;
CREATE TABLE IF NOT EXISTS test_table (
a INT UNSIGNED NOT NULL,
b INT UNSIGNED NOT NULL,
c INT UNSIGNED NOT NULL
);
I then created a stored procedure to populate it with 1000 rows of random values from 1 to 100.
DROP PROCEDURE IF EXISTS test_procedure;
DELIMITER //
CREATE PROCEDURE test_procedure
(IN loop_amount INT)
BEGIN
DECLARE rand_max INT DEFAULT 99;
DECLARE a, b, c INT;
DECLARE i INT DEFAULT 0;
TRUNCATE TABLE test_table;
WHILE i < loop_amount DO
SET a = RAND() * rand_max + 1;
SET b = RAND() * rand_max + 1;
SET c = RAND() * rand_max + 1;
INSERT INTO test_table VALUES (a, b, c);
SET i = i + 1;
END WHILE;
END //
DELIMITER ;
CALL test_procedure(1000);
I could not run this with a much larger count because it became slow after about 1000 loops.
I then doubled the table 13 times.
DROP PROCEDURE IF EXISTS test_procedure;
DELIMITER //
CREATE PROCEDURE test_procedure
(IN loop_amount INT)
BEGIN
DECLARE i INT DEFAULT 0;
WHILE i < loop_amount DO
INSERT INTO test_table SELECT * FROM test_table;
SET i = i + 1;
END WHILE;
END //
DELIMITER ;
CALL test_procedure(13);
But now it has around 16 million rows, and I can't run this procedure anymore: calling it with 1 as the parameter takes about a minute, the next doubling takes two minutes, and so on. How can I get to 1 billion rows faster?
Also, SELECT COUNT(*) FROM test_table; is really slow. How can I speed it up to confirm the table size?
load data local infile 'c:/a.txt' into table test_table;
(Use a forward slash, or escape the backslash, in the path: MySQL treats a bare backslash in a string literal as an escape character.)
[SQL]
load data local infile 'c:/a.txt' into table test_table;
Affected rows: 1000000
Time: 12.177s
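As for confirming the table size: COUNT(*) scans the whole table under InnoDB, but if an approximation is enough, the table statistics are much cheaper to read (this is an estimate, not an exact count):
-- TABLE_ROWS is only an estimate for InnoDB, but it returns instantly
SELECT TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'test_table';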
I have a simple table test with three columns:
id -> primary key (auto increment)
name -> varchar
age -> int
I am using a simple stored procedure to populate the table with 1 million rows.
drop PROCEDURE if EXISTS big_data;
DELIMITER //
CREATE PROCEDURE big_data()
BEGIN
  DECLARE i int DEFAULT 1;
  WHILE i <= 10000000 DO
    INSERT INTO test(id, name, age) VALUES (i, 'name', 34);
    SET i = i + 1;
  END WHILE;
END //
DELIMITER ;
CALL big_data();
The problem I am facing is that inserting 1 million records takes almost 6 to 7 hours with this simple schema. I want to know how to insert data faster.
I do not want to use the LOAD DATA INFILE query, and I have not changed any settings in my.ini.
I simply want to know the reason for such slow inserts.
System Specification
Mysql version 5.5.25a
innodb_version =1.1.8
System = 16GB RAM / 8 cores @ 2.70 GHz
Inserting 100 rows at a time is 10 times as fast as 100 1-row INSERTs:
INSERT INTO foo (a, b, c)
VALUES
(1,2,3),
(4,5,6), ...;
LOAD DATA is probably even faster.
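Much of the slowness also comes from autocommit forcing a log flush after every single-row INSERT. Here is a hedged sketch of the question's loop with explicit transaction batches (the procedure name and the batch size of 10,000 are arbitrary choices, not from the original answer):
DROP PROCEDURE IF EXISTS big_data_batched;
DELIMITER //
CREATE PROCEDURE big_data_batched()
BEGIN
  DECLARE i INT DEFAULT 1;
  START TRANSACTION;
  WHILE i <= 1000000 DO
    INSERT INTO test(id, name, age) VALUES (i, 'name', 34);
    IF i MOD 10000 = 0 THEN
      COMMIT;              -- flush once per 10,000 rows instead of once per row
      START TRANSACTION;
    END IF;
    SET i = i + 1;
  END WHILE;
  COMMIT;
END //
DELIMITER ;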
Try building your query as a single string and then executing it at once:
drop PROCEDURE if EXISTS big_data;
DELIMITER //
CREATE PROCEDURE big_data()
BEGIN
  DECLARE total_count TEXT DEFAULT '';
  DECLARE i int DEFAULT 1;
  WHILE i <= 10000000 DO
    -- MySQL concatenates strings with CONCAT(), not +,
    -- and the name literal must be quoted
    SET total_count = CONCAT(total_count, '(', i, ", 'name', 34),");
    SET i = i + 1;
  END WHILE;
  SET total_count = TRIM(TRAILING ',' FROM total_count);
  -- a statement assembled as a string has to run as a prepared statement;
  -- beware that this many rows in one string will blow past max_allowed_packet,
  -- so in practice build and execute the string in smaller chunks
  SET @sql = CONCAT('INSERT INTO test(id, name, age) VALUES ', total_count);
  PREPARE stmt FROM @sql;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END //
DELIMITER ;
I have a table called Std_Components which acts as an index for a list of components with associated tables. The column AssociatedTable holds the name of the table that actually contains the component data.
(Screenshots of the Std_Components table and the Std_SteeringPumps table data were attached here.)
I am trying to create a stored procedure that will copy the Std_Components table, as well as all associated tables, under a new name. For example, if I provide 001 as a parameter to this stored procedure, I should be able to create new tables like C001_Components, C001_SteeringPumps, and so on.
This is what I have done so far:
ALTER PROCEDURE [dbo].[sgi_sp_CreateTablesForNewCompany]
    -- Add the parameters for the stored procedure here
    @CompanyId varchar(5)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    -- declare variables
    declare @qry as varchar(2000)
    declare @compTblName as varchar(100)
    set @compTblName = 'C'+@companyId +'_Components'
    -- Check if table already exists
    IF object_id(@compTblName) is not null
        return
    -- Create main component index table by copying standard component table --
    set @qry = 'Select * into '+@compTblName+' From Std_Components;';
    --print @qry
    --execute (@qry)
    set @qry =@qry + 'Update C'+@companyId +'_Components Set AssociatedTable=''C'+@companyId +'''+substring(AssociatedTable,4,200);';
    --print @qry
    --exec @qry
    -- Create all child tables --
    Select * Into #TempTbl From dbo.Std_Components
    Declare @Id int
    While (Select Count(*) From #TempTbl) > 0
    Begin
        declare @rowTableName as varchar(50)
        declare @compNewTbl as varchar(50)
        Select Top 1 @rowTableName=AssociatedTable, @Id = Id From #TempTbl
        set @compNewTbl = 'C'+@companyId + substring(@rowTableName,4,200);
        set @qry = @qry + 'Select * into '+@compNewTbl+' From ' + @rowTableName + ';'
        --print @qry
        --exec @qry
        Delete #TempTbl Where Id = @Id
    End
    print @qry
    exec @qry
END
Here is the output of the print statement for the query it generates -
Select * into C001_Components From Std_Components;
Update C001_Components Set AssociatedTable='C001'+substring(AssociatedTable,4,200);
Select * into C001_SteeringPumps From Std_SteeringPumps;
But when the stored procedure is executed, I get the following error -
Msg 203, Level 16, State 2, Procedure sgi_sp_CreateTablesForNewCompany, Line 56
The name 'Select * into C001_Components From Std_Components;Update C001_Components Set AssociatedTable='C001'+substring(AssociatedTable,4,200);Select * into C001_SteeringPumps From Std_SteeringPumps;' is not a valid identifier.
Can anybody help me resolve this issue?
Thanks for sharing your time and wisdom.
The error you're getting is because the EXEC statement (the last line of the stored procedure) needs to have parentheses around the @qry variable so that it becomes
exec(@qry)
Without the parentheses, it treats the entire SQL string as a stored procedure name.
The invalid identifier is around the AssociatedTable part:
Set AssociatedTable='C001'+substring(AssociatedTable,4,200); will not run, as there is no scope for AssociatedTable to substring; the string needs to contain the complete name of the table to be executable.
Instead of
exec @qry;
you need
exec sp_executesql @qry;
You'll also need to change the type of @qry to NVARCHAR. Note that because of the dynamic SQL, the proc is prone to SQL injection and other escaping issues (i.e. ensure that @CompanyId is validated).
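A minimal sketch of that validation, assuming company ids are meant to be digits only (the rule itself is an assumption, not from the original answer):
-- Reject an empty id or one containing any non-digit character
IF @CompanyId IS NULL OR @CompanyId = '' OR @CompanyId LIKE '%[^0-9]%'
BEGIN
    RAISERROR('Invalid CompanyId', 16, 1);
    RETURN;
END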
Multi-value insert example: it works manually but NOT in a MySQL stored procedure.
INSERT INTO input_data1(mobile) VALUES (9619825525),(9619825255),(9324198256),(9013000002),(9999999450),(9999999876) ;
I am getting a syntax error near the "str" word in the proc below. Can anyone let me know how to make this multi-value INSERT work in a procedure?
DELIMITER |
DROP PROCEDURE IF EXISTS mobile_series1;
CREATE PROCEDURE mobile_series1(IN str text)
LANGUAGE SQL READS SQL DATA
BEGIN
DROP TABLE IF EXISTS input_data1 ;
CREATE TEMPORARY TABLE input_data1 (mobile varchar(1000)) engine=memory;
INSERT INTO input_data1(mobile) VALUES str;
END |
DELIMITER ;
Thanks in Advance.
I don't have a MySQL server handy, so there are probably syntax errors and off-by-one errors (i.e. it may not capture the last item on the list, may not progress past the first item, etc.; problems fixed by putting a +1 in the code). The core issue is that MySQL will not substitute a variable into the syntax of a statement, so VALUES str can never work; the list has to be tokenized, or the statement built dynamically. You basically want to replace your INSERT statement with something like this.
DECLARE _cursor INT DEFAULT 1;
DECLARE _tokenlength INT DEFAULT 0;
DECLARE _token VARCHAR(1000) DEFAULT NULL;
-- LOCATE(needle, haystack, start) returns 0 once no comma is left
SET _tokenlength = LOCATE(',', str, _cursor) - _cursor;
token_loop: LOOP
    IF _tokenlength < 0 THEN
        -- no comma found: everything from the cursor onward is the last token
        SET _token = SUBSTRING(str, _cursor);
        INSERT INTO input_data1(mobile) VALUES (_token);
        LEAVE token_loop;
    END IF;
    SET _token = SUBSTRING(str, _cursor, _tokenlength);
    INSERT INTO input_data1(mobile) VALUES (_token);
    SET _cursor = _cursor + _tokenlength + 1;
    SET _tokenlength = LOCATE(',', str, _cursor) - _cursor;
END LOOP;
Your procedure call would then be something like
CALL mobile_series1('9619825525,9619825255,9324198256');
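For what it's worth, a shorter route avoids tokenizing altogether by rewriting the comma list into a multi-row VALUES clause and running it as a prepared statement. This is only a sketch, assuming str contains plain numbers (no quoting or escaping is done, so don't feed it untrusted input); the procedure name mobile_series2 is just for illustration:
DROP PROCEDURE IF EXISTS mobile_series2;
DELIMITER |
CREATE PROCEDURE mobile_series2(IN str TEXT)
BEGIN
    DROP TEMPORARY TABLE IF EXISTS input_data1;
    CREATE TEMPORARY TABLE input_data1 (mobile VARCHAR(1000)) ENGINE=memory;
    -- turn '1,2,3' into "INSERT ... VALUES (1),(2),(3)" and execute it
    SET @sql = CONCAT('INSERT INTO input_data1(mobile) VALUES (',
                      REPLACE(str, ',', '),('), ')');
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
END |
DELIMITER ;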