Integration testing on live test website, restoring the data after test suite completion - mysql

I want to write integration/automated test cases using Selenium against a live website, say testing.example.com. This site is the staging site of example.com.
When I run the test cases, new data is created, updated, and deleted. After the test suite completes, I want to restore the database to the state it was in before the test cases ran.
So, for example, the state of the database before running the test cases --> s1
and the state of the database after running the test cases --> s2
I want the database to go back to state s1.
I am using the Rails framework and a MySQL/PostgreSQL database.
One solution could be taking a dump of the database before running the test cases and then restoring it after the run completes.
What other solutions are there?
Thanks

I did exactly that until the database got too big and the procedure took too long.
So I changed it to a kind of rollback.
As part of the setup in the main class called by all test cases:
def run_once_beginning
  DEBUG("start", "Running start function")
  DefaultDatabase.prepare_rollback
  DefaultDatabase.resetDB
end

def run_once_end
  DEBUG("start", "Running end function")
  DefaultDatabase.drop_all_triggers
  DefaultDatabase.resetDB
end

def setup
  if State.instance.started == false
    State.instance.started = true
    run_once_beginning
    at_exit do
      run_once_end
    end
  end
end
DefaultDatabase.rb
def prepare_rollback
  query = "CREATE TABLE IF NOT EXISTS `changes` (`table_name` varchar(60) NOT NULL, PRIMARY KEY (`table_name`))"
  HandleDB.sendQuery(query)
  query = "show tables;"
  tables = HandleDB.sendQuery(query)
  exclude = ["phpsessions", "cameras", "changes", "connections", "sysevents"]
  triggers = ""
  tables.each { |name|
    if exclude.index(name) == nil
      triggers += "DROP TRIGGER IF EXISTS ins_#{name};"
      triggers += "CREATE TRIGGER ins_#{name} AFTER INSERT ON #{name}
        FOR EACH ROW BEGIN
          INSERT IGNORE INTO `changes` VALUES ('#{name}');
        END;"
      triggers += "DROP TRIGGER IF EXISTS up_#{name};"
      triggers += "CREATE TRIGGER up_#{name} AFTER UPDATE ON #{name}
        FOR EACH ROW BEGIN
          INSERT IGNORE INTO `changes` VALUES ('#{name}');
        END;"
      triggers += "DROP TRIGGER IF EXISTS del_#{name};"
      triggers += "CREATE TRIGGER del_#{name} AFTER DELETE ON #{name}
        FOR EACH ROW BEGIN
          INSERT IGNORE INTO `changes` VALUES ('#{name}');
        END;"
    end
  }
  setup_connection = Mysql.new(Constants::DB["ServerAddress"], "root", Constants::DB["Password"],
                               Constants::DB["Database"], Constants::DB["ServerPort"], nil, Mysql::CLIENT_MULTI_STATEMENTS)
  setup_connection.query(triggers)
  # Clear out all the results.
  while setup_connection.more_results
    setup_connection.next_result
  end
  query = "DROP PROCEDURE IF EXISTS RestoreProc;"
  setup_connection.query(query)
  # This is a MySQL stored procedure. It uses a cursor to extract the names of
  # changed tables from the `changes` table, then truncates those tables and
  # repopulates them with data from the default db.
  query = "CREATE PROCEDURE RestoreProc()
    BEGIN
      DECLARE changed_name VARCHAR(60);
      DECLARE no_more_changes INT DEFAULT 0;
      DECLARE tables_cursor CURSOR FOR SELECT table_name FROM changes;
      DECLARE CONTINUE HANDLER FOR NOT FOUND SET no_more_changes = 1;
      OPEN tables_cursor;
      FETCH tables_cursor INTO changed_name;
      IF changed_name IS NOT NULL THEN
        REPEAT
          SET @sql_text = CONCAT('TRUNCATE ', changed_name);
          PREPARE stmt FROM @sql_text;
          EXECUTE stmt;
          DEALLOCATE PREPARE stmt;
          SET @sql_text = CONCAT('INSERT INTO ', changed_name, ' SELECT * FROM default_db.', changed_name);
          PREPARE stmt FROM @sql_text;
          EXECUTE stmt;
          DEALLOCATE PREPARE stmt;
          FETCH tables_cursor INTO changed_name;
        UNTIL no_more_changes = 1
        END REPEAT;
      END IF;
      CLOSE tables_cursor;
      TRUNCATE changes;
    END;"
  setup_connection.query(query)
  setup_connection.close
end

def drop_all_triggers
  query = "show tables;"
  tables = HandleDB.sendQuery(query)
  triggers = ""
  tables.each { |name|
    triggers += "DROP TRIGGER IF EXISTS del_#{name};"
    triggers += "DROP TRIGGER IF EXISTS up_#{name};"
    triggers += "DROP TRIGGER IF EXISTS ins_#{name};"
  }
  triggers_connection = Mysql.new(Constants::DB["ServerAddress"], "root", Constants::DB["Password"],
                                  Constants::DB["Database"], Constants::DB["ServerPort"], nil, Mysql::CLIENT_MULTI_STATEMENTS)
  triggers_connection.query(triggers)
  triggers_connection.close
end

def resetDB
  restore_connection = Mysql.new(Constants::DB["ServerAddress"], Constants::DB["User"], Constants::DB["Password"],
                                 Constants::DB["Database"], Constants::DB["ServerPort"], nil, Mysql::CLIENT_MULTI_RESULTS)
  restore_connection.query("CALL RestoreProc();")
  restore_connection.close
end
def drop_all_triggers
query = "show tables;"
tables = HandleDB.sendQuery(query)
triggers = ""
tables.each{ |name|
triggers += "DROP TRIGGER IF EXISTS del_#{name};"
triggers += "DROP TRIGGER IF EXISTS up_#{name};"
triggers += "DROP TRIGGER IF EXISTS ins_#{name};"
}
triggers_connection = Mysql.new(Constants::DB["ServerAddress"],"root", Constants::DB["Password"],
Constants::DB["Database"],Constants::DB["ServerPort"], nil, Mysql::CLIENT_MULTI_STATEMENTS)
triggers_connection.query(triggers)
triggers_connection.close
end
def resetDB
restore_connection = Mysql.new(Constants::DB["ServerAddress"],Constants::DB["User"], Constants::DB["Password"],
Constants::DB["Database"], Constants::DB["ServerPort"], nil, Mysql::CLIENT_MULTI_RESULTS)
restore_connection.query("CALL RestoreProc();")
restore_connection.close
end
For this to work, a copy of the original database, in the state you want to return to, must exist under the name "default_db".
"exclude" lists tables that will not be tracked for changes.
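The trigger bookkeeping above is mostly string assembly, so it can be sketched outside Ruby as well. Here is a minimal Python version of the generation step (the function name and the two-table example are hypothetical; the `changes` table and the `ins_`/`up_`/`del_` prefixes come from the answer):

```python
def build_change_triggers(tables, exclude=("changes",)):
    """For each table, emit DROP/CREATE TRIGGER statements that record the
    table's name in `changes` after any INSERT, UPDATE or DELETE."""
    statements = []
    for name in tables:
        if name in exclude:
            continue
        for prefix, event in (("ins", "INSERT"), ("up", "UPDATE"), ("del", "DELETE")):
            statements.append(f"DROP TRIGGER IF EXISTS {prefix}_{name};")
            statements.append(
                f"CREATE TRIGGER {prefix}_{name} AFTER {event} ON {name} "
                f"FOR EACH ROW INSERT IGNORE INTO `changes` VALUES ('{name}');"
            )
    return statements

sql = build_change_triggers(["users", "orders", "changes"])
```

The `changes` table itself is excluded, exactly as in the answer's exclude list, so the triggers never fire on their own bookkeeping.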


MYSQL drop table when all the values within a specific column are equal to "DONE"

I am programming MySQL and I use Python on a Raspberry Pi 4.
I need to drop a table when all the values in my status_s column are equal to "DONE". I cannot figure out how to drop a table under a certain condition. The MySQL tables can be found here for testing:
https://www.db-fiddle.com/f/siZmmKWLjRDdpYX6deEPYF/1
Initially, the status_s values are not "DONE". As my program runs, the values update and eventually all of them will be "DONE"; at that point, I no longer need the table.
Thanks in advance
UPDATE: Adding a snippet of the Python program
def update_data_when_complete(conn, table_name):
    cur = conn.cursor()
    sql = "SELECT COUNT(DISTINCT(ID)) = SUM(Status = 'DONE') FROM {table}"
    cur.execute(sql.format(table=table_name))
    complete_result = cur.fetchone()
    conn.commit()
    # print("Complete result = ", complete_result[0])
    # if complete_result[0] is 1 here, all rows are "DONE" and the table must be deleted after a few minutes
    if complete_result[0] == 1:
        sql = "DROP TABLE {table}"
        cur.execute(sql.format(table=table_name))
        conn.commit()
    else:
        print("Table not fully complete yet")
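The SELECT above collapses the check into a single query; the same condition is easy to state in plain Python once the status column has been fetched (a sketch; `table_complete` is a hypothetical helper, the "DONE" sentinel is from the question):

```python
def table_complete(statuses):
    """True only when there is at least one row and every status_s is 'DONE'."""
    statuses = list(statuses)
    return bool(statuses) and all(s == "DONE" for s in statuses)

# e.g. after: cur.execute(f"SELECT status_s FROM {table_name}")
#             done = table_complete(row[0] for row in cur.fetchall())
```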
Use the Event Scheduler.
Create an event:
CREATE EVENT remove_temptable
ON SCHEDULE
EVERY 1 MINUTE
COMMENT 'Remove `temptable` when its `status_s` column is equal to "DONE" in all rows.'
DO
BEGIN
  IF EXISTS ( SELECT NULL
              FROM INFORMATION_SCHEMA.TABLES
              WHERE TABLE_SCHEMA = 'my_database'
                AND TABLE_NAME = 'temptable' ) THEN
    IF !( SELECT SUM(status_s != 'DONE')
          FROM my_database.temptable ) THEN
      DROP TABLE my_database.temptable;
    END IF;
  END IF;
END;
This event first checks that the table temptable exists. If it does, it checks whether any row has a non-NULL status_s value other than 'DONE'. If there is none, it drops the table.
The event runs every minute; you may adjust how often it executes. You can also enable or disable it with ALTER EVENT (for example, enable it after temptable is created and disable it once you have confirmed the table was dropped).
Do not forget to enable the Event Scheduler.
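To see the check-then-drop logic in isolation, here is a sketch using Python's built-in sqlite3 (sqlite has no event scheduler, so `drop_if_done` would be invoked on a timer; the table and column names mirror the answer, everything else is an assumption):

```python
import sqlite3

def drop_if_done(conn, table="temptable"):
    """Drop `table` once every status_s value equals 'DONE'. Returns True if dropped."""
    cur = conn.cursor()
    # First check the table still exists (mirrors the INFORMATION_SCHEMA test).
    cur.execute("SELECT name FROM sqlite_master WHERE type = 'table' AND name = ?", (table,))
    if cur.fetchone() is None:
        return False
    # SUM of the boolean counts rows that are not yet 'DONE' (NULL for an empty table).
    cur.execute(f"SELECT SUM(status_s != 'DONE') FROM {table}")
    (pending,) = cur.fetchone()
    if pending == 0:
        cur.execute(f"DROP TABLE {table}")
        conn.commit()
        return True
    return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temptable (id INTEGER, status_s TEXT)")
conn.executemany("INSERT INTO temptable VALUES (?, ?)", [(1, "DONE"), (2, "RUNNING")])
```

As in the MySQL event, an empty table is left alone: SUM over zero rows yields NULL, which fails the `== 0` test.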

SQL Event - DELETE AND UPDATE rows on tables after UPDATE other table

I'd like to have a tricky SQL statement as an event that runs every couple of minutes.
Currently I'm doing this in Java, using 3 separate statements executed sequentially in a single transactional connection.
Q: I don't know how to construct such an SQL statement without Java. If a single SQL statement is impossible, I'd like to use a transaction (as I do in Java) and roll back if any of the separate statements fails.
My Case:
I have 3 tables: "Factory", "Plan", "Machine".
I want to do something as below:
1.
WHERE Machines.annualCheck == "TRUE"
SET Machine.status = "IN_ANNUAL_CHECK"
For machines that were updated I need to:
2.1 Update the related factory:
WHERE Factory.id == Machine.linkFactoryID
SET Factory.totalActiveMachines = Factory.totalActiveMachines - 1
2.2 Delete the upcoming plans that were assigned to the related machine:
DELETE rows WHERE Plan.willHandleByMachineID = Machine.ID
p.s. I'm using MySQL
Thank you!
Update:
Following Simonare's suggestion, I tried the following:
DELIMITER $
CREATE PROCEDURE annualCheck(IN Machine_ID int, IN Factory_ID int)
BEGIN
  UPDATE machine_table
    SET machine_table.annualCheck = 'IN_ANNUAL_CHECK'
    WHERE machine_table.machine_id = Machine_ID;
  UPDATE factory_table
    SET factory_table.totalActiveMachines = factory_table.totalActiveMachines - 1
    WHERE factory_table.factory_id = Factory_ID;
  DELETE FROM plan_table WHERE plan_table.assign_to_machine = Machine_ID
END$

DELIMITER $$
BEGIN
  SELECT @m_id = machine_id, @f_id = link_factory_id
  FROM machine_table
  WHERE machine_table.annualCheck = 'TRUE';
END$$

CALL annualCheck(@m_id, @f_id)
I don't know why, but I'm running into syntax errors - one after the other.
It's my first time using PROCEDURE and DELIMITER. Am I doing it right?
You can use a stored procedure:
delimiter //
CREATE PROCEDURE myProc (IN Machine_ID int)
BEGIN
  UPDATE Machine
    SET status = 'IN_ANNUAL_CHECK'
    WHERE annualCheck = 'TRUE';
  UPDATE Factory
    SET totalActiveMachines = totalActiveMachines - 1
    WHERE id = (SELECT linkFactoryID FROM Machine WHERE ID = Machine_ID);
  DELETE FROM Plan WHERE willHandleByMachineID = Machine_ID;
END//
Then you can execute it either from MySQL:
CALL myProc(@a);
or from Java
It is also possible to create a trigger on the Machine table, something like this:
CREATE TRIGGER `TRG_Machines_AfterUpdate` AFTER UPDATE ON `Machine` FOR EACH ROW
BEGIN
  IF OLD.annualCheck = 'TRUE' AND NEW.annualCheck = 'IN_ANNUAL_CHECK' THEN
    UPDATE Factory
      SET totalActiveMachines = totalActiveMachines - 1
      WHERE id = NEW.linkFactoryID;
    DELETE FROM Plan
      WHERE willHandleByMachineID = NEW.ID;
  END IF;
END
Then you can just issue a normal update:
UPDATE Machine SET annualCheck = 'IN_ANNUAL_CHECK' WHERE annualCheck = 'TRUE'
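The trigger keeps everything server-side; if you stay with a client instead, the three statements can still be applied atomically. A sketch with Python's built-in sqlite3 stands in for the MySQL transaction (the schema is a minimal assumption; table and column names follow the question):

```python
import sqlite3

def annual_check(conn):
    """Atomically flag machines due for annual check, decrement their factory
    counters, and remove their pending plans; rolls back if any statement fails."""
    with conn:  # one transaction: commit on success, rollback on exception
        # Decrement each factory by the number of its machines going into check.
        conn.execute(
            "UPDATE Factory SET totalActiveMachines = totalActiveMachines - "
            "(SELECT COUNT(*) FROM Machine "
            " WHERE linkFactoryID = Factory.id AND annualCheck = 'TRUE')"
        )
        # Delete upcoming plans assigned to those machines.
        conn.execute(
            "DELETE FROM Plan WHERE willHandleByMachineID IN "
            "(SELECT ID FROM Machine WHERE annualCheck = 'TRUE')"
        )
        # Finally flag the machines themselves.
        conn.execute(
            "UPDATE Machine SET status = 'IN_ANNUAL_CHECK' WHERE annualCheck = 'TRUE'"
        )

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Factory (id INTEGER PRIMARY KEY, totalActiveMachines INTEGER);
    CREATE TABLE Machine (ID INTEGER PRIMARY KEY, annualCheck TEXT,
                          status TEXT, linkFactoryID INTEGER);
    CREATE TABLE Plan (id INTEGER PRIMARY KEY, willHandleByMachineID INTEGER);
    INSERT INTO Factory VALUES (1, 5);
    INSERT INTO Machine VALUES (10, 'TRUE', 'ACTIVE', 1), (11, 'FALSE', 'ACTIVE', 1);
    INSERT INTO Plan VALUES (100, 10), (101, 11);
""")
annual_check(conn)
```

The `with conn:` block is the transactional boundary the question asks for: either all three statements apply, or none do.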

Informix External Tables DROP

I am trying to unload data from an internal Informix table into an external file using the following stored procedure.
create table table_name(
column1 int,
column2 int
)
insert into table_name(column1, column2) values(1, 1);
insert into table_name(column1, column2) values(2, 2);
insert into table_name(column1, column2) values(3, 3);
=====================================================
create procedure spunloaddata(p_unload_filename varchar(128))
returning
  int as num_recs;

  DEFINE l_set SMALLINT;
  DEFINE l_statusCode int;
  DEFINE l_exec_string lvarchar(4000);
  DEFINE l_unique_id INT8;
  DEFINE l_num_recs smallint;

  ON EXCEPTION SET l_set
    IF (l_set = -535) THEN                            -- already in TRANSACTION
    ELIF (l_set = -244) THEN                          -- row locked
      RETURN l_set;
    ELIF ((l_set <> -958) AND (l_set <> -310)) THEN   -- temp table already exists
      RETURN -1;
    END IF
  END EXCEPTION WITH RESUME;

  TRACE ON;
  -- TRACE OFF;
  SET LOCK MODE TO WAIT 30;
  SET ISOLATION TO DIRTY READ;

  LET l_num_recs = 0;
  LET l_unique_id = MOD(DBINFO("sessionid"), 100000) * 100000 + DBINFO('UTC_CURRENT');

  -- get all EMAIL notifications
  LET l_exec_string = 'SELECT column1, column2 '
                   || ' FROM table_name '
                   || ' INTO EXTERNAL ext_temp_EMAILnotifications' || l_unique_id
                   || ' USING (DATAFILES("DISK:'
                   || TRIM(p_unload_filename)
                   || '"))';
  TRACE l_exec_string;
  EXECUTE IMMEDIATE l_exec_string;
  LET l_statusCode = SQLCODE;
  LET l_num_recs = DBINFO('sqlca.sqlerrd2');
  IF (l_statusCode <> 0) THEN
    LET l_num_recs = l_statusCode;
  END IF

  BEGIN
    ON EXCEPTION IN (-206) END EXCEPTION WITH RESUME;
    LET l_exec_string = 'DROP TABLE ext_temp_EMAILnotifications' || l_unique_id;
    EXECUTE IMMEDIATE l_exec_string;
  END

  RETURN l_num_recs;
END PROCEDURE;
===========================================================================
execute stored proc
===========================================================================
dbaccess "databasename"<<!
execute procedure spunloaddata("/tmp/foo.unl");
!
The external tables are not getting dropped, and my database is filling up with them. The stored procedure fails with a -206 error (the specified table is not in the database) at the DROP TABLE statement, yet I can see the table being created.
When I do a dbaccess databasename and go into Table > Info, I see all the ext_temp_<uniqueid> tables listed. When I try to drop these tables manually through dbaccess, I get the same -206 error.
Any help is appreciated.

insert large volume of data in mysql

I want to insert at least 500,000 fresh records in one shot, for which I used a WHILE loop inside a procedure. My query works fine but takes a lot of time to execute, so I am looking for a way to make inserting a large volume of data faster. I have gone through many links but didn't find them helpful.
Note:
I want to insert fresh data, not data from an existing table.
Syntactically the code below is correct and working, so please do not provide suggestions regarding syntax.
Below is my procedure:
BEGIN
  DECLARE x INT;
  DECLARE start_s INT;
  DECLARE end_s INT;
  SET x = 0;
  PREPARE stmt FROM
    'insert into primary_packing(job_id, barcode, start_sn, end_sn, bundle_number, client_code, deno, number_sheets, number_pins, status)
     values (?,?,?,?,?,?,?,?,?,?)';
  SET @job_id = job_id, @barcode = barcode, @bundle_number = bundle_number, @client_code = client_code, @deno = deno, @number_sheets = number_sheets, @number_pins = number_pins, @status = 1;
  SET @start_sn = start_sn;
  SET @end_sn = end_sn;
  WHILE x <= multiply DO
    SET @start_s = (start_sn + (diff * x));
    SET @end_s = ((end_sn - 1) + (diff * x) + diff);
    EXECUTE stmt USING @job_id, @barcode, @start_s, @end_s, @bundle_number, @client_code, @deno, @number_sheets, @number_pins, @status;
    SET x = x + 1;
  END WHILE;
  DEALLOCATE PREPARE stmt;
END
Use the MySQL command LOAD DATA INFILE to load records from a .csv file into a specific table.
For more information and examples, refer to the following link:
http://dev.mysql.com/doc/refman/5.1/en/load-data.html
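With LOAD DATA INFILE the per-row work moves out of the stored procedure: generate the rows into a CSV file once, then let MySQL bulk-load it in one statement. A sketch of the generation half in Python (the serial-number arithmetic copies the procedure's loop; the function name, file name, and the fixed column values are assumptions):

```python
import csv
import os
import tempfile

def write_bundle_rows(path, start_sn, end_sn, diff, multiply, fixed_fields):
    """Write one CSV row per bundle, shifting the serial-number range by
    `diff` on each iteration, exactly like the procedure's WHILE loop."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for x in range(multiply + 1):
            start_s = start_sn + diff * x
            end_s = (end_sn - 1) + diff * x + diff
            writer.writerow(list(fixed_fields) + [start_s, end_s])

path = os.path.join(tempfile.gettempdir(), "bundles.csv")
write_bundle_rows(path, start_sn=1, end_sn=100, diff=100, multiply=2,
                  fixed_fields=["JOB1", "BC-01"])
```

The file can then be loaded with something like LOAD DATA INFILE '...' INTO TABLE primary_packing FIELDS TERMINATED BY ',' followed by a column list, which is typically far faster than 500,000 individual EXECUTEs.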

perl mySQL procedure with insert and select fails when in transaction

This is my perl code:
my $dbc = DBI->connect('DBI:mysql:test', "entcfg", "entcfg") || die "Could not connect to database: $DBI::errstr";
$dbc->{TraceLevel} = "2"; # debug mode
$dbc->{AutoCommit} = 0;   # enable transactions, if possible
$dbc->{RaiseError} = 1;   # raise database errors

### sql commands
my $particle_value = $dbc->prepare('CALL particle_test_value(?,?,?,?)');
my $particle_name = $dbc->prepare('CALL particle_test_name(?,?,?,?)');
my $table_test = $dbc->prepare('CALL table_test(?,?,?)');

sub actionMessage {
  my ($sh, $msgobj) = @_;
  my @result;
  my $return_ID;
  eval {
    $table_test->execute(undef, "value", "value"); # new item
    $return_ID = $table_test->fetchrow_array();    # get new row id
  };
  if ($@) {
    warn $@; # print the error
  }
}
The MySQL stored procedure is as follows:
CREATE DEFINER=`root`@`localhost` PROCEDURE `table_test`(
  v_id INT,
  v_name VARCHAR(255),
  v_value VARCHAR(255)
)
BEGIN
  INSERT INTO test (name, value) VALUES (v_name, v_value);
  SELECT LAST_INSERT_ID();
END
If I put $dbc->commit; after the execute or the fetchrow_array, I get a "Commands out of sync" error.
If I remove the AutoCommit line, the code works, but I can't use transactions.
If I try to change AutoCommit during the sub, I get this error: "Turning off AutoCommit failed".
Any help would be much appreciated.
You can't extract values from stored procedures like that.
Make table_test a function:
CREATE DEFINER=`root`@`localhost` FUNCTION `table_test`(
  v_name VARCHAR(255),
  v_value VARCHAR(255)
) RETURNS integer
BEGIN
  INSERT INTO test (name, value) VALUES (v_name, v_value);
  RETURN LAST_INSERT_ID();
END //
and have $table_test call it like a function (note: two placeholders now, matching the function's two parameters):
my $table_test = $dbc->prepare('SELECT table_test(?,?)');
edit: MySQL stored procedures can actually return results - the result of a SELECT statement inside the procedure is sent back to the SQL client. You have found a bug in DBD::mysql. The above works as a workaround.
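For comparison, here is the same insert-then-return-id pattern in Python's built-in sqlite3, where the client API exposes the generated key directly via `cursor.lastrowid` instead of a stored function (a sketch; the `test` table mirrors the one in the question):

```python
import sqlite3

def table_test(conn, name, value):
    """Insert one row and return its auto-generated id."""
    cur = conn.execute("INSERT INTO test (name, value) VALUES (?, ?)", (name, value))
    return cur.lastrowid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INTEGER PRIMARY KEY, name TEXT, value TEXT)")
with conn:  # explicit transaction; rolled back if the block raises
    first_id = table_test(conn, "value", "value")
```

Because the id comes back through the driver rather than a result set, there is no second pending result to get "out of sync" with the transaction handling.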