I am developing an application that uses a database (PostgreSQL, MySQL, Oracle, or MSSQL) on the customer's system.
Therefore I need to perform database updates with each new version.
I am currently in the concept phase and have nothing running in production.
All the DDL statements are in script files.
The structure looks like this:
tables\
    employees.sql
    customers.sql
    orders.sql
Those scripts are also in version control and can be used to build the database from scratch.
Of course there will be changes sometime in the future to those tables.
For example table employees gets created like this:
CREATE TABLE if not exists employees
(
EmployeeId serial,
FirstName text,
PRIMARY KEY (EmployeeId)
);
And in a future release that table gets extended:
ALTER TABLE employees ADD COLUMN address varchar(30);
During my research I found this example: https://stackoverflow.com/posts/115422/revisions.
A version number gets used to perform specific changes.
I like that concept and my idea is to implement something similar.
But instead of a system version number I was thinking about introducing a version for each table.
When creating the employees table it gets version number 1. With each change to that table the version number gets increased by 1. After adding the address column (ALTER statement above) the table version would be 2.
Each table change would happen in a nested transaction like this:
BEGIN TRANSACTION;
UPDATE employees SET Version = 2;
ALTER TABLE employees ADD COLUMN address varchar(30);
END TRANSACTION;
If the script's target version is not higher than the version already stored in the table, the transaction would be rolled back. The implementation of that logic is yet to be done.
The benefit would be that all changes to a table live inside that table's own script file, and the initial CREATE statement is always up to date.
For example when first creating the employee table it would look like this:
employees.sql
CREATE TABLE if not exists employees
(
EmployeeId serial,
FirstName text,
Version int default 1 not null,
PRIMARY KEY (EmployeeId)
);
After some changes it looks like this:
employees.sql
CREATE TABLE if not exists employees
(
EmployeeId serial,
FirstName varchar(100),
address varchar(80),
Version int default 3 not null, -- notice the 3
PRIMARY KEY (EmployeeId)
);
-- First Change
BEGIN TRANSACTION;
UPDATE employees SET Version = 2;
ALTER TABLE employees ADD COLUMN address varchar(30);
END TRANSACTION;
-- Second Change
BEGIN TRANSACTION;
UPDATE employees SET Version = 3;
ALTER TABLE employees
ALTER COLUMN address TYPE varchar(80),
ALTER COLUMN FirstName TYPE varchar(100);
END TRANSACTION;
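For illustration, the version guard described above can be sketched in application code. This is a minimal sketch using Python's built-in sqlite3 (so the dialect differs from the four target databases); apply_if_newer() is a hypothetical helper name, not an existing API:

```python
import sqlite3

def apply_if_newer(conn, table, target_version, ddl_statements):
    """Run the DDL only if the table's stored version is below target_version."""
    (current,) = conn.execute(
        f"SELECT MAX(Version) FROM {table}"   # every row carries the same version
    ).fetchone()
    if current is not None and current >= target_version:
        return False                          # already at or past this version
    with conn:                                # one transaction; rolls back on error
        for stmt in ddl_statements:
            conn.execute(stmt)
        conn.execute(f"UPDATE {table} SET Version = ?", (target_version,))
    return True

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE IF NOT EXISTS employees (
    EmployeeId INTEGER PRIMARY KEY,
    FirstName TEXT,
    Version INTEGER NOT NULL DEFAULT 1)""")
conn.execute("INSERT INTO employees (FirstName) VALUES ('Alice')")

applied = apply_if_newer(conn, "employees", 2,
                         ["ALTER TABLE employees ADD COLUMN address TEXT"])
skipped = apply_if_newer(conn, "employees", 2,
                         ["ALTER TABLE employees ADD COLUMN address TEXT"])
# applied is True, skipped is False
```

Note that in a real multi-database deployment the check-then-apply step would also need row or table locking so two installers cannot race each other.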
Is that concept acceptable or am I reinventing the wheel here?
I think setting a version number per table is overkill. It also complicates managing both the database and the application. I suggest you add a new table for the DB version number, with one row added for each upgrade. What I have been doing is this:
1) Create a table in the DB for database versions (steps).
2) Create an SP that checks this table and runs a DB upgrade step if it is not yet recorded in the table; otherwise the step is skipped.
3) For each and every DB change, add a step to the upgrade script file (which you have already created and added to source control).
Here is the table and the SP:
IF OBJECT_ID (N'DB_Version', N'U') IS NULL
Begin
CREATE TABLE [DB_Version](
[VersionNumber] [decimal](18, 2) NOT NULL,
[CommitTimestamp] [smalldatetime] NOT NULL
) ON [PRIMARY]
ALTER TABLE DB_Version
ADD CONSTRAINT UQ_VersionNumber UNIQUE (VersionNumber);
End
IF OBJECT_ID ( 'NewDBStep', 'P' ) IS NULL
begin
Exec ('
-- ============================================
-- Description: Applies a new DB upgrade step to the current DB
-- =============================================
CREATE PROCEDURE NewDBStep
@dbVersion [decimal](18, 2),
@script varchar (max)
AS
BEGIN
If not exists (select 1 from DB_Version Where VersionNumber = @dbVersion)
Begin
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
BEGIN TRY
Begin tran
Exec (@script)
Insert into DB_Version (VersionNumber, CommitTimestamp) Values (@dbVersion, CURRENT_TIMESTAMP);
Commit tran
Print ''Applied upgrade step '' + Cast ( @dbVersion as nvarchar(20))
END TRY
BEGIN CATCH
Rollback tran
Print ''Failed to apply step '' + Cast ( @dbVersion as nvarchar(20))
Select ERROR_NUMBER() AS ErrorNumber
,ERROR_SEVERITY() AS ErrorSeverity
,ERROR_STATE() AS ErrorState
,ERROR_PROCEDURE() AS ErrorProcedure
,ERROR_LINE() AS ErrorLine
,ERROR_MESSAGE() AS ErrorMessage;
END CATCH
End
END ') ;
End
Then, apply your upgrades by calling the SP (the key is that you have to assign a unique step number to each upgrade script):
---------------- Add the new steps here
-- Step: 0.01
-- Adding the MyTableName table if it does not exist.
Exec NewDBStep 0.01, '
IF OBJECT_ID (N''MyTableName'', N''U'') IS NULL
Begin
CREATE TABLE [MyTableName](
[Id] [int] IDENTITY(1,1) NOT NULL,
[UserType] [nvarchar](20) NULL,
PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
End
'
Exec NewDBStep 1.00, '
-- Some other DDL script
'
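The same step-table idea can be sketched outside T-SQL. Below is a minimal, illustrative Python/sqlite3 version; new_db_step() and the db_version table mirror the stored procedure above but are assumptions of this sketch, not an existing API:

```python
import sqlite3

def new_db_step(conn, version, statements):
    """Run the statements once; skip them if `version` is already recorded."""
    conn.execute("""CREATE TABLE IF NOT EXISTS db_version (
        version_number REAL PRIMARY KEY,
        commit_timestamp TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP)""")
    if conn.execute("SELECT 1 FROM db_version WHERE version_number = ?",
                    (version,)).fetchone():
        return False                      # step applied earlier: skip it
    with conn:                            # DDL + bookkeeping commit together
        for stmt in statements:
            conn.execute(stmt)
        conn.execute("INSERT INTO db_version (version_number) VALUES (?)",
                     (version,))
    return True

conn = sqlite3.connect(":memory:")
step = ["CREATE TABLE IF NOT EXISTS MyTableName ("
        "Id INTEGER PRIMARY KEY, UserType TEXT)"]
first = new_db_step(conn, 0.01, step)   # runs and records the step
again = new_db_step(conn, 0.01, step)   # found in db_version: skipped
```

The key property is the same as in the T-SQL version: the whole upgrade script stays idempotent because each step is keyed by a unique version number.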
I would like to add a constraint that will check values from related table.
I have 3 tables:
CREATE TABLE somethink_usr_rel (
user_id BIGINT NOT NULL,
stomethink_id BIGINT NOT NULL
);
CREATE TABLE usr (
id BIGINT NOT NULL,
role_id BIGINT NOT NULL
);
CREATE TABLE role (
id BIGINT NOT NULL,
type BIGINT NOT NULL
);
(If you want me to add the FK constraints as well, let me know.)
I want to add a constraint to somethink_usr_rel that checks type in role ("two tables away"), e.g.:
ALTER TABLE somethink_usr_rel
ADD CONSTRAINT CH_sm_usr_type_check
CHECK (usr.role.type = 'SOME_ENUM');
I tried to do this with JOINs but didn't succeed. Any idea how to achieve it?
CHECK constraints cannot currently reference other tables. The manual:
Currently, CHECK expressions cannot contain subqueries nor refer to
variables other than columns of the current row.
One way is to use a trigger, as demonstrated by @Wolph.
A clean solution without triggers: add redundant columns and include them in FOREIGN KEY constraints, which are the first choice to enforce referential integrity. Related answer on dba.SE with detailed instructions:
Enforcing constraints “two tables away”
Another option would be to "fake" an IMMUTABLE function doing the check and use that in a CHECK constraint. Postgres will allow this, but be aware of possible caveats. Best make that a NOT VALID constraint. See:
Disable all constraints and table checks while restoring a dump
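As a hedged illustration of the redundant-column approach (sketched in SQLite via Python's sqlite3; table names follow the question, and the role_type column is the added redundancy): composite FOREIGN KEYs carry role.type down the chain, so a plain CHECK on the local copy enforces the rule "two tables away":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")     # SQLite needs FKs switched on
conn.executescript("""
CREATE TABLE role (
    id   INTEGER PRIMARY KEY,
    type TEXT NOT NULL,
    UNIQUE (id, type)                -- lets children reference the pair
);
CREATE TABLE usr (
    id        INTEGER PRIMARY KEY,
    role_id   INTEGER NOT NULL,
    role_type TEXT NOT NULL,         -- redundant copy of role.type
    UNIQUE (id, role_type),
    FOREIGN KEY (role_id, role_type) REFERENCES role (id, type)
);
CREATE TABLE somethink_usr_rel (
    user_id   INTEGER NOT NULL,
    role_type TEXT NOT NULL CHECK (role_type = 'SOME_ENUM'),
    FOREIGN KEY (user_id, role_type) REFERENCES usr (id, role_type)
);
INSERT INTO role VALUES (1, 'SOME_ENUM'), (2, 'OTHER');
INSERT INTO usr  VALUES (10, 1, 'SOME_ENUM'), (11, 2, 'OTHER');
""")

conn.execute("INSERT INTO somethink_usr_rel VALUES (10, 'SOME_ENUM')")  # accepted
try:
    # user 11 has role type OTHER; the CHECK (and the FK) reject the row
    conn.execute("INSERT INTO somethink_usr_rel VALUES (11, 'OTHER')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

The same pattern translates directly to PostgreSQL; only the PRAGMA is SQLite-specific.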
A CHECK constraint is not an option if you need joins. You can create a trigger which raises an error instead.
Have a look at this example: http://www.postgresql.org/docs/9.1/static/plpgsql-trigger.html#PLPGSQL-TRIGGER-EXAMPLE
CREATE TABLE emp (
empname text,
salary integer,
last_date timestamp,
last_user text
);
CREATE FUNCTION emp_stamp() RETURNS trigger AS $emp_stamp$
BEGIN
-- Check that empname and salary are given
IF NEW.empname IS NULL THEN
RAISE EXCEPTION 'empname cannot be null';
END IF;
IF NEW.salary IS NULL THEN
RAISE EXCEPTION '% cannot have null salary', NEW.empname;
END IF;
-- Who works for us when she must pay for it?
IF NEW.salary < 0 THEN
RAISE EXCEPTION '% cannot have a negative salary', NEW.empname;
END IF;
-- Remember who changed the payroll when
NEW.last_date := current_timestamp;
NEW.last_user := current_user;
RETURN NEW;
END;
$emp_stamp$ LANGUAGE plpgsql;
CREATE TRIGGER emp_stamp BEFORE INSERT OR UPDATE ON emp
FOR EACH ROW EXECUTE PROCEDURE emp_stamp();
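A minimal runnable counterpart of that validation trigger, sketched in SQLite via Python's sqlite3 (SQLite uses RAISE(ABORT, ...) where PL/pgSQL uses RAISE EXCEPTION; the trigger name here is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (
    empname TEXT,
    salary  INTEGER
);
-- SQLite counterpart of the PL/pgSQL checks: abort the insert
-- when the row fails validation.
CREATE TRIGGER emp_stamp_check BEFORE INSERT ON emp
FOR EACH ROW
WHEN NEW.empname IS NULL OR NEW.salary IS NULL OR NEW.salary < 0
BEGIN
    SELECT RAISE(ABORT, 'empname and a non-negative salary are required');
END;
""")

conn.execute("INSERT INTO emp VALUES ('Alice', 1000)")   # passes validation
try:
    conn.execute("INSERT INTO emp VALUES ('Bob', -5)")   # negative salary
    blocked = False
except sqlite3.DatabaseError:
    blocked = True
```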
... I did it like this (nazwa = user name, firma = company name):
CREATE TABLE users
(
id bigserial CONSTRAINT firstkey PRIMARY KEY,
nazwa character varying(20),
firma character varying(50)
);
CREATE TABLE test
(
id bigserial CONSTRAINT firstkey PRIMARY KEY,
firma character varying(50),
towar character varying(20),
nazwisko character varying(20)
);
ALTER TABLE public.test ENABLE ROW LEVEL SECURITY;
CREATE OR REPLACE FUNCTION whoIAM3() RETURNS varchar(50) as $$
declare
result varchar(50);
BEGIN
select into result users.firma from users where users.nazwa = current_user;
return result;
END;
$$ LANGUAGE plpgsql;
CREATE POLICY user_policy ON public.test
USING (firma = whoIAM3());
CREATE FUNCTION test_trigger_function()
RETURNS trigger AS $$
BEGIN
NEW.firma:=whoIam3();
return NEW;
END
$$ LANGUAGE plpgsql;
CREATE TRIGGER test_trigger_insert BEFORE INSERT ON test FOR EACH ROW EXECUTE PROCEDURE test_trigger_function();
One of our clients has asked us to write a stored procedure in MySQL which ensures that a piece of data is accessed by only one resource: even if multiple resources are ready to read the data, whichever comes first takes the lock and changes a flag so that no other resource can take a lock on that data row in the table.
A stored procedure is to be written for this. I believe it is similar to bank transaction management, but I have no clue how to write it. Any help will be highly appreciated; thanks well in advance.
Step 1:
CREATE TABLE `test_db`.`Jobs` (
`id` INT NOT NULL,
`JOB` VARCHAR(45) NOT NULL,
`status` VARCHAR(45) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE INDEX `id_UNIQUE` (`id` ASC));
Step 2:
DELIMITER $$
create procedure aabraKaDaabra(IN ids INT)
BEGIN
DECLARE EXIT HANDLER FOR SQLEXCEPTION ROLLBACK;
DECLARE EXIT HANDLER FOR SQLWARNING ROLLBACK;
START TRANSACTION;
select id from Jobs where id=ids for update;
update Jobs set status = 'Submitted' where id=ids;
commit;
END$$
DELIMITER ;
Step 3:
select * from test_db.Jobs order by id desc;
Note:
Make sure that you have inserted a few rows into the table.
Step 4:
call test_db.aabraKaDaabra(1);
This is what I was expecting; it solved the problem and worked like a charm.
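The "first taker wins" behaviour that SELECT ... FOR UPDATE provides can also be sketched with a single conditional UPDATE, where the affected-row count tells each caller whether it won the claim. An illustrative sketch in Python's sqlite3 (claim_job() and the status values are assumptions of this sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Jobs (
    id INT NOT NULL PRIMARY KEY,
    JOB VARCHAR(45) NOT NULL,
    status VARCHAR(45) NOT NULL)""")
conn.execute("INSERT INTO Jobs VALUES (1, 'export', 'Ready')")
conn.commit()

def claim_job(conn, job_id):
    """Atomically flip Ready -> Submitted; at most one caller succeeds."""
    cur = conn.execute(
        "UPDATE Jobs SET status = 'Submitted' "
        "WHERE id = ? AND status = 'Ready'", (job_id,))
    conn.commit()
    return cur.rowcount == 1     # 1 row changed => this caller took the job

first = claim_job(conn, 1)   # wins the claim
second = claim_job(conn, 1)  # status is no longer 'Ready': loses
```

In MySQL the same conditional UPDATE works with ROW_COUNT(), and avoids holding an open transaction across the select-then-update window.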
I have some SQL Server schema changes that I'm trying to convert to MySQL. I know about CREATE TABLE IF NOT EXISTS in MySQL, but I don't think I can use that here.
What I want to do is create a table in MySQL, with an index, and then insert some values all as part of the "if not exists" predicate. This was what I came up with, though it doesn't seem to be working:
SET @actionRowCount = 0;
SELECT COUNT(*) INTO @actionRowCount
FROM information_schema.tables
WHERE table_name = 'Action'
LIMIT 1;
IF @actionRowCount = 0 THEN
CREATE TABLE Action
(
ActionNbr INT AUTO_INCREMENT,
Description NVARCHAR(256) NOT NULL,
CONSTRAINT PK_Action PRIMARY KEY(ActionNbr)
);
CREATE INDEX IX_Action_Description
ON Action(Description);
INSERT INTO Action
(Description)
VALUES
('Activate'),
('Deactivate'),
('Specified');
END IF;
I can run it once, and it'll create the table, index, and values. If I run it a second time, I get an error: Table Action already exists. I would have thought that it wouldn't run at all if the table already exists.
I use this pattern a lot when bootstrapping a schema. How can I do this in MySQL?
In MySQL, compound statements (which includes the IF statement) can only be used within stored programs.
Therefore, one solution is to include your code within a stored procedure.
The other solution is to use CREATE TABLE IF NOT EXISTS ... with the separate index creation included within the table definition, and INSERT IGNORE or INSERT ... SELECT ... to avoid inserting duplicate values.
Examples of options:
Option 1:
CREATE TABLE IF NOT EXISTS `Action` (
`ActionNbr` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
`Description` VARCHAR(255) NOT NULL,
INDEX `IX_Action_Description` (`Description`)
) SELECT 'Activate' `Description`
UNION
SELECT 'Deactivate'
UNION
SELECT 'Specified';
Option 2:
DROP PROCEDURE IF EXISTS `sp_create_table_Action`;
DELIMITER //
CREATE PROCEDURE `sp_create_table_Action`()
BEGIN
IF NOT EXISTS(SELECT NULL
FROM `information_schema`.`TABLES` `ist`
WHERE `ist`.`table_schema` = DATABASE() AND
`ist`.`table_name` = 'Action') THEN
CREATE TABLE `Action` (
`ActionNbr` INT AUTO_INCREMENT,
`Description` NVARCHAR(255) NOT NULL,
CONSTRAINT `PK_Action` PRIMARY KEY (`ActionNbr`)
);
CREATE INDEX `IX_Action_Description`
ON `Action` (`Description`);
INSERT INTO `Action`
(`Description`)
VALUES
('Activate'),
('Deactivate'),
('Specified');
END IF;
END//
DELIMITER ;
CALL `sp_create_table_Action`;
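For comparison, the idempotent-bootstrap pattern itself (create-if-missing plus ignore-duplicates) can be sketched in Python's sqlite3, where CREATE ... IF NOT EXISTS and INSERT OR IGNORE play the roles of MySQL's IF NOT EXISTS and INSERT IGNORE. A unique index on Description is assumed here so the ignore clause has something to key on:

```python
import sqlite3

def bootstrap(conn):
    """Safe to run on every start; each statement is a no-op the second time."""
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS Action (
        ActionNbr   INTEGER PRIMARY KEY AUTOINCREMENT,
        Description TEXT NOT NULL
    );
    CREATE UNIQUE INDEX IF NOT EXISTS IX_Action_Description
        ON Action (Description);
    INSERT OR IGNORE INTO Action (Description)
    VALUES ('Activate'), ('Deactivate'), ('Specified');
    """)

conn = sqlite3.connect(":memory:")
bootstrap(conn)
bootstrap(conn)   # re-running neither errors nor duplicates rows
```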
I need to create a MySQL trigger that logs the user ID on a row DELETE statement, and it must fit in one query since I'm using PHP PDO. This is what I've come up with so far.
I need a way to pass the user ID in the delete query even though it is irrelevant to the delete action being performed.
Normally the query would look like this:
DELETE FROM mytable WHERE mytable.RowID = :rowID
If I could use multiple queries in my statement, I would do it like this:
SET @userID := :userID;
DELETE FROM mytable WHERE mytable.RowID = :rowID;
This way the variable @userID would be set before the trigger event fires, and the trigger can use it. However, since I need to squeeze my delete statement into one query, I came up with this:
DELETE FROM mytable
WHERE CASE
WHEN @userID := :userID
THEN mytable.RowID = :rowID
ELSE mytable.RowID IS NULL
END
Just a note: RowID will never be null since it is the primary key. Now I have to create a delete trigger to log the user ID to the audit table. However, I suppose that in this case the trigger will be fired before the delete query itself, which means the @userID variable will not be created yet? This was my idea of passing it as a value to the trigger.
I feel like I'm close to the solution, but this issue is a blocker. How to pass user ID value to the trigger without having multiple queries in the statement? Any thoughts, suggestions?
You can use the NEW / OLD MySQL trigger extensions. Reference: http://dev.mysql.com/doc/refman/5.0/en/trigger-syntax.html
Here is some sample code:
drop table `project`;
drop table `projectDEL`;
CREATE TABLE `project` (
`proj_id` int(11) NOT NULL AUTO_INCREMENT,
`proj_name` varchar(30) NOT NULL,
`Proj_Type` varchar(30) NOT NULL,
PRIMARY KEY (`proj_id`)
);
CREATE TABLE `projectDEL` (
`proj_id` int(11) NOT NULL AUTO_INCREMENT,
`proj_name` varchar(30) NOT NULL,
`Proj_Type` varchar(30) NOT NULL,
PRIMARY KEY (`proj_id`)
);
INSERT INTO `project` (`proj_id`, `proj_name`, `Proj_Type`) VALUES
(1, 'admin1', 'admin1'),
(2, 'admin2', 'admin2');
delimiter $
CREATE TRIGGER `uProjectDelete` BEFORE DELETE ON project
FOR EACH ROW BEGIN
INSERT INTO projectDEL SELECT * FROM project WHERE proj_id = OLD.proj_id;
END;$
delimiter ;
DELETE FROM project WHERE proj_id = 1;
SELECT * FROM project;
SELECT * FROM projectDEL;
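A hedged sketch combining both ideas, written against Python's sqlite3 since SQLite has no @user variables: a one-row session_ctx table stands in for @userID, and a BEFORE DELETE trigger reads it together with the OLD row. All names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (RowID INTEGER PRIMARY KEY, payload TEXT);
CREATE TABLE audit_log (RowID INTEGER, deleted_by INTEGER);
CREATE TABLE session_ctx (user_id INTEGER);   -- stands in for @userID
CREATE TRIGGER mytable_delete BEFORE DELETE ON mytable
FOR EACH ROW
BEGIN
    INSERT INTO audit_log (RowID, deleted_by)
    VALUES (OLD.RowID, (SELECT user_id FROM session_ctx));
END;
INSERT INTO mytable VALUES (7, 'x');
""")

with conn:   # context write + delete commit (or roll back) together
    conn.execute("DELETE FROM session_ctx")
    conn.execute("INSERT INTO session_ctx VALUES (42)")
    conn.execute("DELETE FROM mytable WHERE RowID = 7")
```

In MySQL the session variable makes the context table unnecessary, but the trigger-reads-ambient-state shape is the same.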
I'm trying to export data from a multivalue database (UniData) into MySQL. Let's say my source data is a person's ID number, their first name, and all the states they've lived in. The states field is a multi-value field, and I'm exporting it so that the different values within the field are separated by a ~. A sample extract looks like:
"1234","Sally","NY~NJ~CT"
"1235","Dave","ME~MA~FL"
"3245","Fred","UT~CA"
"2344","Sue","OR"
I've loaded this data into a staging table
Table:staging
Column 1: personId
Column 2: name
Column 3: states
What I want to do is split this data out into two tables using a procedure: a persons table and a states table. A person can have many entries in the states table:
Table 1: persons
Column 1: id
Column 2: name
Table 2: states
Column 1: personId
Column 2: state
My procedure takes the data from the staging table and dumps it over to table 1 just fine. However, I'm a little lost on how to split the data up and send it to table 2. Sally would need three entries in the states table (NY, NJ, CT), Dave would have 3, Fred would have 2, and Sue would have 1 (OR). Any ideas on how to accomplish this?
Try something like this: http://pastie.org/1213943
-- TABLES
drop table if exists staging;
create table staging
(
person_id int unsigned not null primary key,
name varchar(255) not null,
states_csv varchar(1024)
)
engine=innodb;
drop table if exists persons;
create table persons
(
person_id int unsigned not null primary key,
name varchar(255) not null
)
engine=innodb;
drop table if exists states;
create table states
(
state_id tinyint unsigned not null auto_increment primary key, -- i want a nice new integer based PK
state_code varchar(3) not null unique, -- original state code from staging
name varchar(255) null
)
engine=innodb;
/*
you might want to make the person_states primary key (person_id, state_id) depending on
your queries as this is currently optimised for queries like - select all the people from NY
*/
drop table if exists person_states;
create table person_states
(
state_id tinyint unsigned not null,
person_id int unsigned not null,
primary key(state_id, person_id),
key (person_id)
)
engine=innodb;
-- STORED PROCEDURES
drop procedure if exists load_staging_data;
delimiter #
create procedure load_staging_data()
proc_main:begin
truncate table staging;
-- assume this is done by load data infile...
set autocommit = 0;
insert into staging values
(1234,'Sally','NY~NJ~CT'),
(1235,'Dave','ME~MA~FL'),
(3245,'Fred','UT~CA'),
(2344,'Sue','OR'),
(5555,'f00','OR~NY');
commit;
end proc_main #
delimiter ;
drop procedure if exists cleanse_map_staging_data;
delimiter #
create procedure cleanse_map_staging_data()
proc_main:begin
declare v_cursor_done tinyint unsigned default 0;
-- watch out for variable names that have the same names as fields !!
declare v_person_id int unsigned;
declare v_states_csv varchar(1024);
declare v_state_code varchar(3);
declare v_state_id tinyint unsigned;
declare v_states_done tinyint unsigned;
declare v_states_idx int unsigned;
declare v_staging_cur cursor for select person_id, states_csv from staging order by person_id;
declare continue handler for not found set v_cursor_done = 1;
-- do the person data
set autocommit = 0;
insert ignore into persons (person_id, name)
select person_id, name from staging order by person_id;
commit;
-- ok now we have to use the cursor !!
set autocommit = 0;
open v_staging_cur;
repeat
fetch v_staging_cur into v_person_id, v_states_csv;
-- clean up the data (for example)
set v_states_csv = upper(trim(v_states_csv));
-- split the out the v_states_csv and insert
set v_states_done = 0;
set v_states_idx = 1;
while not v_states_done do
set v_state_code = substring(v_states_csv, v_states_idx,
if(locate('~', v_states_csv, v_states_idx) > 0,
locate('~', v_states_csv, v_states_idx) - v_states_idx,
length(v_states_csv)));
set v_state_code = trim(v_state_code);
if length(v_state_code) > 0 then
set v_states_idx = v_states_idx + length(v_state_code) + 1;
-- add the state if it doesn't already exist
insert ignore into states (state_code) values (v_state_code);
select state_id into v_state_id from states where state_code = v_state_code;
-- add the person state
insert ignore into person_states (state_id, person_id) values (v_state_id, v_person_id);
else
set v_states_done = 1;
end if;
end while;
until v_cursor_done end repeat;
close v_staging_cur;
commit;
end proc_main #
delimiter ;
-- TESTING
call load_staging_data();
select * from staging;
call cleanse_map_staging_data();
select * from states order by state_id;
select * from persons order by person_id;
select * from person_states order by state_id, person_id;
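If a stored-procedure cursor is not a hard requirement, the tilde-splitting can also be done in application code. A minimal illustrative sketch in Python's sqlite3, using sample rows from the question (the simplified two-column person_states layout is an assumption of this sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE staging (person_id INTEGER PRIMARY KEY,
                      name TEXT, states_csv TEXT);
CREATE TABLE persons (person_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE person_states (person_id INTEGER, state_code TEXT,
                            PRIMARY KEY (person_id, state_code));
INSERT INTO staging VALUES
    (1234, 'Sally', 'NY~NJ~CT'),
    (2344, 'Sue',   'OR');
""")

for person_id, name, states_csv in conn.execute(
        "SELECT person_id, name, states_csv FROM staging"):
    conn.execute("INSERT INTO persons VALUES (?, ?)", (person_id, name))
    for code in (states_csv or "").split("~"):   # '~' is the multivalue mark
        code = code.strip().upper()
        if code:                                  # skip empty fragments
            conn.execute("INSERT OR IGNORE INTO person_states VALUES (?, ?)",
                         (person_id, code))
conn.commit()
```

This trades the stored-procedure string arithmetic for a one-line split(), at the cost of pulling the staging rows through the client.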