Is it possible to use any sort of logic in MySQL without using any procedures? My web hosting does not let me create any procedures so I'm looking for a workaround.
The type of thing I want to do is to add an item to a table only if it doesn't already exist, or to add a column to a table if it's not already there. Some operations have built-in support, such as CREATE TABLE IF NOT EXISTS, but some of the operations I require have no such luxuries :(
I realised late on that my lovely procs won't work and so I tried writing IF/ELSE logic as top-level queries, but for MySQL, IF ELSE blocks only seem to work inside functions/procs and not at the global scope.
Any workarounds gratefully received - I've already asked the hosting company to grant me privileges to create procedures, but no reply as yet...
I suppose you don't have access to the INFORMATION_SCHEMA either. You can possibly find solutions, but it would be better, in my opinion, to:
Change your hosting provider. Seriously. Pay more - if needed - for a MySQL instance that you can configure to your needs. You only have a crippled DBMS if you are not allowed to create procedures and functions.
Possible workarounds for the specific task: you want to add a column if it doesn't exist.
1) Just ALTER TABLE and add the column. If it already exists, you'll get an error, which you can catch in your application (a sketch follows this list).
2) (If you have no access to the INFORMATION_SCHEMA) maintain a version of the schema for your database.
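For option 1, a minimal sketch (table and column names are hypothetical):

ALTER TABLE my_table ADD COLUMN new_col VARCHAR(50) NULL;
-- If new_col already exists, MySQL raises error 1060 (ER_DUP_FIELDNAME,
-- "Duplicate column name"), which the application can catch and ignore.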
The best solution that I can think of would be to use an additional language with SQL. For example, you can run a query for a specific record, and based on the response that you get, you can conditionally run an INSERT statement.
For creating a table only if it doesn't exist, try using the SHOW TABLES statement and testing whether or not the name appears in the result set.
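A pure-SQL sketch of that conditional INSERT, assuming a hypothetical my_table keyed by id, folds the existence check into the statement itself:

INSERT INTO my_table (id, col1)
SELECT 10, 'abc' FROM DUAL
WHERE NOT EXISTS (SELECT 1 FROM my_table WHERE id = 10);

This inserts the row only when no row with id = 10 exists, without any application-side branching.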
MySQL supports INSERT IGNORE and INSERT ... ON DUPLICATE KEY UPDATE.
The following will insert a new row, but only if there is no existing row with id=10. (This assumes that id is defined as a unique or primary key).
INSERT IGNORE INTO my_table (id, col1, col2) VALUES (10, 'abc', 'def');
The following will insert a new row, but if there is an existing row with id=10 (again, assuming id is unique or primary), the existing row will be updated to hold the new values, instead of inserting a new row.
INSERT INTO my_table (id, col1, col2) VALUES (10, 'abc', 'def')
ON DUPLICATE KEY UPDATE col1 = VALUES(col1), col2 = VALUES(col2);
Also, CREATE TABLE supports the IF NOT EXISTS modifier. So you can do something like:
CREATE TABLE IF NOT EXISTS my_table ...
There are many other similar options and modifiers available in MySQL. Check the docs for more.
Originally I created a big script to create or update the database schema, to make it easier to deploy database changes from my local machine to the server.
My script was doing a lot of "if table 'abc' exists and it doesn't have a FK constraint called 'blah'" then create an FK constraint called 'blah' on table 'abc'... and so on.
I now realise it's not actually necessary to check whether a table has a certain column or constraint etc, because I can just maintain a schema-versioning system, and query the DB schema-version when my app starts, or when I navigate to a certain page.
e.g. let's say I want to add a new column to a table. It works like this:
Add a new migration script to the app code, containing the SQL required to add the column to the existing table
Increment the app's schema-version by 1
On app startup, the app queries the DB for the DB's schema-version
If DB schema-version < app schema-version, execute the SQL migration scripts between the two schema-versions, and then update the DB schema-version to be the same as the app
e.g. if the DB's schema-version is 5 and the app version is 8, the app will apply migration scripts 5-6, 6-7 and 7-8 to the DB. These can just be run without having to check anything on the DB side.
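A minimal sketch of the versioning mechanics (all names are hypothetical; the real migration scripts ship with the app code):

CREATE TABLE IF NOT EXISTS schema_version (version INT NOT NULL);

-- migration script 5 -> 6, run by the app when
-- SELECT version FROM schema_version returns 5:
ALTER TABLE my_table ADD COLUMN new_col VARCHAR(50) NULL;
UPDATE schema_version SET version = 6;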
The app is therefore solely responsible for updating the DB schema and there's no need for me to ever have to execute schema change scripts on the local or remote DB.
I think it's a better system than the one I was trying to implement for my question.
Ok, so I have a database in my testing environment called 'Food'. In this database, there is a table called 'recipe', with a column called 'source'.
This same database exists in my local environment. However, I just received an updated database (in my local environment) where all the column values (for 'source') have changed.
Is there any way I can migrate the 'source' column from my local environment to my test environment without changing the values in any other column? There are 1186 rows in the 'recipe' table of the 'Food' database in my test environment that need ONLY the 'source' column updated.
You need some way to uniquely identify your Recipes. If both tables have a surrogate key that remained constant, use that. Otherwise figure out some way to match up the new data with your test data: you might already have a unique index in mind or you might need to decide on a combination of fields that uniquely identify your Recipes.
On a side note, why can't you just overwrite all the columns? It is just test data, right?
If only the one column has changed and your rows have IDs (or keys), you could follow these steps (sketched in SQL after the list):
create an intermediate table locally
insert keys and new source values there (either those which have changed or all)
use mysqldump to selectively export the table from the local database
copy the dumped table to the remote database server
import it there
join it with the production table in an update statement to replace the values
drop the intermediate table on the server
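In SQL, those steps might look like this (assuming recipe rows are keyed by a hypothetical id column):

-- locally: stage the keys and new source values
CREATE TABLE recipe_source_new (id INT PRIMARY KEY, source VARCHAR(255));
INSERT INTO recipe_source_new (id, source)
SELECT id, source FROM recipe;

-- dump, copy and import recipe_source_new on the test server, then:
UPDATE recipe r
JOIN recipe_source_new n ON n.id = r.id
SET r.source = n.source;

DROP TABLE recipe_source_new;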
This may not be a real world issue but is more like a learning topic.
Using PHP, MySQL and PDO, I know all about auto_increment and lastInsertId(). Consider that the primary key has no auto_increment attribute, so we have to use something like SELECT MAX(id) FROM table to retrieve the last id, increment it manually, and then INSERT INTO table (id) VALUES (:lastIdPlusOne), wrapping the whole thing in beginTransaction and commit.
Is this approach safe? If users A and B load this script at the same time, what will happen in the end? Will both transactions fail? Or will both succeed (for instance, if the last id was 10, A will insert 11 and B will insert 12)?
Note that since I am a PHP & MySQL developer, I am mainly interested in MySQL's behavior in this case.
If both get the same max, then the one that inserts first will succeed, and the other(s) will fail.
To overcome this issue without using auto_increment fields, you may use a BEFORE INSERT trigger that does the job (new.id = max + 1), i.e. the same logic, but in a trigger, so the DB server is the one who controls it.
Not sure though if this is 100% safe in a master-master replication environment in case of a server failure.
This is @eggyal's comment, which I quote here:
You must ensure that you use a locking read to fetch the MAX() in the first (select) query; it will then block until the transaction is committed. However, this is very poor design and should not be used in a production system.
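For illustration, a sketch of that locking read in MySQL/InnoDB (again, not something to use in a production system):

START TRANSACTION;
SELECT COALESCE(MAX(id), 0) + 1 INTO @next_id FROM my_table FOR UPDATE;
INSERT INTO my_table (id) VALUES (@next_id);
COMMIT;

A concurrent transaction running the same SELECT ... FOR UPDATE blocks until the first transaction commits, so it sees the new MAX(id) rather than a stale one.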
I am trying to restore a DB using an SQL script, but foreign key constraints get in the way.
I am taking a MySQL DB and bringing it over to PostgreSQL.
Since the MySQL CREATE TABLE syntax ended up being quite different, I took another PostgreSQL DB with the same schema but different data, and restored the schema only from that.
In other words, I now have a database with tables, constraints, sequences and all that shnaz but no data inside.
So, it's time to restore the data.
I take a backup of the MySQL DB with phpMyAdmin (data only) as an SQL script (pgAdmin does not seem to accept zip or gzip files for some reason) and run the SQL script.
Now, this is where the problems start to happen, it's only natural, I am going from MySQL to PostgreSQL, so syntax errors are bound to happen.
But there are other, non-syntax-related problems too, like this one:
ERROR: insert or update on table "_account" violates foreign key constraint "fk_1_account"
DETAIL: Key (accountid)=(2) is not present in table "_entity".
So, yeah, basically, a foreign constraint exists, the query is trying to insert data into the _account table, but the corresponding data has not been inserted into the _entity table yet.
How do I get around that? Is there a way to make pgAdmin3/PostgreSQL disable ALL OF the constraints, insert the data, and then re-enable the constraints?
A syntax related error I encountered, was this one:
INSERT INTO _accounttype_seq (id) VALUES (11);
The PostgreSQL equivalent of that statement (if I am correct) is
SELECT setval('_accounttype_seq', 11);
But, it's a bit of a pain to run through the whole script and change all 200+ Sequence insert statements. So, I am being lazy here, but is there an easier way to deal with the sequences as well?
Or, do you guys have any suggestions for a different set of tools to make this easier?
Thanks for your time, have a good day.
Do not try to get around the foreign key constraints. That is the way to make sure the data is bad.
First look at the constraints and make sure you are inserting into the tables in the correct order. If _entity is the parent of _account, then it should be populated first.
Next you need to have the script move any failing records to an exception table. Then you can look at them and see what the data integrity issue is, and whether you need to throw the records away permanently or try to figure out what the missing parent value should be. If it is critical data, such as orders where the customer no longer exists (possible in any system that didn't have correct FKs to begin with), and you must keep the record and cannot determine what the parent value should have been, you can create an 'Unknown' record in the customer table and assign all bad orders to that customer id.
And manually changing the ALTER SEQUENCE statements shouldn't take long, even if it is boring. There will be plenty of other things you need to handle manually in a conversion of this type.
I would try to find a data import tool for PostgreSQL - I live in SQL server world where I would use SSIS but you need the equivalent of SSIS for the PostgreSQL world.
Apparently the foreign keys weren't actually enforced in MySQL (maybe because the tables use MyISAM), or the generated SQL simply inserts in the wrong order.
If it's "only" the wrong order, I see two possible solutions:
Edit the generated script and move all FK definitions to the end of the script.
Edit the definition of each FK constraint and set them all to INITIALLY DEFERRED, then run the script as one single transaction with only one COMMIT at the very end.
Edit (because this is too much to be put as a comment)
Using SET CONSTRAINTS ALL DEFERRED will only work if the constraints have been created with the option DEFERRABLE.
To run everything in one single transaction, you have to make sure you have turned autocommit off. Then simply run the INSERTs and at the very end issue a COMMIT. A ; will only commit if you have autocommit on.
If you want to be independent of the autocommit setting, then start your script with BEGIN and make sure there is only a single COMMIT at the very end.
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
INSERT INTO table_one ... ;
INSERT INTO table_two ... ;
.....
COMMIT;
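For the deferred-constraint option to work, each FK has to be created (or recreated) with the DEFERRABLE option. Hypothetically, for the constraint from the error message above (the referenced column name is an assumption):

ALTER TABLE _account
    DROP CONSTRAINT fk_1_account,
    ADD CONSTRAINT fk_1_account FOREIGN KEY (accountid)
        REFERENCES _entity (accountid)
        DEFERRABLE INITIALLY DEFERRED;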
Here is a chunk of the SQL I'm using for a Perl-based web application. I have a number of requests, and each has a number of accessions, each of which has a status. This chunk of code updates every accession_analysis row that shares all these fields, for each accession in a request.
UPDATE accession_analysis
SET analysis_id = ? ,
reference_id = ? ,
status = ? ,
extra_parameters = ?
WHERE analysis_id = ?
AND reference_id = ?
AND status = ?
AND extra_parameters = ?
AND accession_id IN (
    SELECT accession_id
    FROM accessions
    WHERE request_id = ?
);
I have changed the tables so that there's now a status table for accession_analysis. When I update, I update both accession_analysis and accession_analysis_status, which holds the status, the status_text, and the id of the accession_analysis (a NOT NULL auto_increment column).
I have no strong idea about how to modify this code to allow this. My first pass grabbed all the accessions and looped through them, then filtered for all the fields, then updated. I didn't like that because it made many round trips to the database with short SQL commands, which I understood to be bad, but I can't help thinking the only way to really do this is to go back to the Perl loop holding two simpler SQL statements.
Is there a way to do this in SQL that, with my relative SQL inexperience, I'm just not seeing?
The answer depends on which DBMS you're using. The easiest way is to create a trigger on one table that provides the logic of updating the other table. (For any DB newbies -- a trigger is procedural code attached to a table at the DBMS (not application) layer that runs in response to an insert, update or delete on the table.). A similar, slightly less desirable method is to put the logic in a stored procedure and execute that instead of the update statement you're now using.
If the DBMS you're using doesn't support either of these mechanisms, then there isn't a good way to do what you're after while guaranteeing transactional integrity. However, if the problem you're solving can tolerate a timing difference in the two tables' updates (i.e. the data in one of the tables is only used at predetermined times, like reporting or some type of batched operation), you could write to one table (live) and create a separate process that runs when needed (later) to update the second table using data from the first table. The correctness of allowing data to be updated at different times becomes a large and immovable design assumption, however.
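To make the trigger option concrete, here is a hedged MySQL sketch for the tables in the question; the column names of accession_analysis_status are assumptions based on the description above:

CREATE TRIGGER accession_analysis_status_sync
AFTER UPDATE ON accession_analysis
FOR EACH ROW
INSERT INTO accession_analysis_status (accession_analysis_id, status, status_text)
VALUES (NEW.id, NEW.status, CONCAT('status set to ', NEW.status));

Because the body is a single statement, no DELIMITER juggling is needed to create it.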
If this is mostly about connection speed, then one option you have is to write a stored procedure that handles the "double update or insert" transparently. See the manual for stored procedures:
http://dev.mysql.com/doc/refman/5.5/en/create-procedure.html
Otherwise, you probably cannot do it in one statement; see the MySQL INSERT syntax:
http://dev.mysql.com/doc/refman/5.5/en/insert.html
The UPDATE syntax allows for multi-table updates (not in combination with INSERT, though):
http://dev.mysql.com/doc/refman/5.5/en/update.html
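As a sketch of that multi-table UPDATE form, using the question's tables (the join column is an assumption):

UPDATE accession_analysis aa
JOIN accession_analysis_status s ON s.accession_analysis_id = aa.id
SET aa.status = ?,
    s.status_text = ?;

This updates both tables in one statement, but it cannot INSERT missing status rows; for that you still need a separate INSERT.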
Each table needs its own INSERT / UPDATE in the query.
In fact, even if you create a view by JOINing multiple tables, when you INSERT into the view, you can only INSERT with fields belonging to one of the tables at a time.
The modifications made by the INSERT statement cannot affect more than one of the base tables referenced in the FROM clause of the view. For example, an INSERT into a multitable view must use a column_list that references only columns from one base table. For more information about updatable views, see CREATE VIEW.
Inserting data into multiple tables through an sql view (MySQL)
INSERT (SQL Server)
The same is true of UPDATE:
The modifications made by the UPDATE statement cannot affect more than one of the base tables referenced in the FROM clause of the view. For more information on updatable views, see CREATE VIEW.
However, you can have multiple INSERTs or UPDATEs per query or stored procedure.
I need a sample program in Java for keeping the history of a table when a user inserts, updates, or deletes rows in that table. Can anybody help with this?
Thanks in advance.
If you are working with Hibernate you can use Envers to solve this problem.
You have two options for this:
Let the database handle this automatically using triggers. I don't know what database you're using but all of them support triggers that you can use for this.
Write code in your program that does something similar when inserting, updating and deleting a user.
Personally, I prefer the first option. It probably requires less maintenance: there may be multiple places where you update a user, and all those places would need code to update the other table. Besides, in the database you have more options for specifying required values and integrity constraints.
Well, we normally have our own history tables which (mostly) look like the original table. Since most of our tables already have the creation date, modification date and the respective users, all we need to do is copy the dataset from the live table to the history table with a creation date of now().
We're using Hibernate so this could be done in an interceptor, but there may be other options as well, e.g. some database trigger executing a script, etc.
How is this a Java question?
This should be moved to the Database section.
You need to create a history table, then create database triggers on the original table along the lines of "create or replace trigger before insert or update or delete on table for each row ...."
I think this can be achieved by creating a trigger on the database server.
You can create the TRIGGER as follows:
Syntax:
CREATE TRIGGER trigger_name
{BEFORE | AFTER} {INSERT | UPDATE | DELETE}
ON table_name FOR EACH ROW
triggered_statement
You'll have to create two triggers: one for before the operation is performed and another for after it.
Otherwise it can be achieved through application code as well, but that would be a bit tedious for the code to handle in the case of batch processes.
You should try using triggers. You can have a separate table (an exact replica of the table whose history you need to maintain).
This table will then be updated by trigger after every insert/update/delete on your main table.
Then you can write your java code to get these changes from the second history table.
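For example, a minimal MySQL sketch of one such trigger (table and column names are made up); similar AFTER INSERT and AFTER DELETE triggers would cover the other two operations:

CREATE TRIGGER users_history_after_update
AFTER UPDATE ON users
FOR EACH ROW
INSERT INTO users_history (user_id, name, email, action, changed_at)
VALUES (OLD.id, OLD.name, OLD.email, 'UPDATE', NOW());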
I think you can use the redo log of your underlying database to keep track of the operations performed. Is there any particular reason to do this in a program?
You could try creating, say, a List of the objects from the table (assuming you have objects for the data), which would allow you to loop through the list and compare against the current data in the table. You would then be able to see if any changes occurred.
You could even create another list holding an object that contains an enum giving you the action (DELETE, UPDATE, CREATE) along with the new data.
I haven't done this before; it's just an idea.
Like @Ashish mentioned, triggers can be used to insert into a separate table - this is commonly referred to as an audit-trail or audit log table.
Columns generally defined in such an audit trail table are: action (insert/update/delete), table name (the table that was inserted into/updated/deleted from), key (the primary key of the affected row, where needed), and timestamp (the time at which the action was done).
It is better to write the audit log after the entire transaction is through. If not, in the case of an exception being passed back to the code side, a separate call to update the audit tables will be needed. Hope this helps.
If you are talking about DB tables, you may either use triggers in the DB or add some extra code within your application, probably using aspects. If you are using JPA, you may use entity listeners, or perform the extra logic by adding an aspect to your DAO objects and applying it to all DAOs which perform CRUD on entities that need to retain historical data. If your DAO object is a stateless bean, you may use an Interceptor to achieve that; otherwise use Java proxy functionality, cglib, or another library that provides aspect functionality. If you are using Spring instead of EJB, you may advise your DAOs within the application context config file.
I would not suggest triggers (when I stored my audit data in a file, I didn't use the database at all). My suggestion is to create an "AUDIT" table and write Java code, with the help of servlets, to store the data in a file, in the same DB, or in another DB.