Context
I have a dynamically generated .sql file containing a series of INSERT statements that insert data into many different tables, some of which depend on other tables.
After each INSERT statement runs, if the target table has an auto-incremented id column, the text "SET @autoIncrementColumnName = LAST_INSERT_ID();" is generated, which stores the last insert id of that statement in a MySQL user variable. If there is another INSERT statement for that table, the process is repeated. The problem is that each "SET @autoIncrementColumnName = LAST_INSERT_ID();" overwrites the previous value before it can be used later in the .sql file.
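To illustrate (the table and column names here are made up), the generated script ends up looking something like this:
INSERT INTO someTable (col1, col2) VALUES ('a', 'b');
SET @autoIncrementColumnName = LAST_INSERT_ID();
INSERT INTO someTable (col1, col2) VALUES ('c', 'd');
SET @autoIncrementColumnName = LAST_INSERT_ID(); -- overwrites the value saved above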
So later on in the .sql script you see two lines like these:
INSERT INTO relatedTable (col1,col2,specialColumn,col3,col4) VALUES ('','',@autoIncrementColumnName,'','');
INSERT INTO relatedTable (col1,col2,specialColumn,col3,col4) VALUES ('','',@autoIncrementColumnName,'','');
These need to insert the values that were stored earlier, but by that point all of the variables have been overwritten except for the last one.
Two Questions
Is it possible to create variable variables using only MySQL? Like this:
SET @dynamicVarName = CONCAT('guestCreditCardId', LAST_INSERT_ID());
SET @@dynamicVarName = LAST_INSERT_ID();
If variable variables are not possible, what solution could I use?
While digging into the problem at hand, I found I could avoid headaches by creating multiple functions/methods, each one responsible for a specific part of the SQL generation. That way, later down the road, if you need to create another dynamic SQL statement, you can place it where you need it by calling another function/method from wherever you need it.
For instance
If you have 5 tables you are generating INSERT INTO statements for, and 3 of those tables have records that are not used in the creation of other dynamic SQL statements, then you can have one function/method handle all the tables that don't require data from other tables.
For any special cases
Then create separate functions/methods for special cases.
Related
I am working with an application which needs to function with any of 300+ different MySQL databases on the same server. The databases all have nearly identical table structures, with slight variations. For example, a particular column might be present in a table for only some of the databases.
I'm wondering if there is a way that, when performing an update on a table, I can update a specific column if it exists, but still successfully execute if the column does not exist.
For example, say I have a basic update statement like this:
UPDATE some_table
SET col1 = "some value",
col2 = "another value",
col3 = "a third value"
WHERE id = 567
What can I do to make it so that, if col3 doesn't actually exist when that query is run, the statement still executes and col1 and col2 are still updated with the new values?
I have tried using IF and CASE, but those seem to only allow changing the value based on some condition, not whether or not a column actually gets updated.
I know I can query the database for the existence of the column, then use a simple if condition in the application code to choose a different query. However, that requires me to query the database twice: once to see if the column exists, and again to actually update it. I'd prefer to do it with one SQL query if possible. I also feel like the application code might start to get unwieldy, with lots of extra code to check the existence of this-or-that column and conditionally build queries, instead of just having one query that works regardless of which database the application happens to be running against at the time.
To clarify, any given instance of the application is ever only running against one database; there is a different application instance for each database, but the instances will all be running the same code. These are legacy databases that legacy code is also relying on, so I don't want to modify the actual structures in the database to make them more consistent, for fear of breaking the legacy code.
No, the syntax of your SQL query, including all column identifiers you reference, must be fixed at the time it is parsed, before it validates that the columns exist.
A given UPDATE will either succeed fully or fail fully. There is no way to update some of the columns if the query fails to update all of them.
You have two choices:
Query INFORMATION_SCHEMA.COLUMNS first, to check what columns exist in the table for a given schema. Then format your UPDATE query, including clauses to set each column only if the column exists in that instance of the table.
Or...
Run several UPDATE statements, one for each column you want to update. Each statement will succeed or fail independently, but you can catch the error and continue on to the remaining statements. You can put all these statements in a transaction, so the set of changes is committed atomically, regardless of how many succeed (a single failed statement does not roll back a transaction).
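A rough sketch of both options, using the example table above (this assumes MySQL, and that for the second option the error from a missing column is caught and skipped in the application code):
-- Option 1: check which of the columns actually exist before building the UPDATE
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'some_table'
  AND COLUMN_NAME IN ('col1', 'col2', 'col3');

-- Option 2: one UPDATE per column inside a transaction
START TRANSACTION;
UPDATE some_table SET col1 = 'some value' WHERE id = 567;
UPDATE some_table SET col2 = 'another value' WHERE id = 567;
UPDATE some_table SET col3 = 'a third value' WHERE id = 567;
COMMIT;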
Either way, it requires you to write more code. That's the unavoidable cost of supporting such a variable table structure.
I'm wondering whether it is possible in Oracle 12c to make a procedure that will be called on update by triggers on multiple tables with different columns.
From my understanding I get two values, OLD and NEW. I'm making a trigger that works AFTER UPDATE and FOR EACH ROW. Is it possible to send the whole row (the :OLD or :NEW variables) to some function like JSON_OBJECT or similar, which would parse the row and produce output that can be stored in some audit table?
The main reason for this is to avoid keeping a separate list of column names in every trigger, because the list is different for each table and every change to a table's structure would also affect its trigger.
Or maybe I'm wrong and you can suggest how to solve this properly?
For test cases I have something like this:
AUDIT_TBL (ID, TABLE_NAME,OLD_JSON,NEW_JSON,DATE);
TABLE1 (ID,KEY,VALUE);
TABLE2 (ID,NAME,SURNAME,KEY);
TABLE3 (KEY,COLUMN1,COLUMN2,COLUMN3);
I was planning to have a trigger on TABLE1, TABLE2, and TABLE3 that would run some procedure with the row (:OLD or :NEW) as a parameter, get a result, and put it into AUDIT_TBL.
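For reference, here is roughly what the per-table version looks like, sketched for TABLE1 only; this is exactly the column list I don't want to repeat in every trigger (the AUDIT_SEQ sequence is an assumption, and JSON_OBJECT as a SQL function needs at least 12c Release 2):
CREATE OR REPLACE TRIGGER table1_aud_trg
AFTER UPDATE ON TABLE1
FOR EACH ROW
BEGIN
  INSERT INTO AUDIT_TBL (ID, TABLE_NAME, OLD_JSON, NEW_JSON, "DATE")  -- DATE is a reserved word, so it is quoted here
  VALUES (AUDIT_SEQ.NEXTVAL,
          'TABLE1',
          JSON_OBJECT('ID' VALUE :OLD.ID, 'KEY' VALUE :OLD.KEY, 'VALUE' VALUE :OLD.VALUE),
          JSON_OBJECT('ID' VALUE :NEW.ID, 'KEY' VALUE :NEW.KEY, 'VALUE' VALUE :NEW.VALUE),
          SYSDATE);
END;
/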
Any suggestions or ideas on how to do this properly and without bloodshed?
I'm working on a task where I need to get the data from all the tables in the database without knowing the table names. What I'm planning to do is write a query to get all the table names and column names and store them in a temporary table.
Once I have the table names and column names, I have to store the values in variables like @Table_Name and @Column_Name. Then I have to go through the table names within a loop and build a query like this:
"select * from '''+@Table_Name+''' where '''+@Column_Name+''' = 1"
Earlier I used a cursor to get the values, store them in variables, and use them within the loop, but the cursor seems to consume more time and memory, so I thought of changing it to use a temporary table.
My question is: is it possible to assign the temporary table values to a variable to use later? If so, how do I do it?
Your idea may not work the way you expect, I feel, since you are trying to bring the concept of arrays into SQL Server. SQL Server does not have arrays, but it does have tables. You can obtain the data you need by creating a temporary table, populating it with a query like
select * from sys.tables
and then doing your operation by loading each row from this table. After your work is done, by all means drop the table.
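A minimal sketch of that approach, assuming SQL Server 2008 or later (the join to sys.columns is added because you also need column names, and a ROW_NUMBER-keyed temp table replaces the cursor):
SELECT ROW_NUMBER() OVER (ORDER BY t.name, c.name) AS rn,
       t.name AS table_name,
       c.name AS column_name
INTO #tables
FROM sys.tables AS t
JOIN sys.columns AS c ON c.object_id = t.object_id;

DECLARE @i INT = 1;
DECLARE @n INT = (SELECT MAX(rn) FROM #tables);
DECLARE @Table_Name SYSNAME, @Column_Name SYSNAME, @sql NVARCHAR(MAX);

WHILE @i <= @n
BEGIN
    -- pull one table/column pair out of the temp table
    SELECT @Table_Name = table_name, @Column_Name = column_name
    FROM #tables
    WHERE rn = @i;

    -- build and run the dynamic query from the question; QUOTENAME guards the identifiers
    SET @sql = N'SELECT * FROM ' + QUOTENAME(@Table_Name)
             + N' WHERE ' + QUOTENAME(@Column_Name) + N' = 1';
    EXEC sp_executesql @sql;

    SET @i += 1;
END;

DROP TABLE #tables;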
I'm connecting and querying a database, and I'm getting data from two tables like this:
SELECT * FROM tblOne, tblTwo
I'm binding some textboxes to fields in the database and added a BindingNavigator for easy paging and data insertion. Therefore I planned to use MySqlCommandBuilder to automatically build the INSERT command.
But, of course, MySqlCommandBuilder can't generate an INSERT command based on a query with two tables.
Is there an easy way to fix this? Like adding a custom INSERT command?
Here is a chunk of the SQL I'm using for a Perl-based web application. I have a number of requests and each has a number of accessions, and each has a status. This chunk of code is there to update the table for every accession_analysis that shares all these fields for each accession in a request.
UPDATE accession_analysis
SET analysis_id = ? ,
reference_id = ? ,
status = ? ,
extra_parameters = ?
WHERE analysis_id = ?
AND reference_id = ?
AND status = ?
AND extra_parameters = ?
AND accession_id IN (
    SELECT accession_id
    FROM accessions
    WHERE request_id = ?
)
I have changed the tables so that there's a status table for accession_analysis, so when I update, I update both accession_analysis and accession_analysis_status, which has status, status_text and the id of the accession_analysis, which is a NOT NULL AUTO_INCREMENT column.
I don't have a strong idea of how to modify this code to allow this. My first pass grabbed all the accessions and looped through them, then filtered for all the fields, then updated. I didn't like that because I had many connections with short SQL commands, which I understood to be bad, but I can't help thinking the only way to really do this is to go back to a loop in Perl holding two simpler SQL statements.
Is there a way to do this in SQL that, with my relative SQL inexperience, I'm just not seeing?
The answer depends on which DBMS you're using. The easiest way is to create a trigger on one table that provides the logic of updating the other table. (For any DB newbies -- a trigger is procedural code attached to a table at the DBMS (not application) layer that runs in response to an insert, update or delete on the table.). A similar, slightly less desirable method is to put the logic in a stored procedure and execute that instead of the update statement you're now using.
If the DBMS you're using doesn't support either of these mechanisms, then there isn't a good way to do what you're after while guaranteeing transactional integrity. However if the problem you're solving can tolerate a timing difference in the two tables' updates (i.e. The data in one of the tables is only used at predetermined times, like reporting or some type of batched operation) you could write to one table (live) and create a separate process that runs when needed (later) to update the second table using data from the first table. The correctness of allowing data to be updated at different times becomes a large and immovable design assumption, however.
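For example, in MySQL a rough sketch of such a trigger could look like the following (the status table's column names and the source of status_text are assumptions, since the question only says it holds status, status_text and the accession_analysis id):
DELIMITER //
CREATE TRIGGER accession_analysis_after_update
AFTER UPDATE ON accession_analysis
FOR EACH ROW
BEGIN
  -- mirror the new status into the status table; column names are assumed
  INSERT INTO accession_analysis_status (accession_analysis_id, status, status_text)
  VALUES (NEW.id, NEW.status, CONCAT('status changed to ', NEW.status));
END//
DELIMITER ;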
If this is mostly about connection speed, then one option you have is to write a stored procedure that handles the "double update or insert" transparently. See the manual for stored procedures:
http://dev.mysql.com/doc/refman/5.5/en/create-procedure.html
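A hedged sketch of such a procedure, assuming accession_analysis has the auto-increment id column described in the question (the parameter list and the status-table columns are assumptions, not taken from your schema):
DELIMITER //
CREATE PROCEDURE set_analysis_status(IN p_id INT, IN p_status VARCHAR(64), IN p_status_text TEXT)
BEGIN
  -- first table
  UPDATE accession_analysis
     SET status = p_status
   WHERE id = p_id;
  -- second table; this could equally be an UPDATE, depending on how status rows are created
  INSERT INTO accession_analysis_status (accession_analysis_id, status, status_text)
  VALUES (p_id, p_status, p_status_text);
END//
DELIMITER ;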
Otherwise, you probably cannot do it in one statement; see the MySQL INSERT syntax:
http://dev.mysql.com/doc/refman/5.5/en/insert.html
The UPDATE syntax allows for multi-table updates (not in combination with INSERT, though):
http://dev.mysql.com/doc/refman/5.5/en/update.html
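For example, a multi-table UPDATE along these lines is valid MySQL (the join column and the exact SET list are assumptions for illustration):
UPDATE accession_analysis aa
JOIN accession_analysis_status s ON s.accession_analysis_id = aa.id
SET aa.status = ?,
    s.status = ?,
    s.status_text = ?
WHERE aa.analysis_id = ?;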
Each table needs its own INSERT / UPDATE in the query.
In fact, even if you create a view by JOINing multiple tables, when you INSERT into the view, you can only INSERT with fields belonging to one of the tables at a time.
The modifications made by the INSERT statement cannot affect more than one of the base tables referenced in the FROM clause of the view. For example, an INSERT into a multitable view must use a column_list that references only columns from one base table. For more information about updatable views, see CREATE VIEW.
Inserting data into multiple tables through an sql view (MySQL)
INSERT (SQL Server)
The same is true of UPDATE:
The modifications made by the UPDATE statement cannot affect more than one of the base tables referenced in the FROM clause of the view. For more information on updatable views, see CREATE VIEW.
However, you can have multiple INSERTs or UPDATEs per query or stored procedure.
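For example, a hedged sketch of a stored procedure that does one INSERT per base table (all table and column names here are made up):
DELIMITER //
CREATE PROCEDURE insert_into_both(IN p_one VARCHAR(50), IN p_two VARCHAR(50))
BEGIN
  -- one INSERT per base table
  INSERT INTO first_table (first_column) VALUES (p_one);
  INSERT INTO second_table (second_column) VALUES (p_two);
END//
DELIMITER ;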