This situation makes no sense.
I have the following sequence of SQL operations in my php code:
DROP TABLE IF EXISTS tablename;
CREATE TABLE tablename;
Of course the php code does not look like that, but those are the commands being executed.
Every once in a while on the CREATE statement, the system returns "table already exists".
I would not think this could happen unless there is some kind of delay in the dropping. The table is InnoDB, and I have read that there could be processes still using the table. However, the table name has the user's session_id embedded in it, because this table is somewhat transient and is dedicated to that specific user only: no other user can be using the table, and not even any other script can be using it. It is a "user-specific, script-specific" table. That said, it is possible for the user to run this script, go away to a different script, and then come back to this script.
The code described above is in a routine that decides whether it can reuse the table or whether the table has to be recreated. If it has to be recreated, then the two statements execute.
Any ideas what is causing this error condition?
EDIT:
The problem with "actual code" is that sometimes it just leads to more questions that diverge from the actual point. Nevertheless, here is a copy from the actual script:
$query1 = "DROP TABLE IF EXISTS {$_SESSION['tmpContact']}";
SQL($query1);
$memory_table = "CREATE TABLE {$_SESSION['tmpContact']}";
The SQL() function executes the command and has error handling.
Plan A: Check for errors after the DROP. There may be a clue there.
Plan B: CREATE TEMPORARY TABLE ... -- That will be local to the connection, so [presumably] you won't need the DROP.
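Plan B's connection-local scoping can be sketched with Python's sqlite3 standard-library module, used here only as a stand-in for MySQL (SQLite's CREATE TEMP TABLE has the same key property as MySQL's CREATE TEMPORARY TABLE: the table is private to one connection and vanishes when that connection closes, so no DROP is ever needed). File and table names below are made up for illustration:

```python
import sqlite3, tempfile, os

# Two independent connections to the same database file.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
c1 = sqlite3.connect(path)
c2 = sqlite3.connect(path)

# The TEMP table is visible only to the connection that created it.
c1.execute("CREATE TEMP TABLE scratch (x INTEGER)")
c1.execute("INSERT INTO scratch VALUES (1)")
print(c1.execute("SELECT x FROM scratch").fetchone())  # (1,)

try:
    c2.execute("SELECT x FROM scratch")  # other connection cannot see it
except sqlite3.OperationalError as e:
    print(e)  # no such table: scratch
```

Because each user's PHP request runs on its own connection, a temporary table gives the same "user-specific, script-specific" isolation without the DROP/CREATE pair that is raising the error.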
Try checking from PHP whether the table exists before deciding what to do:
$a = mysql_query("SHOW TABLES LIKE '{$_SESSION['tmpContact']}'");
if (mysql_num_rows($a) > 0) { /* table exists */ } else { /* safe to create */ }
Related
In MySQL, which is a better practice? Always use "CREATE TABLE IF NOT EXISTS", or check first the existence of the table using "SHOW TABLES LIKE" before making the table?
I regularly have to save a page whose table may or may not exist (sometimes it is deliberately deleted when not in use). Previously, I used SHOW TABLES LIKE to check whether the table existed before inserting new entries, but I changed that to CREATE TABLE IF NOT EXISTS. Either way, I then do an INSERT ... ON DUPLICATE KEY UPDATE to add new data or update existing data.
I don't know how to benchmark this, which is why I am asking.
Performance isn't critical with these operations.
The key aspect is race conditions. With CREATE TABLE IF NOT EXISTS, the statement atomically either creates the table or does nothing. If two threads happen to execute it at the same time, one will create the table and the other won't be negatively affected.
If SHOW TABLES LIKE were used in both threads, both could detect that the table didn't exist, and upon trying to create it, one would fail.
So use CREATE TABLE IF NOT EXISTS to mitigate race conditions. In general, it is also better to use a feature the database provides than to roll your own.
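The difference is easy to demonstrate; the sketch below uses Python's sqlite3 standard-library module as a stand-in for MySQL, since SQLite supports the same IF NOT EXISTS clause:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
ddl = "CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, v TEXT)"

conn.execute(ddl)  # creates the table
conn.execute(ddl)  # second run is a silent no-op, not an error

# Without IF NOT EXISTS, the second attempt fails -- this is exactly
# what the losing thread sees after a check-then-create race:
try:
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
except sqlite3.OperationalError as e:
    print(e)  # table t already exists
```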
CREATE TABLE IF NOT EXISTS is an option provided by MySQL and is good to use. If the table does not exist, the statement creates it; otherwise it skips the creation.
On the other hand, if you first check for the table's existence with SHOW TABLES LIKE and then either skip or create based on the result, you are adding an extra step (and an extra round trip) to your script for the same functionality you can achieve in a single statement.
I have a file with a .sql extension in which I write queries such as insertions, deletions, or updates, and I execute it in Toad for MySQL. I repeat this many times because I have a lot of queries, so I have ended up with a lot of .sql files.
The first query works. But when I add a second query to the same file and execute it, I get errors because the first query has already been executed. If the first query is a deletion, for example, it displays an error like "no such column", which is logical because I already deleted the column.
Is there a way to keep a single file to which I add all my queries and, when executing it, not get errors from the queries already run (duplicates and the like)? Something like error handling, because I have to keep a history of all the queries. Only a query that I have not already executed should be able to raise an error.
For example, if my first query is:
ALTER TABLE adbproject DROP COLUMN imageFormat
and I execute it. The second time, I want to add another query, which is:
ALTER TABLE PERSON ADD MATRICULE VARCHAR(50) AFTER CODE;
So the file to be executed will be:
ALTER TABLE adbproject DROP COLUMN imageFormat;
ALTER TABLE PERSON ADD MATRICULE VARCHAR(50) AFTER CODE;
but, logically, I get this error: Can't DROP 'imageFormat'; check that column/key exists. I am searching for a way to avoid this error.
Thanks in advance
Two options:
Write all the commands to the file and execute the whole file only once.
After executing each command, delete the contents of the file.
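A third option is to make each statement idempotent before running it, by checking the catalog first (INFORMATION_SCHEMA.COLUMNS in MySQL; plain MySQL has historically lacked an IF EXISTS clause on column-level ALTERs, though MariaDB adds one). A sketch of the idea using Python's sqlite3 standard-library module, where PRAGMA table_info plays the catalog's role; add_column_if_missing is a hypothetical helper name:

```python
import sqlite3

def add_column_if_missing(conn, table, column, decl):
    """Run ALTER TABLE ... ADD COLUMN only if the column is absent,
    so the same migration file can be executed repeatedly."""
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    if column not in cols:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {decl}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PERSON (CODE TEXT)")
add_column_if_missing(conn, "PERSON", "MATRICULE", "VARCHAR(50)")
add_column_if_missing(conn, "PERSON", "MATRICULE", "VARCHAR(50)")  # rerun: no error
```

The same guard-before-alter pattern can wrap DROP COLUMN, so re-executing an accumulated history file skips the steps that have already been applied.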
I have to create a MySQL log table. That table should contain all the changes that happen to table "A".
In order to do that, I have created a stored procedure, writeLog, that populates the log table. That procedure is a bit complex and needs exclusive access to a few tables, so I'm using START TRANSACTION and COMMIT inside it. It works.
The problem is that table "A" (the one that should be logged) is populated from many different parts of the system that use my DB, and in order to avoid adding "log" code all over the place I decided to call my writeLog stored procedure after each UPDATE and INSERT on "A". It's important to note that I need to log one format of data after an UPDATE and another after an INSERT, and the software pushing data to "A" has no idea whether the data is being updated or inserted (it's all done, again, using stored procedures that have an ON DUPLICATE KEY UPDATE part).
When I try to perform call writeLog(....old.data...new.data...) from the AFTER UPDATE trigger, I get an error that basically says I'm not allowed to have explicit or implicit transactions in a trigger.
What should I do? I'm trying to keep the logging as simple as possible, which is why I'm using the trigger, but again I need transactions because I don't want several different parts of the software to mess with LogTable at the same time.
Any idea?
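One common workaround is to have each trigger do a plain INSERT into the log table instead of calling a procedure that opens its own transaction: a trigger already runs inside the transaction of the statement that fired it, so concurrent writers to the log table are serialized by ordinary row locking. Separate AFTER INSERT and AFTER UPDATE triggers also solve the "one format per operation" requirement. A minimal sketch using Python's sqlite3 standard-library module as a stand-in for MySQL (table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a   (id INTEGER PRIMARY KEY, v TEXT);
CREATE TABLE log (action TEXT, old_v TEXT, new_v TEXT);

-- Different "format" per operation; no explicit transaction needed,
-- because the trigger runs inside the firing statement's transaction.
CREATE TRIGGER a_ins AFTER INSERT ON a BEGIN
    INSERT INTO log VALUES ('insert', NULL, NEW.v);
END;
CREATE TRIGGER a_upd AFTER UPDATE ON a BEGIN
    INSERT INTO log VALUES ('update', OLD.v, NEW.v);
END;
""")

conn.execute("INSERT INTO a VALUES (1, 'x')")
conn.execute("UPDATE a SET v = 'y' WHERE id = 1")
print(conn.execute("SELECT * FROM log").fetchall())
# [('insert', None, 'x'), ('update', 'x', 'y')]
```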
I want to get the new table definition (the CREATE TABLE statement) and save it to a txt file when the DDL ALTER TABLE trigger fires.
I have tried sp_helptext, but that only works for views and stored procedures. I am hoping there is something similar for an altered table that returns the new CREATE statement.
I have also tried using xp_cmdshell to launch a .NET program and hard-coding the script generation from the info in INFORMATION_SCHEMA. However, it blocks, because the trigger's transaction has not completed while the .NET program is running.
You can't do this. SQL Server does not generate a new CREATE TABLE statement when you run an ALTER TABLE. You will see that EVENTDATA() in your DDL trigger only contains the ALTER command, not the entire table definition. You will also notice that neither INFORMATION_SCHEMA nor the catalog views like sys.tables ever store a copy of the CREATE TABLE statement, even when the original command was CREATE TABLE, so there is nothing to get from that route.
You can see what Management Studio does to generate the create table script by running a server-side trace, then right-clicking a table and choosing Script As > Create To > New Query Editor Window. I can assure you this is not some simple SELECT create_table FROM sys.something but rather a set of metadata queries against a slew of DMVs. You don't even see the CREATE TABLE being assembled because it is done in the code, not from the database or in the query.
More accessible: SMO has methods for scripting objects such as tables. But I don't think you should try to do this from within the trigger - bad idea to make the ALTER TABLE transaction wait for your ability to invoke some external process, write to a file system, etc. Here is what I propose:
1. When a table has its definition changed, the DDL trigger adds a row to a queue table.
2. On the OS, have a PowerShell script or C# command-line executable that uses SMO methods to generate the script given a table name.
3. Schedule that script to run every minute or every 5 minutes, checking the queue table for any new alters that have happened but that you haven't captured yet.
4. The program writes the file based on the CREATE TABLE script that SMO generated, then updates the queue table (or removes the row) so that each change is only processed once.
I have some code which re-arranges some items on a form, but only one SQL query. All my tables aren't locked before the code runs but for some reason I get an error when running:
DoCmd.RunSQL ("Select * Into MasterTable From Year07 Where 'ClassName' = '7A'")
Error:
The database engine could not lock table because it is already in use by another person or process. (Error 3211) To complete this operation, a table currently in use by another user must be locked. Wait for the other user to finish working with the table, and then try the operation again.
Any ideas what I can do to stop the table being locked?
Is MasterTable included in your form's Record Source? If so, you can't replace it, or modify its structure, while the form is open.
Apart from the table lock issue, there is a logic error in the SELECT statement.
Where 'ClassName' = '7A'
The string, ClassName, will never be equal to the string, 7A. Therefore your SELECT can never return any records. If ClassName is the name of a field in your Year07 table, discard the quotes which surround the field name.
Where ClassName = '7A'
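The effect of the quoting mistake is easy to demonstrate; the sketch below uses Python's sqlite3 standard-library module purely as a convenient stand-in (Access/Jet SQL treats single quotes the same way, as string delimiters):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Year07 (ClassName TEXT)")
conn.execute("INSERT INTO Year07 VALUES ('7A')")

# Quoted 'ClassName' is a string literal, not a column reference,
# so the WHERE clause compares 'ClassName' = '7A': always false.
n_wrong = conn.execute(
    "SELECT COUNT(*) FROM Year07 WHERE 'ClassName' = '7A'").fetchone()[0]
print(n_wrong)  # 0

# Unquoted, the column's value is compared and the row matches.
n_right = conn.execute(
    "SELECT COUNT(*) FROM Year07 WHERE ClassName = '7A'").fetchone()[0]
print(n_right)  # 1
```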
I'm guessing, but if you're using a form that is bound to MasterTable, you can't run a query to replace it with a new MasterTable while you've got it open in the form.
I would suggest that you get rid of the MakeTable query (SELECT INTO) and instead use a plain append query (INSERT). You'll want to clean out the old data before appending the new, though.
Basically, a MakeTable query is, in my opinion, something that does not belong in a production app, and any process that you've automated with a MakeTable query should be replaced instead with a persistent temp table that is cleared before the new data is appended to it.
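The clear-then-append pattern looks like this; the sketch uses Python's sqlite3 standard-library module rather than Jet SQL, but the two statements correspond directly to an Access delete query followed by an append query (table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Year07 (ClassName TEXT, Pupil TEXT)")
conn.execute("CREATE TABLE MasterTable (ClassName TEXT, Pupil TEXT)")  # persistent temp table
conn.executemany("INSERT INTO Year07 VALUES (?, ?)",
                 [("7A", "Ann"), ("7B", "Bob")])

# Clear the old data, then append the new. The table itself is never
# dropped or recreated, so a bound form keeps a valid table to point at.
conn.execute("DELETE FROM MasterTable")
conn.execute("INSERT INTO MasterTable SELECT * FROM Year07 WHERE ClassName = '7A'")
print(conn.execute("SELECT * FROM MasterTable").fetchall())  # [('7A', 'Ann')]
```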
I have seen this when you re-open a database after Access has crashed. Typically for me a reboot has fixed this.
What version of MSAccess? Not sure about newer ones, but for Access 2003 and previous, if you were sure nobody was in the database, you could clear up locks after a crash by deleting the .ldb file.