I encountered a deadlock problem when updating separate records in two transactions.
According to our diagnosis, it is caused by index merge.
And
SELECT @@optimizer_switch;
index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on...
indeed shows that index merge is on, so we would like to turn it off. But I cannot find the command; can anyone give me an answer?
Thanks!
It is given in the Documentation:
To change the value of optimizer_switch, assign a value consisting of
a comma-separated list of one or more commands:
SET [GLOBAL|SESSION] optimizer_switch='command[,command]...';
Try the following to disable it globally:
SET GLOBAL optimizer_switch='index_merge=off,index_merge_union=off,index_merge_sort_union=off,index_merge_intersection=off';
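If you only want to change it for the current connection first (for example, to test the effect before touching the global setting), a minimal sketch based on the same syntax would be:
SET SESSION optimizer_switch='index_merge=off,index_merge_union=off,index_merge_sort_union=off,index_merge_intersection=off';
SELECT @@optimizer_switch;  -- confirm the index_merge flags now show =off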
Personally I have never used it, but I found this in the reference manual; maybe it can help you:
https://dev.mysql.com/doc/refman/5.7/en/switchable-optimizations.html
SET [GLOBAL|SESSION] optimizer_switch='command[,command]...';
Each command value should have one of the forms shown in the following table.
Command Syntax: Meaning
default: Reset every optimization to its default value
opt_name=default: Set the named optimization to its default value
opt_name=off: Disable the named optimization
opt_name=on: Enable the named optimization
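For example, following that table, you could reset just one optimization to its default while leaving the rest untouched, or reset everything at once:
SET SESSION optimizer_switch='index_merge=default';  -- reset only index_merge
SET SESSION optimizer_switch='default';               -- reset all optimizations to their defaults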
In the latest Django (2.2), when I add a new field to a model like this:
new_field = models.BooleanField(default=False)
Django runs the following commands for MySQL:
ALTER TABLE `app_mymodel` ADD COLUMN `new_field` bool DEFAULT b'0' NOT NULL;
ALTER TABLE `app_mymodel` ALTER COLUMN `new_field` DROP DEFAULT;
COMMIT;
While this works when everything is updated, it is very problematic because old versions of the application can no longer create model instances after this migration is run (they do not know about new_field). Why not just keep the DEFAULT constraint?
Why not just keep the DEFAULT constraint?
Because Django handles the default model field option at the application level, not the database level. So the real question is why it sets the DEFAULT constraint at all.
But first: Django does not use database-level defaults. (From the documentation: "Django never sets database defaults and always applies them in the Django ORM code."). This has been true from the beginning of the project. There has always been some interest in changing it (the first issue on the subject is 14 years old), and certainly other frameworks (Rails, I think, and SQLAlchemy) have shown that it is possible.
But there are good reasons beyond backwards compatibility for handling defaults at the application level. Such as: the ability to express arbitrarily complex computations; not having to worry about subtle incompatibilities across database engines; the ability to instantiate a new instance in code and have immediate access to the default value; the ability to present the default value to users in forms; and more.
Based on the two most recent discussions on the subject, I'd say there's little appetite to incorporate database defaults into the semantics of default, but there is support for adding a new db_default option.
Now, adding a new non-nullable field to an existing database is a very different use case. In that situation, you have to provide a default to the database for it to perform the operation. makemigrations will try to infer the right value from your default option if it can, and if not it will force you to specify a value from the command line. So the DEFAULT modifier is used for this limited purpose and then removed.
As you've noticed, the lack of database-level defaults in Django can make continuous deployment harder. But the solution is fairly straightforward: just re-add the default yourself in a migration. One of the great benefits of the migrations system is that it makes it easy to make arbitrary, repeatable, testable changes to your database outside of Django's ORM. So just add a new RunSQL migration operation:
operations = [
    # Add SQL for both forward and reverse operations
    migrations.RunSQL(
        "ALTER TABLE app_mymodel ALTER COLUMN new_field SET DEFAULT 0;",
        "ALTER TABLE app_mymodel ALTER COLUMN new_field DROP DEFAULT;",
    ),
]
You can put that in a new migration file or simply edit the automatically generated one. Depending on your database and its support for transactional DDL, the sequence of operations may or may not be atomic.
I found this ticket from 2 years ago: https://code.djangoproject.com/ticket/28000
It is stated in there that:
Django uses database defaults to set values on existing rows in a table. It doesn't leave the default values in the database so dropping the default is correct behavior. There might be an optimization not to set/drop the default in this case -- I'm not sure it's needed since the column isn't null. A separate ticket could be opened for this.
I also saw the same reference in another question here: Django Postgresql dropping column defaults at migrate
And searching a bit more, I came upon this SO question: Django implementation of default value in database, which led to the code of the _alter_field method in django.db.backends.base.schema, where this comment exists:
# When changing a column NULL constraint to NOT NULL with a given
# default value, we need to perform 4 steps:
# 1. Add a default for new incoming writes
# 2. Update existing NULL rows with new default
# 3. Replace NULL constraint with NOT NULL
# 4. Drop the default again.
Although the last one is about altering an existing nullable field to a non-nullable one, this seems to be the way Django handles the default case :/
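In raw MySQL terms, those four steps would roughly correspond to something like the following (a sketch only; app_mymodel and new_field stand in for your own table and column):
ALTER TABLE app_mymodel ALTER COLUMN new_field SET DEFAULT 0;       -- 1. add a default for new incoming writes
UPDATE app_mymodel SET new_field = 0 WHERE new_field IS NULL;       -- 2. update existing NULL rows with the new default
ALTER TABLE app_mymodel MODIFY new_field bool NOT NULL DEFAULT 0;   -- 3. replace the NULL constraint with NOT NULL
ALTER TABLE app_mymodel ALTER COLUMN new_field DROP DEFAULT;        -- 4. drop the default again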
I recently moved my SQL database to another Amazon RDS server running version 5.7.
Before that the application was working fine, but now I have started logging errors:
"ER_BAD_NULL_ERROR: Column xyz cannot be null" - The column already has a default value of CURRENT_TIMESTAMP.
I checked online and people suggested setting sql_mode to NO_ENGINE_SUBSTITUTION.
I checked the existing settings and it is already set that way.
Is there any other reason I am getting this error? Any tricks?
Thanks.
After searching further, I found that the problem occurred only in TIMESTAMP fields with a CURRENT_TIMESTAMP default value. I searched the parameters and found explicit_defaults_for_timestamp, which was enabled (value 1); with a bit more research, I found that I had to disable this parameter, as per the documentation here:
https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_explicit_defaults_for_timestamp
in order to get the required result and fix the problem.
Simply deactivate explicit_defaults_for_timestamp
SET GLOBAL explicit_defaults_for_timestamp = 0;
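You can check the current value first; note that depending on the MySQL version and on RDS permissions, you may have to change it through the DB parameter group (as described in the answer above) rather than with SET GLOBAL:
SHOW VARIABLES LIKE 'explicit_defaults_for_timestamp';  -- shows ON/OFF (1/0)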
I have no idea why it works like that in this particular case, so I would concentrate on fixing the problem.
According to the docs, NO_ENGINE_SUBSTITUTION has nothing to do with this error during the application run. I would select the rows where column "xyz" is NULL and update them to something non-null (see the sketch below).
A default is applied when a row is created. Let's say you have a table with some millions of rows and want to add a NOT NULL column. That would block your table for a significant amount of time. So you can create the column without NOT NULL, but with a default. That operation deals only with metadata, so it is fast. The default will take care of all new rows. After that you can slowly update all existing rows, and at the end the NOT NULL constraint can be added. I am not sure whether the DB checks the constraint when adding it in that last step, or maybe the previous version had a problem with it? With MySQL, things like that happen.
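For instance, assuming the table is called mytable (a placeholder), the cleanup suggested above could look like this:
SELECT COUNT(*) FROM mytable WHERE xyz IS NULL;                -- how many rows are affected
UPDATE mytable SET xyz = CURRENT_TIMESTAMP WHERE xyz IS NULL;  -- backfill with a sensible value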
Hi, I have an Amazon RDS instance which I can connect to using the mysql prompt.
I want to empty a table from the command-line prompt.
What's the best way to do that? Thanks!
You can use the standard TRUNCATE command to empty the required tables. If you want to truncate multiple tables, follow this question.
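For example (table_name is a placeholder):
TRUNCATE TABLE `table_name`;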
Try
DELETE FROM `table_name`;
Notice the missing WHERE clause; this will delete all rows.
Data that is connected by a constraint can't be deleted. What you need is code that first removes the constraints, then deletes the data, and finally restores the constraints.
See more at: http://www.devx.com/dbzone/Article/40967#sthash.NUmxsFr3.dpuf
Moreover, prefer the DELETE command because you can ROLLBACK the operation; TRUNCATE cannot be rolled back and no triggers will be fired.
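In MySQL, a common way to handle the constraint issue mentioned above is to temporarily disable foreign key checks around the delete (a sketch; table_name is a placeholder, use with care):
SET FOREIGN_KEY_CHECKS = 0;
DELETE FROM `table_name`;
SET FOREIGN_KEY_CHECKS = 1;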
I'm using MySQL 5.6 (and its online DDL feature) to run some in-place ALTER TABLE operations such as ADD COLUMN. I see that the LOCK parameter defaults to the highest level of concurrency allowed (for ADD COLUMN this should be NONE), but what is the default behavior for the ALGORITHM parameter? The documentation says "ALGORITHM = DEFAULT is the same as specifying no ALGORITHM clause at all," but that's not helpful because it doesn't say what ALGORITHM = DEFAULT is equal to.
http://dev.mysql.com/doc/refman/5.6/en/alter-table.html
Anyone know?
The default depends on what kind of change you're trying to apply.
Some changes can make use of ALGORITHM=INPLACE, so this is their default. Other changes can never use online DDL, so their default is ALGORITHM=COPY. For example, changing a data type or dropping a primary key cannot be done in place.
See https://dev.mysql.com/doc/refman/5.6/en/innodb-create-index-overview.html. They document how different operations are handled, and the ones that say "No" in the "Inplace" column use ALGORITHM=COPY by default, and fail if you try to use ALGORITHM=INPLACE.
You can force an operation to use ALGORITHM=COPY even if it could do its work in place, but you cannot request that an operation use ALGORITHM=INPLACE if it can't do it.
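For example, you can state the algorithm explicitly so the server fails fast if the operation can't be done the way you expect (mytable and the columns are placeholders):
ALTER TABLE mytable ADD COLUMN new_col INT, ALGORITHM=INPLACE, LOCK=NONE;  -- in-place, allows concurrent DML
ALTER TABLE mytable MODIFY COLUMN old_col BIGINT, ALGORITHM=COPY;          -- a data type change requires a table copy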
I have a MySQL script which is executed automatically under certain conditions. That script runs an ALTER TABLE command because the column is needed in the database, but the table may or may not already have it...
Is it possible to make MySQL 4 execute the ALTER TABLE statement only if the column doesn't exist, or to ignore the duplicate-column error for this single command and allow the script execution to continue?
ALTER [IGNORE] TABLE will only ignore certain errors, like duplicate key errors when adding a new UNIQUE index, or SQL mode errors.
http://dev.mysql.com/doc/refman/4.1/en/alter-table.html
More details about the "script" you are using would help answer the question. In Python, for example, the error would raise an exception which could then be caught and dealt with or ignored.
[EDIT] From the comment below, it seems like you're looking for the mysql -f command-line option.
You could first check the table schema before you attempt to add the column. However, I am strongly suspicious of a design where you need to add columns on the fly; something is not quite right. Can you explain the requirement in a little more detail? I'm sure there are other, cleaner ways around this.
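For example, the script could check first whether the column already exists (a sketch; mytable and mycolumn are placeholders) and skip the ALTER TABLE if the result is non-empty:
SHOW COLUMNS FROM mytable LIKE 'mycolumn';  -- an empty result set means the column does not exist yet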