I have a model:
class MyModel(models.Model):
    a = models.IntegerField(null=True, blank=True)
    b = models.ForeignKey("othermodel")

    class Meta:
        unique_together = [("a", "b")]
        index_together = [("a", "b")]
Migrations for this model:
class Migration(migrations.Migration):

    dependencies = [
        ('app', 'previous_migration'),
    ]

    operations = [
        migrations.AlterUniqueTogether(
            name='mymodel',
            unique_together=set([('a', 'b')]),
        ),
        migrations.AlterIndexTogether(
            name='mymodel',
            index_together=set([('a', 'b')]),
        ),
    ]
./manage.py sqlmigrate app mymigration
BEGIN;
CREATE INDEX `app_mymodel_id_asdfasfd_idx` ON `app_mymodel` (`a`, `b`);
COMMIT;
I am using a MySQL database.
Django (1.8.5) now creates an index over both fields together, but with type INDEX rather than UNIQUE, which does not result in the expected IntegrityError when saving a duplicate. Manually changing the index to UNIQUE results in the correct behaviour.
With just the AlterUniqueTogether migration, I get empty output from ./manage.py sqlmigrate.
How do I tell Django to create a UNIQUE index? Or is there a good reason why the created index is not set up this way?
With Django 1.8 and a fresh installation of the app via syncdb, all needed indexes are created. I currently cannot reproduce the issue, as the old installation has the index created manually, and fresh installations create it without any problems via both syncdb and migrate.
I have not tested more recent Django versions yet, but I assume they work as well.
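If an existing installation is still missing the UNIQUE index, one workaround is a hand-written migration that adds it with raw SQL. A minimal sketch, assuming the table name from the sqlmigrate output (the constraint name and dependency are placeholders):

from django.db import migrations

class Migration(migrations.Migration):

    dependencies = [
        ('app', 'previous_migration'),  # placeholder
    ]

    operations = [
        migrations.RunSQL(
            # Add the missing composite UNIQUE index by hand.
            sql="ALTER TABLE `app_mymodel` ADD CONSTRAINT `app_mymodel_a_b_uniq` UNIQUE (`a`, `b`);",
            # MySQL implements a UNIQUE constraint as an index, so DROP INDEX reverses it.
            reverse_sql="ALTER TABLE `app_mymodel` DROP INDEX `app_mymodel_a_b_uniq`;",
        ),
    ]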
Related
On a Django 1.11 application which uses MySQL, I have 3 apps, and in one of them I have a 'Country' model:
class Country(models.Model):
    countryId = models.AutoField(primary_key=True, db_column='country_id')
    name = models.CharField(max_length=100)
    code = models.CharField(max_length=3)

    class Meta:
        db_table = 'country'
When I try to run makemigrations I get this error:
django.db.utils.ProgrammingError: (1146, "Table 'dbname.country' doesn't exist")
If I run makemigrations for another app which is not related to this model or its database table (./manage.py makemigrations another_app), I still get this error.
I've had this problem, and it was because I was initializing a default value somewhere in a model using... the database that I had just dropped. In a nutshell, I had something like forms.ChoiceField(choices=get_some_data(), ...) where get_some_data() used the database to retrieve some default values.
I wish you had posted the backtrace, because in my case it was pretty obvious from the backtrace that get_some_data() was using the ORM (something like somemodel.objects.filter(...)).
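To illustrate the pattern, here is a minimal sketch (get_some_data and MyForm are hypothetical stand-ins): calling the helper at class-definition time queries the database as soon as the module is imported, while passing the callable itself lets Django defer the query until the form is actually used.

from django import forms

def get_some_data():
    # Hypothetical helper that hits the ORM, e.g.
    # return [(obj.pk, obj.name) for obj in SomeModel.objects.all()]
    return []

class MyForm(forms.Form):
    # Broken: get_some_data() runs at import time, before the table may exist.
    # field = forms.ChoiceField(choices=get_some_data())

    # Fixed: pass the callable itself; it is evaluated lazily when the form is used.
    field = forms.ChoiceField(choices=get_some_data)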
Somehow, Django thinks you've already created this table and are now trying to modify it, while in fact you've dropped the table externally and started over. If that's the case, delete all of the files in the migrations folders belonging to your apps and start over with ./manage.py makemigrations.
Review whether you have any dependencies: it is possible that some model needs the Country model, in the same app or another app, like:
class OtherModel(models.Model):
    country = models.ForeignKey(Country)
1. If so, review whether INSTALLED_APPS in settings.py lists the apps in the correct order: the app that declares Country must come before the apps that depend on it.
2. If the dependent model is in the same app, it needs to be declared after the Country model in models.py (or use a lazy string reference, as in the sketch below).
3. Review whether the error trace on the console mentions the same errors in models.py or forms.py.
4. Review whether makemigrations and migrate are executed in the correct app order: python manage.py makemigrations app_of_country other_app_name
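As an alternative to reordering declarations, Django resolves string references lazily, so a ForeignKey can name its target model instead of importing it. A minimal sketch ('myapp' is a placeholder app label; inside the same app, plain 'Country' also works):

from django.db import models

class OtherModel(models.Model):
    # The string reference is resolved only after all models are loaded,
    # so declaration order and app order no longer matter.
    country = models.ForeignKey('myapp.Country')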
I have a table 'test' having a column 'Name' with no constraints. I need to ALTER this column by giving it a UNIQUE constraint. How should I do it?
Should I use op.alter_column('???') or create_unique_constraint('???')?
Isn't create_unique_constraint for a new column rather than an existing one?
To add, you'd need:
https://alembic.sqlalchemy.org/en/latest/ops.html#alembic.operations.Operations.create_unique_constraint
from alembic import op
op.create_unique_constraint('uq_user_name', 'user', ['name'], schema='my_schema')
To drop, you'd need:
https://alembic.sqlalchemy.org/en/latest/ops.html#alembic.operations.Operations.drop_constraint
op.drop_constraint('uq_user_name', 'user', schema='my_schema')
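For context, a complete Alembic revision wrapping those two calls might look like this minimal sketch (the revision identifiers are placeholders; drop the schema argument if your database doesn't use one):

"""Add unique constraint on user.name"""
from alembic import op

# Revision identifiers, used by Alembic (placeholder values).
revision = 'abc123def456'
down_revision = 'fedcba654321'

def upgrade():
    # Add the single-column UNIQUE constraint to the existing column.
    op.create_unique_constraint('uq_user_name', 'user', ['name'], schema='my_schema')

def downgrade():
    # Reverse the upgrade by dropping the same constraint.
    op.drop_constraint('uq_user_name', 'user', schema='my_schema')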
Note: this is for SQLAlchemy Migrate (version 0.7.3), not Alembic:
to add unique constraints, use create() on UniqueConstraint;
to remove unique constraints, use drop() on UniqueConstraint.
Create a migration script. The script can be created in two ways.
# create manage.py
migrate manage manage.py --repository=migrations --url=postgresql://<user>:<password>@localhost:5432/<db_name>
# create the script file
python manage.py script "Add Unique Constraints"
Or, if you don't want to create manage.py, use the command below:
migrate script --repository=migrations --url=postgresql://<user>:<password>@localhost:5432/<db_name> "Add Unique Constraints"
It will create 00x_Add_Unique_Constraints.py
File: 00x_Add_Unique_Constraints.py
from migrate import UniqueConstraint
from sqlalchemy import MetaData, Table
def upgrade(migrate_engine):
    # Upgrade operations go here. Don't create your own engine; bind
    # migrate_engine to your metadata.
    # Table Name: user_table
    # Column Name: first_name
    metadata = MetaData(bind=migrate_engine)
    user_table = Table('user_table', metadata, autoload=True)
    UniqueConstraint(user_table.c.first_name, table=user_table).create()

def downgrade(migrate_engine):
    # Operations to reverse the above upgrade go here.
    # Table Name: user_table
    # Column Name: first_name
    metadata = MetaData(bind=migrate_engine)
    user_table = Table('user_table', metadata, autoload=True)
    UniqueConstraint(user_table.c.first_name, table=user_table).drop()
Following Mario Ruggier's answer, I tried his example code on my MySQL database; I left out the schema argument because my database doesn't have a schema.
To create the unique constraint, I used:
from alembic import op
op.create_unique_constraint('uq_user_name', 'user', ['name'])
and to drop the unique constraint:
op.drop_constraint(constraint_name='uq_user_name', table_name='user', type_='unique')
Notice the difference: I passed a third argument, type_='unique', because without it MySQL returns an error message that states something like
No generic 'DROP CONSTRAINT' in MySQL - please specify constraint type ...
I need to add a FULLTEXT index to one of my Django model's fields and understand that there is no built-in functionality to do this and that such an index must be added manually in MySQL (our back-end DB).
I want this index to be created in every environment. I understand model changes can be handled with Django South migrations, but is there a way I could add such a FULLTEXT index as part of a migration?
In general, if there is any custom SQL that needs to be run, how can I make it part of a migration?
Thanks.
You can write anything as a migration. That's the point!
Once you have South up and running, type python manage.py schemamigration myapp --empty my_custom_migration to create a blank migration that you can customize.
Open up the XXXX_my_custom_migration.py file in myapp/migrations/ and write your custom SQL migration in the forwards method. For example, you could use db.execute.
The migration might look something like this:
class Migration(SchemaMigration):

    def forwards(self, orm):
        db.execute("CREATE FULLTEXT INDEX foo ON bar (foobar)")
        print "Just created a fulltext index..."
        print "And calculated {answer}".format(answer=40+2)

    def backwards(self, orm):
        raise RuntimeError("Cannot reverse this migration.")
        # or what have you
$ python manage.py migrate myapp XXXX  # or just: python manage.py migrate
Just created a fulltext index...
And calculated 42
In newer versions of Django, you can create an empty migration to execute custom SQL: python3 manage.py makemigrations --empty app_name
Then, in the generated migration:
from django.db import migrations

class Migration(migrations.Migration):

    operations = [
        migrations.RunSQL(
            sql="CREATE FULLTEXT INDEX `index_name` ON table_name (`column_name`);",
            reverse_sql="ALTER TABLE table_name DROP INDEX index_name",
        ),
    ]
I'm getting an error when I try to dump data to a JSON fixture in Django 1.2.1 on my live server. The live server is running MySQL Server version 5.0.77, and I imported a lot of data into my tables using the phpMyAdmin interface. The website works fine and the Django admin responds as normal. But when I try to actually dump the data of the application that corresponds to those tables, I get this error:
$ python manage.py dumpdata --indent=2 gigs > fixtures/gigs_100914.json
/usr/local/lib/python2.6/site-packages/MySQLdb/__init__.py:34: DeprecationWarning: the sets module is deprecated
from sets import ImmutableSet
Error: Unable to serialize database: Location matching query does not exist.
My Django model for 'gigs' that I'm trying to dump from looks like this in the models.py file:
from datetime import datetime

from django.db import models


class Location(models.Model):
    name = models.CharField(max_length=120, blank=True, null=True)

    class Meta:
        ordering = ['name']

    def __unicode__(self):
        return "%s (%s)" % (self.name, self.pk)


class Venue(models.Model):
    name = models.CharField(max_length=120, blank=True, null=True)
    contact = models.CharField(max_length=250, blank=True, null=True)
    url = models.URLField(max_length=60, verify_exists=False, blank=True, null=True)  # because of single thread problems, I left this off (http://docs.djangoproject.com/en/dev/ref/models/fields/#django.db.models.URLField.verify_exists)

    class Meta:
        ordering = ['name']

    def __unicode__(self):
        return "%s (%s)" % (self.name, self.pk)


class Gig(models.Model):
    date = models.DateField(blank=True, null=True)
    details = models.CharField(max_length=250, blank=True, null=True)
    location = models.ForeignKey(Location)
    venue = models.ForeignKey(Venue)

    class Meta:
        get_latest_by = 'date'
        ordering = ['-date']

    def __unicode__(self):
        return u"%s on %s at %s" % (self.location.name, self.date, self.venue.name)
Like I say, Django is fine with the data. The site works fine and the relationships seem to operate absolutely fine. When I run the command to see what SQL Django is using:
$ python manage.py sql gigs
/usr/local/lib/python2.6/site-packages/MySQLdb/__init__.py:34: DeprecationWarning: the sets module is deprecated
from sets import ImmutableSet
BEGIN;
CREATE TABLE `gigs_location` (
`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY,
`name` varchar(120)
)
;
CREATE TABLE `gigs_venue` (
`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY,
`name` varchar(120),
`contact` varchar(250),
`url` varchar(60)
)
;
CREATE TABLE `gigs_gig` (
`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY,
`date` date,
`details` varchar(250),
`location_id` integer NOT NULL,
`venue_id` integer NOT NULL
)
;
ALTER TABLE `gigs_gig` ADD CONSTRAINT `venue_id_refs_id_3d901b6d` FOREIGN KEY (`venue_id`) REFERENCES `gigs_venue` (`id`);
ALTER TABLE `gigs_gig` ADD CONSTRAINT `location_id_refs_id_2f8d7a0` FOREIGN KEY (`location_id`) REFERENCES `gigs_location` (`id`);
COMMIT;
I've triple-checked the data and gone through to make sure all the relationships and data are OK after importing. But I'm still getting this error, three days on... I'm stuck on what to do about it. I can't imagine the "DeprecationWarning" is going to be a problem here. I really need to dump this data back out as JSON.
Many thanks for any help at all.
Could be something similar to this.
Run it with the following to see the underlying error:
python manage.py dumpdata --indent=2 -v 2 --traceback gigs
I once ran into a similar problem where the error message was as mesmerizing as yours. The cause was a lack of memory on my server. It seems that generating dumps in JSON is quite memory-expensive. I had only 60 MB of memory (at djangohosting.ch), and it was not enough to get a dump for a MySQL DB whose mysqldump output was only 1 MB.
I was able to find this out by watching the Python process hit the 60 MB limit with the top command in a second terminal while running manage.py dumpdata in the first.
My solution: get the MySQL dump and then load it on my desktop PC before generating the JSON dump. That said, for backup purposes, the MySQL dumps are enough.
The command to get a MySQL dump is the following:
mysqldump -p [password] -u [username] [database_name] > [dump_file_name].sql
That said, your problem could be completely different. You should really look at every table that has a foreign key to your Location table and check that no field points to a previously deleted location. Unfortunately, MySQL is very bad at maintaining referential integrity, and you cannot count on it.
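One way to hunt for such orphans from a Django shell is to compare the raw foreign-key columns against the surviving primary keys. A sketch based on the models above (Python 2 syntax, to match the Django 1.2 era):

from gigs.models import Gig, Location, Venue

location_ids = set(Location.objects.values_list('id', flat=True))
venue_ids = set(Venue.objects.values_list('id', flat=True))

for gig in Gig.objects.all():
    # gig.location_id / gig.venue_id read the raw columns without a join,
    # so a missing target does not raise DoesNotExist here.
    if gig.location_id not in location_ids:
        print "Gig %s points at missing location %s" % (gig.pk, gig.location_id)
    if gig.venue_id not in venue_ids:
        print "Gig %s points at missing venue %s" % (gig.pk, gig.venue_id)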
You can --exclude the particular app that is creating the problem; the database tables will still be there. It worked for me:
python manage.py dumpdata --exclude app_name > backedup_data.json
This error shows up because there's a mismatch between your DB's schema and your models.
You can try to find it manually, or you can just go ahead and install django-extensions
pip install django-extensions
and use the sqldiff command, which will show you exactly where the problem is.
python manage.py sqldiff -a -t
First and foremost, make your models match what your DB has. Then create the migrations and fake-apply them:
python manage.py makemigrations && python manage.py migrate --fake
That alone should let you run a dump. As soon as Django confirms that the DB's schema matches your models, it will let you.
Moving forward, you can update your models and re-run the migrations as usual:
python manage.py makemigrations && python manage.py migrate
I'm trying to run rake test:units and I keep getting this:
Mysql::Error: Duplicate entry '2147483647' for key 1: INSERT INTO `ts_schema_migrations` (version) VALUES ('20081008010000')
The "ts_" is there because I have ActiveRecord::Base.table_name_prefix set. I'm confused because there is no value '20081008010000' already in the table, and there is no migration with the value '2147483647' (though the value does appear in the table).
In Rails' schema_statements.rb, there is the following:
def initialize_schema_migrations_table
  sm_table = ActiveRecord::Migrator.schema_migrations_table_name
  unless tables.detect { |t| t == sm_table }
    create_table(sm_table, :id => false) do |schema_migrations_table|
      schema_migrations_table.column :version, :string, :null => false
    end
    ...
In my development database, ts_schema_migrations.version is a VARCHAR. In test, though, it's an INTEGER. I've dropped the tables and re-run the migrations (and/or rake db:schema:load RAILS_ENV=test) several times. No change.
Is something wrong with my MySQL adapter?
It looks as though your test schema is Rails 1.x somehow, whereas development is Rails 2. That would also explain the duplicate key: with an INTEGER version column, the timestamp 20081008010000 overflows MySQL's 32-bit INT and is clamped to its maximum of 2147483647, so every migration insert produces the same value. Perhaps you could set RAILS_ENV to test and run rake db:reset.
It looks like you skipped some steps when upgrading from Rails 1.x to 2.0.
Go through and read the upgrade notes:
http://www.slashdotdash.net/2007/12/03/rails-2-upgrade-notes/
And the release notes:
http://weblog.rubyonrails.org/2007/12/7/rails-2-0-it-s-done
They will tell you all the steps you need to follow, particularly regenerating all the scripts and migrating your database to the new system of timestamped migrations instead of incrementing migration IDs.