Can anybody help me? How can I update the MySQL database schema (entity, Doctrine YAML, etc.) after adding a new integer field?
I added a new field manually in MySQL, and now I get a semantic error like this:
Site\SiteBundle\Entity\Yorum has no field or association named yayin
Using cmd or a console, go to the root of the project and run:
php app/console doctrine:schema:update --force
and remember to clear your cache.
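In Symfony 2 that is usually done from the console as well, for example:
php app/console cache:clear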
NOTE: You shouldn't make the changes directly in MySQL. Add the field to the entity and remember to set the right annotations (field type, length, etc.). When you then run schema:update, it will update the SQL schema for you, and you'll probably avoid this kind of error message.
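For reference, a rough sketch of the kind of SQL that schema:update would generate for a new integer field, assuming the Yorum entity maps to a yorum table and the property is called yayin (the column name comes from the error message, the table name is a guess):
ALTER TABLE yorum ADD yayin INT DEFAULT NULL;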
Check the docs for more info on this subject
Do you have any relations between the two tables?
Because this error:
Site\SiteBundle\Entity\Yorum has no field or association named yayin
is thrown when there is a relation between tables that is not mapped as a field or association on the entity.
I am creating a schema-based multi-tenant application with Spring Boot (2.5.2), MySQL 8.0 and Liquibase 4.4.1.
I expect all my tenant tables and Liquibase's internal tables to be created inside a tenant schema that I specify before running the migration.
liquibase.setDefaultSchema(tenantSchema);   // sets the schema for changesets
liquibase.setLiquibaseSchema(tenantSchema); // sets the schema for Liquibase's internal tables
I can see that the Liquibase internal tables are created inside the tenant schema correctly.
But Liquibase always executes my changesets against the public schema. It looks like setting the default schema prior to the Liquibase migration has no effect. I am using SQL-based changesets and I don't want to switch to XML.
I found a workaround, but it is risky. I am looking for a better solution.
Workaround:
Before running the migration, I save the current tenant's schema in a JVM system property, as below:
System.setProperty("changeSetSchemaName", tenant.getSchema());
Then in my changeset, I read this schema value and explicitly set the schema name, as below:
--liquibase formatted sql
--changeset author-id:alter_xxx_table
use ${changeSetSchemaName}; --switch to tenant schema
alter table table_name add column new_column data_type; --execute sql
use master; --switch back to master schema
--rollback ....
This does work, but I am forced to change every single changeset, and if I forget to do it somewhere, unexpected things will happen. I could add a unit test that parses the changesets and verifies that the schema statements are present before and after the SQL (I will have to do this if I don't get any other solution).
It would be great if someone knows a risk-free solution to my problem.
You can use the two arguments liquibaseSchemaName and defaultSchemaName. liquibaseSchemaName is for Liquibase's internal tables, and defaultSchemaName specifies the default schema name to use for the database connection, i.e. for your changesets.
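As a rough sketch of how they can be passed (the schema name here is a placeholder; with Spring Boot the equivalent properties are spring.liquibase.default-schema and spring.liquibase.liquibase-schema):
# liquibase.properties (sketch)
defaultSchemaName=tenant_schema
liquibaseSchemaName=tenant_schema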
Hey everyone, I'm fairly new to PythonAnywhere. I'm running a Django app on Python 3.7. I've manually added the tables and fields I need via SSH in MySQL Workbench and ran [manage.py inspectdb > /app/models.py], then makemigrations, and when I run migrate I get this:
ERRORS:
auth.Group.permissions: (fields.E340) The field's intermediary table 'auth_group_permissions' clashes with the table name of 'app.AuthGroupPermissions'.
auth.User.groups: (fields.E340) The field's intermediary table 'auth_user_groups' clashes with the table name of 'app.AuthUserGroups'.
auth.User.user_permissions: (fields.E340) The field's intermediary table 'auth_user_user_permissions' clashes with the table name of 'app.AuthUserUserPermissions'.
If I remove the auth tables from models.py and try to migrate, I get:
File "/usr/lib/python3.7/site-packages/MySQLdb/connections.py", line 276, in query _mysql.connection.query(self, query) django.db.utils.OperationalError: (1071, 'Specified key was too long; max key length is 767 bytes')
From what I've read, it is conflicting with INSTALLED_APPS in settings.py, but I'm not sure where to go from here to get the migration to work properly.
The extra tables that you noticed were created by Django because you have the app that creates those tables enabled in your INSTALLED_APPS. From the table names involved, I'm guessing it's django.contrib.auth that's adding them. There are probably other tables that are being created that way, but they are just not clashing with the tables you've already created.
The second error you're getting is because you have tried to create a key on a column (or columns) that is too big to be a key. That may still be as a result of the auth_ tables clashing. For instance, the Django model may be specifying a key on the id of a table, expecting it to be an integer column, but your database has a large string column for id instead.
I suspect that you may continue to have issues as long as you try to keep Django's tables and your own tables in the same database. Django does, however, support multiple databases, so you could put your legacy tables in one database and Django's tables in another. That way, they have no way of stepping on each other.
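As a minimal sketch of what that could look like in settings.py (the 'legacy' alias, database names and credentials are placeholders, not anything taken from your setup):
# settings.py -- sketch only; aliases, names and credentials are placeholders
DATABASES = {
    'default': {  # Django's own tables (auth, sessions, django_migrations, ...)
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'django_db',
        'USER': 'dbuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
    },
    'legacy': {  # the tables you created by hand and captured with inspectdb
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'legacy_db',
        'USER': 'dbuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
    },
}
# A database router (the DATABASE_ROUTERS setting) can then send the legacy
# models to the 'legacy' database automatically.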
I managed to connect Drill and PostgreSQL, but even for a simple command like SHOW TABLES I am receiving:
org.apache.drill.common.exceptions.UserException: VALIDATION ERROR: Multiple entries with same key: campaign_items=JdbcTable {campaign_items} and campaign_items=JdbcTable {campaign_items}
I have two schemas, public and fdw, which contain a table with the same name, campaign_items. How can I force Drill to use the fully qualified name to avoid the confusion? Any other suggestions?
To use SHOW TABLES, you need to select the schema first:
First issue the USE command to identify the schema for which you want to view tables or views. For example, the following USE statement tells Drill that you only want information from the dfs.myviews schema:
USE dfs.myviews;
https://drill.apache.org/docs/show-tables/
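In your case, assuming the PostgreSQL storage plugin is registered as pg (the plugin name is whatever you configured, so treat it as a placeholder), that would be something like:
USE pg.fdw;
SHOW TABLES;
-- or reference the table with its fully qualified name directly:
SELECT * FROM pg.fdw.campaign_items;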
During my experimentation with my blog app (blogapp) in Django, I created two models (Category and Language) and connected them to another model (Post) using the following fields:
category = models.ManyToManyField(Category)
language = models.ForeignKey(Language)
Then it gave an error like THIS due to the lack of a default value. I tried to roll that back using an amalgam of THIS and THIS. Then I tried to add a default value using THIS. I got a "django.db.utils.OperationalError 1050, Table XXX already exists" error, then I tried THIS. I tried to revert the migrations by manually deleting the created migrations from the migrations folder. At some point I got a Django (1054, "Unknown column in 'field list'") error.
Finally I decided to revert back to my original starting place. When I connected to my MySQL database using python manage.py dbshell, I realized that my MySQL server still has two tables that should have been deleted, blogapp_category and blogapp_language. The server is working properly, but I keep getting the "Table XXX already exists" error when I try to add those models.
Dropping tables from MySQL seems to be the only option at the moment.
When I run
mysql> SHOW COLUMNS FROM blogapp_post;
I do not see any reference to language or category, i.e. no columns named language_id or category_id. I have two questions at the moment:
Is it safe to delete the tables manually using:
DROP TABLE blogapp_language;
DROP TABLE blogapp_category;
Will there be any negative effects?
Is there a way to freeze the database, like in git, so that when I revert to the old database, tables added by Django migrations are automatically dropped?
Delete the respective entries from the django_migrations table.
Delete the migrations folder from your app.
Delete the tables created by the app.
Now run makemigrations and migrate (a sketch of the whole sequence follows below).
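Roughly, assuming the app label is blogapp as in the question (run the SQL part from python manage.py dbshell):
DELETE FROM django_migrations WHERE app = 'blogapp';  -- forget the app's recorded migrations
DROP TABLE blogapp_category;
DROP TABLE blogapp_language;
Then, after removing blogapp/migrations/ and re-adding the models:
python manage.py makemigrations blogapp
python manage.py migrate blogapp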
You can revert using git, but there will be errors and data corrections to deal with.
I have a Symfony 2 front end to a MySQL database. I changed my database by dropping a column on one of my tables. Within the app I have done the following to reflect this change:
removed the field from the form class
removed the field, getter & setter and annotations from the entity class
removed the field from the templates
All the pages display correctly, but when I submit after editing or creating a new record, I get the following error:
SQLSTATE[42S22]: Column not found: 1054 Unknown column 'trial_abbrev' in 'NEW'
500 Internal Server Error - PDOException
I guess I must have missed something, but I suspect it is in the depths of the Doctrine magic stuff that happens automatically.
Any suggestions much appreciated.
Not an answer, but the comments on the question got confusing and somewhat disjointed, so back to basics. Run:
app/console doctrine:schema:update --dump-sql
It should say:
Nothing to update - your database is already in sync with the current entity metadata.
Please indicate what response you get and we can work from there.
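If instead it prints pending statements, they will look roughly like the line below (the table name is only an illustration, not taken from your schema), and that would mean the entity metadata and the database still disagree:
ALTER TABLE trial DROP trial_abbrev;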