Migrate Django / MySQL foreign key to accept null values

Have a Django / MySQL set up. There's a model, Survey, which currently looks like...
class Survey(models.Model):
company = models.ForeignKey('Company')
I want to set up the model, so company can be a null value:
company = models.ForeignKey('Company', blank=True, null=True)
However, I'm not sure what I should do on the MySQL side to keep all the existing constraints and models intact. Do I just alter the column through the console to accept null values? It's a live database, so I don't want to experiment too much (my development environment uses SQLite3).

Update your model so that blank=True, null=True. Then run the sqlall command on your production server (so that it gives the output for MySQL)
./manage.py sqlall myapp
Find the CREATE TABLE statement. This will show the new definition for the company_id column.
CREATE TABLE `myapp_survey` (
...
`company_id` integer
...
Then, in your database shell, modify the column to accept null values using the ALTER TABLE command.
ALTER TABLE myapp_survey MODIFY company_id integer NULL;
Be careful, and consider whether you want to run MySQL in your development environment as well. Do you really want to be copying and pasting commands from Stack Overflow into your live DB shell without testing them first?
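To double-check the change before your app touches it, a quick sanity check in the MySQL shell is (assuming the default Django column name company_id):
SHOW COLUMNS FROM myapp_survey LIKE 'company_id';
-- The Null column in the output should now read YES.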

Related

Update a table (that has relationships) using another table in SSIS

I want to be able to update a specific column of a table using data from another table. Here's what the two tables look like, along with the DB types and SSIS components used to get each table's data (btw, both ID and Code are unique).
Table1(ID, Code, Description) [T-SQL DB accessed using ADO NET Source component]
Table2(..., Code, Description,...) [MySQL DB accessed using ODBC Source component]
I want to update the column Table1.Description using the Table2.Description by matching them with the right Code first (because Table1.Code is the same as Table2.Code).
What I tried:
Doing a Merge Join transformation using the Code column, but I couldn't figure out how to reinsert the result, because Table1 has relationships and I can't simply drop the table and replace it with the new one.
Using a Lookup transformation, but since the two tables are not of the same DB type, it didn't allow me to create the lookup table's connection manager (which in my case would be for MySQL).
I'm still new to SSIS, but any ideas or help would be greatly appreciated.
My solution is based on @Akina's comments. Although using a linked server would definitely have fit, my requirement is to make an SSIS package to take care of migrating some old data.
The first and last steps are SQL tasks, while Migrate ICDDx is the DFT that transfers the data into a staging table created during the first SQL task.
Here are the SQL commands that get executed during Create Staging Table:
DROP TABLE IF EXISTS [tempdb].[##stagedICDDx];
CREATE TABLE ##stagedICDDx (
ID INT NOT NULL,
Code VARCHAR(15) NOT NULL,
Description NVARCHAR(500) NOT NULL,
........
);
And here's the SQL command (based on @Akina's comment) for transferring from staged to final (inside Transfer Staged):
UPDATE [MyDB].[dbo].[ICDDx]
SET [ICDDx].[Description] = [##stagedICDDx].[Description]
FROM [dbo].[##stagedICDDx]
WHERE [ICDDx].[Code]=[##stagedICDDx].[Code]
GO
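The same update can also be written with an explicit join and table aliases, which some find easier to read; this is an equivalent T-SQL sketch, not from the original package:
UPDATE x
SET x.[Description] = s.[Description]
FROM [MyDB].[dbo].[ICDDx] AS x
INNER JOIN [##stagedICDDx] AS s ON x.[Code] = s.[Code];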
Here's the DFT used (both the T-SQL and MySQL sources return sorted output using ORDER BY Code, so I didn't have to insert Sort components before the Merge Join):
Note: you have to set up the connection manager to retain/reuse the same connection (the RetainSameConnection property) so that the temporary table doesn't get dropped before we transfer data into it. If all goes well, then after the Transfer Staged SQL task the connection is closed and the global temporary table is deleted.

Unable to create or change a table without a primary key - Laravel DigitalOcean Managed Database

I've just deployed my app to DigitalOcean using a Managed Database, and I'm getting the following error when calling php artisan migrate:
SQLSTATE[HY000]: General error: 3750 Unable to create or change a
table without a primary key, when the system variable 'sql_require_primary_key'
is set. Add a primary key to the table or unset this variable to avoid
this message. Note that tables without a primary key can cause performance
problems in row-based replication, so please consult your DBA before changing
this setting. (SQL: create table `sessions` (`id` varchar(255) not null,
`user_id` bigint unsigned null, `ip_address` varchar(45) null,
`user_agent` text null, `payload` text not null, `last_activity` int not null)
default character set utf8mb4 collate 'utf8mb4_unicode_ci')
It appears that Laravel migrations don't work when the MySQL variable sql_require_primary_key is set to true.
Do you have any solutions for that?
Since March 2022, you can configure MySQL and other managed databases by making a request to the DigitalOcean API.
Here's the reference: https://docs.digitalocean.com/products/databases/mysql/#4-march-2022
STEPS TO FIX THE ISSUE:
Step 1: Create an auth token to access the DigitalOcean API: https://cloud.digitalocean.com/account/api/tokens
Step 2: Get the database cluster ID by sending a GET request to the URL below, using the bearer token you just generated.
URL: https://api.digitalocean.com/v2/databases
Step 3: Send a PATCH request to the URL below with the bearer token and the payload.
URL: https://api.digitalocean.com/v2/databases/{YOUR_DATABASE_CLUSTER_ID}/config
payload: {"config": { "sql_require_primary_key": false }}
That's all. It worked flawlessly.
For more information, please refer to API DOCS:
https://docs.digitalocean.com/products/databases/mysql/#latest-updates
I was trying to fix this problem with an import to DO Managed MySQL using a mysqldump file from a WordPress installation. I found that adding this to the top of the file worked for my import.
SET @ORIG_SQL_REQUIRE_PRIMARY_KEY = @@SQL_REQUIRE_PRIMARY_KEY;
SET SQL_REQUIRE_PRIMARY_KEY = 0;
I then imported using JetBrains DataGrip and it worked without error.
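Since the first line saves the original setting into a user variable, you can also restore it at the end of the dump file (a small addition of mine, not in the original answer):
SET SQL_REQUIRE_PRIMARY_KEY = @ORIG_SQL_REQUIRE_PRIMARY_KEY;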
Add the following in your first migration, inside the Schema::create() callback:
\Illuminate\Support\Facades\DB::statement('SET SESSION sql_require_primary_key=0');
Just add
set sql_require_primary_key = off
to the top of your SQL file.
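To confirm the variable's current value before (and after) the import, you can run a standard MySQL check:
SHOW VARIABLES LIKE 'sql_require_primary_key';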
One neat solution is defined here. The idea is to add listeners to the migration events and turn sql_require_primary_key off before a migration runs and back on afterwards. This solves the problem where one is unable to modify the migration scripts, such as when they come from a library or a framework like Voyager.
<?php

namespace App\Providers;

use Illuminate\Database\Events\MigrationsStarted;
use Illuminate\Database\Events\MigrationsEnded;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Event;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    /**
     * Register any application services.
     *
     * @return void
     */
    public function register()
    {
        // check this one here https://github.com/laravel/framework/issues/33238#issuecomment-897063577
        Event::listen(MigrationsStarted::class, function () {
            if (config('databases.allow_disabled_pk')) {
                DB::statement('SET SESSION sql_require_primary_key=0');
            }
        });

        Event::listen(MigrationsEnded::class, function () {
            if (config('databases.allow_disabled_pk')) {
                DB::statement('SET SESSION sql_require_primary_key=1');
            }
        });
    }

    // rest of the class
}
For a bigger SQL file, you can do the same with sed (nano could take a week to open the file if it's close to 8 GB, lol):
First:
sed -i '1i SET SQL_REQUIRE_PRIMARY_KEY = 0;' db.sql
Second:
sed -i '1i SET @ORIG_SQL_REQUIRE_PRIMARY_KEY = @@SQL_REQUIRE_PRIMARY_KEY;' db.sql
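Each command inserts at line 1, so the second statement lands above the first; after both commands the top of db.sql reads, in order:
SET @ORIG_SQL_REQUIRE_PRIMARY_KEY = @@SQL_REQUIRE_PRIMARY_KEY;
SET SQL_REQUIRE_PRIMARY_KEY = 0;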
According to the MySQL documentation, this system variable controls "whether statements that create new tables or alter the structure of existing tables enforce the requirement that tables have a primary key", and its purpose is to avoid replication performance issues: "Enabling this variable helps avoid performance problems in row-based replication that can occur when tables have no primary key."
IMHO, there are two possible options to consider for your problem:
Add a primary key to this and every table in your migration, including temporary tables. This is the better and, I think, more convenient way to do it, since there is no drawback to having a primary key on each table.
Change your provider, because according to DigitalOcean, "We support only MySQL v8."
Also, here is the bug report.
I contacted DigitalOcean via a ticket to ask if they want to disable the requirement and they did the next day :)
So you can just ask them
Thanks for getting in touch with us!
I understand you will like to disable the primary requirement on your managed database. The primary requirement for your managed database ****** has been disabled
Unfortunately, we can't change the sql_require_primary_key value in the DigitalOcean MySQL database. Instead, you can set the id as the primary key just by adding primary().
When enabled, sql_require_primary_key has these effects:
Attempts to create a new table with no primary key fail with an error. This includes CREATE TABLE ... LIKE. It also includes CREATE TABLE ... SELECT, unless the CREATE TABLE part includes a primary key definition.
Attempts to drop the primary key from an existing table fail with an error, with the exception that dropping the primary key and adding a primary key in the same ALTER TABLE statement is permitted.
Dropping the primary key fails even if the table also contains a UNIQUE NOT NULL index.
Attempts to import a table with no primary key fail with an error.
The default value is OFF, but in your case it is currently ON, so you need to set it back to OFF.
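As a quick illustration of the first effect (a minimal sketch; the table names are made up):
SET sql_require_primary_key = ON;
CREATE TABLE no_pk_demo (a INT); -- fails with error 3750
CREATE TABLE with_pk_demo (a INT PRIMARY KEY); -- succeeds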
If you're importing in some SQL client, just run this query on that particular database before importing:
set sql_require_primary_key = off
Works fine for a DO managed MySQL database. Cheers!
Add this line to your migration file:
$table->increments('aid');
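For context, increments('aid') creates an auto-incrementing unsigned integer primary key, which in MySQL terms is roughly this column definition (my paraphrase of Laravel's behavior, not output from the post):
`aid` INT UNSIGNED NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`aid`)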

Schema not honored in queries

SnappyData v.0.5
I cannot seem to create row tables for a specific schema. This is important in a schema-based multi-tenant application where each tenant has its own schema.
However, when I create my tables using RowStore DDL, they are queryable in all schemas of the DB.
Here were my steps. Did I do something wrong?
ubuntu#ip-172-x-x-x:~$ snappy-shell
SnappyData RowStore 1.5.0 GA
snappy> connect client '172.x.x.x:1527';
Using CONNECTION0
snappy> set schema A;
0 rows inserted/updated/deleted
snappy> run '/home/ubuntu/data/ddl/create_row_tables.sql';
snappy> DROP TABLE IF EXISTS road;
0 rows inserted/updated/deleted
snappy>
CREATE TABLE road
(
road_id VARCHAR(64) NOT NULL,
name VARCHAR(64) NOT NULL,
CONSTRAINT road_PK PRIMARY KEY (road_id)
)
PERSISTENT;
0 rows inserted/updated/deleted
In DBVisualizer using JDBC, I have the following schemas: A, APP, NULLID, Q, SQLQ, etc.
When I change DBVisualizer to point to a specific schema, and run:
select * from road;
The query returns zero rows on ALL SCHEMAS. I would expect a 'Table not found: ROAD' error on all schemas except "A". What do I need to do to create the tables only in a specific schema?
The schema integration of the store with Spark metadata had some issues which have been fixed in recent builds. As of the released version, you will need to use fully qualified names like:
create table a.road ...
select * from a.road
Btw, if you run the cluster as a pure rowstore (using "snappy-start-all.sh rowstore"), then schema should work as expected.
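For example, the DDL from the question rewritten with a qualified name (same columns as above, just prefixed with the schema):
CREATE TABLE a.road
(
road_id VARCHAR(64) NOT NULL,
name VARCHAR(64) NOT NULL,
CONSTRAINT road_PK PRIMARY KEY (road_id)
)
PERSISTENT;
SELECT * FROM a.road;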

Using south to migrate database table

I was not using South. Now I want to add a couple of columns. Am I screwed?
(env)noah:code broinjc$ ./manage.py schemamigration reports --initial
Creating migrations directory at '/Users/broinjc/esp/code/reports/migrations'...
Creating __init__.py in '/Users/broinjc/esp/code/reports/migrations'...
+ Added model reports.Classroom
+ Added model reports.Student
+ Added model reports.SurveySet
+ Added model reports.Survey
Created 0001_initial.py. You can now apply this migration with: ./manage.py migrate reports
(env)noah:code broinjc$ ./manage.py migrate reports
Running migrations for reports:
- Migrating forwards to 0001_initial.
> reports:0001_initial
FATAL ERROR - The following SQL query failed: CREATE TABLE "reports_classroom" ("id" integer NOT NULL PRIMARY KEY, "user_id" integer NOT NULL, "added" datetime NOT NULL, "updated" datetime NOT NULL, "name" varchar(30) NOT NULL, "am_or_pm" varchar(2) NOT NULL)
The error was: table "reports_classroom" already exists
! Error found during real run of migration! Aborting.
! Since you have a database that does not support running
! schema-altering statements in transactions, we have had
! to leave it in an interim state between migrations.
! You *might* be able to recover with: = DROP TABLE "reports_classroom"; []
= DROP TABLE "reports_student"; []
= DROP TABLE "reports_surveyset"; []
= DROP TABLE "reports_survey"; []
! The South developers regret this has happened, and would
! like to gently persuade you to consider a slightly
! easier-to-deal-with DBMS (one that supports DDL transactions)
! NOTE: The error which caused the migration to fail is further up.
Error in migration: reports:0001_initial
After seeing all that, I thought maybe I needed to update my models (making them inconsistent with the SQLite DB). So I updated them and then ran the same command, but with --auto instead of --initial...
(env)noah:code broinjc$ ./manage.py schemamigration reports --auto
? The field 'SurveySet.top_num' does not have a default specified, yet is NOT NULL.
? Since you are adding this field, you MUST specify a default
? value to use for existing rows. Would you like to:
? 1. Quit now, and add a default to the field in models.py
? 2. Specify a one-off value to use for existing columns now
... So I went ahead with option 2, and then proceeded to migrate...
(env)noah:code broinjc$ ./manage.py migrate reports
Running migrations for reports:
- Migrating forwards to 0002_auto__add_field_surveyset_top_num__add_field_surveyset_externalizer_ra.
> reports:0001_initial
FATAL ERROR - The following SQL query failed: CREATE TABLE "reports_classroom" ("id" integer NOT NULL PRIMARY KEY, "user_id" integer NOT NULL, "added" datetime NOT NULL, "updated" datetime NOT NULL, "name" varchar(30) NOT NULL, "am_or_pm" varchar(2) NOT NULL)
The error was: table "reports_classroom" already exists
! Error found during real run of migration! Aborting.
! Since you have a database that does not support running
! schema-altering statements in transactions, we have had
! to leave it in an interim state between migrations.
! You *might* be able to recover with: = DROP TABLE "reports_classroom"; []
= DROP TABLE "reports_student"; []
= DROP TABLE "reports_surveyset"; []
= DROP TABLE "reports_survey"; []
I'll try to explain what's going on so you can better understand how to do what you want yourself.
Prior to using South, you have some tables in your database which were generated from your models when you first ran syncdb.
If you change your model, say you add a field "my_field", Django will fail when trying to read/write to it, since the table doesn't contain a column named "my_field". You'd normally have to dump your entire table and recreate it with syncdb. I'm sure you don't want to do that, since you already have some data in your DB.
Say you want to make some changes without losing the data. First, you need to "convert" your app to south.
Basically, when you run schemamigration --initial, South will create a script (0001_initial.py) to replicate the current state of your models into a database.
If you run that script via manage.py migrate reports, it'll try to recreate all the tables you had initially, but in your case, since your DB already contains those tables, it'll scream at you saying the tables already exist:
FATAL ERROR - The following SQL query failed: CREATE TABLE "reports_classroom" ("id" integer NOT NULL PRIMARY KEY, "user_id" integer NOT NULL, "added" datetime NOT NULL, "updated" datetime NOT NULL, "name" varchar(30) NOT NULL, "am_or_pm" varchar(2) NOT NULL)
The error was: table "reports_classroom" already exists
To make South believe you have already applied that migration, use the --fake option.
manage.py migrate reports 0001 --fake
Which is like saying, go to the migration state 0001_initial (you only have to write the numeric part of the name), but don't actually apply the changes.
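Under the hood, --fake only records the migration as applied in South's bookkeeping table instead of executing any DDL; you can verify this afterwards with a plain SQL query (south_migrationhistory is the table South maintains):
SELECT app_name, migration, applied FROM south_migrationhistory;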
After doing that, say you add a new field "my_field_02" to one of your models. As before, Django is referencing a field that doesn't exist in your model's table. To create it without writing the SQL yourself, you do:
manage.py schemamigration reports --auto
Which will create a new migration called something like 0002_auto__add_my_field_02.py which you then need to apply via manage.py migrate reports. You could also say manage.py migrate reports 0002 to specify the migration state you want to go to, but by default South will try to apply all the following migrations (remember you're already at state 0001).
I highly recommend you read South's documentation and backup your production data prior to doing anything.
tl;dr Read this and backup your data.

Rails 4 alternate primary key with MySQL

I have a strange problem I just cannot figure out.
I want to use the clustering ability of MySQL to store related records beside each other on disk. MySQL clusters by the primary key on the table, which for a default Rails model is id.
However, for a lot of tables it may make sense for the primary key to be, for example, (user_id, subscription_id), clustering the related records beside each other and making for a very efficient lookup when you ask the database for all of a user's subscriptions.
To do this, I created a MySQL table like:
execute('create table subscriptions (
id integer not null auto_increment,
user_id integer not null,
feed_id integer not null,
folder_id integer,
created_at datetime,
updated_at datetime,
primary key (user_id, feed_id),
key id (id)
) engine=InnoDB default charset=utf8');
Notice that my PK is (user_id, feed_id), but I still have the id column present, and I want Rails to still use that as what it believes is the PK for the table.
First off, this didn't work at all until I set:
class Subscription < ActiveRecord::Base
self.primary_key = 'id'
...
end
Now comes the strange part.
When I run my tests, I get a strange error:
Mysql::Error: Field 'id' doesn't have a default value: INSERT INTO `subscriptions`
However - if I stick the application in development mode and do operations through the webpage, it works just fine.
After a lot of googling, I found a monkey patch to stop Rails setting MySQL into a stricter mode:
class ActiveRecord::ConnectionAdapters::MysqlAdapter
  private

  alias_method :configure_connection_without_strict_mode, :configure_connection

  def configure_connection
    configure_connection_without_strict_mode
    strict_mode = "SQL_MODE=''"
    execute("SET #{strict_mode}", :skip_logging)
  end
end
If I add this, my test suite appears to work (for most tests, but not all), but any models that get created have an ID of zero.
Again in production mode, through the webpage things work just fine, and the models get an auto_increment ID as expected.
Has anyone got any ideas on what I can do this make my test suite work correctly in this setup?
I figured out what is going on.
What I did not remember, is that the development database is created by running migrations against the database, which also generates the schema.rb file. The schema.rb file is then used to load the test database.
So while my development database looked as I expected, the test database looked different - it would seem that the code which generates the schema.rb file cannot understand the database format I created and does not create a schema.rb that reflects my migrations correctly.
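To make the failure concrete: the test database was most likely built from a schema.rb that couldn't express the composite primary key, so the table probably came out roughly like this (a hedged reconstruction, not output from the post):
create table subscriptions (
id integer not null, -- auto_increment and the composite primary key were lost
user_id integer not null,
feed_id integer not null,
folder_id integer,
created_at datetime,
updated_at datetime
) engine=InnoDB;
That would explain both symptoms: "Field 'id' doesn't have a default value" under strict mode, and an id of zero once strict mode is disabled.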
If I load my test database with:
$ rake db:migrate RAILS_ENV=test
And then run my test suite with:
$ rake test:all
Things work correctly. This is because the test:all task does not reload the database before running the test suite.
So what I described in the question to create an alternative primary key while maintaining the rails ID key works, except for the schema.rb part.