Rails 4 alternate primary key with MySQL

I have a strange problem I just cannot figure out.
I want to use the clustering ability of MySQL to store related records beside each other on disk. MySQL (InnoDB) clusters rows by the table's primary key, which for a default Rails model is id.
However, for a lot of tables it may make sense for the primary key to be, for example, (user_id, subscription_id), clustering the related records beside each other and making for a very efficient lookup when you ask the database for all of a user's subscriptions.
To do this, I created a mysql table like:
execute('create table subscriptions (
id integer not null auto_increment,
user_id integer not null,
feed_id integer not null,
folder_id integer,
created_at datetime,
updated_at datetime,
primary key (user_id, feed_id),
key id (id)
) engine=InnoDB default charset=utf8');
Notice that my PK is (user_id, feed_id), but I still have the id column present, and I want Rails to keep using id as what it believes is the PK for the table.
First off, this didn't work at all until I set:
class Subscription < ActiveRecord::Base
self.primary_key = 'id'
...
end
Now comes the strange part.
When I run my tests, I get a strange error:
Mysql::Error: Field 'id' doesn't have a default value: INSERT INTO `subscriptions`
However - if I stick the application in development mode and do operations through the webpage, it works just fine.
After a lot of googling, I found a monkey patch to stop Rails from putting MySQL into strict mode:
class ActiveRecord::ConnectionAdapters::MysqlAdapter
private
alias_method :configure_connection_without_strict_mode, :configure_connection
def configure_connection
configure_connection_without_strict_mode
strict_mode = "SQL_MODE=''"
execute("SET #{strict_mode}", :skip_logging)
end
end
If I add this, my test suite appears to work (for most tests, but not all), but any models that get created have an ID of zero.
Again in production mode, through the webpage things work just fine, and the models get an auto_increment ID as expected.
Has anyone got any ideas on what I can do to make my test suite work correctly in this setup?

I figured out what is going on.
What I did not remember, is that the development database is created by running migrations against the database, which also generates the schema.rb file. The schema.rb file is then used to load the test database.
So while my development database looked as I expected, the test database looked different - it seems the code that generates the schema.rb file cannot understand the table definition I created, and so does not produce a schema.rb that reflects my migrations correctly.
If I load my test database with:
$ rake db:migrate RAILS_ENV=test
And then run my test suite with:
$ rake test:all
Things work correctly. This is because the test:all task does not reload the database before running the test suite.
So what I described in the question to create an alternative primary key while maintaining the rails ID key works, except for the schema.rb part.
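If you'd rather not have to remember to migrate the test database by hand, Rails also supports dumping the schema as raw SQL instead of schema.rb. A minimal sketch (MyApp is a placeholder for your application's module name):

```ruby
# config/application.rb
# Use the SQL schema format: the test database is then loaded from
# db/structure.sql (a raw SQL dump that preserves the composite
# primary key) instead of the lossy schema.rb.
module MyApp  # placeholder: your application's module name
  class Application < Rails::Application
    config.active_record.schema_format = :sql
  end
end
```

With the :sql format, running migrations writes db/structure.sql (for MySQL, via the mysqldump tool), so the composite primary key should survive into the test database.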

Related

Cannot Add Foreign Key Constraint even though no foreign keys are added

Laravel 8
For some reason my staging environment will not allow me to run migrations.
I get the following error.
General error: 1215 Cannot add foreign key constraint (
SQL: create table `login_credentials` (
`id` char(36) not null,
`username` varchar(75) not null,
`confirmation_code` varchar(36) null,
`deleted_at` timestamp null,
`remember_token` varchar(100) null,
`created_at` timestamp null,
`updated_at` timestamp null)
default character set utf8mb4
collate 'utf8mb4_unicode_ci'
engine = InnoDB)
I do not have any foreign keys defined yet, and this appears to have recently started. I have unit tests written for my application so most of my development is done locally.
Rarely will I upload to the staging server to test things in more detail before I push to production. If it's a small, negligible change, then it'll go from local to production once unit tests pass and verify a clean build. So unfortunately, sometime between now and a few weeks ago, this became an issue.
My migration file for login_credentials:
class CreateLoginCredentialsTable extends Migration
{
public function up(): void
{
Schema::create('login_credentials', function (Blueprint $table) {
$table->uuid('id')->nullable(false)->unique()->primary();
$table->string('username',75)->unique();
$table->string('confirmation_code',36)->nullable()->unique();
$table->softDeletes();
$table->rememberToken();
$table->timestamps();
});
}
}
I have searched high and low on Stackoverflow and have tried the following.
Verify that my columns for user's id match everywhere as a uuid.
Verify the database is InnoDB
Verify no foreign keys being defined in any migration
Verify order of execution is correct on the migration file(s)
Updated composer to latest dependencies/versions.
Run all the artisan commands to clear cache, sessions, routes, views, etc.
Verify my models have $incrementing = false when the id of the table is a uuid.
Move the Laravel migration files into my migration folder, add Cashier::ignoreMigrations(); and Passport::ignoreMigrations(); to my AppServiceProvider, and update those migration files to ensure the user_id isn't bigInt or Int, and is defined as uuid.
Write a trait to ensure laravel knows the id is a uuid.
Running migration files in my localhost runs fine, without errors.
Any ideas?
Even removed all migration files except for the single login_credentials table and still get the same error? Am I going crazy??
My staging, and production databases are on the same mysql server.
Come to find out, you can have a foreign key constraint coming from a DIFFERENT DATABASE that has similar or identical table(s), and it can cause these kinds of issues on the database you're working on.
Answering this rather than deleting it for the rare occasion another person trips across this as a breadcrumb.
"Cannot add foreign key constraint "
CHECK OTHER DATABASES FOR CONSTRAINTS

Using south to migrate database table

I was not using south. Now I want to add a couple columns. Am I screwed?
(env)noah:code broinjc$ ./manage.py schemamigration reports --initial
Creating migrations directory at '/Users/broinjc/esp/code/reports/migrations'...
Creating __init__.py in '/Users/broinjc/esp/code/reports/migrations'...
+ Added model reports.Classroom
+ Added model reports.Student
+ Added model reports.SurveySet
+ Added model reports.Survey
Created 0001_initial.py. You can now apply this migration with: ./manage.py migrate reports
(env)noah:code broinjc$ ./manage.py migrate reports
Running migrations for reports:
- Migrating forwards to 0001_initial.
> reports:0001_initial
FATAL ERROR - The following SQL query failed: CREATE TABLE "reports_classroom" ("id" integer NOT NULL PRIMARY KEY, "user_id" integer NOT NULL, "added" datetime NOT NULL, "updated" datetime NOT NULL, "name" varchar(30) NOT NULL, "am_or_pm" varchar(2) NOT NULL)
The error was: table "reports_classroom" already exists
! Error found during real run of migration! Aborting.
! Since you have a database that does not support running
! schema-altering statements in transactions, we have had
! to leave it in an interim state between migrations.
! You *might* be able to recover with: = DROP TABLE "reports_classroom"; []
= DROP TABLE "reports_student"; []
= DROP TABLE "reports_surveyset"; []
= DROP TABLE "reports_survey"; []
! The South developers regret this has happened, and would
! like to gently persuade you to consider a slightly
! easier-to-deal-with DBMS (one that supports DDL transactions)
! NOTE: The error which caused the migration to fail is further up.
Error in migration: reports:0001_initial
After seeing all that, I thought maybe I needed to update my models (making them inconsistent with the SQLite db). So I updated them and then ran the same command, but with --auto instead of --initial...
(env)noah:code broinjc$ ./manage.py schemamigration reports --auto
? The field 'SurveySet.top_num' does not have a default specified, yet is NOT NULL.
? Since you are adding this field, you MUST specify a default
? value to use for existing rows. Would you like to:
? 1. Quit now, and add a default to the field in models.py
? 2. Specify a one-off value to use for existing columns now
... So I went ahead with option 2, and then proceeded to migrate...
(env)noah:code broinjc$ ./manage.py migrate reports
Running migrations for reports:
- Migrating forwards to 0002_auto__add_field_surveyset_top_num__add_field_surveyset_externalizer_ra.
> reports:0001_initial
FATAL ERROR - The following SQL query failed: CREATE TABLE "reports_classroom" ("id" integer NOT NULL PRIMARY KEY, "user_id" integer NOT NULL, "added" datetime NOT NULL, "updated" datetime NOT NULL, "name" varchar(30) NOT NULL, "am_or_pm" varchar(2) NOT NULL)
The error was: table "reports_classroom" already exists
! Error found during real run of migration! Aborting.
! Since you have a database that does not support running
! schema-altering statements in transactions, we have had
! to leave it in an interim state between migrations.
! You *might* be able to recover with: = DROP TABLE "reports_classroom"; []
= DROP TABLE "reports_student"; []
= DROP TABLE "reports_surveyset"; []
= DROP TABLE "reports_survey"; []
I'll try and explain what's going on so you better understand how to do what you want yourself.
Prior to using South, you had some tables in your database which were generated from your models when you first ran syncdb.
If you change your model, say you add a field "my_field", Django will fail when trying to read/write to it, since the table doesn't contain a column named "my_field". You'd normally have to dump your entire table and recreate it with syncdb. I'm sure you don't want to do that, since you already have some data in your DB.
Say you want to make some changes without losing the data. First, you need to "convert" your app to south.
Basically, when you run schemamigration --initial, South will create a script (0001_initial.py) to replicate the current state of your models into a database.
If you run that script via manage.py migrate reports, it'll try to recreate all the tables you had initially, but in your case, since your DB already contains those tables, it'll scream at you saying the tables already exist:
FATAL ERROR - The following SQL query failed: CREATE TABLE "reports_classroom" ("id" integer NOT NULL PRIMARY KEY, "user_id" integer NOT NULL, "added" datetime NOT NULL, "updated" datetime NOT NULL, "name" varchar(30) NOT NULL, "am_or_pm" varchar(2) NOT NULL)
The error was: table "reports_classroom" already exists
The way to make South believe you have already applied that migration is to use the --fake option:
manage.py migrate reports 0001 --fake
Which is like saying, go to the migration state 0001_initial (you only have to write the numeric part of the name), but don't actually apply the changes.
After doing that, say you add a new field "my_field_02" to one of your models. As before, Django is referencing a field that doesn't exist in your model's table. To create it without writing the SQL yourself, you do:
manage.py schemamigration reports --auto
Which will create a new migration called something like 0002_auto__add_my_field_02.py which you then need to apply via manage.py migrate reports. You could also say manage.py migrate reports 0002 to specify the migration state you want to go to, but by default South will try to apply all the following migrations (remember you're already at state 0001).
I highly recommend you read South's documentation and backup your production data prior to doing anything.
tl;dr Read this and backup your data.

Rails: ActiveRecord::UnknownPrimaryKey exception

A ActiveRecord::UnknownPrimaryKey occurred in survey_response#create:
Unknown primary key for table question_responses in model QuestionResponse.
activerecord (3.2.8) lib/active_record/reflection.rb:366:in `primary_key'
Our application has been raising these exceptions and we do not know what is causing them. The exception happens in both production and test environments, but it is not reproducible in either. It seems to have some relation to server load, but even in times of peak loads some of the requests still complete successfully. The app (both production and test environments) is Rails 3.2.8, ruby 1.9.3-p194 using MySQL with the mysql2 gem. Production is Ubuntu and dev/test is OS X. The app is running under Phusion Passenger in production.
Here is a sample stack trace: https://gist.github.com/4068400
Here are the two models in question, the controller and the output of "desc question_responses;": https://gist.github.com/4b3667a6896b60383dc3
It most definitely has a primary key, which is a standard rails 'id' column.
Restarting the app server temporarily stops the exceptions from occurring, otherwise they happen over a period of time 30 minutes - 6 hours in length, starting as suddenly as they stop.
It always occurs on the same controller action, table and model.
Has anyone else run into this exception?
FWIW, I was getting this same intermittent error and after a heck of a lot of head-scratching I found the cause.
We have separate DBs per client, and somehow one of the clients' DBs had a missing primary key on the users table. This meant that when that client accessed our site, Rails updated its in-memory schema to that of the database it had connected to, with the missing primary key. Any future requests served by that Passenger app process (or any others that had been 'infected' by this client) which tried to access the users table borked out with the primary key error, regardless of whether that particular database had a primary key.
In the end a fairly self-explanatory error, but difficult to pin down when you've got 500+ databases and only one of them was causing the problem, and it was intermittent.
I got this problem because my workers used a shared connection to the database. But I was on Unicorn.
I know that Passenger reconnects by default, but maybe you have some more complicated logic - connections to a number of databases, for example. So you need to reconnect all connections.
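The usual Unicorn fix for this is a pair of fork hooks; the sketch below assumes a standard single-database ActiveRecord setup:

```ruby
# config/unicorn.rb
# Forked workers must not share the master's database socket:
# disconnect in the master before forking, then open a fresh
# connection in each worker after the fork.
before_fork do |server, worker|
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
end
```

If you connect to several databases, each of those connections needs the same disconnect/reconnect treatment in the hooks.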
This same thing happened to me. I had a composite primary key in one of my table definitions, which caused the error. It was further compounded because the annotate models gem did not (but does now) support annotation of composite primary keys.
My solution was to make the id column the only primary key and add a constraint (not shown) for the composition. To do this you need to drop auto_increment on id if this is set, drop your composite primary key, then re-add both the primary status and autoincrement.
ALTER TABLE indices MODIFY id INT NOT NULL;
ALTER TABLE indices DROP PRIMARY KEY;
ALTER TABLE indices MODIFY id INT NOT NULL PRIMARY KEY AUTO_INCREMENT;
on postgres database
ALTER TABLE indices ALTER COLUMN id SET DATA TYPE INT;
ALTER TABLE indices ADD PRIMARY KEY (id);

Migrate Django / MySQL foreign key to accept null values

I have a Django / MySQL setup. There's a model, Survey, which currently looks like...
class Survey(models.Model):
company = models.ForeignKey('Company')
I want to set up the model so company can be a null value:
company = models.ForeignKey('Company', blank=True, null=True)
However, I'm not sure what I should do on the MySQL side to ensure all the existing constraints / models still hold. Do I just alter the column through the console to accept null values? It's a live database, so I don't want to experiment too much (my development environment uses SQLite3).
Update your model so that blank=True, null=True. Then run the sqlall command on your production server (so that it gives the output for MySQL):
./manage.py sqlall myapp
Find the CREATE TABLE statement. It will show the new definition for the company_id column:
CREATE TABLE `myapp_survey` (
...
`company_id` integer
...
Then, in your database shell, modify the column to accept null values using the ALTER TABLE command:
ALTER TABLE myapp_survey MODIFY company_id integer;
Be careful, and consider whether you want to run MySQL in your development environment as well. Do you really want to be copying and pasting commands from Stack Overflow into your live DB shell without testing them first?

Databases allow bad foreign keys from Rails Fixtures

I am using Rails Fixtures to load some test data to my database and accidentally I introduced a foreign key out of range.
To my surprise, the database accepted it despite having referential integrity constraints (that work). I tried with PostgreSQL and with MySQL InnoDB, and both allowed it.
Example:
Having in the database a table "Flavours" with a numeric primary key (id) and 5 entries (1 to 5), I can introduce bad data by doing:
Icecream_1:
name: my ice cream
flavour_id: 6
How is it possible that the fixtures loading go around my database constraints?
Thank you.
Here are two tables. Having 200 user_types (fake data), I was able to introduce a user with user_type_id 201, but only from fixtures; pgAdmin forbids it.
CREATE SEQUENCE user_types_id_seq;
CREATE TABLE user_types (
id SMALLINT
NOT NULL
DEFAULT NEXTVAL('user_types_id_seq'),
name VARCHAR(45)
NOT NULL
UNIQUE,
PRIMARY KEY (id));
CREATE SEQUENCE users_id_seq;
CREATE TABLE users (
id BIGINT
NOT NULL
DEFAULT NEXTVAL('users_id_seq'),
user_type_id SMALLINT
NOT NULL
REFERENCES user_types (id) ON DELETE CASCADE ON UPDATE CASCADE,
PRIMARY KEY (id));
---------
Fixture
<% for i in (1..201) %>
user_<%= i %>:
id: <%= i %>
user_type_id: <%= i %>
<% end %>
And as I said, both InnoDB and PostgreSQL accepted the bad key.
Thanks
PostgreSQL doesn't accept corrupt data, don't worry. In MySQL it all depends on the engine (it must be InnoDB) and the (connection) setting for the parameter foreign_key_checks.
What do your tables and constraints look like? Check pgAdmin (or some other client) and dump the relevant piece of the data model over here, then we can help you out.
pgAdmin forbids it.
No, your PostgreSQL database forbids it. pgAdmin is just a client and it only sends a query to the database. The database does some checks, FK got violated and returns an error.
Looks like you're working on the wrong database (no FK's or MySQL with the wrong engine and/or settings), PostgreSQL works fine when having a FK.
I agree with Frank. Your test database for PostgreSQL is most probably not setup correctly. You either forgot to create the FK constraints or you disabled them.
The fact that you got an error in pgAdmin indicates that you are working with a different database from within pgAdmin and your test script.
As far as MySQL is concerned I'd look for a wrong default engine in the test database or if you also forgot to create the FK constraints there (note that you will not get an error if you create a FK constraint with an engine that doesn't support referential integrity on MySQL)
Check the table definitions in your test database. IIRC, "rake db:test:prepare" does not maintain fidelity when creating the tables in the test database.
Thank you all for answering.
Someone at the Ruby forum figured it out. It looks like the triggers which enforce referential integrity are disabled prior to the loading of the fixtures.
I don't know why, but it solves the mystery.
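For reference, ActiveRecord's fixture loader wraps its inserts in disable_referential_integrity (the method name is real; the exact per-adapter SQL described in the comments is my recollection, so treat this as a sketch):

```ruby
# Sketch of what the fixture loader does around its INSERTs.
# On PostgreSQL this temporarily disables the FK-enforcing triggers
# (ALTER TABLE ... DISABLE TRIGGER ALL); on MySQL it sets
# FOREIGN_KEY_CHECKS = 0 for the session. Either way, out-of-range
# foreign keys in fixtures are inserted without raising an error.
ActiveRecord::Base.connection.disable_referential_integrity do
  # ... fixture rows are inserted here ...
end
```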