Rails: ActiveRecord::UnknownPrimaryKey exception - mysql

A ActiveRecord::UnknownPrimaryKey occurred in survey_response#create:
Unknown primary key for table question_responses in model QuestionResponse.
activerecord (3.2.8) lib/active_record/reflection.rb:366:in `primary_key'
Our application has been raising these exceptions and we do not know what is causing them. The exception happens in both production and test environments, but it is not reproducible in either. It seems to have some relation to server load, but even in times of peak loads some of the requests still complete successfully. The app (both production and test environments) is Rails 3.2.8, ruby 1.9.3-p194 using MySQL with the mysql2 gem. Production is Ubuntu and dev/test is OS X. The app is running under Phusion Passenger in production.
Here is a sample stack trace: https://gist.github.com/4068400
Here are the two models in question, the controller and the output of "desc question_responses;": https://gist.github.com/4b3667a6896b60383dc3
It most definitely has a primary key, which is a standard rails 'id' column.
Restarting the app server temporarily stops the exceptions from occurring; otherwise they occur for a stretch of anywhere from 30 minutes to 6 hours, starting as suddenly as they stop.
It always occurs on the same controller action, table and model.
Has anyone else run into this exception?

FWIW, I was getting this same intermittent error and after a heck of a lot of head-scratching I found the cause.
We have separate DBs per client, and somehow one of the client DBs had a missing primary key on the users table. This meant that when that client accessed our site, Rails updated its in-memory schema to that of the database it had connected to, with the missing primary key. Any future requests served by that Passenger app process (or any others that had been 'infected' by this client) which tried to access the users table then failed with the primary key error, regardless of whether the database for that particular request had a primary key.
In the end a fairly self-explanatory error, but difficult to pin down when you've got 500+ databases and only one of them was causing the problem, and it was intermittent.
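With hundreds of databases, one way to find the offender is to ask information_schema which schemas lack a primary key on the users table. This is a hypothetical diagnostic query, not something from the original post:

```sql
-- List every database whose users table has no PRIMARY KEY constraint.
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_name = 'users'
  AND c.constraint_name IS NULL;
```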

I got this problem because my workers used a shared connection to the database, though I was on Unicorn rather than Passenger.
I know that Passenger reconnects by default, but maybe you have some more complicated logic, such as connections to a number of databases. In that case you need to re-establish all of those connections after forking.
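For Unicorn, the usual fix is to re-establish every connection in an after_fork hook. A minimal sketch follows; the ReportingRecord class is an illustrative stand-in for any extra connection your app holds, not something from the original post:

```ruby
# config/unicorn.rb
after_fork do |server, worker|
  # Re-establish the default ActiveRecord connection for this worker...
  ActiveRecord::Base.establish_connection

  # ...and any additional database connections the app maintains, e.g.:
  # ReportingRecord.establish_connection(:reporting)
end
```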

This same thing happened to me. I had a composite primary key in one of my table definitions, which caused the error. It was further compounded because the annotate_models gem did not (but will shortly / does now) support annotation of composite primary keys.
My solution was to make the id column the only primary key and add a constraint (not shown) for the composition. To do this you need to drop AUTO_INCREMENT on id if it is set, drop your composite primary key, then re-add both the primary-key status and AUTO_INCREMENT:
ALTER TABLE indices MODIFY id INT NOT NULL;
ALTER TABLE indices DROP PRIMARY KEY;
ALTER TABLE indices MODIFY id INT NOT NULL PRIMARY KEY AUTO_INCREMENT;

On a PostgreSQL database:
ALTER TABLE indices ALTER COLUMN id SET DATA TYPE INT;
ALTER TABLE indices ADD PRIMARY KEY (id);

Related

Foreign key in different database connection

While building my app, I came across a problem. I have some database tables with information that I want to reuse across different applications, mainly for authentication and user privileges.
That is why I decided to split my database into two: one for user data (data I will need for other applications) and another for application-related data (data I will need only for this one).
In some cases, I need to reference a foreign key from one database in another database. I had no problem doing so while the databases were on the same connection. I did it like so:
CREATE TABLE `database1`.`table1` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`foreign_key` int(10) unsigned DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `table1_foreign_key_foreign` (`foreign_key`),
CONSTRAINT `table1_foreign_key_foreign` FOREIGN KEY (`foreign_key`) REFERENCES `database2`.`table2` (`id`)
);
Now here is my problem. I am getting to know Docker and I would like to create a container for each database. If my understanding is correct, each container acts as a different connection.
Is it even possible to reference a foreign key on different database connection?
Is there another way of referencing a foreign key from one Docker container on another?
Any suggestions or comments would be much appreciated.
Having a foreign key cross database boundaries is a bad idea for multiple reasons.
Scaling out: You are tying the databases to the same instance. Moving a database to a new instance becomes much more complicated, and you definitely do not want to end up with a FK constraint running over a linked server. Please, no. Don't.
Disaster Recovery: Your DR process has a significant risk. Are your backups capturing the data at the exact same point in time? If not, there is the risk that the related data will not match after a restore. Even a difference of a few seconds can invalidate the integrity of the relationship.
Different subsystems: Each database requires resources. Some are explicit, others are shared, but there is overhead for each database running in your instance.
Security: Each database has its own security implementation. Different logins and access permissions. If a user in your DATA database needs to lookup a value against the USER database, you'll need to manage permissions in both. Segregating the data by database doesn't solve or enhance your security, it just makes it more complicated. The overhead to manage the security for the sensitive data doesn't change, you'll still need to review and manage users and permissions based on the data (not the location of the data). You should be able to implement exactly the same security controls within the single database.
No, that is not possible. You cannot create a FK to a different database instance (or another Docker container, in your case).
You can try to make this check at the application level.
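A self-contained sketch of such an application-level referential check follows. In a real app, the referenced ids would come from a query against the second connection (e.g. selecting ids from table2 in the database2 container); here they are passed in directly so the logic stands on its own:

```ruby
require 'set'

# Application-level stand-in for a cross-database foreign key check.
class CrossDbReferenceCheck
  def initialize(referenced_ids)
    # Ids known to exist in the referenced table on the other connection.
    @referenced_ids = Set.new(referenced_ids)
  end

  # A nil foreign key is allowed (the column above is nullable);
  # otherwise the key must exist in the referenced table.
  def valid?(foreign_key)
    foreign_key.nil? || @referenced_ids.include?(foreign_key)
  end
end
```

Validation like this runs on every write in the application, so unlike a real FK constraint it cannot guard against races or writes that bypass the app.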

Rails 4 alternate primary key with MySQL

I have a strange problem I just cannot figure out.
I want to use the clustering ability of MySQL to store related records beside each other on disk. MySQL (InnoDB) clusters by the primary key of the table, which for a default Rails model is id.
However, for a lot of tables, it may make sense for the primary key of the table to be, for example, user_id, subscription_id, clustering the related records beside each, and making for a very efficient lookup when you ask the database for all of a user's subscriptions.
To do this, I created a mysql table like:
execute('create table subscriptions (
id integer not null auto_increment,
user_id integer not null,
feed_id integer not null,
folder_id integer,
created_at datetime,
updated_at datetime,
primary key (user_id, feed_id),
key id (id)
) engine=InnoDB default charset=utf8');
Notice that my PK is user_id, feed_id but I still have the ID column present, and I want rails to still use that as what it believes is the PK for the table.
First off, this didn't work at all until I set:
class Subscription < ActiveRecord::Base
self.primary_key = 'id'
...
end
Now comes the strange part.
When I run my tests, I get a strange error:
Mysql::Error: Field 'id' doesn't have a default value: INSERT INTO `subscriptions`
However - if I stick the application in development mode and do operations through the webpage, it works just fine.
After a lot of googling, I found a monkey patch to stop Rails setting MySQL into a stricter mode:
class ActiveRecord::ConnectionAdapters::MysqlAdapter
private
alias_method :configure_connection_without_strict_mode, :configure_connection
def configure_connection
configure_connection_without_strict_mode
strict_mode = "SQL_MODE=''"
execute("SET #{strict_mode}", :skip_logging)
end
end
If I add this, my test suite appears to work (for most tests, but not all), but any models that get created have an ID of zero.
Again in production mode, through the webpage things work just fine, and the models get an auto_increment ID as expected.
Has anyone got any ideas on what I can do to make my test suite work correctly in this setup?
I figured out what is going on.
What I did not remember, is that the development database is created by running migrations against the database, which also generates the schema.rb file. The schema.rb file is then used to load the test database.
So while my development database looked as I expected, the test database looked different - it would seem that the code which generates the schema.rb file cannot understand the database format I created and does not create a schema.rb that reflects my migrations correctly.
If I load my test database with:
$ rake db:migrate RAILS_ENV=test
And then run my test suite with:
$ rake test:all
Things work correctly. This is because the test:all task does not reload the database before running the test suite.
So what I described in the question to create an alternative primary key while maintaining the rails ID key works, except for the schema.rb part.
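One standard workaround (not mentioned in the original answer, but grounded in Rails' own configuration options) is to switch the schema dump to SQL, so the test database is loaded from structure.sql instead of a schema.rb that cannot represent the custom primary key:

```ruby
# config/application.rb
config.active_record.schema_format = :sql
```

With this setting, the test database is prepared from a database-native structure dump, which preserves DDL that schema.rb cannot express.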

Cayenne "resets" primary key value?

I am using Cayenne to add records to a MySQL database, and I am seeing some strange behavior.
When I run my application, I create a DataContext, perform a series of adds, then close the application. This works out well, because I am using an integer for a primary key, and when I add a record to the database, the key automatically increments. For some reason, it starts at 200 for the first record, then goes to 201 for the second record, etc.
If, however, I stop the application, then run it again, the primary key starts at 200 again! This, of course, causes an exception to be thrown because a new record ends up having a duplicate primary key. It is looking like when I create a new object using the DataContext's newObject() after starting my application, Cayenne does not "remember" how far the primary key was incremented when the application was previously run.
Does anyone know what is causing this reset of the primary key values, and (more importantly) how to stop it from happening? Or have I found a bug in the current version of Cayenne? I am using Version 3.0.2.
Someone please advise...
The last used PK for a given table is stored in a special table called AUTO_PK_SUPPORT. Please check the contents of this table between restarts of your app. Also check your application's Cayenne logs for reads and writes to AUTO_PK_SUPPORT. This should give you an idea of what's happening.
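Assuming the default layout of that bookkeeping table (column names may differ between Cayenne versions), the check is a simple query run before and after a restart:

```sql
-- Compare this output across application restarts; NEXT_ID should only grow.
SELECT TABLE_NAME, NEXT_ID FROM AUTO_PK_SUPPORT;
```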
Aside from that you might switch to auto-increment PK (see "Primary Key Provided by Database" section here). MySQL supports auto-incremented PK columns and if you have an option of altering the schema, this IMO is the cleanest PK generation strategy out of all available. (And it doesn't require AUTO_PK_SUPPORT).

Databases allow bad foreign keys from Rails Fixtures

I am using Rails Fixtures to load some test data into my database, and I accidentally introduced a foreign key that was out of range.
To my surprise, the database accepted it despite having referential integrity constraints (that work). I tried with PostgreSQL and with MySQL InnoDB, and both allowed it.
Example:
Given a "Flavours" table with a numeric primary key (id) and 5 entries (1 to 5), I can introduce bad data by doing:
Icecream_1:
name: my ice cream
flavour_id: 6
How is it possible that fixture loading goes around my database constraints?
Thank you.
Here are two tables. With 200 user_types (fake data), I was able to introduce a user with user_type_id 201, but only from fixtures; pgAdmin forbids it.
CREATE SEQUENCE user_types_id_seq;
CREATE TABLE user_types (
id SMALLINT
NOT NULL
DEFAULT NEXTVAL('user_types_id_seq'),
name VARCHAR(45)
NOT NULL
UNIQUE,
PRIMARY KEY (id));
CREATE SEQUENCE users_id_seq;
CREATE TABLE users (
id BIGINT
NOT NULL
DEFAULT NEXTVAL('users_id_seq'),
user_type_id SMALLINT
NOT NULL
REFERENCES user_types (id) ON DELETE CASCADE ON UPDATE CASCADE,
PRIMARY KEY (id));
---------
Fixture
<% for i in (1..201) %>
user_<%= i %>:
id: <%= i %>
user_type_id: <%= i %>
<% end %>
And as I said, both InnoDB and PostgreSQL accepted the bad key.
Thanks
PostgreSQL doesn't accept corrupt data, don't worry. In MySQL it all depends on the engine (it must be InnoDB) and the (connection) settings for the parameter foreign_key_checks.
What do your tables and constraints look like? Check pgAdmin (or some other client) and dump the relevant piece of the data model over here, then we can help you out.
pgAdmin forbids it.
No, your PostgreSQL database forbids it. pgAdmin is just a client and it only sends a query to the database. The database does some checks, FK got violated and returns an error.
Looks like you're working on the wrong database (no FK's or MySQL with the wrong engine and/or settings), PostgreSQL works fine when having a FK.
I agree with Frank. Your test database for PostgreSQL is most probably not setup correctly. You either forgot to create the FK constraints or you disabled them.
The fact that you got an error in pgAdmin indicates that you are working with a different database from within pgAdmin and your test script.
As far as MySQL is concerned I'd look for a wrong default engine in the test database or if you also forgot to create the FK constraints there (note that you will not get an error if you create a FK constraint with an engine that doesn't support referential integrity on MySQL)
Check the table definitions in your test database. IIRC, "rake db:test:prepare" does not maintain fidelity when creating the tables in the test database.
Thank you all for answering.
Someone at the Ruby forum figured it out. It looks like the triggers which enforce RI are disabled prior to the loading of the fixtures.
I don't know why, but it solves the mystery.
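For reference, Rails wraps fixture insertion in the connection adapter's disable_referential_integrity method. Roughly speaking (the exact statements depend on the adapter and Rails version), the effect is:

```sql
-- PostgreSQL adapter: disable the FK-enforcing triggers per table
ALTER TABLE users DISABLE TRIGGER ALL;
-- ... fixtures are inserted here ...
ALTER TABLE users ENABLE TRIGGER ALL;

-- MySQL adapter: turn off FK checks for the session
SET FOREIGN_KEY_CHECKS = 0;
-- ... fixtures are inserted here ...
SET FOREIGN_KEY_CHECKS = 1;
```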

Troubleshooting MySQLIntegrityConstraintViolationException

I have a closed-source upgrade application which migrates my database from an old format to a new format (creates new tables and migrates data from the old to new tables).
The application crashes with a MySQLIntegrityConstraintViolationException. It doesn't give me the name of the table with the primary key violation or the contents of the broken SQL query.
Is there any MySQL server option that I can switch to give me more troubleshooting information? Maybe the text of the failed query or the name of the primary key constraint which is violated?
You can enable the general log file: http://dev.mysql.com/doc/refman/5.1/en/query-log.html . This way it might be possible to see at which point the server stops processing the queries.
You can also run the MySQL command SHOW PROCESSLIST to see which queries are being processed at that time.
Also have a look into all other application specific error logs.
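On MySQL 5.1 and later, the general log can also be toggled at runtime without a server restart; the file path below is only an example:

```sql
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';
-- ... reproduce the failing migration ...
SET GLOBAL general_log = 'OFF';
```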
A first try could be to disable foreign key checks during migration:
SET foreign_key_checks = 0;
A first guess would be that the old server allowed 0 as a primary key value, whilst the new one does not.