I have a migration for a model with a binary field meant to store a file that can be bigger than 10 MB:
class CreateNewModel < ActiveRecord::Migration
  def change
    create_table :new_model do |t|
      ...
      t.binary :data, limit: 16777216
      ...
    end
  end
end
With the limit information, the migration creates a longblob column in a MySQL or MariaDB database, as seen in How do you get Rails to use the LONGBLOB column in mysql?.
The migration seems to work fine on a MariaDB database: data has the longblob type. However, loading directly from the schema gives data a blob type instead of longblob, which means the rake db:setup command is no longer usable, because the schema doesn't reflect the database I want.
This seems pretty evident when one looks at the db/schema.rb file:
create_table "new_model", force: :cascade do |t|
...
t.binary "data"
...
t.datetime "created_at", null: false
t.datetime "updated_at", null: false
end
There is no limit information, so loading the schema can only produce blobs, not longblobs.
Why doesn't the limit information get written in the schema?
I don't want to change the schema manually, as that would mean redoing the change after every migration (since migrations regenerate the schema file). What other solutions do I have? Is there a way to force the limit from the migration into the schema?
I've tried using the shorthands described in https://github.com/rails/rails/pull/21688 but they don't seem to exist in Rails 4.2.6.
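One possible workaround, sketched here under the assumption of a standard Rails 4.2 setup, is to switch the schema format to SQL so that adapter-specific details such as longblob survive a dump and reload:
# config/application.rb
# Dump db/structure.sql (raw SQL) instead of db/schema.rb, so MySQL/MariaDB-specific
# column types such as longblob are preserved verbatim.
config.active_record.schema_format = :sql
With this setting, rake db:setup loads db/structure.sql through the database client, so the longblob column type should be kept; the trade-off is that the dump is no longer database-agnostic.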
Related
I wrote a migration with the following:
class CreateTableSomeTable < ActiveRecord::Migration[5.1]
  def change
    create_table :some_tables do |t|
      t.references :user, foreign_key: true
      t.references :author, references: :user, foreign_key: true
      t.text :summary
    end
  end
end
It is a basic migration that creates a database table. However, when I run rails db:migrate, a very odd error message aborts the migration:
Mysql2::Error: Table 'my_database.some_tables' doesn't exist: SHOW FULL FIELDS FROM 'some_tables'
It is as if the error is saying it can't create the table because the table doesn't exist yet, which doesn't make sense for a migration whose whole job is to create it.
Things I have looked at and tried:
reviewed the database.yml which seems fine. Nothing has changed, and I have recently run other migrations just fine (though no migrations that created database tables)
ran bundle to ensure all gems were installed
deleted the schema.rb file, recreated the database with data from another copy, and I ran rake db:schema:dump to recreate the schema.rb file. I attempted to run the migration again and still got the same error.
I am using Rails 5.1.1 and mysql2 0.4.6.
Any tips on how I can get the migration to run?
I got a similar error when trying to create a new model that has a reference to an existing model that was created before migrating to Rails 5.1.
Although the error message was not very clear about it, in my case the problem turned out to be a data type mismatch between the primary key of the old model and the foreign key of the new model (MySQL does not permit that). This happens because, since Rails 5.1, the default data type of primary and foreign keys is bigint, but the primary key of the old model was still integer.
I solved this by converting all the primary and foreign keys of the current models to bigint, so I can use the new Rails defaults and forget about it.
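A minimal sketch of such a conversion, assuming an old users table and a posts.user_id foreign key (the table and column names here are only illustrative):
class ConvertKeysToBigint < ActiveRecord::Migration[5.1]
  def up
    # Widen the referencing foreign key first, then the primary key itself.
    # If a foreign key constraint exists between the two columns, it may need
    # to be removed and re-added around these changes.
    change_column :posts, :user_id, :bigint
    # auto_increment: true keeps MySQL's AUTO_INCREMENT on the primary key;
    # if your adapter version does not accept this option, fall back to a raw
    # ALTER TABLE via execute.
    change_column :users, :id, :bigint, auto_increment: true
  end

  def down
    change_column :users, :id, :integer, auto_increment: true
    change_column :posts, :user_id, :integer
  end
end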
A workaround could also be specifying an integer type for the new foreign keys so that they match the primary key type of the old models. Something like the following:
class CreateUserImages < ActiveRecord::Migration[5.1]
  def change
    create_table :user_images do |t|
      t.references :user, type: :integer, foreign_key: true
      t.string :url
    end
  end
end
The big change in ActiveRecord migrations 5.1 is that ids are now expected to be BIGINT instead of INT, so when you add a column referring to a table created before Rails 5.1, the new foreign key is created as BIGINT while the referenced primary key is still INT, hence the error.
The simplest fix is to modify your migration and give the reference column an integer type.
class CreateTableSomeTable < ActiveRecord::Migration[5.1]
  def change
    create_table :some_tables do |t|
      t.references :user, foreign_key: true, type: :integer
      t.references :author, references: :user, foreign_key: true
      t.text :summary
    end
  end
end
That should work.
I figured out a workaround, but it is still very puzzling to me.
The error message in the log file was not exactly pointing to the issue. For some reason, whether it is Rails 5.1.1 or mysql2 0.4.6, it doesn't like using references within the create_table block. Very odd, because that has worked for me in the past.
So I changed the migration from this:
class CreateTableSomeTable < ActiveRecord::Migration[5.1]
  def change
    create_table :some_tables do |t|
      t.references :user, foreign_key: true
      t.references :author, references: :user, foreign_key: true
      t.text :summary
    end
  end
end
To this:
class CreateTableSomeTable < ActiveRecord::Migration[5.1]
  def change
    create_table :some_tables do |t|
      t.integer :user_id
      t.integer :author_id
      t.text :summary
    end
  end
end
And it worked.
It is very odd, because references works just fine with sqlite3 (I tested this by generating a dummy app, running a scaffold command with a references column, and running rails db:migrate, and it all worked).
This drove me nuts; I think I was hitting a different cause than what others suggested. In my case it happened because my migration file name didn't exactly match the migration class inside it. For example, I had a migration file named 20171205232654_bonus.rb, but inside it the class was declared as class CreateBonus < ActiveRecord::Migration[5.1]. Once I changed the file name to 20171205232654_create_bonus.rb, everything worked.
This might have something to do with the fact that I've been creating migrations only, not full scaffolds, and maybe I did something wrong. I really don't know how I wound up with that mismatch.
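To make the required pairing explicit: the migration file name, minus the timestamp, must camelize to the class name defined inside it. A small illustration using the names from this answer:
# db/migrate/20171205232654_create_bonus.rb
# "create_bonus" camelizes to "CreateBonus", so Rails can find the class.
class CreateBonus < ActiveRecord::Migration[5.1]
  def change
    # ...
  end
end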
I am using Rails with the mysql2 adapter. I want to change all primary ids and foreign keys to 64-bit integers instead of the default 32-bit integers they currently are in my production database.
Is this possible on the fly or do I have to drop the database, change the structure and import the data again?
If there is a way to do it without dropping the database, even if it's a hack, it would be great to know.
Rails 5.1 already added a bigint type for migrations, so you can do this:
change_column :users, :id, :bigint
Source:
http://www.mccartie.com/2016/12/05/rails-5.1.html
While ActiveRecord does not support this directly, you can do it using execute:
class UpdateUserIdLimit < ActiveRecord::Migration
  def up
    # PostgreSQL
    execute('ALTER TABLE users ALTER COLUMN id SET DATA TYPE BIGINT')
    # MySQL
    execute('ALTER TABLE users MODIFY COLUMN id BIGINT(8) NOT NULL AUTO_INCREMENT')
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
For new tables you should be able to simply do:
def change
  create_table :users, id: false do |t|
    t.integer :id, limit: 8, primary_key: true
    t.string :first_name
    t.string :last_name
  end
end
Also, starting with Rails 5.1, primary keys are BIGINT by default.
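Going the other way, if you are already on Rails 5.1 and need a new table to keep a 32-bit primary key (for example, to match older integer foreign keys), the id type can be overridden per table; a small sketch with an illustrative table name:
class CreateLegacyWidgets < ActiveRecord::Migration[5.1]
  def change
    # Rails 5.1 defaults the id column to bigint; force the old integer type instead.
    create_table :legacy_widgets, id: :integer do |t|
      t.string :name
    end
  end
end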
I'm a RoR developer and I'm used to creating my own databases using scaffold and such. However, I was told to create a Rails app against an existing populated database. I searched and found this:
1. Write config/database.yml to reference your database.
2. Run "rake db:schema:dump" to generate db/schema.rb. Here's the
documentation:
$ rake -T db:schema:dump
...
rake db:schema:dump # Create a db/schema.rb file that can be
portably used against any DB supported by AR
3. Convert schema.rb into db/migrate/001_create_database.rb:
class CreateDatabase < ActiveRecord::Migration
  def self.up
    # insert schema.rb here
  end

  def self.down
    # drop all the tables if you really need
    # to support migration back to version 0
  end
end
However, I saw some comments saying they lost their data and some saying it worked. I can't take chances on losing the data in the database. Can someone please give me a more solid explanation, or a better solution?
I had the same problem but couldn't use the solution above.
My database was created before the Ruby on Rails app and didn't have the schema_migrations table, so when it ran this:
if ActiveRecord::Migrator.current_version > 2
  # force the next migration 002_create_database.rb to be skipped
  ActiveRecord::SchemaMigration.create(version: '2')
  # the version '2' above is the version of the file which is (002 becomes 2)
end
it returned false in the if statement and thus never skipped the create_database migration in my case.
I was looking for a way to run the migrations without creating the tables again while, at the same time, letting new coders run them and create the tables.
After some searching I found the table_exists? method, so:
I created the schema file using rake db:schema:dump
Then I created my first migration, 001_create_database.rb, and pasted in the schema from the db/schema.rb file
Finally, I edited the migration, adding an if statement before each create_table:
if !table_exists?("CarsTable")
  create_table "CarsTable" do |t|
    t.bigint "Owner", null: false
    t.string "Color", limit: 20, null: false
    t.string "Make", limit: 40, null: false
    t.string "Model", limit: 20, null: false
    t.string "Plate", limit: 10, null: false
  end
end
I tested this in two ways: with a previously created and populated database (the production case and my own case) and without any database (for new coders), and it worked both times.
Rails checks all migration files in db/migrate/ and verifies whether each of them is recorded in the "schema_migrations" table. Your code above will only work for a fresh copy of your code (i.e. if I clone your Rails project), because migrations are run in filename order. Since it is 00001_create_d..., it will be the first to be run on my end, followed by your other migration files.
You are right that you could then lose data if you migrate, because the schema code you have will be run after all of your migration files have already been migrated.
Now, since it is only you and the other developers already working on the project who cannot simply run rake db:migrate, while new cloners of your project will have no problem with your code above, you could do the following to force that 001_create_database.rb to already be part of your schema_migrations table, thereby skipping it, but not for new cloners (like myself).
IMPORTANT: back up your database before running the code below.
db/migrate/001_safeguard_create_database.rb
class SafeguardCreateDatabase < ActiveRecord::Migration
  def up
    # if the current migration version already has a created database
    if ActiveRecord::Migrator.current_version > 2
      # force the next migration 002_create_database.rb to be skipped
      ActiveRecord::SchemaMigration.create(version: '2')
      # the version '2' above is the version of the file which is (002 becomes 2)
    end
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
db/migrate/002_create_database.rb
class CreateDatabase < ActiveRecord::Migration
  def up
    # your schema.rb here
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
After creating these two files, try rake db:migrate. It should only process 001_safeguard_create_database and skip 002_create_database, because it is assumed that your current DB is already set up with it. The 002_create_database migration will then only be run for new project users who do not have a DB yet; for those users, these first two migration files will be run first, followed by all of your other migrations.
I recently started noticing that, following a deployment to production, I see this git diff in my db/schema.rb:
- t.boolean "published", limit: 1
+ t.boolean "published"
and
- t.boolean "visible", limit: 1, default: false
+ t.boolean "visible", default: false
Given that the Rails version is the same on both environments, is this just caused by the difference between MySQL versions, respectively 5.5.43 on production and 5.6.23 on development?
Has your Rails version changed? There was a recent change in Rails that could account for this: https://github.com/rails/rails/pull/19066
Basically, since MySQL doesn't have a boolean column type, Rails uses a TINYINT(1) column type for :boolean attributes, which was reflected when the schema was dumped to schema.rb. So far, so good. But then if one tried to load the same schema.rb into PostgreSQL it would fail because Postgres does have a BOOLEAN column type, but declaring a length for BOOLEAN columns is illegal. This bug was fixed by removing the limit: 1 option when dumping :boolean attributes from a MySQL database (it wasn't necessary anyway).
So if the last time your schema was dumped (which happens when you run migrations) you were on Rails 4.2.2 or earlier you would have gotten limit: 1 in schema.rb, and if you subsequently upgraded to 4.2.3 and dumped your schema again the limit: 1 would be gone.
This change doesn't have any effect except for fixing the aforementioned bug—your schema.rb will function exactly the same way it did before—so there's nothing to be concerned about.
I am using Rails 3.2.6 and MySQL 6.0.9 (but I have exactly the same error on MySQL 5.2.25).
When I create a new database (rake db:create) and then try to load the schema (rake db:schema:load), I get this error:
Mysql2::Error: Specified key was too long; max key length is 767 bytes: CREATE UNIQUE INDEX `unique_schema_migrations` ON `schema_migrations` (`version`)
After hours and hours of research I found these solutions:
1. Change MySQL variable innodb_large_prefix to true (or ON)
This didn't work. I tried it on my Linux server, my Mac and even on Windows - it just doesn't work.
2. Monkeypatch ActiveRecord::SchemaMigration.create_table
I do not need the version column to be 255 characters long (when it is UTF-8, it can take 4 * 255 = 1020 bytes and exceed the MySQL limit of 767 bytes for keys). I do not need it to be UTF-8 either, but all the other tables in the DB are UTF-8 and I have set utf8_czech_ci as the default collation.
The method that actually creates the schema_migrations table looks like this:
def self.create_table
  unless connection.table_exists?(table_name)
    connection.create_table(table_name, :id => false) do |t|
      t.column :version, :string, :null => false
    end
    connection.add_index table_name, :version, :unique => true, :name => index_name
  end
end
You can read the whole file on Github rails/rails
So I tried to add :limit => 100 to the t.column statement, but I did not succeed with this solution either. The problem is that I cannot make this patch load after the original is already in place. In other words, my patch loads before ActiveRecord::SchemaMigration, so it is overwritten.
When I put this in config/initializers/patches/schema_migration.rb:
require 'active_record/scoping/default'
require 'active_record/scoping/named'
require 'active_record/base'

module ActiveRecord
  class SchemaMigration < ActiveRecord::Base
    def self.create_table
      unless connection.table_exists?(table_name)
        connection.create_table(table_name, :id => false) do |t|
          t.column :version, :string, :null => false, :limit => 100
        end
        connection.add_index table_name, :version, :unique => true, :name => index_name
      end
    end
  end
end
It is loaded successfully, but then it is overwritten when the original ActiveRecord::SchemaMigration is loaded.
I tried messing around with ActiveSupport.on_load(:active_record), but that doesn't seem to work either.
Is there a way to load this file after the original ActiveRecord::SchemaMigration is in place and make this patch work?
Do you have any suggestions? I can clarify any part of this question if it doesn't make sense to you; just ask. I've been stuck on this for too long.
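One further workaround that has circulated for this exact error is to shrink the MySQL adapter's default VARCHAR length in an initializer, so that indexed string columns (including schema_migrations.version) fit under the 767-byte key limit. Treat this as a sketch under the assumption that the constant below exists and is mutable in your Rails version:
# config/initializers/mysql_string_limit.rb
# Assumption: AbstractMysqlAdapter::NATIVE_DATABASE_TYPES is present and mutable
# in your Rails version. 191 characters * 4 bytes (utf8mb4) = 764 bytes < 767.
require 'active_record/connection_adapters/abstract_mysql_adapter'

ActiveRecord::ConnectionAdapters::AbstractMysqlAdapter::NATIVE_DATABASE_TYPES[:string] =
  { :name => 'varchar', :limit => 191 }
Note that this shortens every string column created afterwards, not just schema_migrations.version, so columns that need longer strings must declare an explicit limit.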
A 767-byte key should work. Make sure you use utf8 encoding, and not utf16.
I had the same problem, and my mistake was that I had accidentally created a utf16 database.
I suggest you drop your database and recreate it with the following command:
mysql -u root -p -e "CREATE DATABASE {DB_NAME} DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;"
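If you would rather let Rails create the database itself, the encoding and collation can also be set in config/database.yml before running rake db:create (the database name below is illustrative):
# config/database.yml
production:
  adapter: mysql2
  database: my_app_production
  encoding: utf8
  collation: utf8_general_ci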
I had the same problem with a column named version, a varchar of length 2000:
class AddVersionToUsers < ActiveRecord::Migration
  def change
    add_column :users, :version, :string, limit: 2000
    add_index :users, :version
  end
end
I was using latin1 (1 character = 1 byte), but now I want to use utf8mb4 (1 character = 4 bytes).
Configuring your database like this, you can get indexes of up to 3072 bytes:
docker run -p 3309:3306 --name test-mariadb -e MYSQL_ROOT_PASSWORD=Cal1mero. -d mariadb:10.2 --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --innodb-large-prefix=1 --innodb-file-format=barracuda --innodb-file-per-table=1 --innodb-strict-mode=1 --innodb-default-row-format=dynamic
That is enough for latin1 (the key would be 2000 bytes), but for utf8mb4 it would be 8000 bytes. For such keys you have some options:
Add a column named hash_version and implement the index on that column (see the sketch after this list).
Consistent String#hash based only on the string's content
Make the string shorter; it should work, but that depends on your needs.
Or use a fulltext index in your migrations, like this:
class AddVersionToUsers < ActiveRecord::Migration
  def change
    add_column :users, :version, :string, limit: 2000
    add_index :users, :version, type: :fulltext
  end
end
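As referenced in the first option above, here is a rough sketch of the hash_version idea; the model, column names, and the choice of SHA-256 are illustrative assumptions, not from the original answer:
require 'digest'

class AddHashVersionToUsers < ActiveRecord::Migration
  def change
    # A SHA-256 hex digest is always 64 characters, so the index stays far below
    # the key-size limit no matter how long the version string itself is.
    add_column :users, :hash_version, :string, limit: 64
    add_index :users, :hash_version
  end
end

# Keep the hash in sync with the long version string in the model:
class User < ActiveRecord::Base
  before_save { self.hash_version = Digest::SHA256.hexdigest(version) if version }
end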
references:
https://mensfeld.pl/2016/06/ruby-on-rails-mysql2error-incorrect-string-value-and-specified-key-was-too-long/
https://codex.wordpress.org/Converting_Database_Character_Sets
https://dev.mysql.com/doc/refman/8.0/en/innodb-restrictions.html
https://docs.oracle.com/cd/E17952_01/mysql-5.7-en/innodb-restrictions.html
https://dba.stackexchange.com/questions/35821/possible-index-on-a-varchar-field-in-mysql