In Yii2, when I use the
yii migrate
command, I get a long list of remaining migrations. How can I run only one specific migration from the list without performing the others?
Run migrate/history to list the migrations that have been applied:
./yii migrate/history
Copy the name of the migration you want to return to later (let's say it is 'm160101_185401_initial_migration'). Save it somewhere because you're going to need it later.
Mark the migration history at the migration just before the one you need to run:
./yii migrate/mark m170101_185401_create_news_table
Run one migration:
./yii migrate 1
Reset migration history:
./yii migrate/mark m160101_185401_initial_migration
If the migration you want to run lives in its own module, you can also point the command at that directory only:
yii migrate --migrationPath=@app/modules/forum/
If some migrations' changes are already present in your database and you want to skip them, you can set their migration state without actually running them.
By "marking" the migrations you also ensure they will no longer be prompted again and will be regarded as "done".
You can read about marking in the Yii docs here.
To run a specific migration, you can mark (skip) the migrations up to just before the one you want to run.
You can mark migrations using one of the following commands:
Using the timestamp to specify the migration: yii migrate/mark 150101_185401
Using a string that can be parsed by strtotime(): yii migrate/mark "2015-01-01 18:54:01"
Using the full name: yii migrate/mark m150101_185401_create_news_table
Using a UNIX timestamp: yii migrate/mark 1392853618
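Putting the two answers together, a full run could look roughly like this (the migration names are just the examples used above; substitute your own):
./yii migrate/history                                 # note the last migration actually applied
./yii migrate/mark m170101_185401_create_news_table   # mark history up to just before the one you want
./yii migrate 1                                       # run only that one migration
./yii migrate/mark m160101_185401_initial_migration   # reset the history back to where it really was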
I've created models using the following command.
$ node_modules/.bin/sequelize model:create --name XYZ --attributes "....."
But by mistake I deleted the folder containing the migration scripts. Now I want to generate these scripts again.
I tried using sequelize migration:create, but it generated an empty file.
Please help.
What you need to do now, after you have generated a new migration file, is to copy the skeleton from here: http://docs.sequelizejs.com/manual/tutorial/migrations.html#skeleton and customize your migration using the functions that you need.
I'm getting started with JHipster and am attempting to initialize my data using Liquibase. I have added two entities via the JHipster yo task, added my two CSV files to the /resources/config/liquibase directory, and added the relevant loadData sections to my "added entity" changelog files to point at the CSVs. I had to update the MD5 hash in the databasechangelog table and the app is running, BUT the CSV files don't seem to get picked up via the loadData elements I added to the "added entity" XML files. No data is inserted. Any ideas how to go about running this down?
If you updated the MD5 hashes in the changelog table, I suspect your changelog files will not be run because Liquibase will think that they have already been run. I would rather set the MD5 hashes to null and restart the app.
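For example (database name and credentials are assumptions; point it at whatever database your JHipster app uses):
mysql -u root yourappdb -e "UPDATE DATABASECHANGELOG SET MD5SUM = NULL;"  # checksums are recomputed on the next start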
This was my solution:
1. Delete the row in databasechangelog
2. Delete the table
3. Restart the app
Liquibase regenerated the table from the changelog and loaded all the CSV data into the database.
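Concretely, something like this (the changelog filename and table name are only examples; match them to your entity, and adjust the MySQL credentials):
mysql -u root yourappdb <<'SQL'
-- forget the entity's changeset so Liquibase treats it as not yet run
DELETE FROM DATABASECHANGELOG WHERE FILENAME LIKE '%added_entity_Book%';
-- drop the table that changeset created, so it can be recreated and the CSV loaded again
DROP TABLE book;
SQL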
I hope I helped you :)
So I am writing acceptance tests for a single feature of a large app. I needed a lot of data for this and have a lot of scenarios to test, so I have pre-populated a MySQL database using Sequel Pro.
Obviously the appname_test database is in place for the other tests in the app. I would like to know how I could load a .sql file (a SQL dump of content) into this database at the start of my tests (before the first context statement).
I am new to development so this is completely new to me. Thanks in advance for any help.
Update:
I have used the yaml_db gem to dump the dev db (db:data:dump) and then load it into the test db (db:data:load RAILS_ENV=test). However, as soon as I run my specs the test db is wiped clean! Is there a way to run db:data:load RAILS_ENV=test from inside the spec file? I have tried:
require 'rake'
Rake::Task['bundle exec db:data:load RAILS_ENV=test'].invoke
but it says Don't know how to build task 'bundle exec db:data:load RAILS_ENV=test'
OK, so here is what I did to solve this.
I used the yaml_db gem and rake db:data:dump, which creates db/data.yml (a dump of the dev db).
I then had to write a library and rake task which converts the data.yml file into individual fixture files. I ran this new rake task once to create the fixture files.
Then, at the start of my spec, I call fixtures :all, which populates the test database with all the fixtures.
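In terms of commands, the flow was roughly the following (db:data:to_fixtures stands in for whatever you name your own conversion task; it is not a built-in):
bundle exec rake db:data:dump           # yaml_db: writes db/data.yml from the dev database
bundle exec rake db:data:to_fixtures    # one-off custom task: converts db/data.yml into fixture files
# then call `fixtures :all` at the top of the spec, before the first context block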
I have a working Puppet configuration to help install MySQL instances on a machine. My environment is set up such that there are multiple instances running on the same machine (with different configs/ports/etc.).
The basic setup I have in a manifest looks like:
File {
  owner  => $owner,
  group  => $group,
  before => Exec["mysql_install_db-${name}"],
}

exec { "mysql_install_db-${name}":
  creates   => "/var/lib/mysql/${name}/mysql",
  command   => "/usr/local/percona/mysql-${version}/usr/bin/mysql_install_db --user=mysql --datadir=/var/lib/mysql/${name} --basedir=/usr/local/percona/mysql-${version}/usr",
  logoutput => true,
}
This works perfectly fine; however, I'd now like to modify this install process to run some subsequent commands that bootstrap the fresh install with some internal stored procedures and do some other 'prep work' we do for a new install.
The commands would basically look like:
mysql -u user < /path/to/bootstrap1.sql
mysql -u user < /path/to/bootstrap2.sql
mysql -u user < /path/to/bootstrap3.sql
I only want these run once, after the mysql_install_db command, but under more or less the same "creates" guard.
I found some references to just passing an array to the command parameter, but those were in the form of a bug report about it not always working consistently.
What's the preferred method to accomplish something like this, and ensure the commands get executed in a deterministic order and only after mysql_install_db was run?
There are several ways to run an exec only once, and only after "mysql_install_db-${name}":
Just change the command line to add all the other commands (cmd1 && cmd2 && cmd3 ...).
Use the unless or onlyif parameters to check whether your stored procedures already exist before running the command. This can get complex, so another method is to have the command create a marker file ("command && touch /root/blah-${name}") and use the creates parameter.
Set refreshonly, and subscribe to the previous exec.
While all these solutions will work, you will not be respecting Puppet's spirit of describing the final state of your system. For database, user and grant settings you can use the puppetlabs-mysql module, which will let you describe them in a natural way. Stored procedures are another matter; they might be packaged more logically with the application deployment process (as they must be kept in sync with the application). If this is not possible, then you can make your SQL scripts idempotent, using conditionals.
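If you do go the exec route, the refreshonly/subscribe option could look roughly like this (a sketch only: the user, script paths and per-instance connection options are assumptions to adapt, and it assumes the instance is reachable when the scripts are applied):
exec { "mysql_bootstrap-${name}":
  command     => "mysql -u user < /path/to/bootstrap1.sql && mysql -u user < /path/to/bootstrap2.sql && mysql -u user < /path/to/bootstrap3.sql",
  path        => ["/usr/local/percona/mysql-${version}/usr/bin", '/usr/bin', '/bin'],
  provider    => shell,  # needed for the '<' redirection and '&&' chaining
  refreshonly => true,   # only runs when notified ...
  subscribe   => Exec["mysql_install_db-${name}"],  # ... i.e. on the run that actually created the datadir
  logoutput   => true,
}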
Has anyone done this? Is it an easy process? We're thinking of switching over for transactions, and because MySQL seems to be "crapping out" lately.
Converting MySQL database to Postgres database with Django
First, back up the data from the old MySQL database as JSON fixtures:
$ python manage.py dumpdata contenttypes --indent=4 --natural-foreign > contenttype.json
$ python manage.py dumpdata --exclude contenttypes --indent=4 --natural-foreign > everything_else.json
Then switch your settings.DATABASES to postgres settings.
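For reference, the Postgres block in settings.py would look something like this (database name, user and password are placeholders):
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'myapp',
        'USER': 'myapp',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}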
Create the tables in Postgresql:
$ python manage.py migrate
Now delete all the content that is automatically created during the migrate (Django content types, user groups, etc.):
$ python manage.py sqlflush | python manage.py dbshell
And now you can safely import everything and keep your PKs the same!
$ python manage.py loaddata contenttype.json
$ python manage.py loaddata everything_else.json
Tested with Django==1.8
I just used this tool to migrate an internal app and it worked wonderfully. https://github.com/maxlapshin/mysql2postgres
You can do that using Django serializers to output the data from MySQL's format into JSON and then back into Postgres. There are some good articles on the internet about that:
Migrating Django from MySQL to PostgreSQL the Easy Way
Move a Django site to PostgreSQL: check
I've never done it personally, but it seems like a combination of the dumpdata and loaddata options of manage.py would be able to solve your problem quite easily. That is, unless you have a lot of database-specific things living outside the ORM such as stored procedures.
I've not done it either.
I'd first follow this migration guide; there is a MySQL section which should take care of all your data. Then, in Django, just switch from MySQL to Postgres in the settings. I think that should be OK.
I found another question on Stack Overflow which should help with converting MySQL to Postgres here.
python manage.py dumpdata > data.json
Create a database and user in PostgreSQL.
Set the PostgreSQL database you just created as the default database in your Django settings, or pass --database=your_postgresql_database to the commands in the next steps.
Run syncdb to create the tables:
python manage.py syncdb [--database=your_postgresql_database] --noinput
Create a schema-only dump, drop all the tables and reload the dump, or simply truncate all the tables (the django_content_type table is filled with data that may not match your old data, which leads to many errors). At this step we need empty tables in the PostgreSQL database.
When you have empty tables in the PostgreSQL database, just load your data:
python manage.py loaddata data.json
And have fun!
I wrote a Django management command that copies one database to another:
https://gist.github.com/mturilin/1ed9763ab4aa98516a7d
You need to add both databases in the settings and use this command:
./manage.py copy_db from_database to_database app1 app2 app3 --delete --ignore-errors
What is cool about this command is that it recursively copies dependent objects. For example, if the model has two foreign keys and two many-to-many relationships, it will copy the related objects first to ensure you won't get a foreign key violation error.