My team is building a website in Django.
We are using MySQL, and the database we created for the project is called 'vote'.
We always share the code, but the problem is that whatever my project team has added to the database, I have to add again manually in order to use it.
Is there any way in which we can copy the whole database created by my team to my system?
Thanks
There are three approaches off the top of my head:
Export and import the entire MySQL database (using mysqldump or similar).
Use Django's fixtures system. This allows you to dump the contents of the DB to JSON/XML files which can be loaded later by other members of the team via python manage.py loaddata .... These can be quite temperamental in practice, and I generally find them more hassle than they are worth.
Use South's data migrations. South is primarily concerned with managing schema migrations, i.e. gracefully handling the addition and deletion of fields on your models so that you don't end up with an inconsistent DB. You can also use it to write data migrations, which allow you to programmatically add data to the DB that you can share among your teammates.
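For the first two approaches, the commands look roughly like this (a sketch: the credentials, the 'vote' database name from the question, and the 'polls' app name are placeholders; the MySQL commands need a running server):

```shell
# Approach 1: dump the whole MySQL database and re-import it on another machine.
mysqldump -u root -psecret vote > vote_dump.sql
# On the teammate's machine, after running CREATE DATABASE vote:
mysql -u root -psecret vote < vote_dump.sql

# Approach 2: Django fixtures -- serialize one app's data to JSON, load it back.
# ('polls' is a hypothetical app name; run these from the project root.)
python manage.py dumpdata polls > polls_fixture.json
python manage.py loaddata polls_fixture.json
```

The fixture file can be committed to version control, so teammates pick up new data with a plain `git pull` followed by `loaddata`.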
I have a database which is used by multiple projects. Each project has its own database migrations.
I have tried to Google this and read the knex documentation, but no luck. I have seen suggestions to fake the migration files to trick the migration table, but I don't think that is a good solution.
I want to keep all the migration data in one migration table. Is this possible with knex?
Having a different set of migration files for each project and trying to run them separately against the same migration table is not possible. There is no good solution for it, and it would not make sense anyway.
If the migrations are not related to each other, then there is no reason to have them in the same table. On the other hand, if they are related, then the files really should be hosted in the same place to guarantee that everything is done in the correct order.
You can set the migration table name (tableName, see http://knexjs.org/#Migrations-API) to be different for each project in the knex config.
However, I would never recommend having multiple projects use the same database, with each keeping separate migrations for it.
The only case where that could be remotely acceptable is when you don't have access to create separate databases for each project.
If the projects share the same data model (microservices with a shared DB), you should still use multiple databases, or have a single service that owns the schema changes while the rest of the services only read/write data.
I am a beginner learning SQL, and I am wondering: if I build a database for, let's say, a customer, how would I actually give the completed database to the customer?
For example, the customer is a school and I made a database of their students and teachers in SQL; in what ways could I give the completed database to the school authority?
Thank you in advance!
The easiest way is to take a backup of the database. The backup file can then be passed to the customer and restored in their environment.
Depending on your SQL flavour the backup and restore commands may differ.
However, if you are just intending to pass data from the school-hosted database to the local authority, then you would want to export the data to a format that the LA can support (e.g. CSV or XML).
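For MySQL, for example, the hand-off might look like this (a sketch: the 'school' database name, table name, and credentials are placeholders, and the commands require a running MySQL server):

```shell
# Back up the whole database on your machine:
mysqldump -u appuser -psecret school > school_backup.sql
# On the customer's server, after CREATE DATABASE school:
mysql -u appuser -psecret school < school_backup.sql
# Or export just the data as tab-separated values for the local authority:
mysql -u appuser -psecret --batch -e "SELECT * FROM students" school > students.tsv
```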
Not sure I completely understand the problem, but if this is for an application with a small database, and by small I mean about a dozen tables, I would build it using an embedded database such as SQLite. That way your application comes complete with a built-in database, all packaged into one executable file. I have worked with SQLite in the past and it is a very robust database that can store and retrieve very large data sets. It also interfaces quite well with other languages such as Perl, C++, Java, etc. You may be surprised to find out how many of your current phone apps come complete with a backend database embedded in them.
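As a quick illustration of how self-contained SQLite is (the sqlite3 command-line shell ships with most systems; the table and file names here are made up):

```shell
rm -f school.db   # start fresh; the entire database is this single file
sqlite3 school.db "CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT);"
sqlite3 school.db "INSERT INTO students (name) VALUES ('Ada'), ('Grace');"
sqlite3 school.db "SELECT COUNT(*) FROM students;"
```

Delivering the database to the customer is then just a matter of copying the school.db file.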
For SQL Server, you can restore the backup through the restore wizard; after the main steps there are two more finalization screens to confirm everything you have done. The equivalent restore T-SQL is:
RESTORE DATABASE [db_name] FROM DISK = 'X:\MSSQL\Data\FullBackups\db_name.bak' WITH RECOVERY
I'm kind of new to this kind of problem. I'm developing a web app and changing the DB design, trying to improve it and add new tables.
Until a few days ago we had not published the app, so what I would do was dump all the tables on the server and import my local version. But now we've passed version 1 and users are starting to use it,
so I can't overwrite the server DB, yet I still need to update the server DB's design when I want to publish a new version. What are the best practices here?
I'd like to know how I can manage the differences between local and server in MySQL.
I need to preserve the data on the server and just change the design; the data in the local DB is only for testing.
Before this, all my other apps were small and I would change a single table or column, but I can't keep track of all the changes now, since I might revert many of them later, and coordinating all team members on this is impossible.
Assuming you are not using a framework that provides a database migration tool, you need to keep track of the changes manually.
Create a folder sql_upgrades (or whatever name you like) in your code repository.
Whenever a team member updates the SQL schema, they create a file in this folder with the corresponding ALTER statements, and possibly UPDATE, CREATE TABLE, etc. So basically the file contains all the statements used to update the dev database.
Name the files so that they're easy to manage and statements for the same feature are grouped together. I suggest something like YYYYMMDD-description.sql, e.g. 20150825-queries-for-feature-foobar.sql
When you push to production, execute the files to upgrade your SQL schema in production. Only execute the files that have been created since your last deployment, and execute them in the order they were created.
Should you need to roll back a file, check the queries it contains and write queries to undo what was done (drop added columns, re-create dropped columns, etc.). Note that this is non-trivial, as many changes cannot be rolled back fully (e.g. you can recreate a dropped column, but you will have lost the data it contained).
Many web frameworks (such as Ruby on Rails) have tools that will do exactly this process for you. They usually work together with the ORM provided by the framework. Keeping track of the changes manually in SQL works just as well.
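The workflow above can be sketched like this (echo stands in for the real mysql call, which would need a server and credentials; the file and column names are made up):

```shell
mkdir -p sql_upgrades
# A team member records their schema change as a dated file:
cat > sql_upgrades/20150825-queries-for-feature-foobar.sql <<'SQL'
ALTER TABLE users ADD COLUMN last_login DATETIME NULL;
SQL
# At deployment time, apply the new files in creation order
# (the YYYYMMDD prefix makes lexical order chronological):
for f in sql_upgrades/*.sql; do
  echo "applying $f"   # real use: mysql -u appuser -psecret mydb < "$f"
done
```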
I'm working on a project which uses MySQL as the database. The application is hosted for many clients, and we often ship upgrades to the current live systems.
There are some instances where a client has changed the database structure (adding new tables), which causes unexpected DB crashes.
I need to log all the structural changes made to that database, so we can find the correct root cause. We can't do it 100% with a diff tool, because it will not show the intermediate changes.
I found the http://www.liquibase.org/ tool, but it seems a little complex.
Is there any well-known technique or tool to track database structural changes only?
Well, from MySQL Studio you can generate each object's schema definition and compare it with your standard schema definition; this way you can compare the two database schemas.
Generate scripts of both databases (one is the client's database and one is the master-copy database) and then compare them using a file-compare tool. That would be the best practice in my opinion, because this way you can track which column was added, which column was deleted, which index was added, and so on, without downloading any tool.
Possible duplicate of Compare two MySQL databases?
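A minimal sketch of that comparison (in practice you would generate each file with mysqldump --no-data against a live server; the two schema files here are stand-ins for the master copy and a client's copy):

```shell
# Stand-in for: mysqldump --no-data -u user -psecret master_db > master_schema.sql
cat > master_schema.sql <<'SQL'
CREATE TABLE users (id INT, name VARCHAR(50));
SQL
# Stand-in for the client's dump, where a column was added by hand:
cat > client_schema.sql <<'SQL'
CREATE TABLE users (id INT, name VARCHAR(50), phone VARCHAR(20));
SQL
# The diff shows exactly which columns or indexes differ:
diff -u master_schema.sql client_schema.sql || true
```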
Hope this helps.
If you have an application for your clients to manage these schema changes, you can use a mechanism at the application level. If you have a Python and Django-based solution, you could probably use South, which provides schema change tracking and rollbacks.
I'm working with another dev, and together we're building out a MySQL database. We each have our own local instance of MySQL 5.1 on our dev machines. We've not yet been able to identify a way for us to make a local schema change (e.g. add a field and some values for that field) and then export some kind of script or diff file that the other can import. I've looked into Toad's and Navicat's synchronization features, but they seem oriented towards synchronizing two instances, not an instance and an intermediate file. We thought MySQL Workbench would be great for this, but the synchronization feature just seems plain broken. Any other ideas? How do you collaborate with others on a schema?
First of all, put your final SQL schema into version control, so you'll always have a version of it with all changes. It can be a plain SQL file. Every developer on the team can use it as a starting point to create his copy of the database. All changes must be applied to it. This will help you find conflicts faster.
I also used such a file to create a test database to run unit tests after each commit, so we were always sure that the production code was working.
Then you can use any migration tool to move changes between developers. Here is a similar question about this:
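A sketch of that setup (the file names are placeholders, and the mysql commands in the comments require a running server):

```shell
mkdir -p schema-demo && cd schema-demo
# The canonical schema lives in the repository as a plain SQL file:
cat > schema.sql <<'SQL'
CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50));
SQL
git init -q .
git add schema.sql
git -c user.name=dev -c user.email=dev@example.com commit -qm "Track canonical schema"
# Each developer (or the CI job) rebuilds a throwaway database from it:
#   mysql -u appuser -psecret -e "DROP DATABASE IF EXISTS app_test; CREATE DATABASE app_test"
#   mysql -u appuser -psecret app_test < schema.sql
```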
Mechanisms for tracking DB schema changes
If you're using PHP, then look at Doctrine migrations.