I have a database named "DB_General" and I want to create a separate database for each user, named after the user, e.g. "DB_User1". I want "DB_User1" to use the same structure as "DB_General". Is there any way to do this? I am using a MySQL database with JSP.
Make a backup of DB_General, e.g. using mysqldump, remove the data if needed, and restore it under a different database name like DB_User1.
However, the idea of creating a separate database for each user seems far from wise.
Run the following programmatically from a shell:
mysqladmin --user=foo --password=bar create DB_User1
mysqldump --user=foo --password=bar --no-data DB_General | mysql --user=foo --password=bar DB_User1
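For a per-user setup driven from the application, these two commands could be wrapped in a small script; a minimal sketch, assuming the credentials foo/bar from above and the MySQL client tools on PATH (the script name is hypothetical):

#!/bin/sh
# clone_schema.sh (hypothetical helper): create DB_<name> with DB_General's schema.
# Usage: ./clone_schema.sh User1   -> creates DB_User1
USER_DB="DB_$1"
mysqladmin --user=foo --password=bar create "$USER_DB"
mysqldump --user=foo --password=bar --no-data DB_General | mysql --user=foo --password=bar "$USER_DB"

From JSP, such a script (or the two commands directly) can be launched with Java's standard process API.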
I have a DB in MySQL (that I didn't create), and I don't have the code that was used to build it.
I want to know the code that was used to create one of the tables in the DB; is there a way to find it? I need to create the same table but with different data.
Thanks a lot!
P
In MySQL Workbench you can display the DDL for any DB object. Just right-click on it in the schema tree and choose either Copy to Clipboard or Send to SQL Editor, then Create Statement:
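If Workbench isn't at hand, the same DDL can be retrieved in any client session with SHOW CREATE TABLE; for example, from the shell (database and table names below are placeholders):

mysql -u <username> -p -e "SHOW CREATE TABLE your_db.your_table\G"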
This is a late answer, but since I don't see any reference to it, I'll suggest you perform a dump of your database. Every decent DBMS now has a tool for this. With MySQL, from the command line, this would be:
mysqldump -u <username> <database_name> > yourfile.sql
This performs a complete dump of your database in SQL format, enabling you to recreate it elsewhere. No special tool is needed when the time comes: just pass the content of the file to the regular MySQL client.
If you want only the database schema, without any data, just pass the "--no-data" option:
mysqldump --no-data -u <username> <database_name> > yourfile.sql
You'll now be able to recreate a brand-new, empty database with all the attributes and special features of the previous one, but without the data.
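To rebuild from that file, a typical sequence looks like this (the target database name is a placeholder):

mysqladmin -u <username> -p create new_database
mysql -u <username> -p new_database < yourfile.sql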
Situation
I had a table with 8 columns; then I needed 2 more fields:
company_weight
server_type_weight
So I ran a migration to add those fields, and now I have 10 columns.
There is no data in the table right now.
I want
to copy/paste all the data from my staging server back to my local machine. I keep getting an error:
How do I solve this problem?
Is there a way to force-paste the rows and leave the other 2 columns as NULL/blank? That way I can add data to them later via a migration.
I'm a little stuck on this now.
There seem to be several aspects mixed together here. For MySQL Workbench: the source and target column counts in a copy/paste action must be the same; there is no way around that. However, I wouldn't copy data with copy/paste unless it's a small amount needed only once. Instead, I'd export the existing data to a CSV file, load that into a spreadsheet (e.g. OpenOffice), add 2 dummy columns, export to CSV again, and import it in MySQL Workbench.
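Alternatively, the spreadsheet detour can be skipped: LOAD DATA accepts an explicit column list, so naming only the original 8 columns leaves the 2 new ones NULL. A sketch with hypothetical file, table, and column names:

# Load the 8-column CSV; company_weight and server_type_weight stay NULL.
mysql -u <username> -p --local-infile=1 <database> -e "
  LOAD DATA LOCAL INFILE '/path/to/staging_export.csv'
  INTO TABLE my_table
  FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
  IGNORE 1 LINES
  (col1, col2, col3, col4, col5, col6, col7, col8);"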
One way to do it:
Make sure that your new columns are nullable in the migration
$table->integer('company_weight')->nullable(); // make sure you use nullable()
$table->integer('server_type_weight')->nullable();
Dump the table data on your staging server (--complete-insert writes an explicit column list into every INSERT, so the two new nullable columns are simply left NULL on import)
$ mysqldump -u<username> -p --no-create-info --compact --skip-comments \
--complete-insert <database> <table> > /path/to/file.sql
Download resulting file.sql to your local machine
Import data to your local database
$ mysql -u<username> -p <database> < /path/to/file.sql
I have MySQL and Postgres databases. I have been working on the MySQL DB, which is populated with my data. Now, to use Heroku, I need to port it to Postgres. These are the steps I followed:
I exported the data from my MySQL DB with a simple dump command:
mysqldump -u [uname] -p[pass] db_name > db_backup.sql
I logged into Postgres:
sudo su postgres
Now when I try to import the SQL into Postgres, it does not have access to db_backup.sql. I changed the permissions and made the dump file readable/writable by all users, but I still cannot import the SQL.
My question is: what is the correct way to duplicate (both schema and data) from MySQL to Postgres? Also, why am I not able to access the dump file even after changing the permissions? And if I have a dump from MySQL, what are the chances that it runs into issues on Postgres? (I do not have any procedural code in my MySQL; just table creation and data inserts.)
Thanks!
P.S. I am on Mac (Mavericks), if that matters.
While the primary part of the question was answered by @wildplasser, I thought I would put up the entire answer for people looking to port MySQL data to Postgres.
After trying out multiple solutions, the easiest and smoothest one was this: https://github.com/lanyrd/mysql-postgresql-converter
It worked quite well. Just one problem: it does not port any of MySQL's sequences to Postgres. This means that if you have auto-increment primary IDs, you will have to change your Postgres schema separately and create serial sequences after the porting is done. Apart from that, the process was quite smooth.
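As one illustration of that fix-up (the table name "users" and column "id" here are hypothetical), a sequence can be recreated and attached after the conversion:

psql -U postgres databasename <<'SQL'
-- Recreate auto-increment behaviour for users.id:
CREATE SEQUENCE users_id_seq OWNED BY users.id;
ALTER TABLE users ALTER COLUMN id SET DEFAULT nextval('users_id_seq');
SELECT setval('users_id_seq', (SELECT COALESCE(MAX(id), 1) FROM users));
SQL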
As for the permission issue: logging in as the postgres user and trying to access a dump created by the original user failed. The right way is to stay logged in as the original user and use the postgres user only for the DB operation, via the -U postgres option.
E.g.: psql -U postgres databasename < data_base_dump
While for many this must be the obvious way of doing it, I must admit it was one of those eureka moments for me :)
I have a local installation of MariaDB on Windows XP.
I created an empty database db_y, which I wanted to populate with the tables of the database db_x, which I had exported as a dump file from a MySQL instance (with HeidiSQL). When I imported the dump file db_x.sql into the MariaDB instance:
c:\ > mysql -u root -h localhost -p db_y < "X:/archive/db_x.sql"
I got the following:
- MariaDB-inst
+db_x
+db_y
db_y remains empty, and db_x from the dump file was added (db_x is the name of the original database I exported). What do I have to do to get the desired database name? I thought I could change the database name in the db_x.sql file, but I didn't want to open such a large file. Can I change the import command above so that it changes the database name?
I'm also interested in this kind of solution:
CREATE DATABASE y FROM DATABASE x
Is something like this possible?
On the net I found the solution RENAME DATABASE, which is not recommended, and ALTER DATABASE db_x UPGRADE DATA DIRECTORY NAME,
but frankly, I would prefer to create a new database with the new name.
Thanks for any help.
Consider you have two databases: source_db and target_db. If you want to copy the database contents from source_db to target_db, do as follows in HeidiSQL:
Right click on source_db then select: Export database as SQL.
Now change the value of Output and select Database.
A select box will appear; select target_db and that's all.
There is an easy way to transfer a database from one instance to another with HeidiSQL:
Create the database db_y in instance y
Click the dump icon (or right-click). Instance y should be active.
At the "Output" option, choose Database
At the "Database" option, choose db_y
On the left, select instance x and database x
Export
Try MySQL Workbench. It's made by MySQL and I've found it excellent for backing up a database and restoring it under a different name.
http://dev.mysql.com/downloads/workbench/
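A command-line alternative (a sketch; the table names are hypothetical): let the dump recreate db_x as it insists on doing, then move the tables into db_y, since RENAME TABLE can relocate a table across databases on the same instance:

mysql -u root -h localhost -p -e "RENAME TABLE db_x.table1 TO db_y.table1, db_x.table2 TO db_y.table2;"

Afterwards the empty db_x can simply be dropped.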
HeidiSQL's export dialog recently got a new option called "Max INSERT size". This controls the number of rows in bulk/multi-row INSERT commands.
Also, there is documentation for this export dialog.
I've got a 3.5 GB database dump. Is there a way to restore just a single table from that file to a differently named table in the same database, without editing the file, using mysqladmin or some other commonly available command-line application that runs on FreeBSD 6?
You would need to create the table in restore-db and run something like:
grep "^INSERT INTO table" dump-file | mysql -u user -p restore-db
First make sure that your pattern matches correctly.
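For example, mysqldump usually writes table names with backticks, so it's worth counting matches before piping anything into mysql:

grep -c '^INSERT INTO `table`' dump-file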
cat THE_DUMP_FILE.SQL | sed -n "/^-- Table structure for table \`THE_TABLE_NAME\`/,/^-- Table structure for table/p" > THE_OUTPUT_SQL_FILE_NAME
I googled around for a while on this; this solution worked great for me and seemed to be one of the fastest for a large dump file. I got the idea from:
http://code.openark.org/blog/mysql/on-restoring-a-single-table-from-mysqldump
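To land the result under a different table name in the same database (a sketch; new_table_name is a placeholder), the table name can be rewritten while streaming the extracted file in:

sed 's/`THE_TABLE_NAME`/`new_table_name`/g' THE_OUTPUT_SQL_FILE_NAME | mysql -u user -p the_database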