I am running a local MySQL server to work with some old data from a dump. I am able to log into the database normally using:
mysql -uroot -p
and in fact the show databases command correctly outputs the following; I am also able to execute queries and use the data normally:
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| servicenow2 |
| sys |
+--------------------+
Now the problem: I was using the command line to import some files from my old dump:
mysql -uroot -p servicenow2 < filetoimport.sql
However, this command no longer works, always resulting in (for all dbs on the server):
ERROR 1049 (42000): Unknown database 'servicenow2'
I assume some schema got corrupted somewhere. Is there any way to fix this so I do not have to dump the db and reimport it? The db I am working with is over 150 GB and would take a very long time to dump and reimport.
Update: I performed a FLUSH PRIVILEGES as described here: https://superuser.com/questions/603026/mysql-how-to-fix-access-denied-for-user-rootlocalhost
Now the command mysql -uroot -p servicenow2 correctly starts a client in the servicenow2 db, but the import mysql -uroot -p servicenow2 < filetoimport.sql still gives the unknown db error.
Workaround, in case anyone is interested:
Create a new db, import into the new db, then move the tables from the new db to the desired db, as sketched below.
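A minimal sketch of that workaround, assuming a hypothetical scratch database named tempimport and an example table named mytable; RENAME TABLE moves a table between databases on the same server without copying the data:
mysql -uroot -p -e "CREATE DATABASE tempimport;"
mysql -uroot -p tempimport < filetoimport.sql
mysql -uroot -p -e "RENAME TABLE tempimport.mytable TO servicenow2.mytable;"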
I have a fresh installation of mariadb-server-10.5 (1:10.5.15-0+deb11u1) on a freshly installed Debian 11.1.
On the old machine, with mysql-server (5.5.9999+default) and Debian 9.6, I created a dump like this:
mysqldump -u root -pSOMEPW --all-databases > all_databases.dump
and I loaded this dump on the new server:
source /path/to/all_databases.dump
The source took a while and did not produce any error, but it beeped once at the end (no visible error or warning message).
Checking the mysql.user table, it has only 3 entries, for root, mysql and mariadb.sys, so I tried to recreate the users (which existed and were used on the old machine) with this command:
create user 'testuser'@'localhost' identified by 'pw';
but it results in this error:
ERROR 1396 (HY000): Operation CREATE USER failed for 'testuser'@'localhost'
With a short script checking all the tables of the mysql db, 'testuser' appears in 3 different tables, but as a User only in the db table, twice, like this:
+-----------+----------+----------+-------------+
| Host      | Db       | User     | Select_priv |
+-----------+----------+----------+-------------+
| localhost | somedb   | testuser | Y           |
| localhost | somedbp2 | testuser | Y           |
+-----------+----------+----------+-------------+
I think that might cause create user to fail.
How could I fix this issue without losing the information in the db table?
Thanks.
In general you need to run mysql_upgrade whenever you switch to a more recent MySQL or MariaDB release, or after importing a backup taken from an older major version.
This is especially true for MariaDB 10.4 and later when importing from MySQL or from MariaDB 10.3 or earlier, as the internal privilege tables changed substantially with 10.4.
The mysql.user table was replaced by mysql.global_priv in 10.4, allowing for more fine-grained authentication control, e.g. supporting multiple authentication plugins for a single user.
mysql.user is now just a VIEW presenting information from mysql.global_priv in a backwards-compatible way. Simple information like user and host name can still be modified directly via that view, as it is an updatable view, but this does not work for the more complex columns.
Commands like CREATE USER now operate directly on the mysql.global_priv table anyway; the errors you are getting are due to that table not being present in your imported dump.
The good news is: mysql_upgrade will take care of the necessary conversion, and after that CREATE USER should work again.
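For reference, a minimal invocation (on newer MariaDB packages the binary may be named mariadb-upgrade instead):
mysql_upgrade -u root -p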
See also: https://mariadb.com/kb/en/mysql_upgrade/
See also: https://mariadb.com/kb/en/mysqlglobal_priv-table/
Recently I needed to import MySQL data into a Postgres database on Heroku. This actually involves several steps:
convert the MySQL data to PostgreSQL
import the PostgreSQL data into Heroku
After consulting plenty of material and testing several tools on GitHub, I finally succeeded. Here I want to share some of my experience and references.
First, I list some tools for converting a MySQL database into PostgreSQL format.
mysql-postgresql-converter: This is the tool I finally used successfully. Dump the MySQL database in PostgreSQL-compatible format:
mysqldump -u username -p --compatible=postgresql databasename > outputfile.sql
Then use the converter to transform the data into a *.psql file, and load the new dump into a fresh PostgreSQL database, as sketched below.
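A minimal sketch of those two steps, assuming the converter's db_converter.py entry point (check the project's README for the exact invocation) and a fresh database named newdb:
python db_converter.py outputfile.sql outputfile.psql
psql -d newdb -f outputfile.psql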
mysql2postgres: This is the tool introduced in the Heroku Dev Center; just refer to Migrating from MySQL to Postgres on Heroku. It is based on Ruby. However, in my case I found an issue after finishing the installation and could not solve it:
You have already activated test-unit 2.5.5, but your Gemfile requires test-unit 3.2.3. #95
py-mysql2pgsql: A similar process to mysql2postgres above, configured by editing a *.yml file; a configuration sketch follows. There is a nice reference table in the README file called Data Type Conversion Legend, which compares data types between MySQL and PostgreSQL. You can manually modify the data types.
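For illustration, a minimal sketch of the configuration file and invocation, with hypothetical hostnames and credentials (the exact keys are documented in the py-mysql2pgsql README):
mysql:
  hostname: localhost
  port: 3306
  username: mysqluser
  password: mysqlpass
  database: mydb
destination:
  postgres:
    hostname: localhost
    port: 5432
    username: pguser
    password: pgpass
    database: mydb
Then run it with: py-mysql2pgsql -v -f mysql2pgsql.yml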
This website lists some other conversion methods.
Some basic operations in PostgreSQL:
$sudo su - postgres
$createdb testdb
$psql testdb
=# create user username password 'password';
-- To change a password:
=# alter role username password 'password';
=# create database databasename with encoding 'utf8';
How to list all databases in Postgres (see PostgreSQL - SELECT Database):
postgres-# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+---------+-------+-----------------------
postgres | postgres | UTF8 | C | C |
template0 | postgres | UTF8 | C | C | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | C | C | =c/postgres +
| | | | | postgres=CTc/postgres
testdb | postgres | UTF8 | C | C |
(4 rows)
postgres-#
Now type the below command to connect/select a desired database, here we will connect to the testdb database:
postgres=# \c testdb;
psql (9.2.4)
Type "help" for help.
You are now connected to database "testdb" as user "postgres".
testdb=#
After you create your database, import the converted tables into psql. Kindly note that a database should be created before importing data.
$psql -h server -d databasename -U username -f data.sql
(sometimes a sudo -u postgres should be added before psql)
How to generate a dump of psql using pg_dump (see creating dump file):
$sudo -u postgres pg_dump -Fc --no-acl --no-owner -h localhost -U postgres databasename > mydb.dump
The next step: how do you import the data into Heroku Postgres?
After the previous steps, you will have imported the data into your local PostgreSQL or generated a pg_dump file. Two methods of transferring the data to the remote Heroku Postgres are introduced here.
Use a pg_dump file (reference):
Use the raw file URL in the pg:backups restore command:
$ heroku pg:backups:restore 'https://s3.amazonaws.com/me/items/3H0q/mydb.dump' DATABASE_URL
In this case, you should first upload the dump file somewhere with an HTTP-accessible URL. The Heroku Dev Center recommends using Amazon S3; a sketch follows below.
DATABASE_URL represents the HEROKU_POSTGRESQL_COLOR_URL of the database you wish to restore to. For example, my database url is postgresql-globular-XXXXX.
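For illustration, a minimal sketch of uploading the dump and producing a temporary HTTP-accessible URL with the AWS CLI, assuming a hypothetical bucket named mybucket:
$ aws s3 cp mydb.dump s3://mybucket/mydb.dump
$ aws s3 presign s3://mybucket/mydb.dump --expires-in 3600
The presigned URL printed by the second command can then be passed to heroku pg:backups:restore.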
Use pg:push:
pg:push pushes data from your local PostgreSQL database into the remote Heroku Postgres database. The command looks like this:
$heroku pg:push mylocaldb DATABASE_URL --app sushi
This command will take the local database "mylocaldb" and push it to the database at DATABASE_URL on the app "sushi". Kindly note that the remote database must be empty before performing pg:push in order to prevent accidental data overwrites and loss.
This pg:push method is actually the one I used, and it finally succeeded.
More information about Heroku Postgres can be found in the official Heroku documentation.
Others:
Viewing logs of your web application in Heroku: heroku logs --tail
How to deploy Python and Django Apps on Heroku?
How to write the Procfile of Django Apps?
A common Procfile of Django projects will look like this:
web: gunicorn yourprojectname.wsgi --log-file -
Here web is a single process type. The only thing to modify is yourprojectname.wsgi: just replace the prefix with your own project name.
How to add Gunicorn to your application?
$ pip install gunicorn
$ pip freeze > requirements.txt
How to run command line in remote Heroku server?
You can execute the bash command in Heroku.
$heroku run bash
Running bash attached to terminal... up, run.1
~ $ ls
Then you can run commands such as ls and cd just as in your local bash.
You can also use commands in this pattern to execute manage.py on the remote Heroku dyno: heroku run python manage.py runserver
I have installed WAMP server 2.2 and the installation completed smoothly, but when I tried to access phpMyAdmin it gave an error. I changed the password in /wamp/apps/phpmyadmin3.5.1/config.inc.php.
But now I am facing another error as follows:
Error:
SQL query:
SELECT * FROM information_schema.CHARACTER_SETS
MySQL said:
#1146 - Table 'information_schema.character_sets' doesn't exist
I am new to this, so please help me out; I am stuck on this error and can't proceed further without phpMyAdmin. I hope somebody can help me resolve it as early as possible.
Thanks in advance.
Log in via the command line:
mysql -h myhost -u myusername -p
<ENTER PASSWORD>
run the command
show databases;
sample output:
+--------------------+
| Database |
+--------------------+
| mysql |
| phprojekt |
| test |
+--------------------+
If there is no information_schema database in your output, your MySQL server is too old (version 4.x; the most recent is 5.6 as of 2013). This can be the case if you are connecting to an old database server at your company that nobody bothered to upgrade (which happens quite often, for many reasons). You can confirm the server version as shown below.
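For illustration, a quick way to check the version from the same client session; SELECT VERSION() works on both 4.x and 5.x servers, and information_schema only exists on MySQL 5.0 and later:
mysql> SELECT VERSION();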
Also read this:
Generate INFORMATION_SCHEMA table for MySQL database
(you can't)
The following error occurs when I try to restore a DB in MySQL via PuTTY.
Command: mysql -u root -p db1 < dbname.sql
ERROR 1 (HY000) at line 7904: Can't create/write to file
'./dbname/db.opt' (Errcode: 2)
What is the reason?
This often means that your dump file includes a command that should run against a database that either doesn't exist in your local context, or to which the current user does not have access. Open up the dumpfile and look at the line mentioned in the error to find out what's going on.
I ran into this error at work when the source database name was different from the target database name. I dumped a database on one server with mysqldump db1 > dumpfile and attempted to import it on a different server with mysql db2 < dumpfile.
Turns out the dumpfile had ALTER TABLE db1 ... statements which were meaningless on the target server where I named the database db2.
There is probably a more elegant solution than this, but I just edited the dumpfile on the target server and replaced db1 with db2.
Find out what Errcode: 2 means
You can use the perror utility to find what error 2 means:
$ perror 2
OS error code 2: No such file or directory
More info is at the link #Jocelyn mentioned in their comment: http://dev.mysql.com/doc/refman/5.5/en/cannot-create.html
Find out what path ./ points to
We now know a file doesn't exist (or maybe it can't be written to). The error message gives us a relative path ./, which makes it tricky... Wouldn't it be helpful if it output a fully-qualified path? Yeah.
So when MySQL imports an SQL file it creates some temp files on the filesystem. The path is usually specified by the "tmpdir" configuration option in the MySQL my.cnf file. You can quickly find the value by executing an SQL query:
$ mysql -h127.0.0.1 -uroot -p
# I assume you're now logged into MySQL
mysql> SHOW VARIABLES LIKE '%tmpdir%';
+-------------------+-------+
| Variable_name | Value |
+-------------------+-------+
| slave_load_tmpdir | /tmp |
| tmpdir | /tmp |
+-------------------+-------+
2 rows in set (0.00 sec)
Ensure the directory is writeable by mysql user
According to tmpdir this means MySQL was trying to create /tmp/dbnamehere/db.opt. Ensure this directory exists and that it's owned by mysql:mysql. You might have to use sudo to elevate privileges high enough to create some directories.
$ chown -R mysql:mysql /tmp/dbnamehere
Still not working? Try other default tmpdir paths
I hit issues on my system (Ubuntu 12.04 + Vagrant 1.7.2 + Chef 11.something + opscode mysql cookbook 6.0.6) where the value in tmpdir wasn't being considered or wasn't being pulled from where I expected.
MySQL was actually trying to create the temp file at one of the following locations:
/var/lib/mysql/dbnamehere
/var/lib/mysql-default/dbnamehere
I had to create those directories and change ownership to mysql:mysql.
I had a backup from "db1" and was restoring it to "db2", so in the dump file I had to change "db1" to "db2" with sed, as sketched below.
After that, everything worked fine.
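For illustration, a minimal sed sketch using the example names from above; note that a blanket replace can also hit matching strings inside data rows, so review the result before importing:
$ sed -i 's/db1/db2/g' dumpfile.sql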
You'll find help about this error in the MySQL manual: http://dev.mysql.com/doc/refman/5.5/en/cannot-create.html
Not sure if this question is better suited for Server Fault, but I've been messing with Amazon RDS lately and was having trouble granting 'file' privileges to my web host's MySQL user.
I'd assume that a simple:
grant file on *.* to 'webuser'@'%';
would work, but it does not, and I can't seem to do it with my 'root' user either. What gives? The reason we use LOAD DATA is that it is extremely fast for doing thousands of inserts at once.
Does anyone know how to remedy this, or do I need to find a different way?
This page, http://docs.amazonwebservices.com/AmazonRDS/latest/DeveloperGuide/index.html?Concepts.DBInstance.html seems to suggest that I need to find a different way around this.
Help?
UPDATE
I'm not trying to import a database; I just want to use the file load option to insert several hundred thousand rows at a time.
After digging around, this is what we have:
mysql> grant file on *.* to 'devuser'@'%';
ERROR 1045 (28000): Access denied for user 'root'@'%' (using password: YES)
mysql> select User, File_priv, Grant_priv, Super_priv from mysql.user;
+----------+-----------+------------+------------+
| User | File_priv | Grant_priv | Super_priv |
+----------+-----------+------------+------------+
| rdsadmin | Y | Y | Y |
| root | N | Y | N |
| devuser | N | N | N |
+----------+-----------+------------+------------+
You need to use LOAD DATA LOCAL INFILE as the file is not on the MySQL server, but is on the machine you are running the command from.
As per the comment below, you may also need to include the flag shown here (a sketch of a full invocation follows):
--local-infile=1
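For illustration, a minimal client invocation with the flag enabled, assuming a hypothetical RDS endpoint and database name:
mysql --local-infile=1 -h mydb.example.us-east-1.rds.amazonaws.com -u devuser -p mydatabase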
For whatever it's worth... You can add the LOCAL operand to the LOAD DATA INFILE instead of using mysqlimport to get around this problem.
LOAD DATA LOCAL INFILE ...
This will work without granting FILE permissions.
I also struggled with this issue, trying to upload .csv data into an AWS RDS instance from my local machine using MySQL Workbench on Windows.
The addition I needed was OPT_LOCAL_INFILE=1 in: Connection > Advanced > Others. Note that CAPS was required.
I found this answer by PeterMag in AWS Developer Forums.
For further info:
SHOW VARIABLES LIKE 'local_infile'; already returned ON
and the query was:
LOAD DATA LOCAL INFILE 'filepath/file.csv'
INTO TABLE `table_name`
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;
Copying from the answer source referenced above:
Apparently this is a bug in MYSQL Workbench V8.X. In addition to the
configurations shown earlier in this thread, you also need to change
the MYSQL Connection in Workbench as follows:
Go to the Welcome page of MYSQL which displays all your connections
Select Manage Server Connections (the little spanner icon)
Select your connection
Select Advanced tab
In the Others box, add OPT_LOCAL_INFILE=1
Now I can use the LOAD DATA LOCAL INFILE query on MYSQL RDS. It seems
that the File_priv permission is not required.
Pretty sure you can't do it yet, as you don't have the highest level MySQL privileges with RDS. We've only done a little testing, but the easiest way to import a database seems to be to pipe it from the source box, e.g.
mysqldump MYDB | mysql -h rds-amazon-blah.com --user=youruser --pass=thepass
Importing bulk data into Amazon MySQL RDS is possible in two ways. You can choose whichever of the following suits you.
Using the import utility:
mysqlimport --local --compress -u <user-name> -p<password> -h <host-address> <database-name> --fields-terminated-by=',' TEST_TABLE.csv
-- Note that the utility will insert the data into TEST_TABLE only.
Sending a bulk-insert SQL file by piping it into the mysql command:
mysql -u <user-name> -p<password> -h <host-address> <database-name> < TEST_TABLE_INSERT.SQL
-- Here the file TEST_TABLE_INSERT.SQL contains a bulk-insert SQL statement like this:
-- insert into TEST_TABLE values('1','test1','2017-09-08'),('2','test2','2017-09-08'),('3','test3','2017-09-08'),('3','test3','2017-09-08');
I ran into similar issues. I was in fact trying to import a database, but the conditions should be the same: I needed to use LOAD DATA due to the size of some tables, a spotty connection, and the desire for a modest resume capability.
I agree with chris finne that not specifying the LOCAL option can lead to that error. After many fits and starts I found that the mk-parallel-restore tool from Maatkit provided what I needed, with some excellent extra features. It might be a great match for your use case.