Export DB Tables via phpMyAdmin In Non-Alphabetical Order - mysql

I have a MySQL database from a Joomla MultiSite installation where it has a set of tables with different prefixes for each Joomla site. When I export the db via phpMyAdmin it creates a SQL file where the tables are created and populated in alphabetical order. The problem is that the tables for the slave sites have dependencies on the tables for the master site, but alphabetically their prefixes are ahead of the master site. So the export works fine but when I try importing I get error after error and have to manually move sections around in the SQL file to make sure that the dependent tables are created/populated first.
So, is it possible to export a db via phpMyAdmin with the tables in a specific order?
EDIT: Here's the error I'm getting which should clarify things:
Error
SQL query:
--
-- Dumping data for table `j1_content_rating`
--
-- --------------------------------------------------------
--
-- Table structure for table `j1_core_acl_aro`
--
CREATE ALGORITHM = UNDEFINED DEFINER = `bookings_bpjms`@`localhost` SQL SECURITY DEFINER VIEW `bookings_bpjms`.`j1_core_acl_aro` AS SELECT `bookings_bpjms`.`js0_core_acl_aro`.`id` AS `id` , `bookings_bpjms`.`js0_core_acl_aro`.`section_value` AS `section_value` , `bookings_bpjms`.`js0_core_acl_aro`.`value` AS `value` , `bookings_bpjms`.`js0_core_acl_aro`.`order_value` AS `order_value` , `bookings_bpjms`.`js0_core_acl_aro`.`name` AS `name` , `bookings_bpjms`.`js0_core_acl_aro`.`hidden` AS `hidden`
FROM `bookings_bpjms`.`js0_core_acl_aro` ;
MySQL said:
#1146 - Table 'bookings_bpjms.js0_core_acl_aro' doesn't exist
The js0_ portions of the import script come after the j1_ portions, and so this error occurs. If I edit this file in a text editor (30+ megs and growing every day) I can find the js0_ portions and move them to the top, but this is tedious, time consuming and error prone.

Is the problem foreign key checks (in which case a SET FOREIGN_KEY_CHECKS=0 at the start of the file should work), or is the problem simply importing in a live environment?
With mysqldump it seems the tables are dumped in the order you give them in (if you specify tables instead of just a database), but this is undocumented behavior as far as I know and hence should not be relied upon.
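If it is the foreign keys, a minimal sketch of both ideas (the database and table names below are just the ones from the error above, used as placeholders):

-- Option 1: wrap the exported statements in a FOREIGN_KEY_CHECKS toggle
SET FOREIGN_KEY_CHECKS=0;
-- ... the exported CREATE/INSERT statements go here ...
SET FOREIGN_KEY_CHECKS=1;

# Option 2: dump with mysqldump, naming the master-site (js0_) tables first
# (relies on the undocumented ordering behaviour mentioned above; list every table, only two shown here)
mysqldump -u root -p bookings_bpjms js0_core_acl_aro j1_core_acl_aro > ordered_dump.sql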

Related

I have an older .sql file (exported from 5.0.45) that I am trying to import into a newer version of MySQL via phpMyAdmin, and I am receiving errors.

Receiving the following error message:
Error
Static analysis:
1 errors were found during analysis.
This option conflicts with "AUTO_INCREMENT". (near "AUTO_INCREMENT" at position 692)
SQL query:
-- phpMyAdmin SQL Dump
-- version 2.8.2.4
-- http://www.phpmyadmin.net
--
-- Host: localhost:3306
-- Generation Time: Mar 23, 2020 at 03:58 PM
-- Server version: 5.0.45
-- PHP Version: 5.2.3
--
-- Database: weir-jones
--
-- --------------------------------------------------------
--
-- Table structure for table categories
--
CREATE TABLE categories (
  number int(11) NOT NULL auto_increment,
  section varchar(255) NOT NULL,
  parent_id varchar(10) NOT NULL,
  title varchar(200) NOT NULL,
  type varchar(255) NOT NULL,
  content text NOT NULL,
  display_order int(11) NOT NULL,
  PRIMARY KEY (number)
) ENGINE=MyISAM AUTO_INCREMENT=126 DEFAULT CHARSET=utf8 AUTO_INCREMENT=126
MySQL said:
1046 - No database selected
============================================
I have tried importing with all compatibility modes. No luck.
The old database is gone, so I cannot export it again.
Any help would be appreciated.
Brendan
If you are asking about the 1046 "No database selected" error, it means exactly what it says: you exported a table from a database without a USE xxx statement.
So I would suggest importing this while inside a database, or adding a USE clause at the top of your SQL file.
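For example, at the very top of the dump (the database name here is just a placeholder for whatever your target database is called):

CREATE DATABASE IF NOT EXISTS your_database;
USE your_database;
-- ... the rest of the exported dump follows ...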
Another thing:
If you ask a question on Stack Overflow, make sure to read the formatting rules, which let you organize your question.
It is actually quite hard to read what error you have. Use emphasis, code blocks and the like, for example:
CREATE TABLE table_blub (
  col1 CHAR(120) NOT NULL,
  col2 INT(5)...
That way readers can better tell what is code, what is the error, and of course what the actual question is.
Eurobetics is correct, this is because the .sql file doesn't specify what database to work with. That's no problem, you can just create the database on your new server and import the file in to that. Since you're importing through phpMyAdmin, first use the "New" text in the left-hand navigation area to create a new database (you don't need to put any tables in it). Once the database is created, phpMyAdmin puts you in the database structure page. (If you happen to navigate away or are coming back after you've already created the database, just click the existing database in the navigation pane). Look at the tabs along the top row and use the "Import" tab there (or drag and drop your .sql file here).
Being inside that database page tells phpMyAdmin that you want to import to that database specifically, whereas if you're on the main home page, the Import button there isn't attached to any particular database, which leads to your error.
You could technically add the SQL commands that create the database and USE it to the top of the .sql file, but in this case that doesn't seem necessary and would just be making more work for you.

#1062 - Duplicate entry '1' for key 'PRIMARY'

I am at a complete loss here. I have two databases: one on my localhost site that I use for development, and one on my remote site that I use for my live (production) site. I manage both of them through phpMyAdmin. As I have been doing for months now, when I need to update the live site, I dump the related database and import the database from my localhost site.
Now, no matter what I try, I keep getting this error:
Error
SQL query:
--
-- Dumping data for table `oc_address_type`
--
INSERT INTO `oc_address_type` ( `address_type_id` , `address_type_name` )
VALUES ( 1, 'Billing' ) , ( 2, 'Shipping' ) ;
MySQL said:
#1062 - Duplicate entry '1' for key 'PRIMARY'
I tried creating a new blank database on my localhost and importing into that, but got the same result. I have validated all of the tables and indexes and cannot find anything wrong there.
Any suggestions please as I am completely down until this gets resolved.
By the way, I am completely dropping all tables and importing structure and data. This has always worked until today.
You need to dump with the drop statements. The table already exists and has data, and you're trying to insert more rows that are identical. I'm not 100% sure about phpMyAdmin, but the dumps will have an option for "add DROP TABLE" statements.
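In phpMyAdmin the export option is usually called something like "Add DROP TABLE"; with mysqldump the rough equivalent would be (user and database names are placeholders):

mysqldump --add-drop-table -u youruser -p yourdb > dump.sql

which makes each table in the dump start with a line such as

DROP TABLE IF EXISTS `oc_address_type`;

so the old rows are gone before the INSERTs run again.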
Dump your database on localhost with "mysqldump --insert-ignore ..." then try to import with phpmyadmin on your live machine.
Or try to connect to your live database with command line tools (configure your database to be able to connect from other hosts than "localhost" first!)
Then you can try the following:
$ mysql -f -p < yourdump.sql
With -f ("force") you can ignore errors during the import. It's the same as adding the "--force" parameter to "mysqlimport".
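Putting those two suggestions together, roughly (user, database and file names are placeholders):

# on localhost: write INSERT IGNORE statements instead of plain INSERTs
mysqldump --insert-ignore -u youruser -p yourdb > yourdump.sql

# on the live machine: import, skipping statements that fail
mysql -f -u youruser -p yourdb < yourdump.sql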
The problem is related to your file - you are trying to create a DB from a copy. At the top of your file you will find something like this:
CREATE DATABASE IF NOT EXISTS *THE_NAME_OF_YOUR_DB* DEFAULT CHARACTER SET latin1 COLLATE latin1_general_ci; USE *THE_NAME_OF_YOUR_DB*;
and I'm sure that you already have a DB with this name - ON THE SAME SERVER - please check, because you are trying to overwrite it! Just change the name OR (better) erase this line!
For me, the foreign_key_checks and TRUNCATE TABLE options were useful.
SET foreign_key_checks = 0;
TRUNCATE `oc_address_type`;
SET foreign_key_checks = 1;
Run the above SQL script first, and then run the import.
I had this same issue. My problem was that I had a primary key column called unique_id, and when you try to add two rows with the same value in that primary key column, it comes back with this error.
A primary key column's data is all supposed to be different, so I changed the duplicate 1 to a 3 and the error went away.
Your MySQL is not corrupt, despite what previous answers and comments suggest.
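In other words, something like this (the table and column names here are made up purely for illustration):

-- this fails with #1062: both rows use the same primary key value
INSERT INTO mytable (unique_id, name) VALUES (1, 'first'), (1, 'second');

-- this works: every primary key value is distinct
INSERT INTO mytable (unique_id, name) VALUES (1, 'first'), (3, 'second');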
You need to delete any previous tables that you are overwriting. If you are doing a complete restore of all tables, delete all existing tables first.
I ran into the same problem; I dropped the table and rebuilt the database, and the problem was solved.

Federated Table Clarification

In my prior job, I was able to copy data from our production environment in a breeze by using the following statements:
from tablename@UNIXPROD2
INSERT INTO tablename@UNIXTEST2
My current job's databases aren't setup in this fashion.
So I did some research on MySQL 5.0+, because that's what we are using for one of our customers, and I came across FEDERATED tables. As I was reading, I found this (here):
As of MySQL 5.0.46, FEDERATED performs bulk-insert handling such that multiple rows are sent to the remote table in a batch. This provides a performance improvement. Also, if the remote table is transactional, it enables the remote storage engine to perform statement rollback properly should an error occur. This capability has the following limitations:
To me, this indicates that (A) I can copy the data from our prod database to our test database; (B) any actions performed on the federated table will also be processed on the source table, which is not what I want to do. I have some scripts that I need to run and I want to run it against actual prod data to make sure it works before I use it in the prod environment.
My question: Is my interpretation correct?
Assuming it is, I've tried:
select * from database.tablename@ipaddress, but received an error message that told me to check the MySQL manual for the version I'm running, which is what I'm going to do after I hit "Post Your Question."
I would appreciate any help in this matter.
EDIT: After further research, I think I might be able to do what I need using OUTFILE and INFILE, whereby I would use OUTFILE on the prod table(s) and then INFILE those rows into the test table(s). Thoughts?
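For what it's worth, a rough sketch of that OUTFILE/INFILE idea (the file path and table name are placeholders, and the file has to be copied between the servers by hand):

-- on the production server
SELECT * FROM tablename
INTO OUTFILE '/tmp/tablename.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';

-- on the test server, after copying the file across
LOAD DATA INFILE '/tmp/tablename.csv'
INTO TABLE tablename
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';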
My answer:
A - correct
B - correct.
You could set the user permissions to read-only, but in your situation I would not use federated tables; instead, dump the whole db into a file and then restore it on the other server. The easiest way is to use MySQL Workbench.
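From the command line that dump-and-restore would look roughly like this (host, user and database names are placeholders):

# on the production server
mysqldump -u produser -p prod_db > prod_db.sql

# on the test server
mysql -u testuser -p test_db < prod_db.sql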
and some info about federated tables:
You need federated enabled just on server B
You can access a view on A by making a federated table on B
You can do INSERT UPDATE DELETE on federated table
If you need read-only access you can limit the user privileges
BUT! You can't do any aggregate functions on a view that will be federated (e.g. COUNT(), MAX(), UNION...) (you can, but it will lag)
Remember to set the KEYs on the federated table you are creating (or it will lag horribly)
Remember to use ALGORITHM=MERGE on views
Remember to grant access to USERNAME (from the connection string) on server A
example of a federated table on server B:
delimiter $$
CREATE TABLE `schemaName`.`tableName`(
`keyName` VARCHAR(10) NOT NULL,
`key2Name` DATE DEFAULT '2012-01-01',
KEY `keyName` (`keyName`)
)
ENGINE=FEDERATED
DEFAULT CHARSET=utf8
CONNECTION='mysql://USERNAME:PASSWORD@IPADDRESS:PORTNUMBER/baseSchema/baseTable'
$$
And the view on server A:
CREATE
ALGORITHM = MERGE
DEFINER = `ANOTHERUSERNAME`@`%`
SQL SECURITY DEFINER
VIEW `baseSchema`.`baseTable` AS
SELECT
... AS `keyName`,
... AS `key2Name`
FROM
...

Error replicating database due to cross-db reference - table doesn't exist

We have mysql v5.0.77 running on a server collecting some measurement data.
On the mysql server, we have the following databases:
raw_data_db
config_tables_db
processed_data_db
We ONLY want to replicate the 'processed_data_db' which is constructed using information from the 'raw_data_db' and 'config_tables_db'.
We keep getting errors on our slave server when it tries to duplicate the statements that are constructing the processed data.
Example:
[ERROR] Slave: Error 'Table 'raw_data_db.s253' doesn't exist' on query. Default database: 'data'. Query: 'CREATE TEMPORARY TABLE temp SELECT * FROM raw_data_db.s253 WHERE DateTimeVal>='2011/04/21 17:00:00' AND DateTimeVal<='2011/04/21 17:10:00'', Error_code: 1146
What I am assuming is happening is that the cross-db selects can't find the raw database because we aren't replicating it, and the data do not exist on the slave...or something along those lines?
So I tried using ignores, but we're still getting the errors
replicate-wild-ignore-table = raw_data_db.*
replicate-wild-ignore-table = data.temp*
Other configuration information:
replicate-rewrite-db = processed_data_db->data
replicate-do-db = data
Is it possible to replicate just the one database if all the tables are created from references to other databases? Any ideas on how to get around this error?
I looked into row-based replication, which seemed like it might do the trick, but it's only available in v5.1 or greater. Is there anything similar in earlier versions?
I fixed the ignore table statements to "data.%temp%", and it seems to be ignoring just fine, but I still can't replicate the tables I want because the insert statement is now referencing a table that doesn't exist.
ex.
Error 'Table 'data.temp' doesn't exist' on query. Default database: 'data'. Query: 'INSERT INTO abc SELECT FROM_UNIXTIME(AVG(UNIX_TIMESTAMP(DateTimeVal))), ROUND(AVG(Difference),3), ROUND(STDDEV(Difference),3), ROUND(AVG(Frequency),0), ROUND(AVG(SignalPower),1) FROM temp WHERE ABS(Difference)<'10000.0' AND Difference!='0''
The processing creates temporary tables from the raw database, averages all the values in the temporary table, and inserts the result into processed_data_db. Since I'm ignoring the CREATE statements, the slave doesn't have access to those tables, but the reason I'm ignoring them in the first place is that they reference tables outside of what I want to replicate. So I'm not sure how I should approach this; any suggestions would be greatly appreciated.
Temporary tables and replication options: By default, all temporary tables are replicated; this happens whether or not there are any matching --replicate-do-db, --replicate-do-table, or --replicate-wild-do-table options in effect. However, the --replicate-ignore-table and --replicate-wild-ignore-table options are honored for temporary tables.
http://dev.mysql.com/doc/refman/5.0/en/replication-features-temptables.html
edit:
Either replicate the raw_data_db and config_tables_db tables that are used in your INSERT query,
or use the DRBD protocol:
http://www.mysql.com/why-mysql/drbd/
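If you take the first route, the slave's my.cnf would gain a couple of lines alongside the options already quoted above. This is only a sketch, since which databases you actually need depends on what your INSERT statements reference:

replicate-rewrite-db = processed_data_db->data
replicate-do-db = data
replicate-do-db = raw_data_db
replicate-do-db = config_tables_db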

phpMyAdmin: MySQL Error 1062 - Duplicate entry

I connect with user "root" onto my database "test" which I host locally for development. Among others I have the table "ratingcomment". For some reason when I click on the table "ratingcomment" phpMyAdmin shows me the following error:
Error
SQL query:
INSERT INTO `phpmyadmin`.`pma_history` (
`username` ,
`db` ,
`table` ,
`timevalue` ,
`sqlquery`
)
VALUES (
'root', 'test', 'ratingcomment', NOW( ) , 'SELECT * FROM `ratingcomment`'
)
MySQL said:
#1062 - Duplicate entry '838' for key 'PRIMARY'
I used Google to find out the following:
"This indicates that you have a UNIQUE or PRIMARY index on a table, and there is a duplicate value someone on one of the values in one of these indexes."
But I still don't quite understand the error! I use a primary key which auto-increments for all of my tables, so there actually shouldn't be a problem with the table. I had another table named "rating" which had a column "comment". Can it be that this causes problems?
Quick fix:
REPAIR TABLE `phpmyadmin`.`pma_history`
If that fails, I'd just truncate/empty the table.
TRUNCATE TABLE `phpmyadmin`.`pma_history`
Although phpMyAdmin has its place in my toolbox, I personally don't use its internal db.
ADDENDUM
MyISAM tables can easily become corrupted. A couple of causes that usually hit me: MySQL not being shut down properly, or the table having a FULLTEXT index while the stopword file on disk has changed.
Simply stated, REPAIR just checks the data file for errors (and, depending on your options, makes it usable again) and rewrites the index file. Fair warning: with MyISAM, repairing a table can often toast all your data in that table in order to make it usable. See the docs for more details.
A Google search pertaining to this pma table being corrupted led me to this.
This appears to be an internal error. You've issued this query:
SELECT * FROM `ratingcomment`
phpMyAdmin tries to record that action in its internal event log and fails. If you Google for pma_history you'll find several references to that table being corrupted.
My advice is that you find another SQL client (such as HeidiSQL) and try to repair the phpMyAdmin database.
I know this is kinda late but I had the same problem and wanted to share what I did.
In phpMyAdmin, I went to the table's Operations tab, incremented the AUTO_INCREMENT value under Table options, and inserted a dummy record.
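The same thing can be done with plain SQL; a sketch, assuming the duplicated value was 838 as in the error above:

-- move the auto-increment counter past the duplicated value
ALTER TABLE `phpmyadmin`.`pma_history` AUTO_INCREMENT = 839;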