I have a problem that occurred today and has left me baffled. When I insert data into one of my database tables, called tbl_template_log, the row exists but it does not show in browse mode. I am using phpMyAdmin.
But when I click "Edit", the data appears correctly...
Hopefully my question is understandable; if not, ask me for additional details.
This is how my data appears in browse mode:
And here is the proof that the data in user_id actually exists after running the query SELECT * FROM tbl_template_log:
tbl_template_log structure:
CREATE TABLE IF NOT EXISTS `tbl_template_log` (
`templog_id` int(6) NOT NULL AUTO_INCREMENT,
`user_id` int(6) DEFAULT NULL,
`temp_id` int(6) DEFAULT NULL,
`savetemp_id` int(6) DEFAULT NULL,
`send_date` datetime NOT NULL,
`send_to` varchar(254) NOT NULL,
`email_send` text NOT NULL,
PRIMARY KEY (`templog_id`),
KEY `tbl_user.user_id` (`user_id`,`temp_id`,`savetemp_id`),
KEY `tbl_template.temp_id` (`temp_id`),
KEY `tbl_saved_template.savetemp_id` (`savetemp_id`),
KEY `user_id` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=85 ;
--
-- Dumping data for table `tbl_template_log`
--
INSERT INTO `tbl_template_log` (`templog_id`, `user_id`, `temp_id`, `savetemp_id`, `send_date`, `send_to`, `email_send`) VALUES
(83, 77, NULL, NULL, '2014-05-20 22:08:25', 'tomasz#onetwotrade.com', '<html blahh blahh>');
--
-- Constraints for table `tbl_template_log`
--
ALTER TABLE `tbl_template_log`
ADD CONSTRAINT `tbl_template_log_ibfk_1` FOREIGN KEY (`user_id`) REFERENCES `tbl_user` (`user_id`),
ADD CONSTRAINT `tbl_template_log_ibfk_2` FOREIGN KEY (`temp_id`) REFERENCES `tbl_template` (`temp_id`),
ADD CONSTRAINT `tbl_template_log_ibfk_3` FOREIGN KEY (`savetemp_id`) REFERENCES `tbl_saved_template` (`savedtemp_id`);
I have found the problem. Firstly, thank you to all who tried to help me. Funnily enough, the problem was with my Safari browser :o. I cleared my browsing history, cache & cookies, and then everything suddenly started to work.
Scroll all the way down in phpMyAdmin and change the "Show 30 row(s)" value to something like 100 or 500, or however many you want.
Your value is most probably on a different page.
I am using MySQL with Django. I am trying to count the number of visitor_page records for a specific dealer in a certain amount of time.
Here is the raw SQL query that I obtained from the Django debug toolbar.
SELECT COUNT(*) AS `__count`
FROM `visitor_page`
INNER JOIN `dealer_visitors`
ON (`visitor_page`.`dealer_visitor_id` = `dealer_visitors`.`id`)
WHERE (`visitor_page`.`date_time` BETWEEN '2021-02-01 05:51:00'
AND '2021-03-21 05:50:00'
AND `dealer_visitors`.`dealer_id` = 15)
The issue is that I have more than 13 million records in the visitor_page table and about 1.5 million records in the dealer_visitors table. I have already indexed date_time. I am thinking of using a materialized view, but before attempting that, I would really appreciate suggestions on how I could improve this query.
visitor_page schema:
CREATE TABLE `visitor_page` (
`id` int NOT NULL AUTO_INCREMENT,
`date_time` datetime(6) DEFAULT NULL,
`added_at` datetime(6) DEFAULT NULL,
`updated_at` datetime(6) DEFAULT NULL,
`page_id` int NOT NULL,
`dealer_visitor_id` int NOT NULL,
PRIMARY KEY (`id`),
KEY `visitor_page_page_id_246babdf_fk_web_page_id` (`page_id`),
KEY `visitor_page_dealer_visitor_id_e2dddea2_fk_dealer_visitors_id` (`dealer_visitor_id`),
KEY `visitor_page_date_time_06e9e9f5` (`date_time`),
CONSTRAINT `visitor_page_dealer_visitor_id_e2dddea2_fk_dealer_visitors_id` FOREIGN KEY (`dealer_visitor_id`) REFERENCES `dealer_visitors` (`id`),
CONSTRAINT `visitor_page_page_id_246babdf_fk_web_page_id` FOREIGN KEY (`page_id`) REFERENCES `web_page` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=13626649 DEFAULT CHARSET=latin1;
dealer_visitors schema:
CREATE TABLE `dealer_visitors` (
`id` int NOT NULL AUTO_INCREMENT,
`visit_date` datetime(6) DEFAULT NULL,
`added_at` datetime(6) DEFAULT NULL,
`updated_at` datetime(6) DEFAULT NULL,
`dealer_id` int NOT NULL,
`visitor_id` int NOT NULL,
`type` int DEFAULT NULL,
`notes` longtext,
`location` varchar(100) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `dealer_visitors_dealer_id_306e2202_fk_dealer_id` (`dealer_id`),
KEY `dealer_visitors_visitor_id_27ae498e_fk_visitor_id` (`visitor_id`),
KEY `dealer_visitors_type_af0f7d79` (`type`),
KEY `dealer_visitors_visit_date_f2b138c9` (`visit_date`),
CONSTRAINT `dealer_visitors_dealer_id_306e2202_fk_dealer_id` FOREIGN KEY (`dealer_id`) REFERENCES `dealer` (`id`),
CONSTRAINT `dealer_visitors_visitor_id_27ae498e_fk_visitor_id` FOREIGN KEY (`visitor_id`) REFERENCES `visitor` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1524478 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
Running EXPLAIN ANALYZE on the query gives me the following:
EXPLAIN:
For this query:
SELECT COUNT(*) AS `__count`
FROM visitor_page vp JOIN
dealer_visitors dv
ON vp.dealer_visitor_id = dv.id
WHERE vp.date_time BETWEEN '2021-02-01 05:51:00' AND '2021-03-21 05:50:00' AND
dv.dealer_id = 15;
The best indexes are on dealer_visitors(dealer_id, id) and visitor_page(dealer_visitor_id, date_time). (Note that date_time is a visitor_page column, so it cannot appear in a dealer_visitors index.)
An index on date_time alone helps a bit, but you are retrieving a month's worth of data, and that can be a lot of data to process. Having dealer_id as the first column in its index restricts the scan to only the rows for that dealer; the (dealer_visitor_id, date_time) index then restricts the joined rows to the time frame.
Depending on the distribution of the data, the Optimizer might pick one of the tables to start with, or pick the other. So, let's provide optimal indexes for each case:
ON `visitor_page`.`dealer_visitor_id` = `dealer_visitors`.`id`
WHERE `visitor_page`.`date_time` BETWEEN ...
AND `dealer_visitors`.`dealer_id` = 15
Starting with visitor_page:
visitor_page: INDEX(date_time) -- (already exists)
dealer_visitors: (already has PRIMARY KEY(id))
Starting with dealer_visitors:
dealer_visitors: INDEX(dealer_id) -- (already exists)
visitor_page: INDEX(dealer_visitor_id, date_time) -- in this order
and drop visitor_page_dealer_visitor_id_e2dddea2_fk_dealer_visitors_id, which the new composite index makes redundant.
The net is to add one index and drop one index.
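A minimal DDL sketch of that net change (the new index name is just illustrative). Adding the composite first keeps the dealer_visitor_id foreign key satisfied, so the single-column key can then be dropped safely:
-- Add the composite index; its leading column also serves the FK.
ALTER TABLE visitor_page
  ADD INDEX visitor_page_dv_date_time (dealer_visitor_id, date_time);
-- Now the old single-column index is redundant.
ALTER TABLE visitor_page
  DROP INDEX visitor_page_dealer_visitor_id_e2dddea2_fk_dealer_visitors_id;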
Materialized view -- It is often best for Data Warehouse reports to build and incrementally maintain a "summary table" (a "materialized view"). The very odd date range (1 month + 20 days - 61 seconds) makes this clumsy to do. Typically it is handy to make the table based on whole days. If you can shift to daily (or hourly), then see http://mysql.rjweb.org/doc.php/summarytables
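For illustration, a minimal daily summary-table sketch (the table and column names here are assumed, not from the question); it would be refreshed incrementally, for example once per day shortly after midnight:
-- Assumed summary table: one row per dealer per day.
CREATE TABLE page_counts_daily (
  dealer_id INT NOT NULL,
  day DATE NOT NULL,
  page_count INT UNSIGNED NOT NULL,
  PRIMARY KEY (dealer_id, day)
);
-- Fold in yesterday's rows; reports then SUM(page_count) over whole days.
INSERT INTO page_counts_daily (dealer_id, day, page_count)
SELECT dv.dealer_id, DATE(vp.date_time), COUNT(*)
FROM visitor_page vp
JOIN dealer_visitors dv ON vp.dealer_visitor_id = dv.id
WHERE vp.date_time >= CURDATE() - INTERVAL 1 DAY
  AND vp.date_time < CURDATE()
GROUP BY dv.dealer_id, DATE(vp.date_time);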
Something else to check: How much RAM do you have? What does SHOW VARIABLES LIKE 'innodb_buffer_pool_size'; say?
I see that the tables have different charset/collation. This is not a problem for the query in question, but if you have other queries that JOIN on VARCHARs, check that they use the same collation.
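A quick way to audit that (information_schema is standard MySQL, so this should work as-is):
-- List every string column's collation in the current schema to spot mismatches.
SELECT table_name, column_name, collation_name
FROM information_schema.columns
WHERE table_schema = DATABASE()
  AND collation_name IS NOT NULL
ORDER BY collation_name, table_name;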
I created two tables from phpMyAdmin like this:
CREATE TABLE customers (
id int(11) NOT NULL AUTO_INCREMENT,
name varchar(245) DEFAULT NULL,
place varchar(245) DEFAULT NULL,
email varchar(245) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
and another one like this
CREATE TABLE `orders` (
id int(11) NOT NULL AUTO_INCREMENT,
menu_name varchar(245) DEFAULT NULL,
menu_id int(11) DEFAULT NULL,
date_of_order date DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `FK orders menu_id customer id_idx` (`menu_id`),
CONSTRAINT `FK orders menu_id customer id` FOREIGN KEY (`menu_id`)
REFERENCES `customers` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
After this I inserted a value into the first table, 'customers', like this:
Now, when I insert values into the 'orders' table, the phpMyAdmin linter displays an error like this:
However, strangely, when I click 'Go' the query works fine. It also works fine through the command line. So is it a bug, or do I have to write it in a different way?
It's a bug in the phpMyAdmin SQL query parser's handling of subqueries. The issue has been opened but has not been addressed yet.
You have some alternatives here:
Adminer
Or you can try a different MySQL client:
MySQL Workbench
HeidiSQL
Yes, phpMyAdmin version 4.5.1 had the bug @Shaharyar mentioned above. I apologize for not posting the version before. Updating to version 4.6.3 fixed the issue. Thank you.
I'm looking to append a comments table from one WordPress site to another. The users are different. When I import the comments from site B to A, I run into a duplicate key issue; comment_id is already taken.
So how can I resolve this and append the table with a simple .sql file? Would I have to take the user information, generate a new user, check for comments made on site B, pull the content and post ID, then go back to site A and recreate the comment for the newly created user!?
What a headache! Thanks.
If your only problem is a duplicate key issue, go to the end of your SQL file, find
ENGINE=MyISAM
and make it
ENGINE=MyISAM AUTO_INCREMENT=<a number above the last id in the new database>
or
query database A for the last id, then add one and use it in a new insert query (see the sketch after Example 1 below).
Example 1:
CREATE TABLE IF NOT EXISTS `movies` (
`id` int(255) NOT NULL AUTO_INCREMENT,
`title` varchar(255) NOT NULL,
`year` int(4) NOT NULL,
`size` varchar(255) NOT NULL,
`added` date NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `title` (`title`,`year`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ;
The Inserts From My Dump:
INSERT INTO `movies` (`title`, `year`, `size`, `added`) VALUES
('[REC] 2', 0, '716688', '2011-09-23'),
('5 Days of War', 0, '1435406', '2012-01-09'),
('[REC]', 0, '1353420800', '2011-11-06');
See how I didn't include the primary key (id) in my inserts, but MySQL will still check against my UNIQUE KEY to see if the title exists. Just a little demo that hopefully helps out. If your table already exists in the new database, skip straight to the inserts and don't include the primary key; it will automatically be set to the next available value on each insert.
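For the second option, a hedged sketch, assuming WordPress's standard wp_comments table and a staged copy of site B's comments in a scratch table I'll call site_b_comments:
-- Shift site B's comment ids past site A's maximum so the ranges
-- cannot collide, then append the rows.
SELECT @offset := MAX(comment_ID) FROM wp_comments;
UPDATE site_b_comments SET comment_ID = comment_ID + @offset;
INSERT INTO wp_comments SELECT * FROM site_b_comments;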
I've got a weird problem on a MySQL table. When trying to insert a new row, it says the primary key is duplicated. My primary key is auto-incremented and is not set within my query (it is assigned automatically by MySQL).
The problem is that I get a "Duplicate primary key" error on a key value that doesn't even exist (I checked). I solved the problem by increasing the current AUTO_INCREMENT value, but I can't understand how it happened.
Any help would be great.
Edit
Table creation
CREATE TABLE `articles_mvt` (
`id` int(10) NOT NULL AUTO_INCREMENT,
`ext_article_id` int(5) NOT NULL,
`date_mvt` date NOT NULL,
`qte` float(4,2) NOT NULL,
`in_out` enum('in','out') NOT NULL,
`ext_nateco_id` int(5) NOT NULL,
`ext_agent_id` int(5) NOT NULL COMMENT 'Demandeur',
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1647 ;
Problematic query
INSERT INTO articles_mvt (
`ext_article_id`,
`date_mvt`,
`qte`,
`in_out`,
`ext_nateco_id`,
`ext_agent_id`
)
VALUES (
'".$_POST["numArticle"]."',
'".dateSql($_POST["date_mvt"])."',
".$_POST["qte_entier"].".".$_POST["qte_virgule"].",
'".$_POST["in_out"]."',
".$_POST["numNateco"].",
".$_POST["demandeur"]."
)
FYI variables are sanitized earlier in the code ;)
Well, I think that at the time you created the table, the auto-increment flag was not set on the primary key. In that case 0 is inserted as the id, and the second such insert gives a duplicate key error, like this:
ID  Value
0   A     -- OK, no error
0   ff    -- duplicate key error
Or you may have tried to insert a row whose id already exists, like this:
ID  Value
11  A     -- OK, no error
11  ff    -- duplicate key error
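For reference, a minimal sketch of the fix the asker describes, bumping the counter past the highest existing id (1647 is just the value from the dump above):
-- Set the next auto-increment value explicitly; MyISAM silently raises
-- it to MAX(id) + 1 if the given value is too low.
ALTER TABLE articles_mvt AUTO_INCREMENT = 1647;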
Need some advice working with EF4 and MySql.
I have a table with lots of data items. Each item belongs to a module and a zone. The data item also has a timestamp (ticks). The most common usage is for the app to query for data after a specified time for a module and a zone. The data should be sorted.
The problem is that the query selects too many rows and the database server runs low on memory, resulting in a very slow query. I tried to limit the query to 100 items, but the generated SQL only applies the limit after all the items have been selected and sorted.
dataRepository.GetData().WithModuleId(ModuleId).InZone(ZoneId).After(ztime)
    .OrderBy(p => p.Timestamp).Take(100).ToList();
Generated SQL by the MySql .Net Connector 6.3.6
SELECT
`Project1`.`Id`,
`Project1`.`Data`,
`Project1`.`Timestamp`,
`Project1`.`ModuleId`,
`Project1`.`ZoneId`,
`Project1`.`Version`,
`Project1`.`Type`
FROM (SELECT
`Extent1`.`Id`,
`Extent1`.`Data`,
`Extent1`.`Timestamp`,
`Extent1`.`ModuleId`,
`Extent1`.`ZoneId`,
`Extent1`.`Version`,
`Extent1`.`Type`
FROM `DataItems` AS `Extent1`
WHERE ((`Extent1`.`ModuleId` = 1) AND (`Extent1`.`ZoneId` = 1)) AND
(`Extent1`.`Timestamp` > 634376753657189002)) AS `Project1`
ORDER BY
`Timestamp` ASC LIMIT 100
Table definition
CREATE TABLE `mydb`.`DataItems` (
`Id` bigint(20) NOT NULL AUTO_INCREMENT,
`Data` mediumblob NOT NULL,
`Timestamp` bigint(20) NOT NULL,
`ModuleId` bigint(20) NOT NULL,
`ZoneId` bigint(20) NOT NULL,
`Version` int(11) NOT NULL,
`Type` varchar(1000) NOT NULL,
PRIMARY KEY (`Id`),
KEY `IX_FK_ModuleDataItem` (`ModuleId`),
KEY `IX_FK_ZoneDataItem` (`ZoneId`),
KEY `Index_4` (`Timestamp`),
KEY `Index_5` (`ModuleId`,`ZoneId`),
CONSTRAINT `FK_ModuleDataItem` FOREIGN KEY (`ModuleId`) REFERENCES
`Modules` (`Id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
CONSTRAINT `FK_ZoneDataItem` FOREIGN KEY (`ZoneId`) REFERENCES `Zones`
(`Id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=22904 DEFAULT CHARSET=utf8;
All suggestions on how to solve this are welcome.
What's your GetData() method doing? I'd bet it's executing a query on the entire table, and that's why your Take(100) at the end isn't doing anything.
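Separately, a hedged suggestion beyond what this answer covers: extending the existing (ModuleId, ZoneId) index with the sort column would let MySQL read the first 100 matching rows in index order instead of sorting the whole range (the index name below is just illustrative):
-- Covers both equality filters plus the ORDER BY column, so
-- ORDER BY Timestamp ... LIMIT 100 can stop after 100 index entries.
ALTER TABLE DataItems
  ADD INDEX IX_Module_Zone_Timestamp (ModuleId, ZoneId, Timestamp);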
I solved this by using the Table Splitting method described here:
Table splitting in entity framework