MySQL File Import Issue When Using CURRENT_TIMESTAMP in a Column

I have a table that uses CURRENT_TIMESTAMP as the default value on a column. The table works fine, but when I export the database and then import the exported MySQL file again, all the values in the CURRENT_TIMESTAMP column are replaced with today's date and time.
This is the table structure:
CREATE TABLE a (
id int NOT NULL AUTO_INCREMENT,
col0 varchar(5) NOT NULL ,
col1 varchar(10) NOT NULL,
col2 varchar(20) ,
col3 varchar(20) NOT NULL,
createDateTime timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (id))
ENGINE=InnoDB
AUTO_INCREMENT=7430
DEFAULT CHARSET=utf8
COLLATE=utf8_unicode_ci

Just replace createDateTime timestamp with createDateTime datetime
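If you want to make that change on the existing table rather than in the dump, a minimal sketch would be the ALTER below; note that DATETIME columns only accept a CURRENT_TIMESTAMP default in MySQL 5.6.5 and later:
ALTER TABLE a
  MODIFY createDateTime datetime NOT NULL DEFAULT CURRENT_TIMESTAMP;
-- On versions before 5.6.5, omit the DEFAULT clause and set the value from the application.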

I can say that the problem is here:
CREATE TABLE a (
id int NOT NULL AUTO_INCREMENT,
col0 varchar(5) NOT NULL ,
col1 varchar(10) NOT NULL,
col2 varchar(20) ,
col3 varchar(20) NOT NULL,
createDateTime timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, <--- **Here
PRIMARY KEY (id))
ENGINE=InnoDB
AUTO_INCREMENT=7430
DEFAULT CHARSET=utf8
COLLATE=utf8_unicode_ci;
Because createDateTime is a TIMESTAMP column, older MySQL versions can attach ON UPDATE CURRENT_TIMESTAMP behaviour to it automatically (the first TIMESTAMP column in a table gets the automatic attributes unless you disable them), so whenever there are changes to the row data the column can update itself to the current time.
I experienced this once while building an app where I defined my createdDate column with a CURRENT_TIMESTAMP default, similar to yours. Then I realized that every time I changed some information through the app, the column was also updated to the current timestamp, which ultimately messed up my data!
I think what you can do with the exported file (SQL dump) is:
Open the dump file in a text editor (caveat: if the dump file is very large, it may not be easy to open).
Locate the CREATE TABLE statement in the dump file and change the following:
CREATE TABLE a (
id int NOT NULL AUTO_INCREMENT,
col0 varchar(5) NOT NULL ,
col1 varchar(10) NOT NULL,
col2 varchar(20) ,
col3 varchar(20) NOT NULL,
createDateTime timestamp NOT NULL, <--- **Here
PRIMARY KEY (id))
ENGINE=InnoDB
AUTO_INCREMENT=7430
DEFAULT CHARSET=utf8
COLLATE=utf8_unicode_ci;
Save the dump file and try importing it.
If that doesn't work, I suggest:
Create another table using the CREATE TABLE statement with the modified createDateTime column above - name it a1 or a_copy, etc.
Run an INSERT query from table a like:
INSERT INTO a_copy SELECT * FROM a;
Check whether the data matches between the two tables - I usually run a LEFT JOIN query for a quick check like this:
SELECT a.id, a_copy.id FROM a
LEFT JOIN a_copy
ON a.id=a_copy.id
AND a.col0=a_copy.col0
AND a.col1=a_copy.col1
AND a.col2=a_copy.col2
AND a.col3=a_copy.col3
AND a.createDateTime=a_copy.createDateTime
WHERE a_copy.id IS NULL;
*The ON condition can be just ON a.id=a_copy.id AND a.createDateTime=a_copy.createDateTime; the WHERE a_copy.id IS NULL clause simply shows any rows that don't match.
Once you're satisfied, export and import the a_copy table.
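If you want the automatic default back on the copied table afterwards (for future inserts) without the auto-update behaviour, one possible follow-up is:
ALTER TABLE a_copy
  MODIFY createDateTime timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP;
-- Declaring only DEFAULT (and no ON UPDATE) stamps new rows on insert
-- but leaves existing values alone on later updates.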

Related

Hash of two columns in mysql

I have a MySQL table with 5 columns in it:
id bigint
name varchar
description varchar
slug
Can I get MySQL to automatically generate the value of slug as a 256-bit hash of name+description?
I am currently using PHP to generate a SHA-256 value for the slug prior to saving it.
Edit:
By automatic, I mean I want to see whether it's possible to change the default value of the slug field so that it is a computed field holding the SHA-256 of name+description.
I already know how to create it as part of an insert operation.
MySQL 5.7 supports generated columns so you can define an expression, and it will be updated automatically for every row you insert or update.
CREATE TABLE IF NOT EXISTS MyTable (
id int NOT NULL AUTO_INCREMENT,
name varchar(50) NOT NULL,
description varchar(50) NOT NULL,
slug varchar(64) AS (SHA2(CONCAT(name, description), 256)) STORED NOT NULL,
PRIMARY KEY (id)
) DEFAULT CHARSET=utf8;
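A quick way to check the generated column (using hypothetical sample values):
INSERT INTO MyTable (name, description) VALUES ('Fred', 'A Person');
SELECT name, description, slug FROM MyTable;
-- slug is computed and stored automatically; it cannot be assigned directly.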
If you use an earlier version of MySQL, you could do this with TRIGGERs:
CREATE TRIGGER MySlugIns BEFORE INSERT ON MyTable
FOR EACH ROW SET NEW.slug = SHA2(CONCAT(NEW.name, NEW.description), 256);
CREATE TRIGGER MySlugUpd BEFORE UPDATE ON MyTable
FOR EACH ROW SET NEW.slug = SHA2(CONCAT(NEW.name, NEW.description), 256);
Beware that concat returns NULL if any one column in the input is NULL. So, to hash in a null-safe way, use concat_ws. For example:
select md5(concat_ws('', col_1, .. , col_n));
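Applied to the columns in the question, a null-safe version of the slug expression might look like:
SELECT SHA2(CONCAT_WS('', name, description), 256) AS slug FROM MyTable;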
Use MySQL's CONCAT() to combine the two values and SHA2() to generate a 256-bit hash.
CREATE TABLE IF NOT EXISTS `mytable` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(50) NOT NULL,
`description` varchar(50) NOT NULL,
`slug` varchar(64) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
INSERT INTO `mytable` (`name`,`description`,`slug`)
VALUES ('Fred','A Person',SHA2(CONCAT(`name`,`description`),256));
SELECT * FROM `mytable`
OUTPUT:
COLUMN VALUE
id 1
name Fred
description A Person
slug ea76b5b09b0e004781b569f88fc8434fe25ae3ad17807904cfb975a3be71bd89
Try it on SQLfiddle.

Unable to create table in Qubole similar to mysql

I want to create an external table in Qubole similar to a table created in MySQL. The CREATE TABLE query in MySQL is:
CREATE TABLE `mytable` (
`id` varchar(50) NOT NULL,
`v_count` int(11) DEFAULT NULL,
`l_visited` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`f_visited` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00'
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Can anyone help me write a similar query in Hive?
Try it this way:
CREATE TABLE page_view(viewTime INT, userid BIGINT,
page_url STRING, referrer_url STRING,
ip STRING COMMENT 'IP Address of the User')
COMMENT 'This is the page view table'
PARTITIONED BY(dt STRING, country STRING)
STORED AS SEQUENCEFILE;
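Applied to the table in the question, a minimal external-table sketch might look like the following (the storage location and the tab delimiter are assumptions, and Hive has no CURRENT_TIMESTAMP column default, so the timestamp values have to come from the data itself):
CREATE EXTERNAL TABLE mytable (
  id STRING,
  v_count INT,
  l_visited TIMESTAMP,
  f_visited TIMESTAMP
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION 's3://your-bucket/path/mytable/';  -- hypothetical location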
Follow these links:
link1
link2

How to optimize a mysql query having a large dataset

I have two tables with the following schemas:
CREATE TABLE `open_log` (
`delivery_id` varchar(30) DEFAULT NULL,
`email_id` varchar(50) DEFAULT NULL,
`email_activity` varchar(30) DEFAULT NULL,
`click_url` text,
`email_code` varchar(30) DEFAULT NULL,
`on_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE `sent_log` (
`email_id` varchar(50) DEFAULT NULL,
`delivery_id` varchar(50) DEFAULT NULL,
`email_code` varchar(50) DEFAULT NULL,
`delivery_status` varchar(50) DEFAULT NULL,
`tries` int(11) DEFAULT NULL,
`creation_ts` varchar(50) DEFAULT NULL,
`creation_dt` varchar(50) DEFAULT NULL,
`on_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
The email_id and delivery_id columns in both tables make up a unique key.
The open_log table has 2.5 million records, whereas the sent_log table has 0.25 million records.
I want to filter the records from the open_log table based on the unique key (email_id and delivery_id).
I'm writing the following query.
SELECT * FROM open_log
WHERE CONCAT(email_id,'^',delivery_id)
IN (
SELECT DISTINCT CONCAT(email_id,'^',delivery_id) FROM sent_log
)
The problem is that the query takes too long to execute. I waited an hour for it to complete without success.
Kindly suggest what I can do to make it fast, since the tables hold a lot of data.
Thanks,
Faisal Nasir
First, rewrite your query using exists:
SELECT *
FROM open_log ol
WHERE EXISTS (SELECT 1
FROM sent_log sl
WHERE sl.email_id = ol.email_id and sl.delivery_id = ol.delivery_id
);
Then, add an index so this query will run faster:
create index idx_sentlog_emailid_deliveryid on sent_log(email_id, delivery_id);
Your query is slow for a variety of reasons:
The use of string concatenation makes it impossible for MySQL to use an index.
The select distinct in the subquery is unnecessary.
Exists can be faster than in.
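To confirm that the rewritten query can actually use the new index, you could check the plan with EXPLAIN, for example:
EXPLAIN
SELECT *
FROM open_log ol
WHERE EXISTS (SELECT 1
FROM sent_log sl
WHERE sl.email_id = ol.email_id AND sl.delivery_id = ol.delivery_id
);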
If this query is run often, you can speed it up greatly by creating a BIGINT hash column, even if it is not unique.
For example, you can add the column and keep it filled with a trigger, like this:
alter table sent_log add column for_get bigint;
After that, create a trigger (and run an UPDATE for existing rows) to put a hash into that BIGINT:
for_get=CONV(substr(md5(concat(email_id, delivery_id)),1,10),16,10)
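A minimal sketch of that setup for sent_log (trigger and index names are made up; the same would be repeated for open_log) might be:
-- one-time backfill of existing rows
UPDATE sent_log
SET for_get = CONV(SUBSTR(MD5(CONCAT(email_id, delivery_id)), 1, 10), 16, 10);
-- keep it filled for new rows
CREATE TRIGGER trg_sent_log_for_get BEFORE INSERT ON sent_log
FOR EACH ROW
SET NEW.for_get = CONV(SUBSTR(MD5(CONCAT(NEW.email_id, NEW.delivery_id)), 1, 10), 16, 10);
-- and index it
CREATE INDEX idx_sent_log_for_get ON sent_log (for_get);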
If you have such a column in both tables, with an index on it, the query will look like:
SELECT *
FROM open_log ol
left join sent_log sl on sl.for_get=ol.for_get
WHERE sl.email_id is not null and sl.email_id = ol.email_id and sl.delivery_id = ol.delivery_id;
That query will be fast.

MySQL Error "There can be only one TIMESTAMP column with CURRENT_TIMESTAMP in DEFAULT clause" even though I'm doing nothing wrong

CREATE TABLE AlarmHistory
(
id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
value DOUBLE NOT NULL,
startedStamp TIMESTAMP NOT NULL,
finishedStamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL
);
When trying to create the above table I get the following error: "SQL Error (1293): Incorrect table definition; there can be only one TIMESTAMP column with CURRENT_TIMESTAMP in DEFAULT or ON UPDATE clause".
My question is: is this a bug? Because sure, I have two TIMESTAMP columns, but only ONE of them has a default definition. When I remove startedStamp I get no errors.
Per the MySQL manual, version 5.5, Automatic Initialization and Updating for TIMESTAMP
With neither DEFAULT CURRENT_TIMESTAMP nor ON UPDATE CURRENT_TIMESTAMP, it is the same as specifying both DEFAULT CURRENT_TIMESTAMP and ON UPDATE CURRENT_TIMESTAMP.
CREATE TABLE t1 (
ts TIMESTAMP
);
However,
With a constant, the default is the given value. In this case, the column has no automatic properties at all.
CREATE TABLE t1 (
ts TIMESTAMP DEFAULT 0
);
So, this should work:
CREATE TABLE AlarmHistory
(
id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
value DOUBLE NOT NULL,
startedStamp TIMESTAMP DEFAULT 0 NOT NULL,
finishedStamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL
);
fiddle
This is a limitation of MySQL 5.5. You need to upgrade to version 5.6 or later.
I was getting this error when adding a table in MySQL:
Incorrect table definition; there can be only one TIMESTAMP column with CURRENT_TIMESTAMP in DEFAULT or ON UPDATE clause
My new MySQL table looks something like this:
create table table_name (
col1 int(5) auto_increment primary key,
col2 varchar(300),
col3 varchar(500),
col4 int(3),
col5 tinyint(2),
col6 timestamp default current_timestamp,
col7 timestamp default current_timestamp on update current_timestamp,
col8 tinyint(1) default 0,
col9 tinyint(1) default 1
);
After some time spent reading about changes across MySQL versions and some googling, I found out that there were changes made in MySQL 5.6 compared to 5.5.
This article will help you to resolve the issue.
http://www.oyewiki.com/MYSQL/Incorrect-table-definition-there-can-be-only-one-timestamp-column

alter table statement to insert duplicates into another table

I have a table with a column SP varchar(10) NOT NULL. I want that column to always be unique, so I created a unique index on it. My table schema is as follows:
CREATE TABLE IF NOT EXISTS `tblspmaster` (
`CSN` bigint(20) NOT NULL AUTO_INCREMENT,
`SP` varchar(10) NOT NULL,
`FileImportedDate` date NOT NULL,
`AMZFileName` varchar(50) NOT NULL,
`CasperBatch` varchar(50) NOT NULL,
`BatchProcessedDate` date NOT NULL,
`ExpiryDate` date NOT NULL,
`Region` varchar(50) NOT NULL,
`FCCity` varchar(50) NOT NULL,
`VendorID` int(11) NOT NULL,
`LocationID` int(11) NOT NULL,
PRIMARY KEY (`CSN`),
UNIQUE KEY `SP` (`SP`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=10000000000 ;
Now I want that, if anybody tries to insert a duplicate record, that record should be inserted into a secondary table named tblDuplicate.
I have gone through this question, MySQL - ignore insert error: duplicate entry, but I am not sure whether, instead of
INSERT INTO tbl VALUES (1,200) ON DUPLICATE KEY UPDATE value=200;
I can insert the duplicate row into another table.
What changes need to be made to the main table schema or the index column?
Note: data will be inserted by importing Excel or CSV files; the Excel files generally contain 500k to 800k records, but there will be only one single column.
I believe you want to use a trigger for this. Here is the MySQL reference chapter on triggers.
Use a before insert trigger. In the trigger, check if the row is a duplicate (maybe count(*) where key column value = value to be inserted). If the row is a duplicate, perform an insert into your secondary table.
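A minimal sketch of that idea (the trigger name and the tblDuplicate column list are assumptions) could look like this:
DELIMITER $$
CREATE TRIGGER trg_tblspmaster_before_insert BEFORE INSERT ON tblspmaster
FOR EACH ROW
BEGIN
  -- If the incoming SP already exists, copy the incoming row to tblDuplicate.
  IF (SELECT COUNT(*) FROM tblspmaster WHERE SP = NEW.SP) > 0 THEN
    INSERT INTO tblDuplicate (SP, FileImportedDate, AMZFileName)
    VALUES (NEW.SP, NEW.FileImportedDate, NEW.AMZFileName);
  END IF;
END$$
DELIMITER ;
Load the files with INSERT IGNORE (or LOAD DATA ... IGNORE) so that a duplicate row is skipped by the unique index on SP after the trigger has already written the copy to tblDuplicate, instead of the whole statement failing.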