Partition exchange column type or size mismatch (ORA-14097) - partitioning

I'm trying to do an exchange partition on a database and I'm getting the following error: ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
The script that does this already existed, and it ran as expected on an Oracle 11g database. As soon as I upgraded to 12c, I got this problem. This is how I'm doing the partition exchange:
-- The new partitioned table.
CREATE TABLE NEW_TABLE
(
id NUMBER(18) NOT NULL,
message VARCHAR2(4000) NOT NULL,
details VARCHAR2(4000),
partition_time TIMESTAMP(6) DEFAULT to_timestamp('01-01-2016','dd-mm-yyyy HH24:MI') NULL
) NOCOMPRESS LOGGING
PARTITION BY RANGE (partition_time) INTERVAL (NUMTODSINTERVAL(1,'HOUR'))
(PARTITION initial VALUES LESS THAN (to_timestamp('01-01-2016','dd-mm-yyyy HH24:MI')));
-- The old table.
CREATE TABLE OLD_TABLE
(
id NUMBER(18,0) NOT NULL,
message VARCHAR2(4000 byte) NOT NULL,
details VARCHAR2(4000)
);
-- Add the column that does not exist on the old table (keep the same columns).
ALTER TABLE OLD_TABLE ADD partition_time TIMESTAMP(6) DEFAULT to_timestamp('01-01-2016','dd-mm-yyyy HH24:MI') NULL;
ALTER TABLE NEW_TABLE
EXCHANGE PARTITION INITIAL
WITH TABLE OLD_TABLE
WITHOUT VALIDATION;
(...)
Now, once again: on Oracle 11g this worked perfectly. On Oracle 12c I get the error explained above. I did some research and saw people talking about INVISIBLE columns. Well, I recreated the OLD_TABLE, so I think there should be no invisible columns.
EDIT:
I've realized that on Oracle 12c, when I alter the table to add a new column, another invisible column is created (named SYS_NC00011$). This is why the partition exchange is not working. My question now is: why is this happening, and what is the best way to "remove this column"? I already tried to drop unused columns, with no success.
Thank you guys!

We recently ran into the same error. Similar to your case, the error was triggered by a hidden column (and it wasn't even Easter ;-). In our case the hidden column was caused by an ALTER TABLE xxx DROP COLUMN yyy on a compressed table.
In your case, it seems very likely that the hidden column is created by the ALTER TABLE xxx ADD COLUMN yyy NULL. As the article DDL Optimization in Oracle Database 12c and this answer explain, adding a nullable column with a DEFAULT does some data dictionary magic and adds a hidden column to track, for each row, whether the new column has been written to.
CREATE TABLE old_table (
id NUMBER(18,0) NOT NULL,
message VARCHAR2(4000 BYTE) NOT NULL,
details VARCHAR2(4000)
);
ALTER TABLE old_table ADD partition_time TIMESTAMP(6)
DEFAULT to_timestamp('01-01-2016','dd-mm-yyyy HH24:MI') NULL;
SELECT * FROM user_tab_cols WHERE table_name='OLD_TABLE';
ID NUMBER
MESSAGE VARCHAR2
DETAILS VARCHAR2
SYS_NC00004$ RAW
PARTITION_TIME TIMESTAMP(6)
So, to fix your case, either recreate the table including the column partition_time:
CREATE TABLE old_table (
id NUMBER(18,0) NOT NULL,
message VARCHAR2(4000 BYTE) NOT NULL,
details VARCHAR2(4000),
partition_time TIMESTAMP(6) DEFAULT DATE '2016-01-01'
);
or add the column without a DEFAULT:
ALTER TABLE OLD_TABLE ADD partition_time TIMESTAMP(6) NULL;
or disable the new feature (Doc Id 2277937.1):
ALTER SESSION SET "_add_col_optim_enabled" = FALSE;
ALTER TABLE old_table ADD partition_time TIMESTAMP(6)
DEFAULT to_timestamp('01-01-2016','dd-mm-yyyy HH24:MI') NULL;
SELECT * FROM user_tab_cols WHERE table_name='OLD_TABLE';
ID NUMBER
MESSAGE VARCHAR2
DETAILS VARCHAR2
PARTITION_TIME TIMESTAMP(6)
I haven't found a way yet to rebuild the table to get rid of the hidden column. ALTER TABLE MOVE does not help, only CREATE TABLE AS SELECT does.
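A minimal sketch of that rebuild (assuming the table from above; note that CTAS does not carry over indexes, grants, triggers or the DEFAULT clause, so those would need to be recreated):
CREATE TABLE old_table_clean AS SELECT * FROM old_table;  -- fresh segment, no hidden column
DROP TABLE old_table;
RENAME old_table_clean TO old_table;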

Another reliable solution, without compromising anything, is to create/rebuild OLD_TABLE (non-partitioned) using the FOR EXCHANGE clause. It's available only from Oracle version 12.2 onwards.
CREATE TABLE OLD_TABLE FOR EXCHANGE WITH TABLE NEW_TABLE;
It's not clear from your description whether OLD_TABLE is empty or has data in your case. If you have data, you can populate the new table using
INSERT INTO OLD_TABLE SELECT * FROM <old backup table>;
This avoids ORA-14097 (or ORA-00932 in some cases) during the exchange partition and gets the job done seamlessly.
Oracle recognized issues with exchange partition soon after introducing the DDL optimization related to the DEFAULT column attribute, and hence introduced the FOR EXCHANGE variant of the CTAS operation from 12.2 onwards.

Thanks to wolφi for highlighting the hidden columns and pointing me in the right direction.
I confirmed the hidden columns with the query below:
SELECT * FROM SYS.dba_tab_cols
I then recreated my staging table, including the system-generated column names with matching types and in the same order according to INTERNAL_COLUMN_ID.
The partition exchange still failed because the new columns were showing as USER_GENERATED='YES'.
The final fix was to mark the columns as unused:
ALTER TABLE STAGING_TABLE
set unused ("SYS_C00006_16092719:09:49$"
,"SYS_C00007_16092719:10:34$"
,"SYS_C00008_16092719:06:48$"
,"SYS_C00009_16092719:07:00$"
,"SYS_C00010_16092719:07:10$"
,"SYS_C00011_16092719:08:15$"
,"SYS_C00012_16092719:08:59$" );
After this the partition exchange worked.
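If you hit the same situation, a sanity check before retrying the exchange is to compare both sides column by column via the 12c dictionary views (the table names here are illustrative):
SELECT table_name, internal_column_id, column_name, data_type, hidden_column, user_generated
FROM user_tab_cols
WHERE table_name IN ('NEW_TABLE', 'STAGING_TABLE')
ORDER BY table_name, internal_column_id;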

The most obvious one is that NEW_TABLE has a PARTITION_TIME column, while OLD_TABLE does not.
Other things to check that might be an issue:
NEW_TABLE.ID is NUMBER(18), while OLD_TABLE.ID is NUMBER(18,0) (these are the same type, since NUMBER(18) implies scale 0).
OLD_TABLE.MESSAGE is VARCHAR2(4000 BYTE). You should check your length semantics, since if NEW_TABLE was created with CHAR semantics, NEW_TABLE.MESSAGE would be VARCHAR2(4000 CHAR).
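The length semantics in play can be checked in the data dictionary; CHAR_USED is 'B' for BYTE and 'C' for CHAR semantics:
SELECT table_name, column_name, char_used, char_length
FROM user_tab_columns
WHERE table_name IN ('NEW_TABLE', 'OLD_TABLE')
AND column_name = 'MESSAGE';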

Related

How can auto-incrementing be maintained when concurrent transactions occur on a compound key in MySQL?

I recently encountered an error in my application with concurrent transactions. Previously, auto-incrementing for the compound key was implemented in the application itself using PHP. However, as I mentioned, the id got duplicated, and all sorts of issues happened, which I painstakingly fixed manually afterward.
Now I have read about related issues and found suggestions to use a trigger.
So I am planning on implementing a trigger somewhat like this.
DELIMITER $$
CREATE TRIGGER auto_increment_my_table
BEFORE INSERT ON my_table FOR EACH ROW
BEGIN
SET NEW.id = (SELECT MAX(id) + 1 FROM my_table WHERE type = NEW.type);
END $$
DELIMITER ;
But my doubt regarding concurrency still remains: what if this trigger is executed concurrently and both executions get the same MAX(id) when querying?
Is this the correct way to handle my issue or is there any better way?
An example of how to solve auto-incrementing on a compound index:
CREATE TABLE test ( id INT,
type VARCHAR(192),
value INT,
PRIMARY KEY (id, type) );
-- create an additional service table which will help
CREATE TABLE test_service ( type VARCHAR(192),
id INT AUTO_INCREMENT,
PRIMARY KEY (type, id) ) ENGINE = MyISAM;
-- create a trigger which will generate the id value for each new row
CREATE TRIGGER tr_bi_test_autoincrement
BEFORE INSERT
ON test
FOR EACH ROW
BEGIN
INSERT INTO test_service (type) VALUES (NEW.type);
SET NEW.id = LAST_INSERT_ID();
END
Creating a service table just to auto-increment a value seems less than ideal to me. – Mohamed Mufeed
This table is extremely tiny; you may delete all records except the one with the largest auto-incremented value per group at any time. – Akina
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=61f0dc36db25dd5f0cf4647d8970cdee
You may schedule removal of the excess rows (for example, daily) in a service event procedure.
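A sketch of that cleanup, keeping only the row with the largest id per type (it assumes the test_service table from the answer above):
DELETE s
FROM test_service AS s
JOIN (SELECT type, MAX(id) AS max_id
      FROM test_service
      GROUP BY type) AS m
  ON s.type = m.type AND s.id < m.max_id;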
I have managed to solve this issue.
The answer was somewhat in the direction of Akina's answer, but not quite exactly.
The way I solved it did indeed involve an additional table, but not the way he suggested.
I created an additional table to store metadata about transactions.
E.g., I had a journals table like this:
CREATE TABLE `journals` (
`id` bigint NOT NULL AUTO_INCREMENT,
`type` smallint NOT NULL DEFAULT '0',
`trans_no` bigint NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
KEY `transaction` (`type`,`trans_no`)
)
So I created a meta_journals table like this
CREATE TABLE `meta_journals` (
`type` smallint NOT NULL,
`next_trans_no` bigint NOT NULL,
PRIMARY KEY (`type`)
)
and seeded it with all the different types of journals and the next sequence number.
And whenever I insert a new transaction into journals, I make sure to increment the next_trans_no of the corresponding type in the meta_journals table. This increment is issued inside the same database TRANSACTION, i.e. inside the same BEGIN and COMMIT.
This allowed me to use the exclusive lock acquired by the UPDATE statement on the row of the meta_journals table. So when two insert statements are issued for the journal concurrently, one has to wait until the lock acquired by the other transaction is released by COMMITing.
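A minimal sketch of that pattern; the LAST_INSERT_ID(expr) trick returns the incremented value on the same connection, and the journal type 1 plus the column list are placeholders:
START TRANSACTION;
-- the UPDATE takes an exclusive row lock on this type's row, serializing
-- concurrent inserts of the same journal type until COMMIT
UPDATE meta_journals
SET next_trans_no = LAST_INSERT_ID(next_trans_no + 1)
WHERE type = 1;
INSERT INTO journals (type, trans_no)
VALUES (1, LAST_INSERT_ID());
COMMIT;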

MySQL duplication

Using SQLyog, I was testing whether the correct value was set into the table.
And I tried
SELECT type_service FROM service WHERE email='test#gmail.com'
So, only one result was output.
type_service
0
To continue testing, I tried to force the value to 1, which gave the warning:
Warning
There are 2 duplicates of the row you are trying to update. Do you
want to update all the duplicates?
Note: You can turn off this warning by unchecking Tools -> Preferences -> Others -> Prompt if multiple rows are getting updated.
But I thought I had already placed limitations with the WHERE clause, so I pushed Yes.
As a result, the value of the type_service column was changed to 1 in all rows.
Why?
You have 2 exact duplicate rows in the table. Exact. It is a friendly warning, but it most likely needs to be addressed by a slight schema change.
The simplest solution is to alter the table and add an auto_increment primary key column.
Mysql Alter Table Manual page here.
See this Webyog FAQ link.
Whenever I am about to spin up another table, I usually stub it out like:
create table blah
(
id int auto_increment primary key,
...
...
...
);
for safety's sake.
Were you not to have the auto_increment PK, see the following.
create table people
(
firstName varchar(40) not null,
lastName varchar(40) not null,
age int not null
);
insert people (firstName,lastName,age) values ('Kim','Billings',30),('Kim','Billings',30),('Kim','Billings',30);
select * from people;
-- this could be bad:
update people
set age=40
where firstName='Kim' and lastName='Billings';
ALTER TABLE people ADD id INT PRIMARY KEY AUTO_INCREMENT;
select * from people; -- much nicer now, schema has an id column starting at 1
-- you now at least have a way to uniquely identify a row

What is automatically populating this column?

I am using MySQL 5.6.1 on Windows 7 64-bit.
I have a standard audit column I add to all my tables called CRT_TS (create timestamp), along with a UPD_TS (update timestamp) column. I had planned on populating these via a before insert trigger and a before update trigger using utc_timestamp().
The UPD_TS column behaves as I expect it to. However, the CRT_TS column seems to be getting automatically populated without my defining a default or trigger for that column.
I was able to reproduce this behavior by running the following script.
create schema `test` default character set utf8 collate utf8_general_ci;
drop table if exists test.TEST_TABLE;
create table test.TEST_TABLE(
TEST_ID int not null auto_increment ,
CRT_TS timestamp not null ,
UPD_TS timestamp not null ,
TEST_ALIAS varchar(64) not null ,
primary key PK_PERM (TEST_ID) ,
unique index UI_PERM_01 (TEST_ALIAS) )
auto_increment = 1001;
insert into test.TEST_TABLE
(TEST_ID
,TEST_ALIAS)
values
(1
,'testing');
select *
from test.TEST_TABLE;
In the above example, the CRT_TS column isn't being supplied a value, yet it is being populated with the same value that the now() function would have provided. The UPD_TS column is populated with all zeros, even though both columns are defined identically.
My question is: what is populating the CRT_TS column? I am attempting to set both the UPD_TS and CRT_TS columns to the utc_timestamp() value. Even when I set the value in a trigger for CRT_TS, the value is overridden.
Thanks for any clarity you can provide.
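For what it's worth, this matches MySQL's automatic TIMESTAMP initialization: unless explicit_defaults_for_timestamp is enabled, the first TIMESTAMP column in a table implicitly gets DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, while later TIMESTAMP NOT NULL columns default to '0000-00-00 00:00:00' (the zeros seen in UPD_TS). A sketch of how to surface and suppress the implicit attributes (the constant default is a placeholder):
SHOW CREATE TABLE test.TEST_TABLE;  -- reveals the implicit attributes on CRT_TS
-- declaring explicit defaults suppresses the automatic behavior,
-- so values assigned in the triggers stick
create table test.TEST_TABLE(
TEST_ID int not null auto_increment ,
CRT_TS timestamp not null default '1971-01-01 00:00:00' ,
UPD_TS timestamp not null default '1971-01-01 00:00:00' ,
TEST_ALIAS varchar(64) not null ,
primary key PK_PERM (TEST_ID) ,
unique index UI_PERM_01 (TEST_ALIAS) )
auto_increment = 1001;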

MySQL Auto-Inc Bug?

In my MySQL table I've created an ID column which I'm hoping to auto-increment in order for it to be the primary key.
I've created my table:
CREATE TABLE `test` (
`id` INT( 11 ) NOT NULL AUTO_INCREMENT PRIMARY KEY ,
`name` VARCHAR( 50 ) NOT NULL ,
`date_modified` DATETIME NOT NULL ,
UNIQUE (
`name`
)
) TYPE = INNODB;
Then I inserted my records:
INSERT INTO `test` ( `id` , `name` , `date_modified` )
VALUES (
NULL , 'TIM', '2011-11-16 12:36:30'
), (
NULL , 'FRED', '2011-11-16 12:36:30'
);
I'm expecting the IDs for the above to be 1 and 2 (respectively), and so far this is true.
However when I do something like this:
insert into test (name) values ('FRED')
on duplicate key update date_modified=now();
then insert a new record, I'm expecting its ID to be 3; however, I'm shown an ID of 4, skipping the spot for 3.
Normally this wouldn't be an issue, but I'm working with millions of records which see thousands of updates every day, and I don't really want to even have to think about running out of IDs simply because I'm skipping a ton of numbers.
Any clue why this is happening?
MySQL version: 5.1.44
Thank you
My guess is that the INSERT itself kicks off the code that generates the next ID number. When the duplicate key is detected, and ON DUPLICATE KEY UPDATE is executed, the ID number is abandoned. (No SQL dbms guarantees that automatic sequences will be without gaps, AFAIK.)
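A quick demonstration of that guess against the question's table (the 'JOE' row is just an illustrative new record):
INSERT INTO test (name, date_modified) VALUES ('FRED', NOW())
ON DUPLICATE KEY UPDATE date_modified = NOW();   -- id 3 is reserved, then abandoned
INSERT INTO test (name, date_modified) VALUES ('JOE', NOW());
SELECT id, name FROM test;   -- 1 TIM, 2 FRED, 4 JOE: the gap at 3 remains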
MySQL docs say
In general, you should try to avoid using an ON DUPLICATE KEY UPDATE
clause on tables with multiple unique indexes.
That page also says
If a table contains an AUTO_INCREMENT column and INSERT ... ON
DUPLICATE KEY UPDATE inserts or updates a row, the LAST_INSERT_ID()
function returns the AUTO_INCREMENT value.
which stops far short of describing the internal behavior I guessed at above.
Can't test here; will try later.
Is it possible to change your key to an unsigned BIGINT? 18,446,744,073,709,551,615 is a lot of records, which would delay running out of IDs.
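A sketch of that change for the test table from the question:
ALTER TABLE test MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;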
Found this in the MySQL manual: http://dev.mysql.com/doc/refman/5.1/en/example-auto-increment.html
Use a large enough integer data type for the AUTO_INCREMENT column to hold the
maximum sequence value you will need. When the column reaches the upper limit of
the data type, the next attempt to generate a sequence number fails. For example,
if you use TINYINT, the maximum permissible sequence number is 127.
For TINYINT UNSIGNED, the maximum is 255.
More reading here http://dev.mysql.com/doc/refman/5.6/en/information-functions.html#function_last-insert-id suggests that the failed insert into a transactional table is rolled back, since the manual says "LAST_INSERT_ID() is not restored to that before the transaction".
What about, as a possible solution, using a separate table to generate the IDs and then inserting them into your main table as the PK using LAST_INSERT_ID()?
From the manual:
Create a table to hold the sequence counter and initialize it:
mysql> CREATE TABLE sequence (id INT NOT NULL);
mysql> INSERT INTO sequence VALUES (0);
Use the table to generate sequence numbers like this:
mysql> UPDATE sequence SET id=LAST_INSERT_ID(id+1);
mysql> SELECT LAST_INSERT_ID();
The UPDATE statement increments the sequence counter and causes the next call to
LAST_INSERT_ID() to return the updated value. The SELECT statement retrieves that
value. The mysql_insert_id() C API function can also be used to get the value.
See Section 20.9.3.37, “mysql_insert_id()”.
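Applied to the test table from the question, that would look something like this (a sketch; it assumes ids come from the sequence table rather than the column's own AUTO_INCREMENT):
UPDATE sequence SET id = LAST_INSERT_ID(id + 1);
INSERT INTO test (id, name, date_modified)
VALUES (LAST_INSERT_ID(), 'JOE', NOW());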
It's really a bug, as you can see here: http://bugs.mysql.com/bug.php?id=26316
But apparently they fixed it in 5.1.47, and it was declared an InnoDB plugin problem.
A duplicate report of the same problem can be seen here: http://bugs.mysql.com/bug.php?id=53791, which refers back to the first bug mentioned in this answer.

Insert if not exists

How can I keep only 3 rows in the table and only update them?
I have a settings table, and at first run there is nothing in it, so I want to insert 3 records like so:
id | label  | value | desc
---+--------+-------+-----
1  | start  | 10    | 0
2  | middle | 24    | 0
3  | end    | 76    | 0
After this, a PHP script needs to update these settings in one query.
I have researched REPLACE INTO, but I end up with duplicate rows in the DB.
Here is my current query:
$query_insert=" REPLACE INTO setari (`eticheta`, `valoare`, `disabled`)
VALUES ('mentenanta', '".$mentenanta."', '0'),
('nr_incercari_login', '".$nr_incercari_login."', '0'),
('timp_restrictie_login', '".$timp_restrictie_login."', '0')
";
Any ideas?
Here is the CREATE TABLE statement, just so you can see it in case I'm missing something.
CREATE TABLE `setari` (
`id` int(10) unsigned NOT NULL auto_increment,
`eticheta` varchar(200) NOT NULL,
`valoare` varchar(250) NOT NULL,
`disabled` tinyint(1) unsigned NOT NULL default '0',
`data` datetime default NULL,
`cod` varchar(50) default NULL,
PRIMARY KEY (`eticheta`,`id`,`valoare`),
UNIQUE KEY `id` (`eticheta`,`id`,`valoare`)
) ENGINE=MyISAM
As explained in the manual, you need to create a UNIQUE index on (label, value) or (label, value, desc) for REPLACE INTO to determine uniqueness.
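A sketch of adding such an index; for this settings use case, uniqueness on eticheta alone is probably what's wanted, since valoare is the part that changes (the index name is arbitrary):
ALTER TABLE setari
DROP PRIMARY KEY,
ADD PRIMARY KEY (id),
ADD UNIQUE KEY uk_eticheta (eticheta);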
What you want is the INSERT ... ON DUPLICATE KEY UPDATE syntax. Read through it for the full details, but essentially you need a unique or primary key on one of your fields; then write a normal insert query and append that clause (along with what you actually want to update) to the end. The db engine will then try to add the row, and when it comes across a duplicate key it knows to just update the fields you tell it to with the new information.
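For example, assuming a unique key on eticheta as sketched above, the query from the question becomes (with bind placeholders instead of interpolated PHP variables):
INSERT INTO setari (eticheta, valoare, disabled)
VALUES ('mentenanta', ?, '0'),
('nr_incercari_login', ?, '0'),
('timp_restrictie_login', ?, '0')
ON DUPLICATE KEY UPDATE valoare = VALUES(valoare);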
I simply skip the headache and use a temporary table. Quick and clean.
SQL Server allows you to SELECT INTO a non-existent temp table, creating it for you. MySQL, however, requires you to first create the temp table and then insert into it.
1.
Create empty temp table.
CREATE TEMPORARY TABLE IF NOT EXISTS insertsetari
SELECT eticheta, valoare, disabled
FROM setari
WHERE 1=0
2.
Insert data into temp table.
INSERT INTO insertsetari
VALUES
('mentenanta', '".$mentenanta."', '0'),
('nr_incercari_login', '".$nr_incercari_login."', '0'),
('timp_restrictie_login', '".$timp_restrictie_login."', '0')
3.
Remove rows in temp table that are already found in target table.
DELETE a FROM insertsetari AS a INNER JOIN setari AS b
WHERE a.eticheta = b.eticheta
AND a.valoare = b.valoare
AND a.disabled = b.disabled
4.
Insert temp table residual rows into target table.
INSERT INTO setari (eticheta, valoare, disabled)
SELECT * FROM insertsetari
5.
Cleanup temp table.
DROP TEMPORARY TABLE insertsetari
Comments:
You should avoid replacing when the new data and the old data are the same. Replacing should only be for situations where there is a high probability of key values being the same while the non-key values differ.
Placing data into a temp table allows the data to be massaged, transformed and modified easily before inserting into the target table.
Deleting rows from the temp table is faster.
If anything goes wrong, the temp table gives you an additional debugging stage to find out what went wrong.
Should consider doing it all in a single transaction.
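A sketch of that wrapper around steps 2-4; note it only takes effect if the tables are converted to InnoDB, since the MyISAM engine from the question silently ignores transactions:
START TRANSACTION;
INSERT INTO insertsetari VALUES ('mentenanta', ?, '0');
DELETE a FROM insertsetari AS a INNER JOIN setari AS b
WHERE a.eticheta = b.eticheta
AND a.valoare = b.valoare
AND a.disabled = b.disabled;
INSERT INTO setari (eticheta, valoare, disabled)
SELECT * FROM insertsetari;
COMMIT;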