Using a JDBC 3 driver, one can insert a record into a table and immediately get the autogenerated value for a column. This technique is used in ActiveJDBC.
Here is the table definition:
CREATE TABLE users (
    id int(11) NOT NULL auto_increment PRIMARY KEY,
    first_name VARCHAR(56),
    last_name VARCHAR(56),
    email VARCHAR(56)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
This is working fine on H2 and PostgreSQL, and the type of the returned value is Integer.
However, in MySQL the type is Long, while I believe it should be Integer.
When querying this same row in MySQL, the "id" comes back as Integer.
Does anyone know why getGeneratedKeys() returns java.lang.Long in MySQL, and how to fix it?
The why:
The generator that MySQL uses for keeping track of the value is a BIGINT, so the driver describes it as BIGINT, which maps to java.lang.Long. See LAST_INSERT_ID in the MySQL manual.
Drivers like PostgreSQL's return the actual column type of the table (in fact, PostgreSQL returns all columns of the inserted row when using getGeneratedKeys()); I assume that MySQL simply calls LAST_INSERT_ID() internally.
How to solve it:
As indicated by Jim Garrison in the comments: always retrieve the value with getInt() or getLong(), not getObject().
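A minimal JDBC sketch of that advice, using the users table above (connection details are placeholders):

import java.sql.*;

public class InsertUserExample {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/test"; // placeholder connection details
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO users (first_name, last_name, email) VALUES (?, ?, ?)",
                     Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, "John");
            ps.setString(2, "Doe");
            ps.setString(3, "john@example.com");
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                if (keys.next()) {
                    // getInt()/getLong() coerce the driver's value to the requested type;
                    // getObject() would return a Long on MySQL but an Integer elsewhere.
                    int id = keys.getInt(1);
                    System.out.println("Generated id: " + id);
                }
            }
        }
    }
}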
Related
I was developing a database in SQL Server where I was using an identity column as a seed for a primary key field. The intention was to reset the identity to 1 at the beginning of every year, which would allow us to create a PK of the form year-identity.
Create Table Issues (
    IssueID AS RIGHT(CONVERT(VARCHAR, Year(getdate()), 4), 2) + '-' +
        RIGHT(REPLICATE('0', 2) + CONVERT(VARCHAR, RecordID), 3) NOT NULL PRIMARY KEY,
    RecordID int Identity(1,1),
    .........
)
The result would be:
IssueID   RecordID
20-001    1
20-002    2
20-003    3
21-001    1
etc.
Now I've been told we are going to use a MySQL database instead.
Can an Auto-Increment field in MySQL contain duplicate values like it can in SQL Server?
If not, how can I do what I need to do in MySQL?
In MySQL, you can't use the default auto-increment feature for what you describe: an incrementing value that starts over each year.
This was a feature of the MyISAM storage engine years ago. An auto-increment that was the second column of a multi-column primary key would start counting from one for each distinct value in the first column of the PK. See the example under "MyISAM Notes" on this page: https://dev.mysql.com/doc/refman/8.0/en/example-auto-increment.html
But it's considered a bad idea to use MyISAM because it does not support ACID transactions. In general, I would find another way of solving this task; I would not use MyISAM.
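For illustration only, the legacy behavior looks like this (adapted from that manual page):

-- id restarts from 1 for each distinct value of grp (MyISAM only).
CREATE TABLE animals (
    grp ENUM('fish','mammal','bird') NOT NULL,
    id MEDIUMINT NOT NULL AUTO_INCREMENT,
    name CHAR(30) NOT NULL,
    PRIMARY KEY (grp, id)
) ENGINE = MyISAM;

INSERT INTO animals (grp, name) VALUES
    ('mammal','dog'), ('mammal','cat'),
    ('bird','penguin'), ('fish','lax');
-- dog -> (mammal, 1), cat -> (mammal, 2),
-- penguin -> (bird, 1), lax -> (fish, 1)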
In InnoDB, there's no way the table will generate a value that is a duplicate of a value currently in the table, or even a value less than the maximum value previously generated for that table. In other words, there's no way to "fill in the gaps" using auto-increment.
You can use ALTER TABLE mytable AUTO_INCREMENT=1 to reset the counter, but the value you set will automatically advance to the maximum value currently in the table + 1.
So you'll have to generate it using either another table, or else something other than the MySQL database. For example, I've seen some people use memcached, which supports an atomic "increment and return counter" operation.
Another thing to consider: if you need a row counter per year, that is actually different from MySQL's auto-increment feature, which is not easy to use as a row counter. Besides, what happens if you roll back a transaction or delete a row? You'd end up with non-consecutive RecordID values and unexplained "gaps." Auto-increment guarantees that subsequent ids will be greater, but it does not guarantee consecutive values, so you'll get gaps eventually anyway.
In MySQL a table can have only one auto_increment column, and this column must be indexed (it need not be the primary key). See details here.
A technical workaround for your task would be to create a table with a single auto_increment column; you obtain the next value by inserting a record into this table and immediately calling the standard MySQL function last_insert_id(). When the time comes, truncate the table, and the auto_increment counter will be reset.
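A minimal sketch of that workaround (table and column names are illustrative):

CREATE TABLE issue_seq (
    n INT NOT NULL AUTO_INCREMENT PRIMARY KEY
) ENGINE = InnoDB;

-- Get the next number (LAST_INSERT_ID() is per-connection, so this is concurrency-safe):
INSERT INTO issue_seq VALUES (NULL);
SELECT LAST_INSERT_ID();

-- At the start of each year, reset the counter:
TRUNCATE TABLE issue_seq;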
For example, I have a column named users whose data looks like [1,2,3,4]. Please tell me how to set an index on a JSON array type (generated virtual column). I have read the documentation on the MySQL website, but it only covers indexing a JSON object type using the JSON_EXTRACT() function.
Right now I use a query like SELECT * FROM user WHERE JSON_CONTAINS(users, '[1]'); but it scans the full table, which is inefficient. That is why I want to create an index on the users column.
It's now possible with MySQL 8.0.17+ (multi-valued indexes).
Here is an example:
CREATE TABLE customers (
id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
modified DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
custinfo JSON
);
ALTER TABLE customers ADD INDEX comp(
    id, modified,
    (CAST(custinfo->'$.zipcode' AS UNSIGNED ARRAY))
);
Use it this way:
SELECT * FROM customers
WHERE JSON_CONTAINS(custinfo->'$.zipcode', CAST('[94507,94582]' AS JSON));
More info:
https://dev.mysql.com/doc/refman/8.0/en/create-index.html
You cannot, at least not the way you intend. At The JSON Data Type we can read:
JSON columns, like columns of other binary types, are not indexed directly; instead, you can create an index on a generated column that extracts a scalar value from the JSON column. See Indexing a Generated Column to Provide a JSON Column Index for a detailed example.
So with the restriction comes the workaround ;-)
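A minimal sketch of that workaround, assuming a scalar field (a hypothetical $.owner key) rather than an array, since a generated-column index can only cover a scalar extraction:

CREATE TABLE user_docs (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    doc JSON,
    owner VARCHAR(32) GENERATED ALWAYS AS
        (JSON_UNQUOTE(JSON_EXTRACT(doc, '$.owner'))) VIRTUAL,
    INDEX idx_owner (owner)
);

-- Filters on the extracted scalar can use idx_owner:
SELECT * FROM user_docs WHERE owner = 'alice';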
Currently, I have a MySQL table with columns that look something like this:
run_date DATE
name VARCHAR(10)
load INTEGER
sys_time TIME
rec_time TIME
valid TINYINT
The column valid is essentially a valid bit: 1 if this row is the latest value for this (run_date, name) pair, and 0 if not. To make insertions simpler, I wrote a stored procedure that first runs an UPDATE table_name SET valid = 0 WHERE run_date = X AND name = Y command, then inserts the new row.
The table reads are in such a way that I usually use only the valid = 1 rows, but I can't discard the invalid rows. Obviously, this schema also has no primary key.
Is there a better way to structure this data or the valid bit, so that I can speed up both inserts and searches? A bunch of indexes on different orders of columns gets large.
In all of the suggestions below, get rid of valid and the UPDATE of it. That is not scalable.
Plan A: At SELECT time, use 'groupwise max' code to locate the latest run_date, hence the "valid" entry.
Plan B: Have two tables and change both when inserting: history, with PRIMARY KEY(name, run_date) and a simple INSERT statement; and current, with PRIMARY KEY(name) and INSERT ... ON DUPLICATE KEY UPDATE. The "usual" SELECTs need only touch current (see the sketch below).
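A minimal sketch of Plan B, assuming the column names from the question (load is renamed because LOAD is a reserved word in MySQL, and the TIME columns are replaced with second counts per the note below):

-- history: append-only log of every insert.
CREATE TABLE history (
    name VARCHAR(10) NOT NULL,
    run_date DATE NOT NULL,
    load_val INT,
    sys_seconds INT UNSIGNED,
    rec_seconds INT UNSIGNED,
    PRIMARY KEY (name, run_date)
);

-- current: exactly one row per name, always the latest values.
CREATE TABLE current (
    name VARCHAR(10) NOT NULL PRIMARY KEY,
    run_date DATE NOT NULL,
    load_val INT,
    sys_seconds INT UNSIGNED,
    rec_seconds INT UNSIGNED
);

-- On every insert, write to both tables:
INSERT INTO history (name, run_date, load_val, sys_seconds, rec_seconds)
VALUES ('abc', '2023-01-15', 42, 3601, 120);

INSERT INTO current (name, run_date, load_val, sys_seconds, rec_seconds)
VALUES ('abc', '2023-01-15', 42, 3601, 120)
ON DUPLICATE KEY UPDATE
    run_date = VALUES(run_date),
    load_val = VALUES(load_val),
    sys_seconds = VALUES(sys_seconds),
    rec_seconds = VALUES(rec_seconds);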
Another issue: TIME is limited to 838:59:59 and is intended to mean 'time of day', not 'elapsed time'. For the latter, use INT UNSIGNED (or some variant of INT) holding seconds. For formatting, you can use SEC_TO_TIME(); for example, SEC_TO_TIME(3601) -> 01:00:01.
I am developing an application which uses external datasources. My application supports multiple databases (viz. MySQL, MSSQL, Teradata, Oracle, DB2, etc.). When I create a datasource, I allow the user to assign a primary key (PK) to it. I am not checking whether the user-selected column is actually a primary key in the underlying database; I just want records that have a null/blank value in the user-selected primary key to be dropped while retrieving data. I have created a filter supporting all the other databases except DB2 and Teradata.
Sample queries for the other databases:
Select * from MY_TABLE where PK_COLUMN IS NOT NULL and PK_COLUMN !='';
Select * from MY_TABLE where PK_COLUMN IS NOT NULL AND cast(PK_COLUMN as varchar) !=''
DB2 and Teradata:
The PK_COLUMN !='' and cast(PK_COLUMN as varchar) !='' conditions give errors for the int datatype in DB2 and Teradata, because a column of int type cannot be compared against a string literal this way, and int columns also cannot be cast to varchar directly (without a length) in DB2 and Teradata.
I want to create a query that drops null/blank values, given the table name and the user's PK column name as strings. (I do not know the PK column's type while building the query, so the query should work uniformly for all datatypes.)
NOTE: The PK here is not an actual PK; it is just a dummy PK assigned by my application's user. So it can be a normal column and thus can have null/blank values.
I have created a query as:
Select * from MY_TABLE where PK_COLUMN IS NOT NULL AND cast(cast(PK_COLUMN as char) as varchar) !=''
My Question:
Will this solution (double casting) support all datatypes in DB2 and Teradata?
If not, can I come up with a better solution?
Of course you can cast an INT to a VARCHAR in both Teradata and DB2, but you have to specify the length of the VARCHAR; there's no default length.
Casting to a CHAR without a length defaults to CHAR(1) in Standard SQL, which might cause some "string truncation" error.
You need to cast to a VARCHAR(n) where n is the maximum length based on the DBMS.
Plus, there's no != operator in standard SQL; it should be <> instead.
Finally, there's a fundamental difference between an empty string and a NULL (except on Oracle), and a value of one or more blanks might also have a meaning, yet it will be filtered out when compared to ''.
And what is an empty INT supposed to be? If your query worked, zero would be cast to '0', which is not equal to '', so the check would fail anyway.
You should simply use IS NOT NULL, and add the check for an empty string only for character columns (plus an option for the user to decide whether an empty string should be treated as NULL).
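A minimal sketch of that per-type filter (table and column names are placeholders; the character-vs-other decision comes from the metadata your application already reads):

-- Character-typed PK columns:
SELECT * FROM MY_TABLE
WHERE PK_COLUMN IS NOT NULL AND PK_COLUMN <> '';

-- All other types (numeric, date, ...):
SELECT * FROM MY_TABLE
WHERE PK_COLUMN IS NOT NULL;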
Without having to do it manually (which I'm open to implementing if no other options exist), is there a way in either PostgreSQL or MySQL to have an automatic counter/field that decrements instead of increments?
For a variety of reasons in a current application, it would be nice to know how many more entries (from a datatype point of view) can still be added to a table just by looking at the most-recently-added record, rather than subtracting the most recent ID from the max for the datatype.
So, is there an "AUTO_DECREMENT" or similar for either system?
You have to do a bit of manual configuration in PostgreSQL, but you can configure a sequence like this:
create sequence example_seq
increment by -1
minvalue 1
maxvalue 5
start with 5;
create table example(
example_id int primary key default nextval('example_seq'),
data text not null
);
alter sequence example_seq owned by example.example_id;
I suppose it would be equivalent to create the table with a serial column and then alter the auto-generated sequence.
Now if I insert some rows, I get example_id counting down from 5. If I try to insert more than 5 rows, I get: nextval: reached minimum value of sequence "example_seq" (1).
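For example (values illustrative):

insert into example(data) values ('a'), ('b'), ('c');

select example_id, data from example;
-- example_id | data
--          5 | a
--          4 | b
--          3 | c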