I'm trying to create a trigger in SQL so that when I insert a row into Point, a row is first inserted into PointAbs.
CREATE TABLE PointAbs (
ID INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
X INTEGER NOT NULL,
Y INTEGER NOT NULL
);
CREATE TABLE Point(
ID INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
Name VARCHAR(50) ,
IDPointAbs INTEGER NOT NULL,
FOREIGN KEY (IDPointAbs) REFERENCES PointAbs(ID) ON DELETE CASCADE
);
The problem is that I need to provide "X" and "Y" for PointAbs and "Name" for Point at the same time. How can I achieve that?
I could use JDBC functionality to get the last inserted ID, but I'd rather not do it that way.
It seems like the relation is 1 to 1, as you have to create a new PointAbs for each Point. Unless you have another table that references PointAbs, there will be one PointAbs for each Point. If you don't need two separate objects, you can put X and Y directly in Point and index them:
CREATE TABLE Point(
ID INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
Name VARCHAR(50) ,
X INTEGER NOT NULL,
Y INTEGER NOT NULL,
INDEX INDEX_X_Y (X, Y)
);
Of course, this might not be possible nor desirable in your design.
As you can't pass parameters such as the X and Y values to a trigger, your best option is to use a single transaction for both inserts.
BEGIN;
INSERT INTO PointAbs(X,Y) VALUES (10,15);
INSERT INTO Point(Name, IDPointAbs) VALUES ('Fancy Name', LAST_INSERT_ID());
COMMIT;
You can also control the insertions from the programming language of the system's back end, but as no specific language is mentioned other than MySQL, I won't go into details.
Let's say I have an account object in my application, which is currently represented as:
CREATE TABLE Account (
accountId int NOT NULL AUTO_INCREMENT,
name varchar(255) NOT NULL,
PRIMARY KEY (accountId)
);
Now, the Account object also needs a Solution field, and this Status field has 4 different possible values:
Solution1, Solution2, Solution3, Solution4
What would be the right way to represent it in the database?
An account can have several statuses, and a status can belong to several accounts...
So at first I thought of creating a Solutions table in the db and then another table to hold the relationship, but it seems too complicated for a field that has only 4 possible values...
Create a junction table to represent the relationships between accounts and solutions:
CREATE TABLE account_solution (
accountId int NOT NULL,
solutionId int NOT NULL,
PRIMARY KEY (accountId, solutionId)
);
For your solution table, since there are only 4 values, you might be able to take advantage of MySQL's enum type, e.g.
CREATE TABLE solution (
solutionId int NOT NULL PRIMARY KEY,
status ENUM('Solution1', 'Solution2', 'Solution3', 'Solution4')
);
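To read the data back, you join through the junction table. A minimal sketch, assuming the Account table from the question and the two tables above:
-- List every account together with its solutions
SELECT a.accountId, a.name, s.status
FROM Account a
JOIN account_solution als ON als.accountId = a.accountId
JOIN solution s ON s.solutionId = als.solutionId
ORDER BY a.accountId;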
You can use MySQL's SET type:
CREATE TABLE Account (
accountId int NOT NULL AUTO_INCREMENT,
name varchar(255) NOT NULL,
status set('Solution1','Solution2','Solution3','Solution4') NOT NULL,
PRIMARY KEY (accountId)
);
And if you want to select rows with a specific status:
SELECT *
FROM `Account`
WHERE FIND_IN_SET( 'Solution2', `status` ) >0
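For completeness, a hedged sketch of how a row with multiple statuses would be inserted into the SET column (the account name 'acme' is made up; the statuses are simply a comma-separated string):
-- Multiple SET members are written as one comma-separated string
INSERT INTO Account (name, status)
VALUES ('acme', 'Solution1,Solution3');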
In R, I have a vector, "myVector", of strings which I want to insert into a column, "myColumn", of a MySQL table, "myTable". I understand I can write the SQL query and run it in R using dbSendQuery. So let's figure out the SQL query first. Here is an example:
myVector = c("hi","I", "am")
Let's insert myVector into the column myColumn of myTable, rows 3 to 5. Here is the SQL query, which works except for the last line, which I have no idea how to write:
UPDATE myTable t JOIN
(SELECT id
FROM myTable tt
LIMIT 3, 3
) tt
ON tt.id = t.id
SET myColumn = myVector;
Thanks
Assuming that I understand your problem correctly, I have two possible solutions in mind:
1. One column per element:
If your vectors all have the same number of elements, you could store each element in a separate column. Proceeding from your example above, the table could look like this (the size of the columns and whether to allow NULL values depends on your data):
CREATE TABLE `myTable` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`element1` varchar(255) DEFAULT NULL,
`element2` varchar(255) DEFAULT NULL,
`element3` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
The statement for inserting your vector from above would be:
INSERT INTO `myTable` (`id`, `element1`, `element2`, `element3`)
VALUES (1, 'hi', 'I', 'am');
Depending on how many elements your vectors have, this approach might be more or less applicable.
2. Storing the vector as a blob:
Another approach could be storing the vector as a BLOB. A BLOB (Binary Large Object) is a data type for storing a variable amount of binary data (see: https://dev.mysql.com/doc/refman/5.7/en/blob.html). This idea is taken from this article: http://jfaganuk.github.io/2015/01/12/storing-r-objects-in-sqlite-tables/
The table could be created using the following statement:
CREATE TABLE `myTable` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`myVector` blob,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
When inserting your vector, you bind the variable to your query. As I am not an R specialist, I would refer to the article above for the implementation details.
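As a rough idea of the SQL side only (a sketch; the hex literal below is just a placeholder for the serialized vector, which in practice would be bound as a parameter from R):
-- Placeholder hex literal standing in for the serialized R vector
INSERT INTO `myTable` (`myVector`) VALUES (x'0102030405');
-- In practice the statement is parameterized and the blob is bound from R:
-- INSERT INTO `myTable` (`myVector`) VALUES (?);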
I'm not aware of MySQL supporting a vector data type, but as a workaround you could design your schema so that the vector is stored in a separate table with a 1-to-many relation to myTable.
This will help you manage and retrieve the details easily. So, assuming myTable is your table and its existing design is:
myTable
-------
id
col1
vectorCol
So, your main table can be:
CREATE TABLE myTable (
id INT NOT NULL AUTO_INCREMENT,
col1 varchar(50),
PRIMARY KEY (id)
);
and the table which will store your vector:
CREATE TABLE vectorTab (
id INT NOT NULL AUTO_INCREMENT, -- in case ordering matter
parent_id INT NOT NULL,
value TEXT,
PRIMARY KEY (id),
FOREIGN KEY (parent_id) REFERENCES myTable (id) ON DELETE CASCADE ON UPDATE CASCADE
);
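A minimal sketch of how the vector from the question could be stored and read back with this design (the parent row's id is assumed to be 1, and 'something' is a made-up value for col1):
-- One parent row, then one child row per vector element
INSERT INTO myTable (col1) VALUES ('something');
INSERT INTO vectorTab (parent_id, value) VALUES (1, 'hi'), (1, 'I'), (1, 'am');
-- Retrieve the vector in insertion order
SELECT value
FROM vectorTab
WHERE parent_id = 1
ORDER BY id;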
What you should do is export your R vector as JSON using the toJSON() function, for example:
myJSONVector = toJSON(c("hi","I", "am"))
Also create or alter myTable so that myColumn has the appropriate JSON data type:
Attempting to insert a value into a JSON column succeeds if the value
is a valid JSON value, but fails if it is not:
Example
CREATE TABLE `myTable` (`myColumn` JSON);
INSERT INTO `myTable` VALUES(myJSONVector); -- will fail if myJSONVector is not valid JSON
-- the update query would be
UPDATE `myTable` SET `myColumn` = myJSONVector
WHERE `id` IN (3,4,5);
In addition, you can make an R vector from JSON using the fromJSON() function.
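If you go the JSON route, individual elements can also be read back on the MySQL side (5.7+). A small sketch, assuming myTable also has the id column used in the update above:
-- First element of the stored JSON array, without the surrounding quotes
SELECT JSON_UNQUOTE(JSON_EXTRACT(`myColumn`, '$[0]')) AS first_element
FROM `myTable`
WHERE `id` IN (3, 4, 5);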
Given the following table:
DROP TABLE IF EXISTS my_table;
CREATE TABLE IF NOT EXISTS my_table(
id INT NOT NULL,
timestamp TIMESTAMP(3) DEFAULT CURRENT_TIMESTAMP(3) NOT NULL,
data BLOB NULL,
PRIMARY KEY (id)
);
I can insert on it with:
INSERT INTO my_table (timestamp, data) VALUES
('2014-07-11 11:25:48.185', LOAD_FILE('sql/file.bin'));
In the insert above I was not forced to provide the id field.
How may I create the table (my_table) so that it prevents inserts without id?
I would like every insert to be made providing the id, i.e.:
INSERT INTO my_table (id, timestamp, data) VALUES
(7, '2014-07-11 11:25:48.185', LOAD_FILE('sql/file.bin'));
I was thinking NOT NULL was there for that.
To prevent inserts with an empty value for ID (or no value passed at all), simply define the column as NOT NULL, as you already did.
I can't see how your example worked (i.e. inserting only into (timestamp, data)), unless strict SQL mode is disabled, in which case MySQL silently fills the column with its implicit default (0 for an INT).
Now, the fact that there is another table with a trigger that inserts into this one does not have any effect on the ID column of this table. If you define it as AUTO_INCREMENT, whenever you insert a new row, the ID will automatically get a new value which will be fully independent of any data in the first table.
You can have as many tables as you wish with auto-incremented fields, each running a different sequence (and hence their numbering will be fully independent).
To summarize:
CREATE TABLE IF NOT EXISTS my_table(
id INT NOT NULL AUTO_INCREMENT ,
timestamp TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP(3) ,
data BLOB NULL ,
PRIMARY KEY (id)
);
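For reference, whether an insert without id is rejected also depends on the server's SQL mode. A small sketch, assuming the question's original my_table (id INT NOT NULL, no AUTO_INCREMENT and no default):
SET SESSION sql_mode = 'STRICT_ALL_TABLES';
-- With strict mode on, omitting id is now an error
INSERT INTO my_table (timestamp, data) VALUES
('2014-07-11 11:25:48.185', LOAD_FILE('sql/file.bin'));
-- Error 1364: Field 'id' doesn't have a default value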
I have created a table empInfo as follows:
CREATE TABLE empInfo (
empid INT(11) PRIMARY KEY AUTO_INCREMENT ,
firstname VARCHAR(255) DEFAULT NULL,
lastname VARCHAR(255) DEFAULT NULL
)
Then I ran the insert statements below:
INSERT INTO empInfo VALUES(NULL , 'SHREE','PATIL');
INSERT INTO empInfo(firstname,lastname) VALUES( 'VIKAS','PATIL');
INSERT INTO empInfo VALUES(NULL , 'SHREEKANT','JOHN');
I thought the first or third statement would fail, as empid is a PRIMARY KEY and we are trying to insert NULL for empid.
But MySQL proved me wrong and all 3 queries ran successfully.
I wanted to know why it does not fail when trying to insert NULL into the empid column.
The final data in the table is as below:
empid  firstname  lastname
1      SHREE      PATIL
2      VIKAS      PATIL
3      SHREEKANT  JOHN
I can figure out that it has something to do with AUTO_INCREMENT, but I am not able to figure out the reason for it. Any pointers on this?
This behaviour is by design: inserting 0, NULL, or DEFAULT into an AUTO_INCREMENT column will all trigger the AUTO_INCREMENT behaviour.
INSERT INTO empInfo VALUES(DEFAULT, 'SHREEKANT','JOHN');
INSERT INTO empInfo VALUES(NULL, 'SHREEKANT','JOHN');
INSERT INTO empInfo VALUES(0, 'SHREEKANT','JOHN');
and is common practice.
Note however that this wasn't always the case in versions prior to 4.1.6.
Edit
Does that mean AUTO_INCREMENT is taking precedence over PRIMARY KEY?
Yes. Since the primary key depends on AUTO_INCREMENT delivering a new sequence value prior to constraint checking and record insertion, the AUTO_INCREMENT process (including the above re-purposing of NULL / 0 / DEFAULT) needs to be resolved before the PRIMARY KEY constraint is checked in any case.
If you remove the AUTO_INCREMENT and define the emp_id PK as INT(11) NULL (which is nonsensical, but MySQL will create the column this way), then as soon as you insert a NULL into the PK you will get the familiar
Error Code: 1048. Column 'emp_id' cannot be null
So it is clear that the AUTO_INCREMENT resolution precedes the primary key constraint checks.
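A small sketch of that experiment (the table name empInfoNoAuto is made up; the columns are adapted from the question):
CREATE TABLE empInfoNoAuto (
emp_id INT(11) PRIMARY KEY,
firstname VARCHAR(255) DEFAULT NULL,
lastname VARCHAR(255) DEFAULT NULL
);
-- Without AUTO_INCREMENT the NULL is no longer repurposed
INSERT INTO empInfoNoAuto VALUES (NULL, 'SHREE', 'PATIL');
-- Error Code: 1048. Column 'emp_id' cannot be null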
It is exactly because of the auto increment. As you can see, no empid values are NULL in the db. That is the purpose of auto increment. Usually you would just not include that column in the insert, which is the same as assigning NULL.
As per the documentation page:
No value was specified for the AUTO_INCREMENT column, so MySQL assigned sequence numbers automatically. You can also explicitly assign 0 to the column to generate sequence numbers. If the column is declared NOT NULL, it is also possible to assign NULL to the column to generate sequence numbers.
So, because you have an auto-increment field that allows NULL, it ignores the fact that you're trying to place a NULL in there and instead gives you a sequenced number.
You could just leave it as is since, even without the NOT NULL constraint, you can't get a NULL in there, because it will auto-magically convert that to a sequenced number.
Or you can change the column to be empid INT(11) PRIMARY KEY AUTO_INCREMENT NOT NULL if you wish, but I still think the insert will allow you to specify NULLs, converting them into sequenced numbers in spite of what the documentation states (tested on sqlfiddle in MySQL 5.6.6 m9 and 5.5.32).
In both cases, you can still force the column to a specific (non-zero) number, constraints permitting of course.
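For instance, a hedged sketch of forcing a specific id and then letting AUTO_INCREMENT continue (the names 'JANE'/'JOHN' are made up; this continues from the empInfo table above):
-- An explicit id is accepted as long as it doesn't violate the PRIMARY KEY
INSERT INTO empInfo VALUES (100, 'JANE', 'DOE');
-- The next automatic value continues after the highest id so far
INSERT INTO empInfo VALUES (NULL, 'JOHN', 'DOE'); -- gets empid 101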
CREATE TABLE empInfo (
empid INT(11) PRIMARY KEY AUTO_INCREMENT NOT NULL,
firstname VARCHAR(255) DEFAULT NULL,
lastname VARCHAR(255) DEFAULT NULL
)
Not sure, but I think it will work :)
I have created a table with some attributes that have the NOT NULL constraint, then I tried an INSERT INTO statement specifying values only for the fields that don't have the NOT NULL constraint, but the statement still works. Shouldn't it fail and give an error?
CREATE TABLE ciao(
Id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,
Nome VARCHAR(30) NOT NULL,
Cognome VARCHAR(30) NOT NULL,
Nickname VARCHAR(30)
);
INSERT INTO ciao(Nickname) VALUES ('prova');
It is inserting an empty string as the default value for the NOT NULL columns you didn't supply. That default is not specified in your create statement; it is MySQL's implicit default, which the server uses silently when strict SQL mode is disabled. If you run your create and insert with strict SQL mode enabled, the insert fails with an error instead.
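A quick way to check this on your own server (a sketch; the exact mode string varies by installation):
-- See whether strict mode is part of the current SQL mode
SELECT @@SESSION.sql_mode;
-- With strict mode enabled, the original insert is rejected instead
SET SESSION sql_mode = 'STRICT_ALL_TABLES';
INSERT INTO ciao(Nickname) VALUES ('prova');
-- Error 1364: Field 'Nome' doesn't have a default value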