I'm trying to insert data into a table using this query:
INSERT INTO table (
url,
v_count,
v_date)
SELECT
url,
v_count,
v_date FROM json_populate_recordset(null::record,
'[{"url_site":"test.com","visit_count":1,"visit_date":"2022-08-31"},
{"url_site":"dev.com","visit_count":2,"visit_date":"2022-08-31"}]'::json)
AS ("url" varchar(700), "v_count" integer, "v_date" date)
And I'm getting this error:
null value in column "v_date" of relation table violates not null constraint
Since my json could contain hundreds of entries at times,
how should I send the date in my json?
Is there another (more efficient) way to insert this data into the table?
Edit: in Postico 1.5.20 my example above works as long as the json keys are named the same as the table columns. How can I reference different names in my json keys?
Since v_date can resolve to null, you'll need to either skip those rows or provide a value when null appears.
To skip the null values, you can add a WHERE v_date IS NOT NULL clause to your SELECT statement.
Otherwise, you can use COALESCE() to assign a value when v_date is null. For example ... SELECT url, v_count, COALESCE(v_date,now()) FROM json_populate_recordset...
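A minimal end-to-end sketch of that approach, assuming the table and JSON from the question (table is the question's placeholder table name); per the edit above, the column names in the AS clause follow the JSON keys, and COALESCE() supplies a fallback date:
INSERT INTO table (url, v_count, v_date)
SELECT
    url_site,
    visit_count,
    COALESCE(visit_date, now())
FROM json_populate_recordset(null::record,
    '[{"url_site":"test.com","visit_count":1,"visit_date":"2022-08-31"},
      {"url_site":"dev.com","visit_count":2,"visit_date":"2022-08-31"}]'::json)
AS (url_site varchar(700), visit_count integer, visit_date date);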
I need to insert an empty value into a date field.
I've read "how to insert an empty value in mysql date type field?", which states to set the field to allow null. I've done that.
It also states to insert null rather than an empty value. Unfortunately that's not possible with the current setup - is there a way to allow it to input an empty string?
Say, your table is:
CREATE TABLE MyTable (
    A INT,
    B INT,
    C INT
);
The statement:
INSERT INTO MyTable (B,C) VALUES (3,4) ;
will leave A null, because any column omitted from the INSERT column list gets its default value (NULL here).
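Equivalently, if the column does appear in the column list, you can supply NULL explicitly (a sketch against the same table):
INSERT INTO MyTable (A, B, C) VALUES (NULL, 3, 4);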
I have a table which saves a timestamp value associated with the primary key of a record.
Table: primary_key_timestamp
Columns: primary_key VARCHAR(40), time_stamp TIMESTAMP.
I need a single query which updates the timestamp value for a primary key and returns true in the following cases:
Entry does not exist for the provided key: insert it and return true.
Entry exists and the timestamp value in the table is less than the value provided: update it and return true.
It should return false in the following case:
A timestamp already exists for the key and it has a higher value: don't do anything, just return false.
It's impossible to insert and return a result in a single statement, unless you're calling a stored procedure. If you are able to get ROW_COUNT() after the insert (which many drivers/clients will do implicitly), you can find out if this query makes any updates. Just check for a nonzero row count.
INSERT INTO primary_key_timestamp (primary_key, time_stamp)
VALUES ([your_key], CURRENT_TIMESTAMP())
ON DUPLICATE KEY UPDATE time_stamp = GREATEST(time_stamp, VALUES(time_stamp));
If you want to set time_stamp to a custom value, just replace CURRENT_TIMESTAMP() with your value.
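As a follow-up sketch of the row-count check mentioned above (the key and timestamp literals are placeholders): with the default client flags, MySQL reports 1 affected row for a fresh insert, 2 for an update that changed the value, and 0 when the row was left untouched, so a nonzero ROW_COUNT() maps to your "true" cases.
INSERT INTO primary_key_timestamp (primary_key, time_stamp)
VALUES ('some-key', '2022-08-31 12:00:00')
ON DUPLICATE KEY UPDATE time_stamp = GREATEST(time_stamp, VALUES(time_stamp));

-- 1 = inserted, 2 = updated, 0 = nothing changed
SELECT ROW_COUNT() > 0 AS was_updated;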
I have these tables:
CREATE TABLE `asource` (
`id` int(10) unsigned NOT NULL DEFAULT '0'
);
CREATE TABLE `adestination` (
`id` int(10) unsigned NOT NULL DEFAULT '0',
`generated` tinyint(1) GENERATED ALWAYS AS (id = 2) STORED NOT NULL
);
I copy a row from asource to adestination:
INSERT INTO adestination
SELECT asource.*
FROM asource;
The above generates an error:
Error Code: 1136. Column count doesn't match value count at row 1
OK, it seems quite strange to require me to mention the generated column. But OK, I add that column to the query:
INSERT INTO adestination
SELECT asource.*, NULL AS `generated`
FROM asource;
This worked fine in 5.7.10. However, it generates an error in 5.7.11 (due to a fix):
Error Code: 3105. The value specified for generated column 'generated' in table 'adestination' is not allowed.
Ok, next try:
INSERT INTO adestination
SELECT asource.*, 1 AS `generated`
FROM asource;
But still the same error. I have tried 0, TRUE, FALSE but the error persists.
DEFAULT is stated as the only allowed value (per the specs/docs). However, the following generates a syntax error (DEFAULT is not supported in that position):
INSERT INTO adestination
SELECT asource.*, DEFAULT AS `generated`
FROM asource;
So, how can I copy a row from one table to another using INSERT INTO ... SELECT if the destination table adds some columns where some of them are GENERATED?
The code calling this query is generic and has no knowledge of which columns the particular tables have. It only knows which extra columns the destination table has. The source table is a live table; the destination table is a historical version of the source table. It has a few extra columns, such as the user id that made the change, the type of change (insert, update, delete), when it happened, etc.
Sadly this is just how MySQL works now to "conform to SQL standards".
The only value a generated column can accept in an INSERT or UPDATE is DEFAULT; the other option is to omit the column altogether.
My poor man's workaround for these is to just disable the generated column while I'm working with the data (like when importing a dump) and then go back and add the generated column expression afterwards.
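A minimal sketch of that workaround, assuming the table definitions from the question (drop the stored generated column, load the data, then re-add the column with its expression):
ALTER TABLE adestination DROP COLUMN generated;

INSERT INTO adestination
SELECT asource.*
FROM asource;

ALTER TABLE adestination
  ADD COLUMN generated tinyint(1) GENERATED ALWAYS AS (id = 2) STORED NOT NULL;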
You must declare the columns explicitly and leave the generated column out of the list:
INSERT INTO adestination (id)
SELECT id
FROM asource;
It is best practice to list out the columns, and to use NULL for the auto-incremented id field.
INSERT INTO adestination
(id,
field1,
field2)
SELECT
NULL,
asource.field1,
asource.field2
FROM asource;
I'm experimenting with temporary tables and running into a problem.
Here's some super-simplified code of what I'm trying to accomplish:
IF(Object_ID('tempdb..#TempTroubleTable') IS NOT NULL) DROP TABLE #TempTroubleTable
select 'Hello' as Greeting,
NULL as Name
into #TempTroubleTable
update #TempTroubleTable
set Name = 'Monkey'
WHERE Greeting = 'Hello'
select * from #TempTroubleTable
Upon attempting the update statement, I get the error:
Conversion failed when converting the varchar value 'Monkey' to data type int.
I can understand why the temp table might not expect me to fill that column with varchars, but why does it assume int? Is there a way I can prime the column to expect varchar(max) but still initialize it with NULLs?
You need to cast NULL to the datatype you want, because by default it's treated as an int:
SELECT 'hello' AS greeting,
       CAST(NULL AS varchar(32)) AS name
INTO #temp
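Applied to the temp table from the question (a sketch, using varchar(max) as asked):
IF(Object_ID('tempdb..#TempTroubleTable') IS NOT NULL) DROP TABLE #TempTroubleTable

SELECT 'Hello' AS Greeting,
       CAST(NULL AS varchar(max)) AS Name
INTO #TempTroubleTable

UPDATE #TempTroubleTable
SET Name = 'Monkey'
WHERE Greeting = 'Hello'

SELECT * FROM #TempTroubleTable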
I am working on a table with around 5 million records. I'm loading records from a CSV file.
There is a unique column, url.
While inserting, if the url is already in the table, I want to make a change in the new url value and then do the insertion.
Example:
try inserting a record with a url of "book". If "book" already exists, the new record should have a url of "book-1" (then "book-2" and so on)
result: the url values "book-1","book-2"... are in the table in addition to the initial value book
I have figured out that there are two ways to do so:
Before inserting each record, check whether the url value already exists; if it does, make the required change to the new url value and then insert. I am afraid that this will result in poor performance.
Insert records without checking whether the url value already exists. If it does, handle the MySQL "#1062 - Duplicate entry" error, make the required change to the url value, and retry the insertion.
Is this possible? If so, how?
If this is a one-off problem, I'd like to recommend an ad-hoc MySQL solution:
If your table isn't MyISAM, convert to MyISAM.
Temporarily create an auto_increment integer column named url_suffix.
Temporarily delete the unique constraint on the url column.
Create the multiple-column index (url, url_suffix) and ensure that there are no other indexes that use url_suffix.
Insert all of your rows, allowing duplicate URLs. You'll notice that the auto_increment url_suffix column is now keyed on the url. So, the first row for a particular url will have a url_suffix of 1, the next 2, and so on.
Do an update like the following, then delete your temporary url_suffix column and put your unique constraint back. (A sketch of these temporary schema changes appears after the example data below.)
Query to update all the rows:
UPDATE urls
SET url = if (url_suffix = 1, url, CONCAT(url, '-', url_suffix - 1))
In fact, you could skip step 6, keep the auto_increment field so you could easily add duplicate URLs in the future, and simply fetch your URLs like this:
SELECT (if (url_suffix = 1, url, CONCAT(url, '-', url_suffix - 1))) AS url
FROM urls
Your data would look something like this:
url url_suffix
---------------------------
that 1
that 2
this 1
this 2
this 3
those 1
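For reference, a hedged sketch of the temporary schema changes described in the steps above, assuming the urls table has no primary key yet and that the unique constraint on url is an index named url (adjust the names to your schema):
ALTER TABLE urls ENGINE = MyISAM;

ALTER TABLE urls DROP INDEX url;

ALTER TABLE urls
  ADD COLUMN url_suffix INT UNSIGNED NOT NULL AUTO_INCREMENT,
  ADD PRIMARY KEY (url, url_suffix);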
The problem here is that a simple trigger will prove inefficient when inserting, because, as you say, the values will go from 'book' to 'book-1', 'book-2', etc. The easiest way to do this would be to have a new column which contains a numeric suffix defaulting to 0. This could be done in a stored procedure, e.g.:
CREATE PROCEDURE `insertURL`(inURL VARCHAR(255))
BEGIN
DECLARE thisSuffix INT UNSIGNED DEFAULT 0;
-- We have to get this suffix first, as MySQL won't let you select from the table you are inserting into
SELECT COALESCE(MAX(url_suffix)+1,0) INTO thisSuffix FROM urls WHERE url_column = inURL;
-- Now the suffix is retrieved, insert the row
INSERT INTO urls (
url_column,
url_suffix
) VALUES (
inURL,
thisSuffix
);
-- And then select the generated URL
SELECT IF(thisSuffix>0,CONCAT(inURL,'-',thisSuffix),inURL) AS outURL;
END
Which is then invoked using
CALL insertURL('book');
It will then return 'book' if the suffix is 0, or 'book-1' (and so on) if the suffix is greater than 0.
For testing purposes, my table design was:
CREATE TABLE `urls` (
`url_column` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL ,
`url_suffix` tinyint(3) UNSIGNED NOT NULL ,
PRIMARY KEY (`url_column`, `url_suffix`)
);