golang - mysql driver - database functions - mysql

I have created a struct to store spatial types and I have created a scan function to help query rows in my database. I am having issues inserting this type.
I can insert data using the following SQL:
INSERT INTO `table` (`spot`) VALUES (GeomFromText('POINT(10 10)'));
If I use the Value interface in database/sql/driver:
type Value interface{}
Value is a value that drivers must be able to handle. It is either nil or an instance of one of these types:
int64
float64
bool
[]byte
string [*] everywhere except from Rows.Next.
time.Time
And use this code:
func (p Point) Value() (driver.Value, error) {
    return "GeomFromText('" + p.ToWKT() + "')", nil
}
I end up with the following SQL statement going to the database:
INSERT INTO `table` (`spot`) VALUES ('GeomFromText('POINT(10 10)')');
The issue is that the function GeomFromText ends up inside the quotes. Is there a way to avoid this scenario? I am using gorm and trying to keep raw SQL queries to a minimum.
The MySQL type used on the database side is POINT.

Please see the two URLs below, where the concept was poached from.
Schema
-- http://howto-use-mysql-spatial-ext.blogspot.com/
create table Points
( id int auto_increment primary key,
  name VARCHAR(20) not null,
  location Point NOT NULL,
  description VARCHAR(200) not null,
  SPATIAL INDEX(location),
  key(name)
) engine=MyISAM; -- for use of spatial indexes and avoiding error 1464
-- insert a row, so we can prove Update later will work
INSERT INTO Points (name, location, description) VALUES
( 'point1' , GeomFromText( ' POINT(31.5 42.2) ' ) , 'some place');
Update statement
-- concept borrowed from http://stackoverflow.com/a/7135890
UPDATE Points
set location = PointFromText(CONCAT('POINT(',13.33,' ',26.48,')'))
where id=1;
Verify
select * from Points;
(when you open the Value Editor to see the blob, the point is updated)
So, the takeaway is to play with the concat() inside of the update statement.
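It is worth spelling out why the Value() approach can never work: database/sql always ships a driver.Value to the server as data (escaped, or bound as a parameter), never as SQL text, so any function call returned from Value() will inevitably arrive quoted. If you are on gorm v2, one way around this is the GormValuerInterface, which lets a type emit an SQL expression instead of a plain value. A minimal sketch, assuming gorm v2 and a ToWKT() method like the question's (ST_GeomFromText is the MySQL 5.7+ spelling of GeomFromText):
package geo

import (
    "context"
    "fmt"

    "gorm.io/gorm"
    "gorm.io/gorm/clause"
)

// Point mirrors the struct from the question.
type Point struct {
    X, Y float64
}

// ToWKT renders the point as well-known text, e.g. "POINT(10 10)".
func (p Point) ToWKT() string {
    return fmt.Sprintf("POINT(%v %v)", p.X, p.Y)
}

// GormValue (gorm v2) wraps the bound WKT parameter in the SQL function
// call, so GeomFromText reaches the server as SQL, not as a quoted string.
func (p Point) GormValue(ctx context.Context, db *gorm.DB) clause.Expr {
    // ST_GeomFromText is the MySQL 5.7+ name; older servers use GeomFromText.
    return gorm.Expr("ST_GeomFromText(?)", p.ToWKT())
}
With plain database/sql the equivalent is to keep the function in the statement text and bind only the WKT, e.g. db.Exec("INSERT INTO Points (name, location, description) VALUES (?, ST_GeomFromText(?), ?)", name, p.ToWKT(), desc).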

Related

How to insert a json object with ORACLE 19 and 21

Because I don't use Oracle 21, I can't use the JSON type in the definition of a table.
CREATE TABLE TABLE_TEST_QUERY_2
(
  TTQ_NR INTEGER GENERATED BY DEFAULT AS IDENTITY,
  TTQ_QUERY_TO_BE_TESTED VARCHAR2(4000 BYTE),
  TTQ_RESULT CLOB,
  -- RESULT JSON, upgrade Oracle 21
  TTQ_TTQ_CREATION_DATE DATE DEFAULT SYSDATE,
  TTQ_ALREADY_TESTED INTEGER DEFAULT 0,
  TTQ_TEST_PASSED INTEGER,
  PRIMARY KEY (TTQ_NR),
  CONSTRAINT RESULT CHECK (TTQ_RESULT IS JSON)
)
I want to insert a JSON object into TTQ_RESULT, not a string representing JSON.
I have a way to transform JSON into a CLOB:
select to_clob(utl_raw.cast_to_raw(json_object('a' value 2))) from dual;
But it's not working if I try to insert the CLOB created from JSON into the table:
INSERT INTO BV_OWN.TABLE_TEST_QUERY_2 TTQ_RESULT
VALUES to_clob(utl_raw.cast_to_raw (json_object(a value '2')));
[Error] Execution (3: 13): ORA-03001: unimplemented feature
code (Oracle 18)
Update:
I've tried to insert JSON on dbfiddle with Oracle 21, using the JSON type to define a column.
CREATE TABLE TABLE_TEST_QUERY_2
(
  TTQ_NR INTEGER GENERATED BY DEFAULT AS IDENTITY,
  TTQ_QUERY_TO_BE_TESTED VARCHAR2(4000 BYTE),
  TTQ_RESULT JSON,
  TTQ_TTQ_CREATION_DATE DATE DEFAULT SYSDATE,
  TTQ_ALREADY_TESTED INTEGER DEFAULT 0,
  TTQ_TEST_PASSED INTEGER,
  PRIMARY KEY (TTQ_NR)
)
INSERT INTO TABLE_TEST_QUERY_2 TTQ_RESULT
VALUES json_object('a' value 2);
I get the same error:
ORA-03001: unimplemented feature
Maybe these two problems are related.
code (Oracle 21)
Your first problem is caused by wrong syntax: you have omitted the brackets around the column list and around the column value:
INSERT INTO BV_OWN.TABLE_TEST_QUERY_2 (TTQ_RESULT)
VALUES ( to_clob(utl_raw.cast_to_raw (json_object(a value '2'))));
Which fixes the unimplemented feature exception but now you get:
ORA-00984: column not allowed here
This is because you are using a different query from the SELECT: you changed json_object('a' value 2) to json_object(a value '2'), and the query cannot find a column a.
If you fix that by using the original code from the SELECT, with 'a' as a string literal and not a column identifier:
INSERT INTO BV_OWN.TABLE_TEST_QUERY_2 (TTQ_RESULT)
VALUES ( to_clob(utl_raw.cast_to_raw (json_object('a' value 2))));
You will then get the error:
ORA-02290: check constraint (FIDDLE_FCJHJVMCPHKXUCUPDUSV.RESULT) violated
Because converting to a RAW and then to a CLOB mangles the value: the RAW comes back as its hexadecimal representation, which is not valid JSON.
You need something much simpler:
INSERT INTO BV_OWN.TABLE_TEST_QUERY_2 (TTQ_RESULT)
VALUES (json_object('a' value 2));
or:
INSERT INTO BV_OWN.TABLE_TEST_QUERY_2 (TTQ_RESULT)
VALUES (EMPTY_CLOB() || json_object('a' value 2));
Which both work.
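As a quick sanity check, a sketch using Oracle's IS JSON condition to confirm that what was stored really is JSON:
SELECT TTQ_NR, TTQ_RESULT
FROM BV_OWN.TABLE_TEST_QUERY_2
WHERE TTQ_RESULT IS JSON;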
db<>fiddle here

SQL Update statement truncated incorrect double value error

I'm trying to update my table by matching on column values concatenated into a string. My query goes like this:
UPDATE tbl_testing
SET result= 'Hey'
WHERE (SELECT (colOne) + '-' + (colTwo) + '-' + (colThree)) = 'r-r-r'
The columns colOne, colTwo and colThree already contain 'r', but SQLyog shows "Truncated incorrect DOUBLE value: 'r-r-r'"
and the result column of all the other rows became 'Hey'. What should I do?
You have to decide whether it is MySQL or MSSQL.
In MySQL, string concatenation is not the + sign; + is numeric addition, so your 'r' values are coerced to DOUBLE, which is exactly why you see "Truncated incorrect DOUBLE value: 'r-r-r'". Instead you pass the columns to the CONCAT() function, e.g. SELECT CONCAT(Field1, Field2) AS ConcatenatedString.
Reevaluate which DB engine you are using and adapt the query, as sketched below.
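If the server is in fact MySQL, the same update could look like this (a sketch reusing the question's table and column names):
UPDATE tbl_testing
SET result = 'Hey'
WHERE CONCAT(colOne, '-', colTwo, '-', colThree) = 'r-r-r';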
In MSSQL, string concatenation is indeed the + sign. Your query works in MSSQL and updates the result column with the value you set.
DDL
CREATE TABLE [dbo].[tbl_testing](
    [id] [int] NULL,
    [result] [nvarchar](4000) NULL,
    [colOne] [nvarchar](4000) NULL,
    [colTwo] [nvarchar](4000) NULL,
    [colThree] [nvarchar](4000) NULL
) ON [PRIMARY]
GO
INSERT INTO tbl_testing (id, colOne, colTwo, colThree)
VALUES (1, 'r', 'r', 'r')
Update statement
UPDATE tbl_testing
SET result='Hey. I am a concatenated string'
WHERE (SELECT (colOne)+'-'+(colTwo)+'-'+(colThree))='r-r-r'
Output
id  result                           colOne  colTwo  colThree
1   Hey. I am a concatenated string  r       r       r
Changing comment to answer:
You should avoid writing statements that way. The database cannot use any index, and you hand the server extra work: it has to concatenate the values of every row and only then compare the result with the given string.
A better way is to replace your WHERE clause with something like:
WHERE colOne = 'r' AND colTwo = 'r' AND ...
which works faster because no string concatenation is needed.
This solution runs much faster and reads much better.
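And if these columns are filtered on regularly, a composite index lets the optimizer actually use that rewritten WHERE clause. A sketch (the index name is made up):
CREATE INDEX idx_tbl_testing_cols ON tbl_testing (colOne, colTwo, colThree);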

postgres force json datatype

When working with the JSON datatype, is there a way to ensure the input JSON has certain elements? I don't mean a primary key; I want the JSON that gets inserted to at least have the id and name elements. It can have more, but at the minimum id and name must be there.
Thanks.
This function checks what you want:
create or replace function json_has_id_and_name(val json)
returns boolean language sql as $$
    select coalesce(
        (
            -- true when both 'id' and 'name' appear among the object's keys
            select array['id', 'name'] <@ array_agg(key)
            from json_object_keys(val) key
        ),
        false)
$$;
select json_has_id_and_name('{"id":1, "name":"abc"}'), json_has_id_and_name('{"id":1}');
 json_has_id_and_name | json_has_id_and_name
----------------------+----------------------
 t                    | f
(1 row)
You can use it in a check constraint, e.g.:
create table my_table (
    id int primary key,
    jdata json check (json_has_id_and_name(jdata))
);
insert into my_table values (1, '{"id":1}');
ERROR: new row for relation "my_table" violates check constraint "my_table_jdata_check"
DETAIL: Failing row contains (1, {"id":1}).
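For contrast, a row that carries both required keys passes the check (extra keys are fine), e.g.:
insert into my_table values (2, '{"id":2, "name":"abc", "extra":true}');
INSERT 0 1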

PostgreSQL: insert data into table from json

Right now I manually build the insert statement from the JSON, like so:
insert into Table (field1, field2) values (val1, val2)
but it's not a comfortable way to insert data from JSON!
I've found function json_populate_record and tried to use it:
create table test (id serial, name varchar(50));
insert into test select * from json_populate_record(NULL::test, '{"name": "John"}');
but it fails with the message: null value in column "id" violates not-null constraint
PG knows that id is serial but pretends to be a fool. It does the same for all fields with defaults.
Is there a more elegant way to insert data from JSON into a table?
There's no easy way for json_populate_record to return a marker that means "generate this value".
PostgreSQL does not allow you to insert NULL to specify that a value should be generated. If you ask for NULL, Pg expects you to mean NULL and doesn't want to second-guess you. Additionally, it's perfectly OK to have a generated column with no NOT NULL constraint, in which case it's perfectly fine to insert NULL into it.
If you want to have PostgreSQL use the table default for a value there are two ways to do this:
Omit that column from the INSERT column-list; or
Explicitly write DEFAULT, which is only valid in a VALUES expression
Since you can't use VALUES(DEFAULT, ...) here, your only option is to omit the column from the INSERT column-list:
regress=# create table test (id serial primary key, name varchar(50));
CREATE TABLE
regress=# insert into test(name) select name from json_populate_record(NULL::test, '{"name": "John"}');
INSERT 0 1
Yes, this means you must list the columns. Twice, in fact, once in the SELECT list and once in the INSERT column-list.
To avoid the need for that, PostgreSQL would have to offer a way of specifying DEFAULT as a value for a record, so json_populate_record could return DEFAULT instead of NULL for columns that aren't defined. That might not be what you intended for all columns, and it would raise the question of how DEFAULT should be treated when json_populate_record was not being used in an INSERT expression.
So I guess json_populate_record might be less useful than you hoped for rows with generated keys.
Continuing from Craig's answer, you probably need to write some sort of stored procedure to perform the necessary dynamic SQL, like the following:
CREATE OR REPLACE FUNCTION jsoninsert(relname text, reljson text)
RETURNS record AS
$BODY$
DECLARE
    ret RECORD;
    inputstring text;
BEGIN
    -- build a comma-separated, properly quoted column list
    -- from the keys present in the JSON document
    SELECT string_agg(quote_ident(key), ',') INTO inputstring
    FROM json_object_keys(reljson::json) AS X (key);
    -- insert only those columns, so omitted ones get their defaults
    EXECUTE 'INSERT INTO ' || quote_ident(relname)
        || '(' || inputstring || ') SELECT ' || inputstring
        || ' FROM json_populate_record(NULL::' || quote_ident(relname)
        || ', json_in($1)) RETURNING *'
    INTO ret USING reljson::cstring;
    RETURN ret;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
Which you'd then call with
SELECT jsoninsert('test', '{"name": "John"}');
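A quick usage sketch against the test table from earlier: because the generated column list only contains keys present in the JSON, id is left out of the INSERT and gets its sequence value.
SELECT jsoninsert('test', '{"name": "John"}');
SELECT * FROM test; -- the new row has a generated id and name = 'John'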

Conversion error with NULL column and SELECT INTO

I'm experimenting with temporary tables and running into a problem.
Here's some super-simplified code of what I'm trying to accomplish:
IF (Object_ID('tempdb..#TempTroubleTable') IS NOT NULL) DROP TABLE #TempTroubleTable

select 'Hello' as Greeting,
       NULL as Name
into #TempTroubleTable

update #TempTroubleTable
set Name = 'Monkey'
WHERE Greeting = 'Hello'

select * from #TempTroubleTable
Upon attempting the update statement, I get the error:
Conversion failed when converting the varchar value 'Monkey' to data type int.
I can understand why the temp table might not expect me to fill that column with varchars, but why does it assume int? Is there a way I can prime the column to expect varchar(max) but still initialize it with NULLs?
You need to cast NULL to the datatype you want, because SELECT INTO infers int for an untyped NULL by default:
Select 'hello' as greeting,
       Cast(null as varchar(32)) as name
Into #temp
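Applying that to the question's code, a sketch that primes the column as varchar(max), as asked:
IF (Object_ID('tempdb..#TempTroubleTable') IS NOT NULL) DROP TABLE #TempTroubleTable

select 'Hello' as Greeting,
       Cast(NULL as varchar(max)) as Name  -- typed NULL: the column becomes varchar(max)
into #TempTroubleTable

update #TempTroubleTable
set Name = 'Monkey'   -- now succeeds
WHERE Greeting = 'Hello'

select * from #TempTroubleTable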