How to use a JSON stored procedure argument to insert multiple records? - mysql

I have a linking table between two tables, ja1_surveyors and ja1_stores. I'm trying to write a stored procedure that will take three arguments, the third being a JSON array of store_id values.
I've tried this, but I know it's not correct:
/*
========================================================================================
Set the list of stores for a surveyor in a survey. Used with template to create the list
a user sees to edit, copy and delete surveyors in a survey
Accepts three arguments:
arg_srvy_id Survey key
arg_srvr_id Surveyor key
STORE_LIST JSON value holding a list of store keys assigned to this survey/surveyor
STORE_LIST JSON should be in the form: '{store_id:val1},{store_id:val2}' etc.
========================================================================================
*/
DROP PROCEDURE IF EXISTS SURVEYOR_LINK_STORES;
DELIMITER //
CREATE PROCEDURE SURVEYOR_LINK_STORES( IN arg_srvy_id INT(11),IN arg_srvr_id INT(11),IN STORE_LIST JSON)
BEGIN
/* Remove all links for this surveyor to stores for this survey */
DELETE FROM `ja1_storesurveyor`
WHERE `lnk_strsrvr_srvy_id` = arg_srvy_id AND `lnk_strsrvr_srvr_id` = arg_srvr_id;
/* Add links between this survey and surveyor for each key in STORE_LIST */
INSERT INTO `ja1_store_surveyor`
(
`lnk_strsrvr_srvy_id`,
`lnk_strsrvr_srvr_id`,
`lnk_strsrvr_store_id`
)
SELECT
arg_srvy_id,
arg_srvr_id,
STORE_LIST->>`$.store_id`
FROM STORE_LIST;
END
DELIMITER ;
The problem seems to be the SELECT part of the INSERT statement.
All of the columns are INT(11), and I'm using MySQL version 5.6.41-84.1.
What am I missing?

The best way to do this is with JSON_TABLE() but it requires MySQL 8.0.
Edit: When I wrote this answer, your original question did not make it clear you were using an old version of MySQL Server.
CREATE PROCEDURE SURVEYOR_LINK_STORES(
IN arg_srvy_id INT,
IN arg_srvr_id INT,
IN arg_STORE_LIST JSON)
BEGIN
/* Remove all links for this surveyor to stores for this survey */
DELETE FROM `ja1_storesurveyor`
WHERE `lnk_strsrvr_srvy_id` = arg_srvy_id
AND `lnk_strsrvr_srvr_id` = arg_srvr_id;
/* Add links between this survey and surveyor for each key in STORE_LIST */
INSERT INTO `ja1_store_surveyor`
(
`lnk_strsrvr_srvy_id`,
`lnk_strsrvr_srvr_id`,
`lnk_strsrvr_store_id`
)
SELECT
arg_srvy_id,
arg_srvr_id,
j.store_id
FROM JSON_TABLE(
arg_STORE_LIST, '$[*]' COLUMNS(
store_id VARCHAR(...) PATH '$'
)
) AS j;
END
I'm guessing the appropriate data type for store_id is a varchar, but I don't know what the maximum length should be.
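Assuming the procedure above, and noting that the '$[*]' path expects a plain JSON array rather than the '{store_id:val1}' form shown in the question, a call might look like this (the ids here are made up for illustration):

```sql
-- Hypothetical ids: survey 5, surveyor 12, stores 101/102/103
CALL SURVEYOR_LINK_STORES(5, 12, '[101, 102, 103]');
```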
Re your comment: MySQL 5.6 doesn't have any JSON data type, so your stored procedure won't work as you wrote it (the arg_STORE_LIST argument cannot use the JSON data type).
FYI, MySQL 5.6 passed its end-of-life in February 2021, so the version you are using won't get any more bug fixes or security fixes. You should really upgrade, regardless of the JSON issue.
The equivalent code to insert multiple rows in MySQL 5.6 is a lot of work and code to write. I'm not going to write an example for such an old version of MySQL.
You can find other examples on Stack Overflow with the general principle. It involves taking the argument as a VARCHAR, not JSON, and writing a WHILE loop to pick apart substrings of the varchar.
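The general shape of that loop, assuming the store keys arrive as a comma-separated VARCHAR such as '101,102,103' (a sketch only; names follow the question's schema but this is untested against it):

```sql
DELIMITER //
CREATE PROCEDURE SURVEYOR_LINK_STORES_56(
    IN arg_srvy_id INT,
    IN arg_srvr_id INT,
    IN arg_store_list VARCHAR(1000))  -- e.g. '101,102,103'
BEGIN
    DECLARE v_store VARCHAR(20);
    WHILE LENGTH(arg_store_list) > 0 DO
        -- take everything up to the first comma
        SET v_store = SUBSTRING_INDEX(arg_store_list, ',', 1);
        INSERT INTO `ja1_store_surveyor`
            (`lnk_strsrvr_srvy_id`, `lnk_strsrvr_srvr_id`, `lnk_strsrvr_store_id`)
        VALUES (arg_srvy_id, arg_srvr_id, CAST(v_store AS UNSIGNED));
        -- drop the consumed element (and its trailing comma, if any)
        IF LOCATE(',', arg_store_list) > 0 THEN
            SET arg_store_list = SUBSTRING(arg_store_list, LOCATE(',', arg_store_list) + 1);
        ELSE
            SET arg_store_list = '';
        END IF;
    END WHILE;
END //
DELIMITER ;
```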

Related

" ( " not valid in this position, expecting an identifier in MySQL

Today is 1/30/2022, and I have been following along with an #AlexTheAnalyst video. I am on a Mac using MySQL version 8.0.27. (The video uses Windows-based SQL Server tooling.) I am stuck! I am trying to create a temporary table. MySQL does not like the # in the table name #PercentPopVaccinated as used in the video. When I remove the # and run the query, I get 0 rows returned. I have researched on Stack Overflow etc. and I am not coming up with a solution that I understand. (Newbie here)
I have dropped the table that was created and I am starting over.
I am getting an error when creating the temp table that states MySQL is expecting an identifier after the first " ( ". Anyone else have a similar issue?
Create Table #PercentPopulationVaccinated
(
continent nvarchar(255),
location nvarchar(255),
date datetime,
population numeric,
new_vaccinations numeric,
RollingVacCount numeric
)
Insert into #PercentPopulationVaccinated
Select dea.continent, dea.location, dea.date, dea.population, vac.new_vaccinations
, SUM(cast(vac.new_vaccinations as UNSIGNED)) OVER (Partition by dea.location Order by dea.location, dea.date)
as RollingVacCount
-- (RollingVacCount/population)*100
From project_portfolio.covid_deaths dea
Join project_portfolio.covid_vaccinations vac
On dea.location = vac.location
and dea.date = vac.date
where dea.continent is not null
-- order by 2,3
Select *, (RollingVacCount/Population)*100
From #PercentPopulationVaccinated;
So I'd say the underlying problem is that you are watching a video tutorial that uses SQL Server, but you are using MySQL. There are many similarities, but it is not an exact match. For instance, a leading # marks a temporary table in SQL Server, but # is not valid in a MySQL table name. If you want to use a different database server than the one the tutorial targets, you will have to translate some concepts yourself.
Another commenter already posted this link, which indicates the syntax for creating temp tables in MySQL.
https://dev.mysql.com/doc/refman/8.0/en/create-table.html#create-table-temporary-tables
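In MySQL the same idea, using the column list from your attempt, looks roughly like this (note the TEMPORARY keyword instead of the # prefix, and no # in the name):

```sql
CREATE TEMPORARY TABLE PercentPopulationVaccinated
(
    continent VARCHAR(255),
    location VARCHAR(255),
    `date` DATETIME,
    population NUMERIC,
    new_vaccinations NUMERIC,
    RollingVacCount NUMERIC
);
```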

How to reuse JSON arguments within PostgreSQL stored procedure

I am using a stored procedure to INSERT into and UPDATE a number of tables. Some of the data is derived from a JSON parameter.
Although I have successfully used json_to_recordset() to extract named data from the JSON parameter, I cannot figure how to use it in an UPDATE statement. Also, I need to use some items of data from the JSON parameter a number of times.
Q: Is there a way to use json_to_recordset() to extract named data to a temporary table to allow me to reuse the data items throughout my stored procedure? Maybe I should SELECT INTO variables within the stored procedure?
Q: Failing that can anyone please provide a simple example of how to update a table using data returned from json_to_recordset(). I must also include data not from the JSON parameter such as now()::timestamp(0).
This is how I have used json_to_recordset() so far:
INSERT INTO myRealTable (
rec_timestamp,
rec_data1,
rec_data2
)
SELECT
now()::timestamp(0),
x.data1,
x.data2
FROM json_to_recordset(json_parameter) AS x
(
data1 int,
data2 boolean
);
Thank you.

How to avoid long list of arguments to a stored procedure?

I have a stored procedure to update certain values in two tables. But the list of arguments or the set of values to update has grown to 10 arguments and could grow more in future. How could this be handled?
DELIMITER $$
CREATE DEFINER=`root`@`localhost` PROCEDURE `update_base_plan`(userId INT, newPlanId INT, nextPlanId INT, maxCreditPulseAllocated INT)
begin
if userId is not null and newPlanId is not null and nextPlanId is not null
and maxCreditPulseAllocated is not null
then
update planallocation as pa
left join subscriptioninfo as si
on pa.SubscriptionId = si.SubscriptionId
left join plans as pl
on pa.CurrentPlanId = pl.PlanId
set pa.CurrentPlanId = newPlanId, pa.NextPlanId = nextPlanId,
pa.MaxCreditPulseAllocated = maxCreditPulseAllocated
where pl.Plan_Type = 'base' and
si.UserId = userId;
end if;
end$$
DELIMITER ;
Well, technically, when a (stored) procedure requires n parameters, you usually can't get around providing n parameters.
In some programming languages, however, instead of providing all those parameters one at a time, an array / dictionary / object is provided, turning them into "one parameter". I'm not certain whether this is possible in MySQL, but you can probably use JSON as input and have MySQL unpack it. See http://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html for example (and its sibling pages).
Depending on the data types you could use other encodings, similar to CSV or newline-separated values. However, I'd advise against positional arguments when they aren't communicated well enough.
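As a sketch of that JSON approach in MySQL 5.7+ (the procedure and variable names are illustrative; the UPDATE body would be the same as in the question, using the local variables):

```sql
DELIMITER //
CREATE PROCEDURE update_base_plan_json(IN args JSON)
BEGIN
    DECLARE v_user_id      INT;
    DECLARE v_new_plan_id  INT;
    DECLARE v_next_plan_id INT;
    DECLARE v_max_credit   INT;

    -- unpack the "one parameter" into local variables
    SET v_user_id      = JSON_UNQUOTE(JSON_EXTRACT(args, '$.userId'));
    SET v_new_plan_id  = JSON_UNQUOTE(JSON_EXTRACT(args, '$.newPlanId'));
    SET v_next_plan_id = JSON_UNQUOTE(JSON_EXTRACT(args, '$.nextPlanId'));
    SET v_max_credit   = JSON_UNQUOTE(JSON_EXTRACT(args, '$.maxCreditPulseAllocated'));

    -- ... same UPDATE as in the question, using the v_* variables ...
END //
DELIMITER ;
```

A caller would then pass a single document, e.g. CALL update_base_plan_json('{"userId": 1, "newPlanId": 2, "nextPlanId": 3, "maxCreditPulseAllocated": 100}'); new fields can be added without changing the procedure signature.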

Is there a way to change the default date input format

I have a source of data from where I extract some fields, among the fields there are some date fields and the source sends their values like this
#DD/MM/YYYY#
Almost all the fields can be sent into the query with no modification, except this one of course.
I have written a program that gets the data from an internet connection and sends it to the MySQL server, and it's sending everything as it should. I am sure because I enabled general logging in the MySQL server and I can see all the queries are correct, except the ones with date fields.
I would like to avoid parsing the fields this way because it's a lot of work since it's all written in C, but if there is no way around it, I understand and would accept that as an answer of course.
As an example suppose we had the following
INSERT INTO sometable VALUES ('#12/10/2015#', ... OTHER_VALUES ..., '#11/10/2015#');
in this case I send the whole thing as a query using mysql_query() from libmysqlclient.
In other cases I can split the parts of the message in something that is like an instruction and the parameters, something like this
iab A,B,C,#12/10/2015#,X,Y,#11/10/2015#
which could mean INSERT INTO table_a_something_b_whatever VALUES, and in this situation of course, I capture all the parameters and send a single query with a list of VALUES in it. Also in this situation, it's rather simple because I can handle the date like this
char date[] = "#11/10/2015#";
char iso[11];
int day;
int month;
int year;
if (sscanf(date, "#%d/%d/%d#", &day, &month, &year) == 3)
{
    /* it's fine, build a sane YYYY-MM-DD */
    snprintf(iso, sizeof iso, "%04d-%02d-%02d", year, month, day);
}
So the question is:
How can I tell the MySQL server in what format the date fields are?
Clarification to: Comment 1
Not necessarily INSERT, it's more complex than that. They are sometimes queries with all their parameters in it, sometimes they are just the parameters and I have to build the query. It's a huge mess but I can't do anything about it because it's a paid database and I must use it for the time being.
The real problem is when the query comes from the source and has to be sent as it is, because then there can be many occurrences. When I split the parameters one by one there is no real problem because parsing the above date format and generating the appropriate value of MySQL is quite simple.
You can use STR_TO_DATE() in MySQL:
SELECT STR_TO_DATE('#08/10/2015#', '#%d/%m/%Y#');
Use this as part of your INSERT process:
INSERT INTO yourtable (yourdatecolumn) VALUES (STR_TO_DATE('#08/10/2015#', '#%d/%m/%Y#'));
The only thing I can think of at the moment would be to change your column type from DATETIME to VARCHAR and use a BEFORE INSERT trigger to fix the "wrong" dates.
Something like this:
DELIMITER //
CREATE TRIGGER t1 BEFORE INSERT on myTable FOR EACH ROW
BEGIN
IF (NEW.myDate regexp '#[[:digit:]]+\/[[:digit:]]+\/[[:digit:]]+#') THEN
SET NEW.myDate = STR_TO_DATE(NEW.myDate,'#%d/%m/%Y#');
END IF;
END; //
DELIMITER ;
If you only need to run the import in question once, use the trigger to generate a proper datetime column from the inserts, and drop the varchar column afterwards:
(myDate := varchar column to be dropped afterwards; myRealDate := DATETIME column to keep afterwards)
DELIMITER //
CREATE TRIGGER t1 BEFORE INSERT on myTable FOR EACH ROW
BEGIN
IF (NEW.myDate regexp '#[[:digit:]]+\/[[:digit:]]+\/[[:digit:]]+#') THEN
SET NEW.myRealDate = STR_TO_DATE(NEW.myDate,'#%d/%m/%Y#');
else
#assume a valid date representation
SET NEW.myRealDate = NEW.myDate;
END IF;
END; //
DELIMITER ;
Unfortunately you cannot use a trigger to work on the datetime column itself, because MySQL will already have rejected or mangled the NEW.myDate value before the trigger can fix it.

MySQL SET Type in PostgreSQL? [duplicate]

This question already has an answer here:
convert MySQL SET data type to Postgres
(1 answer)
Closed 9 years ago.
I'm trying to use the MySQL SET type in PostgreSQL, but I found only arrays, which have quite similar functionality but don't meet my requirements.
Does PostgreSQL have a similar data type?
You can use following workarounds:
1. BIT strings
You can define your set of maximum N elements as simply BIT(N).
It is a little bit awkward to populate and retrieve - you will have to use bit masks as set members. But bit strings really shine for set operations: intersection is simply &, union is |.
This type is stored very efficiently - bit per bit with small overhead for length.
Also, it is nice that length is not really limited (but you have to decide it upfront).
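A small illustration of the idea, using a three-member set where each member is a positional bit (the mapping of members to bit positions is up to you):

```sql
-- set {a, b, c}; a = B'100', b = B'010', c = B'001'
-- {a,b} ∩ {b,c} = {b}, i.e. 010; {a,b} ∪ {b,c} = {a,b,c}, i.e. 111
SELECT B'110' & B'011' AS intersection,
       B'110' | B'011' AS union_of_sets;
```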
2. HSTORE
HSTORE type is an extension, but very easy to install. Simply executing
CREATE EXTENSION hstore
for most installations (9.1+) will make it available. Rumor has it that PostgreSQL 9.3 will have HSTORE as standard type.
It is not really a set type, but more like Perl hash or Python dictionary: it keeps arbitrary set of key=>value pairs.
With that, it is not very efficient (certainly not BIT-string efficient), but it does provide the functions essential for sets: || for union; intersection is a little bit awkward: use
slice(a,akeys(b)) || slice(b,akeys(a))
You can read more about HSTORE here.
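For example, treating the hstore keys as the set members (the values are just dummies):

```sql
CREATE EXTENSION IF NOT EXISTS hstore;

-- union of the key sets {a,b} and {b,c}: keys a, b, c
SELECT ('a=>1, b=>1'::hstore) || ('b=>1, c=>1'::hstore) AS union_of_sets;

-- intersection of the key sets, via the slice/akeys trick above: key b
SELECT slice('a=>1, b=>1'::hstore, akeys('b=>1, c=>1'::hstore)) AS intersection;
```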
What about an array with a check constraint:
create table foobar
(
myset text[] not null,
constraint check_set
check ( array_length(myset,1) <= 2
and (myset = array[''] or 'one'= ANY(myset) or 'two' = ANY(myset))
)
);
This would match the definition of SET('one', 'two') as explained in the MySQL manual.
The only thing that this would not do, is to "normalize" the array. So
insert into foobar values (array['one', 'two']);
and
insert into foobar values (array['two', 'one']);
would be displayed differently than in MySQL (where both would wind up as 'one','two')
The check constraint will however get messy with more than 3 or 4 elements.
Building on a_horse_with_no_name's answer above, I would suggest something just a little more complex:
CREATE FUNCTION set_check(in_value anyarray, in_check anyarray)
RETURNS BOOL LANGUAGE SQL IMMUTABLE AS
$$
WITH basic_check AS (
select bool_and(v = any($2)) as condition, count(*) as ct
FROM unnest($1) v
GROUP BY v
), length_check AS (
SELECT count(*) = 0 as test FROM unnest($1)
)
SELECT bool_and(condition AND ct = 1)
FROM basic_check
UNION
SELECT test from length_check where test;
$$;
Then you should be able to do something like:
CREATE TABLE set_test (
my_set text[] CHECK (set_check(my_set, array['one'::text,'two']))
);
This works:
postgres=# insert into set_test values ('{}');
INSERT 0 1
postgres=# insert into set_test values ('{one}');
INSERT 0 1
postgres=# insert into set_test values ('{one,two}');
INSERT 0 1
postgres=# insert into set_test values ('{one,three}');
ERROR: new row for relation "set_test" violates check constraint "set_test_my_set_check"
postgres=# insert into set_test values ('{one,one}');
ERROR: new row for relation "set_test" violates check constraint "set_test_my_set_check"
Note this assumes that for your set, every value must be unique (we are talking sets here). The function should perform very well and should meet your needs. It also has the advantage of handling sets of any size.
Storage-wise it is completely different from MySQL's implementation. It will take up more space on disk but should handle sets with as many members as you like, provided that you aren't running up against storage limits.... So this should have a superset of functionality in comparison to MySQL's implementation. One significant difference though is that this does not collapse the array into distinct values. It just prohibits them. If you need that too, look at a trigger.
This solution also leaves the ordinality of the input data intact, so '{one,two}' is distinct from '{two,one}'; if you need that behavior changed, you may want to look into exclusion constraints on PostgreSQL 9.2.
Are you looking for enumerated data types?
PostgreSQL 9.1 Enumerated Types
From reading the page referenced in the question, it seems like a SET is a way of storing up to 64 named boolean values in one column. PostgreSQL does not provide a way to do this. You could use independent boolean columns, or some size of integer and twiddle the bits directly. Adding two new tables (one for the valid names, and the other to join names to detail rows) could make sense, especially if there is the possibility of needing to associate any other data to individual values.
Some time ago I wrote a similar extension:
https://github.com/okbob/Enumset
but it is not complete.
More complete and closer to MySQL is the functionality from pltoolkit:
http://okbob.blogspot.cz/2010/12/bitmapset-for-plpgsql.html
http://pgfoundry.org/frs/download.php/3203/pltoolbox-1.0.2.tar.gz
http://postgres.cz/wiki/PL_toolbox_%28en%29
The find_in_set function can be emulated via arrays:
http://okbob.blogspot.cz/2009/08/mysql-functions-for-postgresql.html
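A minimal sketch of that emulation: FIND_IN_SET returns the 1-based position of an element in a comma-separated list, which array_position gives you on PostgreSQL 9.5+; on older versions you can at least test membership with = ANY:

```sql
-- membership test, like FIND_IN_SET('b', 'a,b,c') > 0 in MySQL
SELECT 'b' = ANY (string_to_array('a,b,c', ',')) AS is_member;

-- 1-based position, like FIND_IN_SET itself (PostgreSQL 9.5+); returns 2
SELECT array_position(string_to_array('a,b,c', ','), 'b') AS position;
```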