Qlik Sense Pagination with nextToken and currentToken - json

I am looking for help getting all records into Qlik Sense from a data source that uses token-based pagination (currentToken and nextToken).
So far, everything I have tried returns only the first 100 records, which is the page-size limit of the tool I am pulling data from.
This is how my script looks:
LIB CONNECT TO 'REST_CALL';
// Action required: Implement the logic to retrieve the total records from the REST source and assign to the 'total' local variable.
Let total = 0;
Let totalfetched = 0;
Let startAt = 0;
Let pageSize = 100;
for startAt = 0 to total step pageSize
RestConnectorMasterTable:
SQL SELECT
"__KEY_root",
(SELECT
"limit",
"totalCount",
"nextToken",
"currentToken",
"__KEY_paging",
"__FK_paging",
(SELECT
"__KEY__links",
"__FK__links",
(SELECT
"href",
"__FK_next"
FROM "next" FK "__FK_next"),
(SELECT
"href" AS "href_u0",
"__FK_self"
FROM "self" FK "__FK_self")
FROM "_links" PK "__KEY__links" FK "__FK__links")
FROM "paging" PK "__KEY_paging" FK "__FK_paging"),
(SELECT
"id" AS "id_u0",
"title",
"code",
"description",
"start",
"end",
"closeAfter",
"archiveAfter",
"launchAfter",
"timezone",
"defaultLocale",
"currency",
"registrationSecurityLevel",
"status",
"eventStatus",
"testMode",
"created",
"lastModified",
"virtual",
"format",
"type" AS "type_u0",
"capacity",
"__KEY_data",
"__FK_data",
(SELECT
"name",
"__KEY_venues",
"__FK_venues",
(SELECT
"city",
"country",
"countryCode",
"latitude",
"longitude",
"address1",
"region",
"regionCode",
"postalCode",
"__FK_address"
FROM "address" FK "__FK_address")
FROM "venues" PK "__KEY_venues" FK "__FK_venues"),
(SELECT
"firstName",
"lastName",
"email",
"prefix",
"__FK_planners"
FROM "planners" FK "__FK_planners"),
(SELECT
"id",
"name" AS "name_u0",
"type",
"order",
"__KEY_customFields",
"__FK_customFields",
(SELECT
"#Value",
"__FK_value"
FROM "value" FK "__FK_value" ArrayValueAlias "#Value")
FROM "customFields" PK "__KEY_customFields" FK "__FK_customFields"),
(SELECT
"name" AS "name_u1",
"__FK_category"
FROM "category" FK "__FK_category"),
(SELECT
"__KEY__links_u0",
"__FK__links_u0",
(SELECT
"href" AS "href_u1",
"__FK_invitation"
FROM "invitation" FK "__FK_invitation"),
(SELECT
"href" AS "href_u2",
"__FK_agenda"
FROM "agenda" FK "__FK_agenda"),
(SELECT
"href" AS "href_u3",
"__FK_summary"
FROM "summary" FK "__FK_summary"),
(SELECT
"href" AS "href_u4",
"__FK_registration"
FROM "registration" FK "__FK_registration")
FROM "_links" PK "__KEY__links_u0" FK "__FK__links_u0"),
(SELECT
"#Value" AS "#Value_u0",
"__FK_languages"
FROM "languages" FK "__FK_languages" ArrayValueAlias "#Value_u0")
FROM "data" PK "__KEY_data" FK "__FK_data")
FROM JSON (wrap on) "root" PK "__KEY_root"
WITH CONNECTION(
QUERY "startAt" "$(startAt)",
URL "https://api-platform.sample.com/sample/sample",
QUERY "filter" "sample.id eq '123456789'",
HTTPHEADER "Authorization" "Bearer $(vToken)"
);
NEXT startAt;
I tried to define the 'total' variable as follows, so I can loop through all the records for the specific sample.id used above (948):
LET total=peek('totalCount');
TRACE $(total);
But the result is that it just loads the first 100 records 10 times.
Currently these are the settings of my connection:
Key Generation Strategy - SequenceID
Pagination Type - Custom
I also tried the 'next token' pagination type, but it likewise returned only the first 100 records.
My guess is that I somehow have to pass the nextToken and currentToken in the WITH CONNECTION part of my script; the problem is that I don't know how to do that, or whether it is indeed what needs to be done.
Thanks in advance!
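For reference, here is the general shape of the nextToken contract such APIs expect, sketched in Python rather than Qlik script (fetch_page is a stand-in for the real REST call; the field names data and nextToken are taken from the response structure in the script above):

```python
# Sketch of the nextToken pagination pattern: keep requesting pages,
# passing the token from the previous response, until the API stops
# returning a nextToken. fetch_page stands in for the real REST call.
def fetch_all(fetch_page):
    records = []
    token = None
    while True:
        page = fetch_page(token)       # e.g. GET ...?token=<token>
        records.extend(page["data"])
        token = page.get("nextToken")  # absent on the last page
        if not token:
            return records

# Fake three-page API to demonstrate the loop terminating correctly.
def fake_api(token):
    pages = {
        None: {"data": [1, 2], "nextToken": "t1"},
        "t1": {"data": [3, 4], "nextToken": "t2"},
        "t2": {"data": [5]},           # no nextToken: last page
    }
    return pages[token]

print(fetch_all(fake_api))  # [1, 2, 3, 4, 5]
```

In Qlik script the same shape is typically a Do ... Loop Until loop rather than a FOR over totalCount: after each load, read the token with peek('nextToken', 0, 'RestConnectorMasterTable') and pass it back through a QUERY parameter in WITH CONNECTION. A FOR loop over totalCount keeps re-requesting the first page because the token is never sent; the exact query-parameter name the API expects is an assumption you would need to confirm against its documentation.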

Related

Need to convert the SQL Query to Gorm query

I have this SQL query
Select CONCAT(kafka_user_stream.FirstName,' ', kafka_user_stream.LastName) AS "Full Name",
kafka_user_stream.UID AS "User ID",
kafka_user_stream.CountryCode AS "Country",
kafka_user_stream.CreatedAt AS "Registration Date & Time",
COUNT(jackpotmessage_stream.UID) AS "Win Count"
FROM kafka_user_stream LEFT JOIN
jackpotmessage_stream ON jackpotmessage_stream.UID = kafka_user_stream.UID
WHERE "Type"='goldenTicketWin'
GROUP BY "Full Name", "User ID", "Country", "Registration Date & Time"
ORDER BY "Win Count" DESC
I want to convert it to Gorm. I can run it using
err = s.db.Exec("...QUERY")
but I cannot extract data from the above query. I need to extract all of the above fields (Full Name, User ID, etc.) and store them in a struct.
In the above query, kafka_user_stream and jackpotmessage_stream are tables fed from a Kafka stream. I am using go-gorm and Go.
I tried the Gorm documentation as well as a few other references, but I am unable to find any solution. I would be very thankful for any leads, insights, or help.
With the native go/mysql driver, you would use the Query() and Scan() methods to get results from the database and store them in a struct, not Exec().
In GORM, you can use the SQL Builder (Raw with Scan) for your custom queries:
type Result struct {
ID int
Name string
Age int
}
var result Result
db.Raw("SELECT id, name, age FROM users WHERE id = ?", 3).Scan(&result)
I figured out a slightly different way, as suggested by Aykut, which works fine:
rows, err := s.gdb.Raw(`Select CONCAT(kafka_user_stream.FirstName,' ', kafka_user_stream.LastName) AS "FullName",
kafka_user_stream.UID AS "UserID",
kafka_user_stream.CountryCode AS "Country",
kafka_user_stream.CreatedAt AS "CreatedAt",
COUNT(jackpotmessage_stream.UID) AS "WinCount"
FROM kafka_user_stream LEFT JOIN
jackpotmessage_stream ON jackpotmessage_stream.UID = kafka_user_stream.UID
WHERE "Type"='goldenTicketWin'
GROUP BY "FullName", "UserID", "Country", "CreatedAt"
ORDER BY "WinCount" DESC;`).Rows()

How to import JSON values inside MySQL (10.2.36-MariaDB) table?

I have the following JSON file:
{
"ID": 5464015,
"CUSTOMER_ID": 1088020,
"CUSOTMER_NAME": "My customer 1"
}
{
"ID": 5220812,
"CUSTOMER_ID": 523323,
"CUSOTMER_NAME": "My customer 2"
}
{
"ID": 5205039,
"CUSTOMER_ID": 1934806,
"CUSOTMER_NAME": "My customer 3"
}
From a shell script, I would like to import these values into a MariaDB table (MariaDB server version: 10.2.36-MariaDB) with the related columns already created:
ID
CUSTOMER_ID
CUSTOMER_NAME
But for CUSTOMER_NAME, I don't want to import the double quotes at the beginning and end of the value.
Is there a simple way to do it?
Or, if that is not possible, and I have a txt or csv file like this:
5464015,1088020,"My customer 1"
5220812,523323,"My customer 2"
5205039,1934806,"My customer 3"
How to import it?
Many thanks
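If a small preprocessing step is acceptable, the concatenated-JSON file can be converted to CSV before loading. The sketch below uses Python's json.JSONDecoder.raw_decode, which reads back-to-back objects that are not wrapped in an array (file names are assumptions; note the key is spelled "CUSOTMER_NAME" in the question's file):

```python
import csv
import json

def json_objects(text):
    """Yield each top-level JSON object from concatenated-JSON text."""
    decoder = json.JSONDecoder()
    pos = 0
    while pos < len(text):
        # skip whitespace between objects
        while pos < len(text) and text[pos].isspace():
            pos += 1
        if pos >= len(text):
            break
        obj, pos = decoder.raw_decode(text, pos)
        yield obj

def to_csv(src="customers.json", dst="customers.csv"):
    """Write ID, CUSTOMER_ID, CUSTOMER_NAME rows as CSV."""
    with open(src) as f, open(dst, "w", newline="") as out:
        w = csv.writer(out)
        for o in json_objects(f.read()):
            # the question's file spells the key "CUSOTMER_NAME"
            w.writerow([o["ID"], o["CUSTOMER_ID"], o["CUSOTMER_NAME"]])
```

The resulting CSV can then be loaded with LOAD DATA LOCAL INFILE '/path/customers.csv' INTO TABLE test FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'; the OPTIONALLY ENCLOSED BY clause is also what strips surrounding double quotes when importing a quoted CSV directly.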
CREATE TABLE test (ID INT, CUSTOMER_ID INT, CUSTOMER_NAME VARCHAR(255));
SET @data := '
[ { "ID": 5464015,
"CUSTOMER_ID": 1088020,
"CUSTOMER_NAME": "My customer 1"
},
{ "ID": 5220812,
"CUSTOMER_ID": 523323,
"CUSTOMER_NAME": "My customer 2"
},
{ "ID": 5205039,
"CUSTOMER_ID": 1934806,
"CUSTOMER_NAME": "My customer 3"
}
]
';
INSERT INTO test
SELECT *
FROM JSON_TABLE(@data,
"$[*]" COLUMNS( ID INT PATH "$.ID",
CUSTOMER_ID INT PATH "$.CUSTOMER_ID",
CUSTOMER_NAME VARCHAR(255) PATH "$.CUSTOMER_NAME")
) AS jsontable;
SELECT * FROM test;
ID       CUSTOMER_ID  CUSTOMER_NAME
5464015  1088020      My customer 1
5220812  523323       My customer 2
5205039  1934806      My customer 3
A solution that should work on 10.2.36-MariaDB (all constructions used are legal in this version):
CREATE TABLE test (ID INT, CUSTOMER_ID INT, CUSTOMER_NAME VARCHAR(255))
WITH RECURSIVE
cte1 AS ( SELECT LOAD_FILE('C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/json.txt') jsondata ),
cte2 AS ( SELECT 1 level, CAST(jsondata AS CHAR) oneobject, jsondata
FROM cte1
UNION ALL
SELECT level + 1,
TRIM(SUBSTRING(jsondata FROM 1 FOR 2 + LOCATE('}', jsondata))),
TRIM(SUBSTRING(jsondata FROM 1 + LOCATE('}', jsondata) FOR LENGTH(jsondata)))
FROM cte2
WHERE jsondata != '' )
SELECT oneobject->>"$.ID" ID,
oneobject->>"$.CUSTOMER_ID" CUSTOMER_ID,
oneobject->>"$.CUSTOMER_NAME" CUSTOMER_NAME
FROM cte2 WHERE level > 1;
Tested on MySQL 8.0.16 (I have no MariaDB instance available right now).
The content of the json.txt file matches the one shown in the question (with the misprint in the attribute name corrected).
PS. Of course, the SELECT itself may be used to insert the data into an existing table.
If you have access to PHP, a simple script is a good method: json_decode() turns the JSON into an array (and automatically removes the quotes around text values), and you can then decide which JSON keys map to which MySQL columns.
Depending on your MySQL version, you may have access to this utility to import JSON from the command line:
https://mysqlserverteam.com/import-json-to-mysql-made-easy-with-the-mysql-shell/
It may not work if your keys don't match the MySQL columns perfectly (I believe it is not case sensitive, however).

How to write one n1ql query to sum values from multiple documents in the same bucket

I am a newbie to Couchbase and I am trying to achieve with one query what I currently do with three, which is not very efficient.
I have three different document types (x, y, z) in the same bucket, all sharing a 'district' field, like so:
document x:
{
"type": "x",
"district": "Some district"
}
document y:
{
"type": "y",
"district": "Some district"
}
document z:
{
"type": "z",
"district": "Some district"
}
I have currently implemented something like the following pseudo-code in PHP:
$totalDistrictInX = "SELECT COUNT(x) FROM bucket WHERE type = 'x' AND district = 'Maboro'";
$totalDistrictInY = "SELECT COUNT(x) FROM bucket WHERE type = 'y' AND district = 'Maboro'";
$totalDistrictInZ = "SELECT COUNT(x) FROM bucket WHERE type = 'z' AND district = 'Maboro'";
$totalCountOfMaboro = $totalDistrictInX + $totalDistrictInY + $totalDistrictInZ;
I cannot use a JOIN query because the Couchbase server in use is below 5.5, which only supports joining a document key to a document field, not one document field to another.
Is there a way to achieve this with just one N1QL query? Any help will be much appreciated.
Use an aggregate query without GROUP BY for the total count, and control which documents are counted through the predicate:
SELECT COUNT(1) AS cnt
FROM bucket
WHERE type IN ['x', 'y', 'z'] AND district = 'Maboro';
If you need the count for each type, use GROUP BY:
SELECT type, COUNT(1) AS cnt
FROM bucket
WHERE type IN ['x', 'y', 'z'] AND district = 'Maboro'
GROUP BY type;
If you want the total count plus the per-type counts as an array:
SELECT ARRAY_SUM(av[*].cnt) AS totalcnt, av AS details
LET av = (SELECT type, COUNT(1) AS cnt
FROM bucket
WHERE type IN ['x', 'y', 'z'] AND district = 'Maboro'
GROUP BY type);
Could a GROUP BY and COUNT combo be your solution?
SELECT COUNT(x) FROM bucket WHERE district = 'Maboro' GROUP BY type

How to pass a variable into Subquery with n1ql?

In my situation, an entreprise (company) can have several sites (filiales), and I want to get all the filiales of each company as an array.
The entreprise (company) JSON contains no information about its sites (filiales); the site (filiale) JSON holds the entreprise (company) uid.
JSON entreprise (company):
{
"type": "entreprise",
"dateUpdate": 1481716305279,
"owner": {
"type": "user",
"uid": "PNnqarPqSdaxmEJ4DoMv-A"
}
}
JSON sites (filiales):
{
"type": "site",
"entreprise": {
"uid": "3c0CstzsTjqPdycL5yYzJQ",
"type": "entreprise"
},
"nom": "test"
}
The query I tried:
SELECT
META(entreprise).id as uid,
ARRAY s FOR s IN (SELECT d.* FROM default d WHERE d.type = "site" AND d.entreprise.uid = uid) END as sites,
entreprise.*
FROM default entreprise
WHERE entreprise.type = "entreprise";
Result: error
{
"code": 5010,
"msg": "Error evaluating projection. - cause: FROM in correlated subquery must have USE KEYS clause: FROM default."
}
Then I tried an alias:
SELECT
META(entreprise).id as uid,
ARRAY s FOR s IN (SELECT d.* FROM default d WHERE d.type = "site" AND d.entreprise.uid = META(entreprise).id) END as sites,
entreprise.*
FROM default entreprise
WHERE entreprise.type = "entreprise";
Result: the sites array is empty.
First you have to create an index on your site documents:
CREATE INDEX site_ent_idx ON default(entreprise.uid) WHERE type="site";
Then change your query to use the new index:
SELECT
META(entreprise).id as uid,
ARRAY s FOR s IN (
SELECT site.*
FROM default as ent USE KEYS META(entreprise).id
JOIN default as site ON KEY site.entreprise.uid FOR ent
) END as sites,
entreprise.*
FROM default entreprise
WHERE entreprise.type = "entreprise"
This solution should meet your needs.
You need to perform an index join from sites to enterprises. See https://dzone.com/articles/join-faster-with-couchbase-index-joins
After that, use GROUP BY and ARRAY_AGG() to collect the sites into arrays.

How to concatenate a value to a value within json datatype in postgres

The json column "data" contains values like
{"avatar":"kiran1454916822955.jpg","name":"shanthitwos charmlyi"}
I want to prepend images/profiles/uploads/ to the value of the avatar key in every row.
I tried
UPDATE activity SET data->'avatar' = CONCAT('images/profiles/uploads/',data->'avatar')
Example data:
create table activity (data json);
insert into activity values
('{"avatar":"first.jpg","name":"first name"}'),
('{"avatar":"second.jpg","name":"second name"}'),
('{"avatar":"third.jpg","name":"third name"}');
In Postgres 9.4 you should create an auxiliary function:
create or replace function add_path_to_avatar(json)
returns json language sql as $$
select json_object_agg(key, value)
from (
select
key,
case key::text when 'avatar' then
'images/profiles/uploads/' || value
else value
end
from json_each_text($1)
) s
$$;
update activity
set data = add_path_to_avatar(data)
returning data;
data
-----------------------------------------------------------------------------
{ "avatar" : "images/profiles/uploads/first.jpg", "name" : "first name" }
{ "avatar" : "images/profiles/uploads/second.jpg", "name" : "second name" }
{ "avatar" : "images/profiles/uploads/third.jpg", "name" : "third name" }
(3 rows)
In Postgres 9.5 you can use the function jsonb_set():
update activity
set data = jsonb_set(
data::jsonb,
'{avatar}',
format('"images/profiles/uploads/%s"', data#>>'{avatar}')::jsonb);
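For comparison, the transformation both SQL versions perform can be sketched in Python (purely illustrative; in practice the rewrite happens inside Postgres as above):

```python
import json

# Illustrative equivalent of the SQL: rewrite only the "avatar" key,
# leaving every other key untouched.
def add_path_to_avatar(doc):
    d = json.loads(doc)
    d["avatar"] = "images/profiles/uploads/" + d["avatar"]
    return json.dumps(d)

print(add_path_to_avatar('{"avatar":"first.jpg","name":"first name"}'))
# {"avatar": "images/profiles/uploads/first.jpg", "name": "first name"}
```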