I'm having a hard time figuring out how to translate the following SQL to JPQL
SELECT * FROM consorder
WHERE
consorder.redistid LIKE '123%'
HAVING
MAX(consorder.redistid)
(which works like a charm under MySQL).
#Query("select c from ConsignmentOrder c" +
" where c.redistributionId like ?1" +
" having max(c.redistributionId)")
I get the following exception:
Caused by: <openjpa-2.2.2-r422266:1468616 nonfatal user error> org.apache.openjpa.persistence.ArgumentException: Encountered "max ( c . redistributionId ) <EOF>" at character 79, but expected: ["(", ")", "*", "+", "-", ".", "/", ":", "<", "<=", "<>", "=", ">", ">=", "?", "ABS", "ALL", "AND", "ANY", "AS", "ASC", "AVG", "BETWEEN", "BOTH", "BY", "CASE", "COALESCE", "CONCAT", "COUNT", "CURRENT_DATE", "CURRENT_TIME", "CURRENT_TIMESTAMP", "DELETE", "DESC", "DISTINCT", "EMPTY", "ESCAPE", "EXISTS", "FETCH", "FROM", "GROUP", "HAVING", "IN", "INDEX", "INNER", "IS", "JOIN", "KEY", "LEADING", "LEFT", "LENGTH", "LIKE", "LOCATE", "LOWER", "MAX", "MEMBER", "MIN", "MOD", "NEW", "NOT", "NULL", "NULLIF", "OBJECT", "OF", "OR", "ORDER", "OUTER", "SELECT", "SET", "SIZE", "SOME", "SQRT", "SUBSTRING", "SUM", "TRAILING", "TRIM", "TYPE", "UPDATE", "UPPER", "VALUE", "WHERE", <BOOLEAN_LITERAL>, <DATE_LITERAL>, < DECIMAL_LITERAL>, <IDENTIFIER>, <INTEGER_LITERAL>, <STRING_LITERAL2>, <STRING_LITERAL>, <TIMESTAMP_LITERAL>, <TIME_LITERAL>].
at org.apache.openjpa.kernel.jpql.JPQL.generateParseException(JPQL.java:13162)
at org.apache.openjpa.kernel.jpql.JPQL.jj_consume_token(JPQL.java:13036)
at org.apache.openjpa.kernel.jpql.JPQL.conditional_primary(JPQL.java:1980)
at org.apache.openjpa.kernel.jpql.JPQL.conditional_factor(JPQL.java:1958)
at org.apache.openjpa.kernel.jpql.JPQL.conditional_term(JPQL.java:1807)
at org.apache.openjpa.kernel.jpql.JPQL.conditional_expression(JPQL.java:1769)
at org.apache.openjpa.kernel.jpql.JPQL.having_clause(JPQL.java:1701)
at org.apache.openjpa.kernel.jpql.JPQL.select_statement(JPQL.java:107)
at org.apache.openjpa.kernel.jpql.JPQL.parseQuery(JPQL.java:63)
at org.apache.openjpa.kernel.jpql.JPQLExpressionBuilder$ParsedJPQL.parse(JPQLExpressionBuilder.java:2401)
at org.apache.openjpa.kernel.jpql.JPQLExpressionBuilder$ParsedJPQL.<init>(JPQLExpressionBuilder.java:2388)
at org.apache.openjpa.kernel.jpql.JPQLParser.parse(JPQLParser.java:49)
... 96 more
Any help will be appreciated.
Edit:
I restructured the query a little bit:
#Query("select c from ConsignmentOrder c where c.redistributionId = (select max(co.redistributionId) from (select co ConsignmentOrder co where co.redistributionId like ?1) as tmp )")
Now the exception is similar:
Caused by: <openjpa-2.2.2-r422266:1468616 nonfatal user error> org.apache.openjpa.persistence.ArgumentException: Encountered "c . redistributionId = ( select max ( co . redistributionId ) from (" at character 40, but expected: ["(", ")", "*", "+", ",", "-", ".", "/", ":", "<", "<=", "<>", "=", ">", ">=", "?", "ABS", "ALL", "AND", "ANY", "AS", "ASC", "AVG", "BETWEEN", "BOTH", "BY", "CASE", "CLASS", "COALESCE", "CONCAT", "COUNT", "CURRENT_DATE", "CURRENT_TIME", "CURRENT_TIMESTAMP", "DELETE", "DESC", "DISTINCT", "ELSE", "EMPTY", "END", "ENTRY", "ESCAPE", "EXISTS", "FETCH", "FROM", "GROUP", "HAVING", "IN", "INDEX", "INNER", "IS", "JOIN", "KEY", "LEADING", "LEFT", "LENGTH", "LIKE", "LOCATE", "LOWER", "MAX", "MEMBER", "MIN", "MOD", "NEW", "NOT", "NULL", "NULLIF", "OBJECT", "OF", "OR", "ORDER", "OUTER", "SELECT", "SET", "SIZE", "SOME", "SQRT", "SUBSTRING", "SUM", "THEN", "TRAILING", "TRIM", "TYPE", "UPDATE", "UPPER", "VALUE", "WHEN", "WHERE", <BOOLEAN_LITERAL>, <DATE_LITERAL>, <DECIMAL_LITERAL>, <IDENTIFIER>, <INTEGER_LITERAL>, <STRING_LITERAL2>, <STRING_LITERAL>, <TIMESTAMP_LITERAL>, <TIME_LITERAL>].
Without the misleading EOF.
Is it important to convert it to JPQL? You can easily run native SQL in JPA: with the createNativeQuery() method on your EntityManager you can build executable queries using plain SQL syntax.
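A minimal sketch of that route (entityManager is assumed to be injected; the nonstandard HAVING MAX trick is swapped for ORDER BY ... LIMIT 1, which returns the row with the highest redistid):

ConsignmentOrder latest = (ConsignmentOrder) entityManager
        .createNativeQuery(
                "SELECT * FROM consorder" +
                " WHERE redistid LIKE ?1" +
                " ORDER BY redistid DESC LIMIT 1",
                ConsignmentOrder.class)
        .setParameter(1, "123%")
        .getSingleResult();

Alternatively, JPQL does accept subqueries in the WHERE clause, so a correlated form like select c from ConsignmentOrder c where c.redistributionId = (select max(co.redistributionId) from ConsignmentOrder co where co.redistributionId like ?1) should parse; it is the subquery in the FROM clause that JPQL does not support.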
I am trying to find the array index of two matching values. My Groovy script below gives me the index of WhenWeighed, and that part works: it returns the correct index. What I'm having difficulty figuring out is adding OpSeq to the indexing criteria.
What I'm trying to do is find the index of a matching WhenWeighed and OpSeq pair. For example, I want to find the index of the entry where WhenWeighed = BH and OpSeq = 30; in the JSON below this should be 4.
Can anyone explain how to do this in Groovy?
JSON Used:
{
"BusinessUnit": "1111111",
"WorkOrder": 1111111,
"WeightEstimatesInq": [
{
"WhenWeighed": "BH",
"WhenWeighedDesc": "Before Heading Weight",
"TotalWeight": 900,
"Weight": 12,
"OpSeq": "10",
"AdditionalNotes": " ",
"TareWeight": " ",
"Effective Date": "null"
},
{
"WhenWeighed": "AH",
"WhenWeighedDesc": "After Heading Weight",
"TotalWeight": 987,
"Weight": 900,
"OpSeq": "10",
"AdditionalNotes": "Weighed Bin 10 5/17/2022",
"TareWeight": "87",
"Effective Date": "null"
},
{
"WhenWeighed": "BO",
"WhenWeighedDesc": "Before OSP Weight",
"TotalWeight": 900,
"Weight": 9,
"OpSeq": "50",
"AdditionalNotes": " ",
"TareWeight": " ",
"Effective Date": "null"
},
{
"WhenWeighed": "AO",
"WhenWeighedDesc": "After OSP Weight",
"TotalWeight": 1000,
"Weight": 750,
"OpSeq": "50",
"AdditionalNotes": " ",
"TareWeight": "150",
"Effective Date": "null"
},
{
"WhenWeighed": "BH",
"WhenWeighedDesc": "Before Heading Weight",
"TotalWeight": 720,
"Weight": 700,
"OpSeq": "30",
"AdditionalNotes": "Weighed Bin 30 5/17/2022",
"TareWeight": "20",
"Effective Date": "null"
}
],
"status": "SUCCESS",
"startTimestamp": "2022-05-17T12:27:49.302-0400",
"endTimestamp": "2022-05-17T12:27:50.279-0400",
"serverExecutionSeconds": 0.977
}
Groovy Used:
import groovy.json.JsonSlurper

// Read Input Values
String aWhenWeighedUDC = aInputMap.WhenWeighedUDC ?: " "
String aInputJson = aInputMap.InputJson ?: "{}"
// Initialize Output Values
def error = " "
def rowNumber = 0
def lastRowNumber = 1
// Parse JSON
def json = new JsonSlurper().parseText( aInputJson )
// Determine Row Numbers
def rowset = json?.WeightEstimatesInq
if ( rowset ) {
rowNumber = rowset*.WhenWeighed.indexOf( aWhenWeighedUDC ) + 1
lastRowNumber = rowset.size()
}
If you know that WeightEstimatesInq is always going to be the key for the list of items, you can do something like this:
json["WeightEstimatesInq"].findIndexOf {
it["WhenWeighed"] == "BH" && it["OpSeq"] == "30"
}
which will yield 4. You can chain additional criteria with &&.
Note that this has the potential to return -1 if nothing matches your criteria.
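To fold that into the script above, a small sketch (aOpSeq is a hypothetical second input, read the same way as aWhenWeighedUDC):

String aOpSeq = aInputMap.OpSeq ?: " "   // hypothetical second input value
def matchIndex = rowset.findIndexOf {
    it.WhenWeighed == aWhenWeighedUDC && it.OpSeq == aOpSeq
}
// findIndexOf returns -1 on no match, so rowNumber stays 0 in that case
rowNumber = matchIndex + 1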
I need some help querying this JSON file I've ingested into a temp table in Snowflake. I've created a JSON_DATA variant column and plan to query it and do a COPY INTO another table, but my query isn't working yet... I feel I'm close (possibly?)
JSON layout:
{
"nextPage": "01",
"page": "0",
"status": "ok",
"transactions": [
{
"id": "65985",
"recordTp": "vendorbill",
"values": {
"account": [
{
"text": "14500 Deferred Expenses",
"value": "249"
}
],
"account.number": "1450",
"account.type": [
{
"text": "Deferred Expense",
"value": "DeferExpense"
}
],
"amount": "51733",
"classnohierarchy": [
{
"text": "901 Corporate",
"value": "139"
}
],
"currency": [
{
"text": "Canadian Dollar",
"value": "3"
}
],
"customer.altname": "V Sties expenses (Tor)",
"customer.custate": "12/31/2019",
"customer.custentient": "ada Inc.",
"customer.custendate": "1/1/2019",
"customer.entyid": "PR781",
"departmentnohierarchy": [
{
"text": "8rity",
"value": "37"
}
],
"fxamount": "689",
"location": [
{
"text": "Othad Projects",
"value": "48"
}
],
"postingperiod": [
{
"text": "Jan 2020",
"value": "1"
}
],
"subsidiary.custrecord_region": [
{
"text": "CANADA",
"value": "3"
}
],
"subsidiarynohierarchy": [
{
"text": "ada Inc.",
"value": "25"
}
]
}
}
]
}
I've been able to query the values that are not deeply nested, but I need help getting, for example, the values from classnohierarchy. To get both the text and value I tried:
transactions.value:"values".classnohierarchy.text::string as class_txt,
transactions.value:"values".classnohierarchy.value::string as class_val,
but it's returning NULL values.
Below is my entire query:
SELECT
JSON_DATA:status::string as connection_status,
transactions.value:id::string as id,
transactions.value:recordType::string as record_type,
transactions.value:"values"::variant as trans_val,
transactions.value:"values".account as acc,
transactions.value:"values".account.text as text,
transactions.value:"values".account.value as val,
transactions.value:"values"."account.number"::string as acc_num,
transactions.value:"values"."account.type".text::string as acc_type_txt,
transactions.value:"values"."account.type".value::string as acc_type_val,
transactions.value:"values".amount::string as amount,
transactions.value:"values".classnohierarchy.text::string as class_txt,   -- these two return NULL
transactions.value:"values".classnohierarchy.value::string as class_val,
transactions.value:"values".currency.text::string as currency_text,
transactions.value:"values".currency.value::string as currency_val,
transactions.value:"values"."customer.altname"::string as customer_project_name,
transactions.value:"values"."customer.custate"::string as customer_end_date,
transactions.value:"values"."customer.custentient"::string as customer_end_client,
transactions.value:"values"."customer.custendate"::string as customer_start_date,
transactions.value:"values"."customer.entyid"::string as customer_project_id,
transactions.value:"values".departmentnohierarchy.text::string as department_name,
transactions.value:"values".departmentnohierarchy.value::string as department_value,
transactions.value:"values".fxamount::string as fx_amount,
transactions.value:"values".location.text::string as product_name,
transactions.value:"values".postingperiod.text::string as postingperiod,
transactions.value:"values".postingperiod.value::string as postingperiod,
transactions.value:"values"."subsidiary.custrecord_region".text::string as region_name,
transactions.value:"values"."subsidiary.custrecord_region".value::string as region_value,
transactions.value:"values".subsidiarynohierarchy.text::string as entity_name,
transactions.value:"values".subsidiarynohierarchy.value::string as entity_value,
FROM MY_TABLE,
LATERAL FLATTEN (JSON_DATA:transactions) as transactions
and here's a picture of what's showing in Snowflake:
[screenshot of the Snowflake query results]
departmentnohierarchy is an array; you need to reference an index on it, as below:
SELECT *, transactions.value:"values".departmentnohierarchy[0].value::text AS department_name
FROM jsont1,
LATERAL FLATTEN (JSON_DATA:transactions) AS transactions
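The same fix applies to the other columns that came back NULL; a sketch for classnohierarchy (currency, location, postingperiod, and the subsidiary fields are likewise arrays of {text, value} objects):

SELECT
transactions.value:"values".classnohierarchy[0].text::string as class_txt,
transactions.value:"values".classnohierarchy[0].value::string as class_val
FROM MY_TABLE,
LATERAL FLATTEN (JSON_DATA:transactions) as transactions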
I am new to Python and JSON data structures and am looking for some assistance.
I have been able to create some Python code that calls a Web API and converts the returned JSON data (report_row) into a dataframe successfully using json_normalize().
I am having some issues converting and sorting the JSON column names into the dataframe column names and was wondering if I could get some help with the following:
Get column names from the JSON data - In the dataframe I would like to convert the column names c1, c2, c3, etc. to RECORD_NO, REF_RECORD_NO, SOV_LINEITEM_NO. The column names are in the JSON data at [data][report_header][cXX][name], where cXX is the column number.
Sort column names - I would like to order the dataframe columns so that instead of c1, c10, c11, c12, c2, c3, etc. it is c1, c2, c3 ... c10, c11, c12.
If someone is able to provide some help, it would be greatly appreciated.
Thanks in advance
Python Code
import json
import pandas as pd

# res is the response object returned by the Web API call
json_data = json.loads(res.read())
data = pd.json_normalize(json_data['data'], record_path=['report_row'])
print(data)
which outputs the following
c1 c10 c11 ... c7 c8 c9
0 CON-0000001 71 VEN-0000001 ... Build IT System Contract 123 Pending
1 CON-0000002 72 VEN-0000002 ... Build IT System Contract XYZ Approved
JSON Data
"data": [
{
"report_header": {
"c11": {
"name": "VENDOR_RECORD",
"type": "java.lang.String"
},
"c10": {
"name": "VENDOR_ID",
"type": "java.lang.Integer"
},
"c12": {
"name": "VENDOR_NAME",
"type": "java.lang.String"
},
"c1": {
"name": "RECORD_NO",
"type": "java.lang.String"
},
"c2": {
"name": "REF_RECORD_NO",
"type": "java.lang.String"
},
"c3": {
"name": "SOV_LINEITEM_NO",
"type": "java.lang.String"
},
"c4": {
"name": "REF_ITEM",
"type": "java.lang.String"
},
"c5": {
"name": "PROJECTNUMBER",
"type": "java.lang.String"
},
"c6": {
"name": "PROJECTNAME",
"type": "java.lang.String"
},
"c7": {
"name": "TITLE",
"type": "java.lang.String"
},
"c8": {
"name": "CONTRACT_NO",
"type": "java.lang.String"
},
"c9": {
"name": "STATUS",
"type": "java.lang.String"
}
},
"report_row": [
{
"c1": "CON-0000001",
"c10": "71 ",
"c11": "VEN-0000001",
"c12": "Microsoft",
"c2": "",
"c3": "1",
"c4": "",
"c5": "P-0037",
"c6": "Project ABC",
"c7": "Build IT System",
"c8": "Contract 123",
"c9": "Pending"
},
{
"c1": "CON-0000002",
"c10": "72 ",
"c11": "VEN-0000002",
"c12": "Google",
"c2": "",
"c3": "1.1",
"c4": "",
"c5": "P-0037",
"c6": "Project ABC",
"c7": "Build IT System",
"c8": "Contract XYZ",
"c9": "Approved"
}
]
}
],
"message": [
"OK"
],
"status": 200
}
I was able to resolve the issue by adding the following code:
# Flatten report_header into a one-row frame with columns like 'c1.name'
# (assumed: the original post didn't show where `header` came from)
header = pd.json_normalize(json_data['data'][0]['report_header'])
# Get the number of fields/columns in the JSON data
number_of_fields = len(json_data['data'][0]['report_header'])
reorder_columns = []
new_column_names = []
field_index = 0
# Loop through the columns and do the following...
# reorder_columns - the column order I want: c1, c2, c3 ... c10, c11, c12
# new_column_names - the column names retrieved from the header: c1.name, c2.name, etc.
while field_index < number_of_fields:
    field_index += 1
    new_column = "c" + str(field_index)
    reorder_columns.append(new_column)
    column_header = new_column + '.name'
    new_column_name = header.iloc[0][column_header]
    new_column_names.append(new_column_name)
data = pd.json_normalize(json_data['data'], record_path=['report_row'])
data = data.reindex(columns=reorder_columns)
data.columns = new_column_names
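A more compact alternative (a sketch built on the same json_data; it sorts the cXX columns numerically and renames them straight from report_header):

header_map = {c: meta['name'] for c, meta in json_data['data'][0]['report_header'].items()}
data = pd.json_normalize(json_data['data'], record_path=['report_row'])
ordered = sorted(data.columns, key=lambda c: int(c[1:]))  # c1, c2, ..., c10, c11, c12
data = data[ordered].rename(columns=header_map)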
I tried putting distinct() in my query, but when I get the results in my frontend and in the API I still get duplicate records. Does anyone know why distinct is not working in my code?
My code
$result = DB::connection('mysql2')
->table('xp_pn_ura_transactions')
->whereRaw(DB::raw("CONCAT(block, ' ', street,' ',project_name,' ', postal_code,'')LIKE '%$request->projectname%' order by STR_TO_DATE(sale_date, '%d-%M-%Y') asc"))
->limit($request->limit)
->distinct()
->get();
return \Response::json(array(
//'total_count' => $count,
'result' => $result,
));
[screenshot of the front-end result]
From my response, here are the first two objects, which are duplicates:
{
"id": 228686,
"transtype": "RESI",
"project_name": "WATERFRONT WAVES",
"unitname": "08-06 ",
"block": "760",
"street": "Bedok Reservoir Road ",
"level": "08",
"stack": "06 ",
"no_of_units": "1",
"area": "147",
"type_of_area": "Strata",
"transacted_price": "1300500",
"nettprice": "-",
"unitprice_psm": "8847",
"unitprice_psf": "822",
"sale_date": "20-JAN-2008",
"contract_date": " ",
"property_type": "Condominium",
"tenure": "99 Yrs From 31/10/2007",
"completion_date": "Uncompleted",
"type_of_sale": "New Sale",
"purchaser_address_indicator": "Private",
"postal_district": "16",
"postal_sector": "47",
"postal_code": "479245",
"planning_region": "East Region",
"planning_area": "Bedok",
"update_time": "2019-12-09 17:14:35"
},
{
"id": 224686,
"transtype": "RESI",
"project_name": "WATERFRONT WAVES",
"unitname": "08-06 ",
"block": "760",
"street": "Bedok Reservoir Road ",
"level": "08",
"stack": "06 ",
"no_of_units": "1",
"area": "147",
"type_of_area": "Strata",
"transacted_price": "1300500",
"nettprice": "-",
"unitprice_psm": "8847",
"unitprice_psf": "822",
"sale_date": "20-JAN-2008",
"contract_date": " ",
"property_type": "Condominium",
"tenure": "99 Yrs From 31/10/2007",
"completion_date": "Uncompleted",
"type_of_sale": "New Sale",
"purchaser_address_indicator": "Private",
"postal_district": "16",
"postal_sector": "47",
"postal_code": "479245",
"planning_region": "East Region",
"planning_area": "Bedok",
"update_time": "2019-12-09 17:11:57"
}
They have different ids and update times but are otherwise the same record. Is there a way to ignore the id and compare only the other fields?
You need to select only the fields you want DISTINCT applied to; otherwise it is applied across all of the selected fields.
According to your post, id and update_time are the fields that are not duplicated, so don't select them.
Try something like this:
$result = DB::connection('mysql2')
->table('xp_pn_ura_transactions')
->whereRaw(DB::raw("CONCAT(block, ' ', street,' ',project_name,' ', postal_code,'')LIKE '%$request->projectname%' order by STR_TO_DATE(sale_date, '%d-%M-%Y') asc"))
->limit($request->limit)
// select only the duplicated fields (everything except id and update_time)
->select("transtype",
"project_name",
"unitname",
"block",
"street",
"level",
"stack",
"no_of_units",
"area",
"type_of_area",
"transacted_price",
"nettprice",
"unitprice_psm",
"unitprice_psf",
"sale_date",
"contract_date",
"property_type",
"tenure",
"completion_date",
"type_of_sale",
"purchaser_address_indicator",
"postal_district",
"postal_sector",
"postal_code",
"planning_region",
"planning_area")
->distinct()
->get();
If you also need the fields that are not duplicated (such as id) in the result, you can use groupBy() instead of distinct().
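A sketch of that groupBy() variant (the grouping columns are illustrative, not exhaustive; under MySQL's ONLY_FULL_GROUP_BY mode any non-grouped column must be aggregated, hence MIN(id)):

$result = DB::connection('mysql2')
    ->table('xp_pn_ura_transactions')
    ->select(DB::raw('MIN(id) as id'), 'project_name', 'unitname', 'block', 'street', 'sale_date', 'transacted_price')
    ->groupBy('project_name', 'unitname', 'block', 'street', 'sale_date', 'transacted_price')
    ->get();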
Below is the sample document for organization
{
"org": {
"id": "org_2_1084",
"organizationId": 1084,
"organizationName": "ABC",
"organizationRoles": [
{
"addressAssociations": [
{
"activeDate": "2019-08-03T18:52:00.857Z",
"addressAssocTypeId": -2,
"addressId": 100,
"ownershipStatus": 1,
"srvAddressStatus": 1
},
{
"activeDate": "2019-08-03T18:52:00.857Z",
"addressAssocTypeId": -2,
"addressId": 105,
"ownershipStatus": 1,
"srvAddressStatus": 1
}
],
"name": "NLUZ",
"organizationRoleId": 893,
"roleSpecId": -104,
"statusId": 1,
"statusLastChangedDate": "2019-08-04T13:14:44.616Z"
},
{
"addressAssociations": [
{
"activeDate": "2019-08-03T18:52:00.857Z",
"addressAssocTypeId": -2,
"addressId": 582,
"ownershipStatus": 1,
"srvAddressStatus": 1
},
{
"activeDate": "2019-08-03T18:52:00.857Z",
"addressAssocTypeId": -2,
"addressId": 603,
"ownershipStatus": 1,
"srvAddressStatus": 1
}
],
"name": "TXR",
"organizationRoleId": 894,
"partyRoleAssocs": [
{
"partyRoleAssocId": "512"
}
],
"roleSpecId": -103,
"statusId": 1,
"statusLastChangedDate": "2019-08-04T13:14:44.616Z"
}
]
}
}
and below is the sample document for address
{
"address": {
"address1": "string",
"address2": "string",
"addressId": "1531",
"changeWho": "string",
"city": "string",
"fxGeocode": "string",
"houseNumber": "string",
"id": "1531",
"isActive": true,
"postalCode": "string",
"state": "string",
"streetName": "string",
"tenantId": "2",
"type": "address",
"zip": "string"
}
}
In an organization there are multiple organizationRoles, and in an organizationRole there are multiple addressAssociations. Each addressAssociation contains an addressId, and the address corresponding to that addressId is stored in the address document.
Now I have to get the organizationRole name, organizationRole id, city, and zip from the two documents.
What is the best way to approach this situation for the best performance in Couchbase?
I am thinking about using a join but am not able to come up with an exact query for this scenario.
I have tried the query below, but it's not working.
select *
from `contact` As A UNNEST `contact`.organizationRoles as Roles
UNNEST Roles.addressAssociations address
Join `contact` As B
on address.addressID=B.addressID
where A.type="organization" and B.type="address";
You are headed in the right direction.
In addressAssociations, addressId is a number, while in the address document addressId is a string. A string and a number never compare equal, and there is no implicit type casting, so you must either fix the data or cast explicitly using TOSTRING(), TONUMBER(), etc.
Also, N1QL field names are case-sensitive: your query uses addressID vs the document's addressId.
SELECT r.name AS organizationRoleName, r.organizationRoleId, a.city, a.zip
FROM contact AS c
UNNEST c.organizationRoles AS r
UNNEST r.addressAssociations AS aa
JOIN contact AS a
ON aa.addressId = a.addressId
WHERE c.type = "organization" AND a.type = "address";
CREATE INDEX ix1 ON contact(addressId, city, zip) WHERE type = "address";
Check out https://blog.couchbase.com/ansi-join-support-n1ql/
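If the stored data can't be changed, a sketch of the same join with the explicit cast mentioned above (assuming address.addressId stays a string):

SELECT r.name AS organizationRoleName, r.organizationRoleId, a.city, a.zip
FROM contact AS c
UNNEST c.organizationRoles AS r
UNNEST r.addressAssociations AS aa
JOIN contact AS a
ON TOSTRING(aa.addressId) = a.addressId
WHERE c.type = "organization" AND a.type = "address";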