I am trying to create a table from JSON data that looks like this:
{"ocNo" : "6090","clientSessionKey" : {"office" : {"ortsCode" : 6090},"workstationNo" : 1}}
I tried to achieve this by executing the following query:
CREATE EXTERNAL TABLE events_tryout(
ocNo string,
clientSessionKey struct<office struct<ortsCode: int>,
workstationNo int>
) ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://lab.ea38-zplus.cap.nonprod.int2/test/'
However, I get the following error message:
FAILED: ParseException line 1:81 missing : at 'struct' near '<EOF>' line 1:118 missing : at 'int' near '<EOF>'
I checked that the JSON is valid, so that is not the problem.
However, when I remove clientSessionKey and its nesting, the query works, which tells me that the problem is the additional nesting. Can Athena deal with structs inside structs when creating tables from JSON, or should another approach be taken?
The problem is that there is a missing : after office, just as the error message says.
There is also another : missing after workstationNo.
Try struct<office:struct<ortsCode:int>,workstationNo:int>.
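Applied to the original DDL, the corrected statement would look like this (a sketch; the SerDe and location are copied unchanged from the question):

CREATE EXTERNAL TABLE events_tryout(
  ocNo string,
  clientSessionKey struct<office:struct<ortsCode:int>,
                          workstationNo:int>
) ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://lab.ea38-zplus.cap.nonprod.int2/test/'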
I am attempting to create a table using a .tsv file in BigQuery, but keep getting the following error:
"Failed to create table: Error while reading data, error message: Could not parse '41.66666667' as INT64 for field Team_Percentage (position 8) starting at location 14419658 with message 'Unable to parse'"
I am not sure what to do as I am completely new to this.
Here is a file with the first 100 lines of the full data:
https://wetransfer.com/downloads/25c18d56eb863bafcfdb5956a46449c920220502031838/f5ed2f
Here are the steps I am currently taking to create the table:
https://i.gyazo.com/07815cec446b5c0869d7c9323a7fdee4.mp4
Appreciate any help I can get!
As confirmed with the OP (#dan), the error is caused by selecting Auto detect when creating a table from a .tsv file as the source.
The fix is to manually create a schema and define the data type of each column properly. For more on using schemas in BigQuery, see this document.
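For example, since '41.66666667' cannot be parsed as INT64, Team_Percentage should be declared as FLOAT64. A minimal sketch of a manually defined schema (the dataset, table, and first column name here are hypothetical; only Team_Percentage comes from the error message):

CREATE TABLE my_dataset.team_stats (
  Team STRING,             -- hypothetical column, replace with your own
  Team_Percentage FLOAT64  -- '41.66666667' parses as FLOAT64, not INT64
);

Then load the .tsv against this schema instead of using Auto detect.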
I'm trying to obtain information from an API's response stored in the DB. The column is of type text.
So I'm using the following code in the SELECT part:
SELECT KA.id,
KA.response::json -> 'SearchComplianceAlertsResponse' -> 'TransactionResult' ->> 'ResultText' AS result_text,
KA.response::json -> 'SearchComplianceAlertsResponse' -> 'TransactionResult' ->> 'ResultMessage' AS result_description
FROM KA
When I export that as a CSV I'm getting weird information at the end of the CSV.
Also, when I try to join the table with the code from above, I get the error mentioned in the title of the question. But when I run the query separately, I don't have any issues running it.
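One thing worth checking: if some rows hold text that is not valid JSON, the ::json cast raises an error at execution time, and a join can force the cast to run on rows a standalone query never touches. A sketch of a guard for that case (assuming PostgreSQL 16+, where the IS JSON predicate is available; table and column names are from the question):

SELECT KA.id,
       KA.response::json -> 'SearchComplianceAlertsResponse' -> 'TransactionResult' ->> 'ResultText' AS result_text,
       KA.response::json -> 'SearchComplianceAlertsResponse' -> 'TransactionResult' ->> 'ResultMessage' AS result_description
FROM KA
WHERE KA.response IS JSON;  -- skip rows whose text would fail the cast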
I am building a stats table that tracks user data points. The JSON is dynamic and can grow to multiple levels. I'm getting an error about invalid JSON from json_merge_patch, which I have used often before. I cannot figure out why it is giving me the following error:
ERROR: Invalid JSON text in argument 1 to function json_merge_patch: "Invalid value." at position 0.
insert into
stats.daily_user_stats
VALUES
(null,'2022-02-02',1,18,3,'{"pageviews":{"user":1}}')
on duplicate key update
jdata =
if(
json_contains_path(jdata, 'one', '$.pageviews.user'),
json_set(jdata, '$.pageviews.user', cast(json_extract(jdata, '$.pageviews.user')+1 as UNSIGNED)),
json_merge_patch('jdata','{"pageviews":{"user":1}}')
)
Any help identifying why the JSON I'm passing to json_merge_patch is not correct?
Solved. The json_merge_patch call should look like this:
json_merge_patch(jdata,'{"pageviews":{"user":1}}')
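That is, the first argument was quoted in the original statement, so MySQL received the literal string 'jdata' as argument 1, and that string is not valid JSON, hence the complaint about position 0. The full corrected statement, with only that quoting fixed:

insert into
  stats.daily_user_stats
VALUES
  (null,'2022-02-02',1,18,3,'{"pageviews":{"user":1}}')
on duplicate key update
  jdata =
  if(
    json_contains_path(jdata, 'one', '$.pageviews.user'),
    json_set(jdata, '$.pageviews.user', cast(json_extract(jdata, '$.pageviews.user')+1 as UNSIGNED)),
    json_merge_patch(jdata, '{"pageviews":{"user":1}}')
  )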
I'm using Azure Data Factory to transform data, and I have a derived column to process images.
iif(isNull(column1_images),
    iif(isNull(column2_images),
        iif(isNull(images), 'N/A', toString(images)),
        concat(toString(column2_images), ' ', '(', column2_type, ')')),
    concat(toString(column1_images), ' ', '(', column1_type, ')'))
When I click on the refresh button I can see the expected result,
but when I pass this column to the sink I get this error:
Conversion from StringType to ArrayType(StructType(StructField(url,StringType,true)),false) not defined
Can you tell me what the problem is, please?
You are getting the error because no schema is defined for the sink: per the error message, the sink expects an array of structs (objects with a url string field), while the derived column produces a plain string. You need to use a derived column expression to build a matching JSON schema to convert the data. Check Building schemas using the expression builder.
You can also check this similar thread for reference.
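As a minimal sketch of the idea, assuming the sink really does expect an array of objects with a single url field (as the ArrayType(StructType(StructField(url,StringType,true))) in the error suggests), the derived column could build that shape with the expression builder's struct and array constructors instead of returning a string:

array(@(url = toString(images)))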
I am trying to read JSON data using a Hive external table, but I am getting a NullPointerException while using the JSON SerDe.
Below is the table command and error:
hive> create external table json_tab
> (
> name string, age string, passion string
> )
> row format SERDE 'org.apache.hadoop.hive.contrib.serde2.JsonSerde' location '/home/pandi/hive_in';
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.NullPointerException
I have added the below jars as well:
add jar /usr/local/apache-hive-2.1.1-bin/lib/hive-contrib-2.1.1.jar;
add jar /usr/local/apache-hive-2.1.1-bin/lib/hive-json-serde.jar;
Please help.
It looks like an issue with the SerDe class.
Try using this implementation instead: 'org.apache.hive.hcatalog.data.JsonSerDe', present in hive-hcatalog-core-0.13.0.jar.
This works for me.
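Put together, a sketch of the corrected DDL (the jar path below is an assumption based on where Apache Hive 2.1.1 ships the hcatalog jar; table and column names are from the question):

add jar /usr/local/apache-hive-2.1.1-bin/hcatalog/share/hcatalog/hive-hcatalog-core-2.1.1.jar;

create external table json_tab
(
name string, age string, passion string
)
row format serde 'org.apache.hive.hcatalog.data.JsonSerDe'
location '/home/pandi/hive_in';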