I am working on nested JSON data that I got from Facebook using NiFi. I have created a table in Hive and am loading the data using the commands below.
CREATE TABLE abmediaanalysis (
id string,
posts struct<
data:array<struct<
message:string,
shares:struct<count:int>,
id:string,
reactions:struct<data:array<struct<name:string, id:string>>, paging:struct<cursors:struct<before:string, after:string>, next:string>>,
likes:struct<data:array<struct<id:string>>, paging:struct<cursors:struct<before:string, after:string>, next:string>>
>>,
paging:struct<previous:string, next:string>
>,
feed struct<
data:array<struct<permalink_url:string, message:string, id:string>>,
paging:struct<previous:string, next:string>
>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe';
load data local inpath '/home/10879/facebook1480479682880.json' overwrite into table abmediaanalysis;
I have also added the jar file json-serde-1.3.8-jar-with-dependencies.jar.
But when I use LATERAL VIEW explode to print all the columns, I get a Java heap space error. I have already increased the heap size, but I still get the same error:
select id,posts_message,posts_share_count,posts_id,feed_data_permalink_url,feed_data_message,feed_data_id,reaction_data_name,reaction_data_id,posts_likes_data_id from abmediaanalysis
LATERAL VIEW explode(posts.data.message) MSG as posts_message
LATERAL VIEW explode(posts.data.shares.count) CT as posts_share_count
LATERAL VIEW explode(posts.data.id) I as posts_id
LATERAL VIEW explode(feed.data.permalink_url) PU as feed_data_permalink_url
LATERAL VIEW explode(feed.data.message) MSG as feed_data_message
LATERAL VIEW explode(feed.data.id) I as feed_data_id
LATERAL VIEW explode(posts.data.reactions) NM as posts_reactions_name
LATERAL VIEW explode(posts_reactions_name.data.name) NM as reaction_data_name
LATERAL VIEW explode(posts_reactions_name.data.id) NM as reaction_data_id
LATERAL VIEW explode(posts.data.likes) I as likes_data_id
LATERAL VIEW explode(likes_data_id.data.id) I as posts_likes_data_id;
When I tried to print only two or three columns, instead of showing 616 records it shows approximately 15,625 records.
Can anyone help with this issue?
Is there also a way to load the above JSON data directly from NiFi into a Hive table? If so, can you tell me how?
Thanks in advance
If I may suggest another approach: load the whole JSON string into a single String column of an external table.
e.g.
CREATE EXTERNAL TABLE json_data_table (
id String,
json_data String
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\u0001' LINES TERMINATED BY '\n' STORED AS TEXTFILE
LOCATION '/path/to/json';
Use Hive get_json_object to extract individual columns. E.g.
If the json_data column contains the JSON string below:
{"store":
{"fruit":\[{"weight":8,"type":"apple"},{"weight":9,"type":"pear"}],
"bicycle":{"price":19.95,"color":"red"}
},
"email":"amy#only_for_json_udf_test.net",
"owner":"amy"
}
then the following query
SELECT get_json_object(json_data, '$.owner') FROM json_data_table;
returns amy.
In this way you can extract each JSON element as a column from the table.
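get_json_object also understands a limited JSONPath syntax, so nested elements and array indexes can be extracted the same way. Against the sample document above,
SELECT get_json_object(json_data, '$.store.bicycle.color') FROM json_data_table;
returns red, and
SELECT get_json_object(json_data, '$.store.fruit[0].type') FROM json_data_table;
returns apple.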
I'm new to Athena, even though I have some brief experience with Hive.
I'm trying to create a table from JSON files, which are exports from MongoDB. My problem is that MongoDB uses $oid, $numberInt, $numberDouble and others as internal references, but '$' is not accepted in a column name in Athena.
This is a one-line JSON file that I created as a test:
{"_id":{"$oid":"61f87ebdf655d153709c9e19"}}
and this is the table that refers to it:
CREATE EXTERNAL TABLE landing.json_table (
`_id` struct<`$oid`:string>
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION 's3://bucket-name/test/';
When I run a simple SELECT * it returns this error:
HIVE_METASTORE_ERROR: Error: name expected at the position 7 of
'struct<$oid:string>' but '$' is found. (Service: null; Status Code:
0; Error Code: null; Request ID: null; Proxy: null)
This is related to the fact that the column name contains the $.
Any idea on how to handle this situation? My only solution for now is to create a script that cleans the JSON file of the unaccepted characters, but I would really prefer to handle it directly in Athena if possible.
If you switch to the OpenX SerDe, you can create a SerDe mapping for JSON fields with special characters like $ in the name.
See the AWS Blog entry Create Tables in Amazon Athena from Nested JSON and Mappings Using JSONSerDe, section "Walkthrough: Handling forbidden characters with mappings".
A mapping that would work for your example:
CREATE EXTERNAL TABLE landing.json_table (
`_id` struct<`oid`:string>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES (
"mapping.oid"="$oid"
)
LOCATION 's3://bucket-name/test/';
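With that mapping in place, the renamed field should be queryable with ordinary struct syntax, something like:
SELECT _id.oid FROM landing.json_table;
(in Athena queries the _id column may need to be quoted as "_id" because of the leading underscore).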
I have some JSON data (about 60 GB) that I have to load into a Hive external table. I am using Hive 3.x with Hadoop 3.x. The schema of the table is as follows:
CREATE TABLE people(a string, liid string, link string, n string, t string, e string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.JsonSerDe'
STORED AS TEXTFILE LOCATION '/data/db/';
I have also loaded the jar for the SerDe as follows:
ADD JAR /usr/hive/lib/hive-hcatalog-core-3.1.2.jar;
If I copy (or load) a simple text JSON file, DML queries (SELECT etc.) work fine. Since the data file is very large, I compressed it (20 GB now) and loaded this compressed file into the Hive table created above.
hive> select * from people;
OK
Failed with exception java.io.IOException:org.apache.hadoop.hive.serde2.SerDeException: java.io.IOException: Field name expected
Time taken: 0.096 seconds
hive>
It works fine with uncompressed data. What is the issue here?
I have tried some solutions like this one, but without success.
I found the solution myself. The actual issue was that two columns in the JSON are arrays, and they should be mapped to ARRAY in Hive. The sample I used to derive the schema did not contain these arrays. Changing the field type to array<string> for one of the columns solved my issue.
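For reference, a sketch of what the corrected DDL could look like, assuming purely for illustration that t is the column holding the array values:
CREATE TABLE people(a string, liid string, link string, n string, t array<string>, e string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.JsonSerDe'
STORED AS TEXTFILE LOCATION '/data/db/';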
I have a bunch of JSON Snappy-compressed files in HDFS.
They are Hadoop Snappy compressed (not python-snappy, cf. other SO questions) and have nested structures.
I could not find a method to load them into Hive (using json_tuple).
Can I get some resources/hints on how to load them?
Previous references (which do not have valid answers):
pyspark how to load compressed snappy file
Hive: parsing JSON
Put all the files in an HDFS folder and create an external table on top of it. If the files have names ending in .snappy, Hive will automatically recognize them. You can specify SNAPPY output format for writing a table:
set hive.exec.compress.output=true;
set mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
set mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
set mapreduce.output.fileoutputformat.compress.type=BLOCK;
CREATE EXTERNAL TABLE mydirectory_tbl(
id string,
name string
)
ROW FORMAT SERDE
'org.openx.data.jsonserde.JsonSerDe'
LOCATION '/mydir' --this is HDFS/S3 location
;
The JSON SerDe can parse all complex structures; it is much easier than using json_tuple. Simple attributes in the JSON are mapped to columns as-is. Everything in square brackets [] is an array<>, everything in {} is a struct<> or map<>, and complex types can be nested. Carefully read the README: https://github.com/rcongiu/Hive-JSON-Serde. There is a section about nested structures and many examples of CREATE TABLE.
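As a small sketch of a nested layout (the table and field names here are made up, not taken from the question): a record like {"id":"1","actor":{"name":"amy","id":"7"},"tags":["a","b"]} could be declared and queried as
CREATE EXTERNAL TABLE events_json (
id string,
actor struct<name:string, id:string>,
tags array<string>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION '/mydir_nested';
SELECT id, actor.name, tags[0] AS first_tag FROM events_json;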
If you still want to use json_tuple, create a table with a single STRING column and then parse it with json_tuple. But it is much more difficult.
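A rough sketch of that json_tuple route, reusing the /mydir location above and assuming the JSON has top-level id and name keys:
CREATE EXTERNAL TABLE mydirectory_raw (json_line string)
LOCATION '/mydir';
SELECT t.id, t.name
FROM mydirectory_raw
LATERAL VIEW json_tuple(json_line, 'id', 'name') t AS id, name;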
All JSON records should be on a single line (no newlines inside JSON objects, nor \r). The same is mentioned here: https://github.com/rcongiu/Hive-JSON-Serde
If your data is partitioned (e.g. by date):
Create the table in Hive:
CREATE EXTERNAL TABLE IF NOT EXISTS database.table (
filename STRING,
cnt BIGINT,
size DOUBLE
) PARTITIONED BY (`date` STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 'folder/path/in/hdfs'
Recover the partitions (before the recovery, the table appears to be empty):
MSCK REPAIR TABLE database.table
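Once the partitions are registered, queries can prune on the partition column, for example (the date value here is just an illustration):
SELECT filename, cnt FROM database.table WHERE `date` = '2021-01-01';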
I referred to this link: http://docs.aws.amazon.com/emr/latest/ReleaseGuide/EMR_Hive_Commands.html.
My Hive script is as follows:
DROP TABLE IF EXISTS hiveTableName;
CREATE EXTERNAL TABLE hiveTableName (item map<string,string>)
STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
TBLPROPERTIES ("dynamodb.table.name" = "test_table", "dynamodb.region"="us-west-2");
DROP TABLE IF EXISTS s3TableName;
CREATE EXTERNAL TABLE s3TableName (item map<string, string>)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
LOCATION 's3://bucket/test-hive2';
SET dynamodb.throughput.read.percent=0.8;
INSERT OVERWRITE TABLE s3TableName SELECT *
FROM hiveTableName;
The DynamoDB table can be successfully exported to S3, but the file format is not JSON; it looks like this:
uuid{"s":"db154955-8555-4b49-bf40-ee36605ac510"}num{"n":"1294"}info{"s":"qwefjdkslafjdafl"}
uuid{"s":"d9898564-2b56-42ba-9cfb-fd092e7d0b8d"}num{"n":"100"}info{"s":"qwefjdkslafjdafl"}
Does someone know how to export in JSON format? I know I can use Data Pipeline, which can export a DynamoDB table to S3 in JSON format, but for some reason I need to use EMR. I tried another tool, https://github.com/awslabs/emr-dynamodb-connector, with the command:
java -cp target/emr-dynamodb-tools-4.2.0-SNAPSHOT.jar org.apache.hadoop.dynamodb.tools.DynamoDBExport /where/output/should/go my-dynamo-table-name
but the error was
Error: Could not find or load main class org.apache.hadoop.dynamodb.tools.DynamoDBExport
Can someone tell me how to solve these problems? Thanks.
== update ==
If I use to_json, as Chris suggested, my code is as below:
DROP TABLE IF EXISTS hiveTableName2;
CREATE EXTERNAL TABLE hiveTableName2 (item map<string, string>)
STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
TBLPROPERTIES ("dynamodb.table.name" = "test_table", "dynamodb.region"="us-west-2");
DROP TABLE IF EXISTS s3TableName2;
CREATE EXTERNAL TABLE s3TableName2 (item string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
LOCATION 's3://backup-restore-dynamodb/hive-test';
INSERT OVERWRITE TABLE s3TableName2 SELECT to_json(item)
FROM hiveTableName2;
When I look at the generated file, it looks like this:
{"uuid":"{\"s\":\"db154955-8555-4b49-bf40-ee36605ac510\"}","num":"{\"n\":\"1294\"}","info":"{\"s\":\"qwefjdkslafjdafl\"}"}
What I want is a nested map, like
map<string, map<string, string>>
not
map<string, string>
Can someone give me some suggestions? Thanks.
Your SELECT * query is emitting a serialized form of the Hive map, which isn't guaranteed to be JSON. You may want to consider using the Brickhouse Hive UDFs. In particular, calling the to_json function would be a good fit for guaranteeing a JSON format in your output.
to_json -- Convert an arbitrary Hive structure (list, map, named_struct) into JSON
INSERT OVERWRITE TABLE s3TableName SELECT to_json(item)
FROM hiveTableName;
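Note that the Brickhouse to_json UDF has to be registered in the Hive session before it can be used; a minimal sketch, where the jar path and version are assumptions about your environment:
ADD JAR /path/to/brickhouse-0.7.1.jar; -- hypothetical local path and version
CREATE TEMPORARY FUNCTION to_json AS 'brickhouse.udf.json.ToJsonUDF';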
On November 9, 2020, DynamoDB released a new feature to export your data to an S3 bucket - you can read more about it here:
https://aws.amazon.com/blogs/aws/new-export-amazon-dynamodb-table-data-to-data-lake-amazon-s3/
It's a native, serverless solution, and currently (as of 11/20) it supports DynamoDB JSON.
I'm new to AWS Athena, and I'm trying to query multiple S3 buckets containing JSON files. I've run into a number of problems that don't have any answer in the documentation (sadly the error log is not informative enough for me to solve them myself):
How do I query a JSON field whose name contains parentheses? For example, I have a field named "Capacity(GB)", and when I try to include it in the CREATE EXTERNAL statement I receive an error:
CREATE EXTERNAL TABLE IF NOT EXISTS test-scema.test_table (
`device`: string,
`Capacity(GB)`: string)
Your query has the following error(s):
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask.
java.lang.IllegalArgumentException: Error: : expected at the position
of 'Capacity(GB):string>' but '(' is found.
My files are located in subfolders in S3 with the following structure:
'location_name/YYYY/MM/DD/appstring/'
I want to query all the dates of a specific appstring (out of many). Is there any 'wildcard' I can use to replace the date path?
Something like this:
LOCATION 's3://location_name/%/%/%/appstring/'
Do I have to load the raw data as-is using CREATE EXTERNAL TABLE and only then query it, or can I add some built-in WHERE statements? Specifically, is something like this possible:
CREATE EXTERNAL TABLE IF NOT EXISTS test_schema.test_table (
field1:string,
field2:string
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
WITH SERDEPROPERTIES (
'serialization.format' = '1'
) LOCATION 's3://folder/YYYY/MM/DD/appstring'
WHERE field2='value'
What would be the outcome in terms of billing? Because right now I'm building this CREATE statement only to reuse the data in a SQL query once again.
Thanks!
1. JSON field named with parentheses
There is no need to create a field called Capacity(GB). Instead, create the field with a different name:
CREATE EXTERNAL TABLE test_table (
device string,
capacity string
)
ROW FORMAT serde 'org.apache.hive.hcatalog.data.JsonSerDe'
with serdeproperties ( 'paths'='device,Capacity(GB)')
LOCATION 's3://xxx';
If you are using nested JSON, then you can use the SerDe's mapping property (which I saw in the question "issue with Hive Serde dealing nested structs"):
CREATE external TABLE test_table (
top string,
inner struct<device:INT,
capacity:INT>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
with serdeproperties
(
"mapping.capacity" = "Capacity(GB)"
)
LOCATION 's3://xxx';
This works nicely with an input of:
{ "top" : "123", "inner": { "Capacity(GB)": 12, "device":2}}
2. Subfolders
You cannot wildcard mid-path (s3://location_name/*/*/*/appstring/). The closest option is to use partitioned data, but that would require a different naming format for your directories.
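A sketch of what that could look like, with made-up partition names: if the objects were laid out as s3://location_name/appstring=app1/dt=2017-01-01/, the table could be declared as
CREATE EXTERNAL TABLE test_table_partitioned (
device string,
capacity string
)
PARTITIONED BY (appstring string, dt string)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://location_name/';
and loaded with MSCK REPAIR TABLE test_table_partitioned; queries filtered with WHERE appstring = 'app1' would then only scan that app's partitions.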
3. Creating tables
You cannot specify WHERE statements as part of the CREATE TABLE statement.
If your aim is to reduce data costs, then use partitioned data to reduce the number of files scanned or store in a column-based format such as Parquet.
For examples, see: Analyzing Data in S3 using Amazon Athena