Extracting Key:Value from column fields in Hive - mysql

I'm currently learning/testing Hive and can't seem to find a suitable solution to this problem:
I have log files which look like this:
IP, Date, Time, URL, Useragent
which I currently have in a table with those columns. The columns are delimited by '\t', but the URL field carries some client-specific information that looks roughly like this:
example.org/log.gif?userID=xxx&sex=m&age=y&subscriber=y&lastlogin=ddd
I want to create a new table with columns for these key/value pairs: userID, sex, age, subscriber, lastlogin. A further problem is that the pairs are not always complete; some may be missing or empty, like this:
example.org/log.gif?userID=xxx&sex=m&age=y&subscriber=y&lastlogin=ddd
example.org/log.gif?userID=xxx&sex=m&age=y&lastlogin=
As far as I know, that makes Hive's ... FORMAT DELIMITED FIELDS TERMINATED BY '&' useless in this case, because it would put values into the wrong columns.
Is there a way to solve this problem in Hive with SQL and regex?

This can be done, albeit with two Hive tables. You first load data into one table with the columns:
IP, Date, Time, URL, Useragent
Here I recommend using an EXTERNAL Hive table - you aren't parsing the data and this Hive table doesn't need to exist for very long, so simply place Hive metadata on top of it:
CREATE EXTERNAL TABLE raw_log (
ip string,
`date` string,
`time` string,
url string,
useragent string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '<hdfs_location_of_the_raw_log_folder>';
Then use an INSERT INTO ... SELECT query with Hive's regexp_extract(string subject, string pattern, int index) function (see https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF) to load it into the "final" table with the correct columns.
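For illustration, a minimal sketch of that step, assuming a target table named parsed_log (the table and column names below are my own, not part of the question):

CREATE TABLE parsed_log (
userid string,
sex string,
age string,
subscriber string,
lastlogin string
);

-- capture group 1 grabs everything after the key name up to the next '&';
-- a missing or empty key simply yields an empty/NULL value instead of shifting the other columns
INSERT INTO TABLE parsed_log
SELECT regexp_extract(url, 'userID=([^&]*)', 1),
       regexp_extract(url, 'sex=([^&]*)', 1),
       regexp_extract(url, 'age=([^&]*)', 1),
       regexp_extract(url, 'subscriber=([^&]*)', 1),
       regexp_extract(url, 'lastlogin=([^&]*)', 1)
FROM raw_log;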
You can also write your own UDF which would enable you to better-handle the incomplete/missing values that you mention, albeit with the tradeoff that you have to re-compile and re-deploy a JAR every time the input data's format changes (see https://cwiki.apache.org/confluence/display/Hive/HivePlugins).

Related

Export non-varchar data to CSV table using Trino (formerly PrestoDB)

I am working on some benchmarks and need to compare the ORC, Parquet, and CSV formats. I have exported TPC-H (SF1000) to ORC-based tables. When I want to export it to Parquet I can run:
CREATE TABLE hive.tpch_sf1_parquet.region
WITH (format = 'parquet')
AS SELECT * FROM hive.tpch_sf1_orc.region
When I try a similar approach with CSV, I get the error Hive CSV storage format only supports VARCHAR (unbounded). I would have assumed that it would convert the other data types (e.g. bigint) to text and store the column types in the Hive metadata.
I can export the data to CSV using trino --server trino:8080 --catalog hive --schema tpch_sf1_orc --output-format=CSV --execute 'SELECT * FROM nation', but then it gets emitted to a file. Although this works for SF1, it quickly becomes unusable at the SF1000 scale factor. Another disadvantage is that my Hive metastore wouldn't have the appropriate metadata (although I could patch it manually if nothing else works).
Anyone an idea how to convert my ORC/Parquet data to CSV using Hive?
In the Trino Hive connector, a CSV table can contain varchar columns only.
You need to cast the exported columns to varchar when creating the table:
CREATE TABLE region_csv
WITH (format='CSV')
AS SELECT CAST(regionkey AS varchar) AS regionkey, CAST(name AS varchar) AS name, CAST(comment AS varchar) AS comment
FROM region_orc
Note that you will need to update your benchmark queries accordingly, e.g. by applying reverse casts.
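For instance (a rough sketch using the TPC-H region columns; adapt to your actual queries), a benchmark query against the CSV copy would cast back before filtering or aggregating:

-- cast the varchar column back to its original type before using it numerically
SELECT CAST(regionkey AS bigint) AS regionkey, name
FROM region_csv
WHERE CAST(regionkey AS bigint) BETWEEN 1 AND 3;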
DISCLAIMER: Read the full post before using anything discussed here. It's not real CSV and you might screw up!
It is possible to create typed CSV-ish tables by using the TEXTFILE format with ',' as the field separator:
CREATE TABLE hive.test.region (
regionkey bigint,
name varchar(25),
comment varchar(152)
)
WITH (
format = 'TEXTFILE',
textfile_field_separator = ','
);
This will create a typed version of the table in the Hive catalog using the TEXTFILE format. TEXTFILE normally uses the ^A character (ASCII 1) as the field separator, but when it is set to ',' the files resemble the structure of CSV.
IMPORTANT: Although it looks like CSV, it is not real CSV. It doesn't follow RFC 4180, because it doesn't properly quote and escape. The following INSERT will not be written correctly:
INSERT INTO hive.test.region VALUES (
1,
'A "quote", with comma',
'The comment contains a newline
in it');
The text will be copied unmodified to the file without escaping quotes or commas. This should have been written like this to be proper CSV:
1,"A ""quote"", with comma","The comment contains a newline
in it"
Unfortunately, it is written as:
1,A "quote", with comma,The comment contains a newline
in it
This results in invalid data that will be represented by NULL columns. For this reason, this method can only be used when you have full control over the text-based data and are sure that it doesn't contain newlines, quotes, commas, ...

How to extract the value that is in JSON format in MYSQL table

I have a MySQL table WEBSITE_IMAGES in which one of the fields, named Value, contains data in JSON format.
The Value field looks like the sample below. I am wondering how I can extract product_name and image_name only (e.g. 14669 golden.png, 14754 tealglass.png):
{"1235":"custom_images","options":{"1235":{"product_image":"image","color":"","image":"{\"14669\":\"\/s\/i\/golden.png\",\"14754\":\"\/s\/m\/tealglass.png\"
The best solution is to set your directory addresses on the application/settings side rather than storing them in the data.
If the files are ever moved or migrated, you will have problems with your files and data. Keep your data simple and let your program resolve the file locations later.
That way you will have less trouble with slashes and conversions.

Importing a series of .CSV files that contain one field while adding additional 'known' data in other fields

I've got a process that creates a csv file that contains ONE set of values that I need to import into a field in a MySQL database table. This process creates a specific file name that identifies the values of the other fields in that table. For instance, the file name T001U020C075.csv would be broken down as follows:
T001 = Test 001
U020 = User 020
C075 = Channel 075
The file contains a single row of data separated by commas for all of the test results for that user on a specific channel and it might look something like:
12.555, 15.275, 18.333, 25.000 ... (there are hundreds, maybe thousands, of results per user, per channel).
What I'm looking to do is to import directly from the CSV file adding the field information from the file name so that it looks something like:
insert into results (test_no, user_id, channel_id, result) values (1, 20, 75, 12.555)
I've tried to use "Bulk Insert" but that seems to want to import all of the fields where each ROW is a record. Sure, I could go into each file and convert the row to a column and add the data from the file name into the columns preceding the results but that would be a very time consuming task as there are hundreds of files that have been created and need to be imported.
I've found several "import CSV" solutions but they all assume all of the data is in the file. Obviously, it's not...
The process that generated these files is unable to be modified (yes, I asked). Even if it could be modified, it would only provide the proper format going forward and what is needed is analysis of the historical data. And, the new format would take significantly more space.
I'm limited to using either MATLAB or MySQL Workbench to import the data.
Any help is appreciated.
Bob
A possible SQL approach to getting the data loaded into the table would be to run a statement like this:
LOAD DATA LOCAL INFILE '/dir/T001U020C075.csv'
INTO TABLE results
FIELDS TERMINATED BY '|'
LINES TERMINATED BY ','
( result )
SET test_no = '001'
, user_id = '020'
, channel_id = '075'
;
We need the comma to be the line separator, and we can specify, as the field separator, some character that we are guaranteed will not appear in the data. That way LOAD DATA sees a single "field" on each "line".
(If there isn't a trailing comma at the end of the file, after the last value, we need to test to make sure we still get the last value, i.e. the last "line" as LOAD DATA sees the file.)
We could use user-defined variables in place of the literals, but that leaves the part about parsing the filename. That's really ugly in SQL, but it could be done, assuming a consistent filename format...
-- parse filename components into user-defined variables
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(f.n,'T',-1),'U',1) AS t
, SUBSTRING_INDEX(SUBSTRING_INDEX(f.n,'U',-1),'C',1) AS u
, SUBSTRING_INDEX(f.n,'C',-1) AS c
, f.n AS n
FROM ( SELECT SUBSTRING_INDEX(SUBSTRING_INDEX( i.filename ,'/',-1),'.csv',1) AS n
FROM ( SELECT '/tmp/T001U020C075.csv' AS filename ) i
) f
INTO @ls_t
, @ls_u
, @ls_c
, @ls_n
;
While we're testing, we probably want to see the result of the parsing:
-- for debugging/testing
SELECT @ls_t
, @ls_u
, @ls_c
, @ls_n
;
And then there's the part about running the actual LOAD DATA statement. We've got to specify the filename again, and we need to make sure we're using the same filename ...
LOAD DATA LOCAL INFILE '/tmp/T001U020C075.csv'
INTO TABLE results
FIELDS TERMINATED BY '|'
LINES TERMINATED BY ','
( result )
SET test_no = @ls_t
, user_id = @ls_u
, channel_id = @ls_c
;
(The client will need read permission on the .csv file.)
Unfortunately, we can't wrap this in a procedure, because running a LOAD DATA statement is not allowed from a stored program.
Some would correctly point out that as a workaround, we could compile/build a user-defined function (UDF) to execute an external program, and a procedure could call that. Personally, I wouldn't do it. But it is an alternative we should mention, given the constraints.

Athena AWS bad field name and multiple folders with Hive DDL

I'm new to AWS Athena, and I'm trying to query multiple S3 buckets containing JSON files. I've encountered a number of problems that don't have any answer in the documentation (sadly the error log is not informative enough for me to solve them myself):
How do I query a JSON field whose name contains parentheses? For example, I have a field named "Capacity(GB)", and when I try to include it in the CREATE EXTERNAL TABLE statement I receive an error:
CREATE EXTERNAL TABLE IF NOT EXISTS test-scema.test_table (
`device`: string,
`Capacity(GB)`: string)
Your query has the following error(s):
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask.
java.lang.IllegalArgumentException: Error: : expected at the position
of 'Capacity(GB):string>' but '(' is found.
My files are located in sub folders in S3 in a following structure:
'location_name/YYYY/MM/DD/appstring/'
and I want to query all the dates for a specific appstring (out of many). Is there any 'wildcard' I can use to replace the date path?
Something like this:
LOCATION 's3://location_name/%/%/%/appstring/'
Do I have to register the raw data as-is using CREATE EXTERNAL TABLE and only then query it, or can I build some WHERE conditions in? Specifically, is something like this possible:
CREATE EXTERNAL TABLE IF NOT EXISTS test_schema.test_table (
field1:string,
field2:string
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
WITH SERDEPROPERTIES (
'serialization.format' = '1'
) LOCATION 's3://folder/YYYY/MM/DD/appstring'
WHERE field2='value'
What would be the outcome in terms of billing? Because right now I'm building this CREATE statement only to re-use the data in yet another SQL query.
Thanks!
1. JSON field named with parentheses
There is no need to create a field called Capacity(GB). Instead, create the field with a different name:
CREATE EXTERNAL TABLE test_table (
device string,
capacity string
)
ROW FORMAT serde 'org.apache.hive.hcatalog.data.JsonSerDe'
with serdeproperties ( 'paths'='device,Capacity(GB)')
LOCATION 's3://xxx';
If you are using nested JSON then you can use the SerDe's mapping property (which I saw in the question "issue with Hive Serde dealing nested structs"):
CREATE external TABLE test_table (
top string,
inner struct<device:INT,
capacity:INT>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
with serdeproperties
(
"mapping.capacity" = "Capacity(GB)"
)
LOCATION 's3://xxx';
This works nicely with an input of:
{ "top" : "123", "inner": { "Capacity(GB)": 12, "device":2}}
2. Subfolders
You cannot wildcard mid-path (s3://location_name/*/*/*/appstring/). The closest option is to use partitioned data but that would require a different naming format for your directories.
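For illustration only, a hedged sketch of what such a partitioned setup could look like; every table, column, and path name below is an assumption, and each date/appstring prefix has to be registered as a partition (manually or by script):

CREATE EXTERNAL TABLE IF NOT EXISTS test_schema.test_table_partitioned (
device string,
capacity string
)
PARTITIONED BY (dt string)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION 's3://location_name/';

-- register one day of one appstring; repeat per date
ALTER TABLE test_schema.test_table_partitioned
ADD PARTITION (dt = '2018-01-15')
LOCATION 's3://location_name/2018/01/15/appstring/';

Queries can then restrict dt in the WHERE clause so that only the matching prefixes are scanned.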
3. Creating tables
You cannot specify WHERE statements as part of the CREATE TABLE statement.
If your aim is to reduce data costs, then use partitioned data to reduce the number of files scanned or store in a column-based format such as Parquet.
For examples, see: Analyzing Data in S3 using Amazon Athena
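As an illustration of the Parquet suggestion, a hedged sketch of a CTAS conversion (this assumes an Athena release that supports CREATE TABLE AS; the output location and table names are placeholders):

CREATE TABLE test_schema.test_table_parquet
WITH (
format = 'PARQUET',
external_location = 's3://some-bucket/test_table_parquet/'
) AS
SELECT field1, field2
FROM test_schema.test_table;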

MySQL DATE and BIGINT with LOAD DATA LOCAL INFILE

Dear All,
I am working on a Java script capable of loading historical market data from files into MySQL tables, where each data record contains a 'date' and a 'timestamp' field. A typical record looks like this:
01/11/2010,10:00:00.007,P,210.8,210.86,6,2,R,,2486662,P,P,,0,,N
01/11/2010,10:00:00.577,W,210.51,211.61,1,1,R,,2487172,W,W,,0,,N
where the first column represents 'date' and the second represents 'timestamp' (with millisecond granularity). Since MySQL does not support millisecond granularity, I am using a BIGINT (long) representation for the 'timestamp' field.
Currently the load query is as follows:
LOAD DATA LOCAL INFILE 'path to file'
INTO TABLE table
FIELDS TERMINATED BY ','
(@date, @time, remaining columns ...)
SET date=STR_TO_DATE(@date, '%m/%d/%Y'),
time=UNIX_TIMESTAMP( STR_TO_DATE(@time, '%H:%i:%S.%f') )
The query works fine apart from the last line, which throws SQLExceptions. I've tried different format patterns but am having problems working out the conversion from string to timestamp before converting to a Unix timestamp. Google isn't helping either. I would really appreciate it if anyone could suggest a reasonable solution. Unfortunately, changing MySQL to another database is not an option.
Regards,
UNIX_TIMESTAMP wants a "date time" string - you're giving it a "time" string.
select UNIX_TIMESTAMP(CONCAT(STR_TO_DATE("01/11/2010", "%m/%d/%Y"), " ", STR_TO_DATE("10:00:00.577", "%H:%i:%S.%f")));
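Applied back to the LOAD DATA statement from the question, a minimal sketch might look like this (market_data is a placeholder table name, only the first two input columns are shown, and UNIX_TIMESTAMP still returns whole seconds, so the millisecond part of @t would have to be parsed out separately if it must be preserved):

-- sketch only: the remaining input columns from the question are omitted for brevity
LOAD DATA LOCAL INFILE '/path/to/file.csv'
INTO TABLE market_data
FIELDS TERMINATED BY ','
(@d, @t)
SET date = STR_TO_DATE(@d, '%m/%d/%Y'),
time = UNIX_TIMESTAMP(CONCAT(STR_TO_DATE(@d, '%m/%d/%Y'), ' ', @t));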