Error parsing JSON: more than one document in the input (Redshift to Snowflake SQL)

I'm trying to convert a query from Redshift to Snowflake SQL.
The Redshift query looks like this:
SELECT
cr.creatives as creatives
, JSON_ARRAY_LENGTH(cr.creatives) as creatives_length
, JSON_EXTRACT_PATH_TEXT(JSON_EXTRACT_ARRAY_ELEMENT_TEXT (cr.creatives,0),'previewUrl') as preview_url
FROM campaign_revisions cr
The Snowflake query looks like this:
SELECT
cr.creatives as creatives
, ARRAY_SIZE(TO_ARRAY(ARRAY_CONSTRUCT(cr.creatives))) as creatives_length
, PARSE_JSON(PARSE_JSON(cr.creatives)[0]):previewUrl as preview_url
FROM campaign_revisions cr
It seems like JSON_EXTRACT_PATH_TEXT isn't converted correctly, as the Snowflake query results in error:
Error parsing JSON: more than one document in the input
cr.creatives is formatted like this:
"[{""previewUrl"":""https://someurl.com/preview1.png"",""device"":""desktop"",""splitId"":null,""splitType"":null},{""previewUrl"":""https://someurl.com/preview2.png"",""device"":""mobile"",""splitId"":null,""splitType"":null}]"

It seems to me that you are not working with valid JSON data inside Snowflake.
Please review the file format used for your COPY INTO command.
If you open the "JSON" text provided in a text editor, note that the information is not parsed or formatted as JSON because of the quoting you have. Once your issue with the doubled / escaped quotes is handled, you should be able to make good progress.
[Image: proper JSON on the left, original data on the right]
If you are not inclined to reload your data, see if you can create a JavaScript User Defined Function to remove the quotes from your string; then you can use Snowflake to process the VARIANT column.
The following code is a working proof of concept that removes the doubled quotes for you.
var textOriginal = '[{""previewUrl"":""https://someurl.com/preview1.png"",""device"":""desktop"",""splitId"":null,""splitType"":null},{""previewUrl"":""https://someurl.com/preview2.png"",""device"":""mobile"",""splitId"":null,""splitType"":null}]';

// Collapse the doubled double-quotes into single double-quotes, then parse the result as JSON
function parseText(input) {
    var a = input.replaceAll('""', '"');
    a = JSON.parse(a);
    return a;
}

var x = parseText(textOriginal);
console.log(x);
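Wrapped in a Snowflake JavaScript UDF, the same idea might look like the sketch below. This is only a sketch: fix_doubled_quotes is a made-up name, and it assumes the column's only problems are the doubled quotes plus a possible pair of enclosing quotes left over from the CSV load.
create or replace function fix_doubled_quotes(input string)
returns variant
language javascript
as
$$
    // Strip a possible enclosing quote pair, collapse the doubled double-quotes,
    // then parse the result so Snowflake gets it back as a VARIANT
    var text = INPUT.replace(/^"|"$/g, '').replace(/""/g, '"');
    return JSON.parse(text);
$$;
With that in place, the original query could be expressed roughly as:
select
    fix_doubled_quotes(cr.creatives)[0]:previewUrl::string as preview_url,
    array_size(fix_doubled_quotes(cr.creatives)) as creatives_length
from campaign_revisions cr;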

For anyone else seeing this doubled double quote issue in JSON fields coming from CSV files in a Snowflake external stage (a slightly different issue from the one in the original question):
The issue is likely that you need to use the FIELD_OPTIONALLY_ENCLOSED_BY setting. Specifically, FIELD_OPTIONALLY_ENCLOSED_BY = '"' when setting up your file format (see the Snowflake file format documentation).
Example of creating such a file format:
create or replace file format mydb.myschema.my_tsv_file_format
type = CSV
field_delimiter = '\t'
FIELD_OPTIONALLY_ENCLOSED_BY = '"';
And example of querying from a stage using this file format:
select
$1 field_one,
$2 field_two
-- ...and so on
from @my_s3_stage/path/to/file/my_tab_separated_file.csv (file_format => 'my_tsv_file_format')
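If you are loading the data into a table rather than querying the stage directly, the same file format can be referenced from a COPY INTO statement. A sketch, with a placeholder target table name:
copy into mydb.myschema.my_target_table
from @my_s3_stage/path/to/file/
file_format = (format_name = 'mydb.myschema.my_tsv_file_format');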

Related

mySql JSON string field returns encoded

It's my first week dealing with a MySQL database and JSON field types, and I cannot seem to figure out why values are automatically encoded and then returned in encoded format.
Given the following SQL
-- create a multiline string with a tab example
SET @str = "Line One
	Line 2	Tabbed out
	Line 3";
-- encode it
SET @j = JSON_OBJECT("str", @str);
-- extract the value by name
SET @strOut = JSON_EXTRACT(@j, "$.str");
-- show the object and attribute value.
SELECT @j, @strOut;
You end up with what appears to be a fully formed JSON object with a single encoded attribute:
@j = {"str": "Line One\n\tLine 2\tTabbed out\n\tLine 3"}
But using JSON_EXTRACT to get the attribute value, I get the encoded version, including the outer quotes:
@strOut = "Line One\n\tLine 2\tTabbed out\n\tLine 3"
I would expect to get my original string, with the \n and \t all unescaped to their original values and no outer quotes, like so:
Line One
Line 2 Tabbed out
Line 3
I can't seem to find any JSON_DECODE or JSON_UNESCAPE or similar functions.
I did find a JSON_ESCAPE() function but that appears to be used to manually build a JSON object structure in a string.
What am I missing to extract the values to the original format?
I like to use the handy ->> operator for this.
It was introduced in MySQL 5.7.13, and basically combines JSON_EXTRACT() and JSON_UNQUOTE():
SET @strOut = @j ->> '$.str';
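As a quick check (based on the expected output described in the question), selecting the variable afterwards should return the original multi-line string without the outer quotes:
SELECT @strOut;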
You are looking for the JSON_UNQUOTE function:
SET @strOut = JSON_UNQUOTE(JSON_EXTRACT(@j, "$.str"));
The result of JSON_EXTRACT() is intentionally a JSON document, not a string.
A JSON document may be:
An object enclosed in { }
An array enclosed in [ ]
A scalar string value enclosed in " "
A scalar number or boolean value
A null. Note that this is not an SQL NULL; it's a JSON null. This leads to confusing cases, because you can extract a JSON field whose JSON value is null, and yet in an SQL expression it fails IS NULL tests, and it also fails to be equal to the SQL string 'null', because it's a JSON type, not a scalar type.
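A quick illustration of that last point, using a throwaway document (the IS NULL behaviour is exactly the surprise described above):
SET @doc = '{"a": null}';
SELECT JSON_EXTRACT(@doc, '$.a') IS NULL;    -- 0: the extracted value is a JSON null, not an SQL NULL
SELECT JSON_TYPE(JSON_EXTRACT(@doc, '$.a')); -- 'NULL': the JSON type of the extracted value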

Unexpected end of JSON input at undefined line XXXX, columns xx-xx while reading in BigQuery

I have a table in BigQuery which has 2 columns - job_id and json_column (a string in JSON format). When I tried to read the data and identify some objects, it gave me the error below:
SyntaxError:Unexpected end of JSON input at undefined line XXXX, columns xx-xx
It always gives me line 5931, and the second time I execute it, it gives line 6215.
If it's related to a JSON structure issue, how can I know which row/job_id that line number 5931 corresponds to? If I subset for a specific job_id, it returns the values, but when I try to execute on the complete table, I get this error. I looked at the job_ids at the row numbers mentioned and the code works fine for those job_ids.
Do you think it's a JSON structure issue, and how can I identify which row/job_id has the problem?
Table Structure:
Code:
CREATE TEMPORARY FUNCTION CUSTOM_JSON_EXTRACT(json STRING, json_path STRING)
RETURNS ARRAY<STRING>
LANGUAGE js AS """
var result = jsonPath(JSON.parse(json), json_path);
if(result){return result;}
else {return [];}
"""
OPTIONS (
library="gs://json_temp/jsonpath-0.8.0.js"
);
SELECT job_id,dist,gm,sub_gm
FROM lz_fdp_op.fdp_json_file,
UNNEST(CUSTOM_JSON_EXTRACT(trim(conv_column), '$.Project.OpsLocationInfo.iDistrictId')) dist ,
UNNEST(CUSTOM_JSON_EXTRACT(trim(conv_column), '$.Project.GeoMarketInfo.Geo')) gm,
UNNEST(CUSTOM_JSON_EXTRACT(trim(conv_column), '$.Project.GeoMarketInfo.SubGeo')) sub_gm
Would this work for you?
WITH
T AS (
SELECT
'1000149.04.14' AS job_id,
'{"Project":{"OpsLocationInfo":{"iDistrictId":"A"},"GeoMarketInfo":{"Geo":"B","SubGeo":"C"}}}' AS conv_column
)
SELECT
JSON_EXTRACT_SCALAR(conv_column, '$.Project.OpsLocationInfo.iDistrictId') AS dist,
JSON_EXTRACT_SCALAR(conv_column, '$.Project.GeoMarketInfo.Geo') AS gm,
JSON_EXTRACT_SCALAR(conv_column, '$.Project.GeoMarketInfo.SubGeo') AS sub_gm
FROM
T
BigQuery JSON Functions docs:
https://cloud.google.com/bigquery/docs/reference/standard-sql/json_functions
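To answer the other part of the question (which job_id rows hold malformed JSON), one possible approach is a temporary JavaScript function that simply tries to parse each value. IS_VALID_JSON below is a helper defined here, not a built-in:
CREATE TEMPORARY FUNCTION IS_VALID_JSON(json STRING)
RETURNS BOOL
LANGUAGE js AS """
  try { JSON.parse(json); return true; } catch (e) { return false; }
""";
SELECT job_id
FROM lz_fdp_op.fdp_json_file
WHERE NOT IS_VALID_JSON(TRIM(conv_column));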
how can I read multiple arrays in an object in JSON without using unnest?
Could you clarify your comment with a sample input?

Unable to Extract simple Csv file using U-SQL

I have this CSV file.
Almost all the records are processed fine; however, there are two cases in which I am experiencing an issue.
Case 1:
A record containing quotes within quotes:
"some data "some data" some data"
Case 2:
A record containing comma within quotes:
"some data, some data some data"
I have looked into this issue and worked my way around to the quoting parameter of the extractor, but I have observed that setting quoting: false solves case 1 and fails for case 2, while setting quoting: true solves case 2 but fails for case 1.
Constraints: there is no room for changing the data file; future data will be tailored accordingly, but for this existing data I have to resolve the issue.
Try this: import each record as one row, then fix the row text by doubling the embedded quotes (do the same for the commas):
DECLARE @input string = @"/Samples/Data/Sample1.csv";
DECLARE @output string = @"/Output/Sample1.txt";
// Import records as one row
@data =
    EXTRACT rowastext string
    FROM @input
    USING Extractors.Text('\n', quoting: false );
// Fix the row text using double quotes
@query =
    SELECT Regex.Replace(rowastext, "([^,])\"([^,])", "$1\"\"$2") AS rowascsv
    FROM @data;
OUTPUT @query
TO @output
USING Outputters.Csv(quoting : false);
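If you then need to read the cleaned file back with the quotes honoured, a follow-up EXTRACT along these lines should work; the three column names are placeholders for your actual schema:
@clean =
    EXTRACT col1 string,
            col2 string,
            col3 string
    FROM @output
    USING Extractors.Csv(quoting : true);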

SSIS write DT_NTEXT into an UTF-8 csv file

I need to write the result of a SQL query into a CSV file encoded in UTF-8 (I need this encoding because there are French letters). One of the columns is too large (more than 20000 characters), so I can't use DT_WSTR for it. The input type is DT_TEXT, so I use a Data Conversion to change it to DT_NTEXT. But then, when I want to write it to the file, I get this error message:
Error 2 Validation error. The data type for "input column" is
DT_NTEXT, which is not supported with ANSI files. Use DT_TEXT instead
and convert the data to DT_NTEXT using the data conversion component
Is there a way I can write the data to my file?
Thank you
I have had this kind of issue sometimes as well. When working with data larger than 255 characters, SSIS sees it as blob data and will always handle it as such.
I then converted this blob stream data to readable text with a Script Component; after that, other transformations should be possible.
This was the case in the SSIS that came with SQL Server 2008, but I believe it hasn't changed since.
I ended up doing just what Samyne says: I used a script.
First I modified my SQL stored procedure; instead of having several columns, I put all the info into one single column, as follows:
Select Column1 + '^' + Column2 + '^' + Column3 ...
Then I used this code in a script:
// Requires: using System.Data; using System.Data.OleDb; using System.IO; using System.Text;
string fileName = Dts.Variables["SLTemplateFilePath"].Value.ToString();
using (var stream = new FileStream(fileName, FileMode.Truncate))
{
    // A UTF-8 StreamWriter keeps the French characters intact
    using (var sw = new StreamWriter(stream, Encoding.UTF8))
    {
        // Fill a DataTable from the recordset held in the SSIS object variable
        OleDbDataAdapter oleDA = new OleDbDataAdapter();
        DataTable dt = new DataTable();
        oleDA.Fill(dt, Dts.Variables["FileData"].Value);
        foreach (DataRow row in dt.Rows)
        {
            foreach (DataColumn column in dt.Columns)
            {
                sw.WriteLine(row[column]);
            }
        }
        sw.WriteLine();
    }
}
Putting all the info in one column is optional; I just wanted to avoid handling it in the script. This way, if my stored procedure changes, I don't need to modify the SSIS package.

Null values when migrating from MySQL to Mongo

I need to migrate some tables from MySQL to MongoDB. After searching the web, it looks to me like a MySQL export to CSV and an import of that CSV into MongoDB should be the fastest and easiest way.
I'm exporting from MySQL using this query:
select * into outfile '/tmp/feed.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY ''
from feeds;
But there is one problem.
If a MySQL field is NULL, the MySQL export writes \N (or \\N) into the CSV file.
When importing that file, MongoDB imports the \\N as a string instead of a NULL value.
The mongoimport option --ignoreBlanks will not work, because \\N is not "blank" from MongoDB's point of view.
So my questions:
1.) How can I avoid exporting NULL as \\N?
or
2.) How can mongoimport read/interpret \\N as a NULL or empty value?
By the way, it's not an option to post-process the CSV to search and replace the \\N.
One possible answer for 1.) could be modifying the select statement: SELECT IFNULL( field1, "" ). But in that case I would have to define and check every column, and an export script would not be as flexible if all columns are defined in the select statement.
// Edit: while playing around with that export/import I found another problem: date fields, which are also interpreted as strings by mongoimport.
I would comment rather than adding an answer, but my reputation is still quite low...
What I've done in a project I'm working on is to do the migration using a Python script. I have the exported table in a CSV. The code I use looks like this:
import csv
import pymongo

# filename is the path to the exported CSV file
f = open( filename )
reader = csv.reader( f )
destinationItems = []
The following reads the column names (the first row in the CSV):
columns = next( reader )
The columns can be put in a tuple that I call 'keys' here; the code is otherwise oblivious of the column names. Each row is then converted to a dictionary, ready to be amended to remove (or do something else with) NULLs.
keys = tuple( columns )
for property in reader:
    entry = dict( zip( keys, property ) )
The following deals with NULL; in this case I remove the entry altogether if it is found to be 'NULL' in the exported CSV.
    entry = { k: v for k, v in entry.items() if ( k in keys and ( v != 'NULL' ) or k not in keys ) }
    destinationItems.append( entry )
Finally, update the MongoDB instance:
mongoClient = pymongo.MongoClient()
mongoClient['mydb'].mycollection.insert_many( destinationItems )