PostgreSQL: error when parsing JSON containing single quotes

Can you advise me how to fix the error I get when I try to parse JSON from a PostgreSQL table?
ERROR: invalid input syntax for type json Hint: Token "'" is invalid. Where: JSON data, line 1: [{'...
I have researched this issue and see that it occurs because some of the data uses single quotes ('') in the JSON:
[{'name':'cc','desc':'What is your credit card number? I promise to keep it real secure like.','type':'string','regex':'\\d+','min_length':1,'max_length':16,'example':'736363627'},{'name':'height','desc':'How tall are you?','type':'int','min':4,'max':666,'example':55},{'name':'likescake','desc':'Do you like cake?','type':'bool'},{'name':'address','desc':'What is your address','type':'string','min_length':5,'example':'blk 6 lot 26 blah blah'},{'name':'single','desc':'Are you single?','type':'bool'},{'name':'weight','desc':'what is your weight in kgs?','type':'float','example':55}]
Other JSON values use double quotes (""):
[{"desc": "What is your credit card number? I promise to keep it real secure like.", "name": "cc", "type": "string", "regex": "\\d+", "max_length": 16, "min_length": 1}, {"max": "666", "min": "4", "desc": "How tall are you?", "name": "height", "type": "int"}, {"desc": "Do you like cake?", "name": "likescake", "type": "bool"}]
I am trying to parse the data with this query:
-- For multiple choice from JSON
SELECT
    s.projectid,
    s.prompttype,
    el.inputs->>'name' AS name,
    el.inputs->>'desc' AS desc,
    el.inputs->>'values' AS values,
    s.created,
    s.modified
FROM source_redshift.staticprompts AS s,
     jsonb_array_elements(s.inputs::jsonb) el(inputs);

As @Jim-Jones said, the JSON is invalid: JSON requires double quotes around property names and string values, so the single-quoted variant is rejected.
There are many online and offline JSON validation tools, and I recommend using one.
It helps you determine whether the cause of the problem is invalid JSON or an error in your code.
For example, JSON Formatter reported that it had to replace the incorrect quotes.
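If you cannot fix whatever produces the single-quoted data, a rough workaround (a sketch only, safe only when no value contains an apostrophe or an embedded double quote) is to rewrite the quotes before casting:

-- Hypothetical stopgap: convert the single-quoted pseudo-JSON to valid JSON
-- before casting to jsonb. This breaks if any value contains an apostrophe,
-- so fixing the data at its source is the better long-term solution.
SELECT
    s.projectid,
    el.inputs->>'name' AS name,
    el.inputs->>'desc' AS desc
FROM source_redshift.staticprompts AS s,
     jsonb_array_elements(replace(s.inputs, '''', '"')::jsonb) el(inputs);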

Related

422 error trying to save json data to the database

I'm trying to save data to my MySQL db from a Node method. This includes a field called attachments.
console.log(JSON.stringify(post.acf.attachments[0])); returns:
{
  "ID": 4776,
  "id": 4776,
  "title": "bla",
  "filename": "bla.pdf",
  "filesize": 1242207,
  "url": "https://example.com/wp-content/uploads/bla.pdf",
  "link": "https://example.com/bla/",
  "alt": "",
  "author": "1",
  "description": "",
  "caption": "",
  "name": "bla",
  "status": "inherit",
  "uploaded_to": 0,
  "date": "2020-10-23 18:05:13",
  "modified": "2020-10-23 18:05:13",
  "menu_order": 0,
  "mime_type": "application/pdf",
  "type": "application",
  "subtype": "pdf",
  "icon": "https://example.com/wp-includes/images/media/document.png"
}
This is indeed the data I want to save to the db:
await existing_post.save({
  ...
  attachments: post.acf.attachments[0],
});
However, the attachments field produces a 422 server error (if I comment out this field, the other fields save to the db without a problem). I don't understand what is causing this error. Any ideas?
I've also tried
await existing_post.save({
  ...
  attachments: post.acf.attachments,
});
but then it seems to just save "[object Object]" to the database.
The field in the database is defined as text. I've also tried it by defining the field as json, but that made no difference.
exports.up = function (knex, Promise) {
  return knex.schema.table("posts", function (table) {
    table.longtext("attachments");
  });
};
The 422 error code means the server is unable to process the data you are sending to it. In your case, your table field is longtext while post.acf.attachments is an object. That's why it saves [object Object] to your db (that is the return value of the object's toString() method).
Try using
await existing_post.save({
  ...
  attachments: JSON.stringify(post.acf.attachments),
});
MySQL and knex both support the JSON format, so I'd suggest you change the field to json (see the knex docs and the MySQL 8 docs). You'll still need to stringify your objects, though.
EDIT: I just saw that knex supports jsonInsert (and plenty of other neat query-builder methods) that should be useful for you.
MySQL also offers a wide range of functions for handling JSON.
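As a rough sketch (assuming a MySQL and knex version with JSON column support, and the posts table and attachments column from the question), the migration to switch the column could look like this:

// Sketch only: change the column to a JSON type so MySQL can validate and
// query it. Table and column names are taken from the question; adjust as needed.
exports.up = function (knex) {
  return knex.schema.alterTable("posts", function (table) {
    table.json("attachments").alter();
  });
};

exports.down = function (knex) {
  return knex.schema.alterTable("posts", function (table) {
    table.text("attachments").alter();
  });
};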
In addition, when you fetch the results from the database, you'll need to parse the JSON string to get an actual JavaScript object back:
const row = await knex('posts').select('attachments').first();
const attachment = JSON.parse(row.attachments);
Knex also provides jsonExtract, which may fit your needs (see also MySQL's JSON_EXTRACT function).

How long can messages be when posting to a Microsoft Teams connector webhook?

I am posting the results/logs of a CI/CD system to Microsoft Teams. While handling some failed builds with longer results, I stumbled upon the following error returned by the webhook URL https://outlook.office.com/webhook/bb6bfee7-1820-49fd-b9f9-f28f7cc679ff#<uuid1>/IncomingWebhook/<id>/<uuid2>:
Webhook message delivery failed with error: Microsoft Teams endpoint returned HTTP error 413 with ContextId tcid=3626521845697697778,server=DB3PEPF0000009A,cv=BmkbJ1NdTkv1EDoqr7n/rg.0..
From what I can tell, this is caused by too long a payload being posted to the Teams webhook URL.
The initial complex message (sections, titles, subtitles, formatted links, <pre> formatted text, etc.) was failing when the JSON payload was above 18,000 characters.
Testing a bit with the payload, I observed that the more formatting I remove from the raw JSON payload, the longer the Teams message can be. The longest message I could post had (according to cURL) Content-Length: 20711. The JSON payload for this message was:
{"themeColor":"ED4B35","text":"a....a"}
Whitespace in the JSON payload does not seem to count (i.e. adding spaces does not decrease the maximum message length that I can send to the Teams webhook).
For reference, the initial message was looking similar to this:
{
"themeColor": "ED4B35",
"summary": "iris-shared-libs - shared-library-updates - failure",
"sections": [
{
"activityTitle": "Job: [iris-shared-libs](https://my.concourse.net/teams/hsm/pipelines/iris-shared-libs) - [shared-library-updates #89](https://my.concourse.sccloudinfra.net/teams/hsm/pipelines/iris-shared-libs/jobs/shared-library-updates/builds/89) (FAILURE)",
"activityImage": "https://via.placeholder.com/200.png/ED4B35/FFFFFF?text=F",
"facts": [
{
"name": "Failed step",
"value": "update-shared-libraries"
}
]
},
{
"text": "Trying a new strategy with gated versioned releases",
"facts": [
{
"name": "Repository",
"value": "[iris-concourse-resources](https://my.git.com/projects/IRIS/repos/iris-concourse-resources)"
},
{
"name": "Commit",
"value": "[2272145ddf9285c9933df398d63cbe680a62f2b7](https://my.git.com/projects/IRIS/repos/iris-concourse-resources/commits/2272145ddf9285c9933df398d63cbe680a62f2b7)"
},
{
"name": "Author",
"value": "me#company.com"
}
]
},
{
"activityTitle": "Job failed step logs part 1",
"text": "<pre>...very long log text goes here ...</pre>"
}
]
}
What is the actual maximum length of a message posted to a Microsoft Teams connector webhook?
The official page does not mention it. In the Feedback section at the bottom there is still an open question regarding "Messages size limits?" with the feedback: "We are currently investigating this."
From the tests I have made so far, the observed limit (assuming it is independent of the server) lies roughly between 18,000 and 40,000 characters of JSON payload, depending on the payload's structure and formatting: payloads below 18,000 characters never failed, and payloads above 40,000 characters always failed.
Use case for the 18,000 figure: one long text in a single section.
Use case for the 40,000 figure: 600 facts with very short names and empty strings as values.
Also, removing a fragment of the JSON payload and adding an equal number of characters to another JSON value will not give you the same maximum.
I have also observed a soft limit (message truncated, but no error) on the maximum number of sections: 10. Sections starting with the 11th are discarded.
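Since the limit is undocumented and depends on the payload's structure, one defensive approach is to shrink the log text until the serialized card fits a conservative budget. A rough sketch in Node follows; the 18,000-character budget and the fitLogSection helper are my own assumptions based on the observations above, not an official limit:

// Sketch: trim the log so the whole serialized card stays below a conservative
// budget. JSON escaping can expand the text, so we measure the serialized card,
// not the raw log. Assumes card.sections already exists, as in the example above.
const MAX_PAYLOAD = 18000;

function fitLogSection(card, log) {
  const section = { activityTitle: "Job failed step logs part 1", text: "" };
  card.sections.push(section);
  let length = log.length;
  while (true) {
    section.text = "<pre>" + log.slice(0, length) + "</pre>";
    if (JSON.stringify(card).length <= MAX_PAYLOAD || length === 0) break;
    length = Math.floor(length * 0.9); // shrink and try again
  }
  return card;
}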

JSON request using Postman

I am sending a raw JSON request using Postman to an API service, which feeds it to another web service and finally to a database. I want to attach a file to the raw JSON request.
Below is the current request I am sending. Is this the right way? The first name and the other information go through, but the attachment does not. Any suggestions?
{
  "Prefix": "",
  "FirstName": "test-resume-dlyon",
  "LastName": "test-dlyon-resume",
  "AddressLine1": "test2",
  "AddressLine2": "",
  "City": "Invalid Zipcode",
  "State": "GA",
  "Zip": "99999",
  "Phone": "9999999999",
  "Email": "testresumedlyon#gmail.com",
  "Source": "V",
  "WritingNumber": "",
  "AgeVerified": true,
  "AdditionalSource": "",
  "EnableInternetSource": true,
  "InternetSource": "",
  "ExternalResult": "",
  "PartnerID": "",
  "SubscriberID": "15584",
  "Languages": [
    "English",
    "Spanish"
  ],
  "fileName": "resume",
  "fileExtension": "docx",
  "fileData": "UELDMxE76DDKlagmIF5caEVHmJYFv2qF6DpmMSkVPxVdtJxgRYV"
}
There is no "correct" format to attach a file to a JSON.
JSON is not multipart/form-data (which is designed to include files).
JSON is a text-based data format with a variety of data types (such as strings, arrays, and booleans) but nothing specific for files.
This means that to attach a file, you have to get creative.
For example, you could encode a file in text format (e.g. using base64), but it wouldn't be very efficient, and any Word document would result in you getting a much longer string than "UELDMxE76DDKlagmIF5caEVHmJYFv2qF6DpmMSkVPxVdtJxgRYV".
Of course, the method you use to encode the file has to be the method that whatever is reading the JSON expects you to use. Since there is no standard for this, and you have said nothing about the system which is consuming the JSON you are sending, we have no idea what that method is.
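For illustration, assuming the consuming API really does expect a base64 string in fileData (which the field name in the question suggests, but which is not confirmed), the body could be built roughly like this in Node:

// Sketch only: base64-encode the file and embed it in the JSON body.
// Field names follow the question; whether the API accepts this is an assumption.
const fs = require("fs");

const body = {
  FirstName: "test-resume-dlyon",
  fileName: "resume",
  fileExtension: "docx",
  fileData: fs.readFileSync("resume.docx").toString("base64"),
};

console.log(JSON.stringify(body)); // paste this into Postman's raw JSON body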
First of all, I'd recommend reading the Postman API docs; they have some extremely useful information on using the API. Two articles in particular might be of interest here:
Looking at your JSON and running it through a validator like this one shows that there are no syntax errors, so the problem must lie with the JSON parameters the API is expecting.
Here's something you can try:
In Postman, set the method type to POST.
Then select Body -> form-data -> enter your parameter name (file, according to your code),
and on the right side, next to the value column, there is a "text / file" dropdown; select File, choose your file, and post it.
For the rest of the text-based parameters, post them as you normally would with Postman: enter the parameter name, select "text" from the dropdown on the right, enter any value for it, and hit the Send button. Your controller method should get called.
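Outside Postman, the equivalent multipart/form-data request could be sent with cURL along these lines (the URL and field names here are placeholders, since the real API's contract isn't shown):

curl -X POST "https://api.example.com/applicants" \
  -F "FirstName=test-resume-dlyon" \
  -F "LastName=test-dlyon-resume" \
  -F "file=@resume.docx"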

Convert string to JSON in Freemarker

Is there any way to convert a VALID JSON STRING into an actual JSON hash/sequence in FreeMarker? The string is actually returned by a JSON.stringify() call.
I followed what this post says, but it seems it is not applicable to my case.
<#assign test = "(record.custpage_lineitems?json_string)">
<#assign m = test?eval>
<#list m as k>
${k.item}
</#list>
ERROR says
Expected collection or sequence. m evaluated instead to freemarker.template.SimpleScalar on line 286, column 32 in template.
Sample JSON String
{
"34952": {
"item": "TRAVEL",
"notes": "Travel Time To Client Site to Perform Repairs.1.0",
"type": "Service"
},
"34963": {
"item": "MECHANIC RECOMMENDATION",
"notes": "MECHANIC RECOMMENDATION\nr&r drive tires 21x7x15 smooth black \nr&r lp tank latch on bracket \nr&r lp hose cupler",
"type": "Service"
},
"9938": {
"item": "T1",
"notes": "Field Service Call Charge75$ labor 124$",
"type": "Service"
},
"34549": {
"item": "GENERAL SERVICE INFO",
"notes": "SERVICE NOTES:\ndrove to customer location found lift found to broken hydraulic hoses had to remove attachment in order to remove broken hoses then drove to get hoses made installed hoses back on lift re installed loose brackets I found out attachment back on lift topped off hydraulic resivoir and lift was ready",
"type": "Service"
},
"36264": {
"item": "FSO PARTS (UN CHECK IF NEEDED)",
"notes": "MARK CHECK IF PARTS NOT NEEDED.",
"type": "Service"
},
"36266": {
"item": "FSO QUOTE (UN CHECK IF NEEDED)",
"notes": "MARK CHECK IF QUOTE NOT NEEDED.",
"type": "Service"
},
"29680": {
"item": "0199992-HY32F",
"notes": "2 x 0199992-HY32F",
"type": "Inventory Item"
}
}
It seems that it is not converting to a valid sequence, because if I try to print ${m} it displays the escaped JSON string.
I am looking for a way to just write <#assign test = toJSON(record.custpage_lineitems)>, but I think you would have to write methods in Java for that, and I am doing this in NetSuite.
UPDATE: I tried hard-coding the JSON string like this:
<#assign m = '{"34952":{"item":"TRAVEL","notes":"Travel Time To Client Site to Perform Repairs.1.0","type":"Service"}....}'>
and looping through it, and it seems to work. But if I substitute myvariable for the hard-coded value, it does not work. I am 100% sure myvariable is neither null nor empty and contains the same JSON string.
My assessment is that if I could just wrap myvariable in single quotes, it would solve the issue. I tried
<#assign m = 'myvariable'> and
<#assign m = '(myvariable)'> and
<#assign m = '(${myvariable})'> and
<#assign m = '(myvariable?string)'> etc.
but none of these is correct. Can someone point me to the proper syntax for wrapping an existing variable in single quotes?
Any help is appreciated. Thanks.
I think the \n characters inside the JSON string could cause some problems. Try escaping them first with
record.custpage_lineitems?replace("\n", "\\n")
(or something similar that suits your needs) and then do the eval.
If your record.custpage_lineitems is already stringified JSON, then you do not have to use ?json_string.
Replace your first two lines with this:
<#assign m = record.custpage_lineitems?eval>
See eval documentation for details.
Update:
Your custpage_lineitems evaluates to a hash, while #list expects a sequence. Try this:
<#list m?keys as k>
${m[k].item}
</#list>
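Putting the two suggestions together, a minimal sketch (assuming record.custpage_lineitems holds the stringified JSON shown above) would be:

<#-- Escape raw newlines first, then evaluate the string and iterate over the hash keys -->
<#assign m = record.custpage_lineitems?replace("\n", "\\n")?eval>
<#list m?keys as k>
  ${m[k].item}
</#list>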

JSON: Is there an equivalent of Schematron for JSON and JSON Schema? (That is, a JSON technology to express co-constraints)

Here is a JSON instance showing the start-time and end-time for a meeting:
{
  "start time": "2015-02-19T08:00:00Z",
  "end time": "2015-02-19T09:00:00Z"
}
I can specify the structure of that instance using JSON Schema: the instance must contain an object with a "start time" property and an "end time" property and each property must be a date-time formatted string. See below for the JSON schema. But what I cannot specify is this: the meeting must start before it ends. That is, the value of "start time" must be less than the value of "end time". Some people call this data dependency a co-constraint. In the XML world there is a wonderful, simple technology for expressing co-constraints: Schematron. I am wondering if there is an equivalent technology in the JSON world? What would you use to declaratively describe the relationship between the value of "start time" and "end time"? (Note: writing code in some programming language is not what I mean by "declaratively describe the relationships". I am seeking a declarative means to describe the data dependencies that are present in JSON documents, not procedural code.)
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "definitions": {
    "meeting": {
      "type": "object",
      "properties": {
        "start time": { "type": "string", "format": "date-time"},
        "end time": { "type": "string", "format": "date-time"}
      },
      "required": [ "start time", "end time" ],
      "additionalProperties": false
    }
  },
  "$ref": "#/definitions/meeting"
}
Yes. There is a JSON semantic validator based on Schematron available at:
https://www.npmjs.com/package/jsontron
It implements 'schema', 'phase', 'rule', 'assert' and reporting features of Schematron.
Here is what happens when the original start-time and end-time example is run through the validator:
good_time.json file contents:
{
  "starttime": "2015-02-19T08:00:00Z",
  "endtime": "2015-02-19T09:00:00Z"
}
bad_time.json file contents:
{
  "starttime": "2015-02-19T09:00:00Z",
  "endtime": "2015-02-19T08:00:00Z"
}
Schematron Rules file meeting-times-rules.json snippet:
"rule":[
{
"context": "$",
"assert":[
{
"id":"start_stop_meeting_chec",
"test":"jp.query(contextNode, '$..starttime') < jp.query(contextNode, '$..endtime')",
"message": "Meeting cannot end before it starts"
}
]
}
]
When run with the correct example:
$jsontron\bin>node JSONValidator -i ./good_time.json -r ./meeting-times-rules.json
The output was:
Starting Semantic Validation .........
Parsing Pattern: Meetingtimings
1 Pattern(s) Requested. 1 Pattern(s) Processed. 0 Pattern(s) Ignored.
**** THIS INSTANCE IS SEMANTICALLY VALID ****
Completed Semantic Validation .........
When run with the bad-data example, the output was:
$jsontron\bin>node JSONValidator -i ./bad_time.json -r ./meeting-times-rules.json
Starting Semantic Validation .........
Parsing Pattern: Meetingtimings
1 Pattern(s) Requested. 1 Pattern(s) Processed. 0 Pattern(s) Ignored.
**** THIS INSTANCE CONTAINS SEMANTIC VALIDATION ISSUES. PLEASE SEE FULL REPORT BY ENABLING DEBUG WITH -d OPTION ****
Completed Semantic Validation .........
The message with debug options was:
...validation failed...
message: 'Meeting cannot end before it starts'
Sadly, the answer is no. JSON Schema allows you to validate the structure and the permitted values, but it has no mechanism for validating relationships between values, à la Schematron.
The simplest way to solve this is to have another script in the pipeline that runs these kinds of checks.
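For instance, here is a minimal sketch of such a pipeline check in Node (the file name and property names follow the meeting example above; this is an illustration, not part of any standard):

// check-meeting.js: fail the pipeline if the meeting ends before it starts.
const fs = require("fs");

const doc = JSON.parse(fs.readFileSync(process.argv[2] || "meeting.json", "utf8"));
if (new Date(doc["start time"]) >= new Date(doc["end time"])) {
  console.error("Meeting cannot end before it starts");
  process.exit(1);
}
console.log("Co-constraint satisfied");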
There is an implementation in Oxygen JSON Editor that allows you to validate JSON documents against Schematron.
https://www.oxygenxml.com/doc/versions/22.0/ug-editor/topics/json-validating-documents-against-schema.html
The Schematron rules are expressed using XPath expressions, and the problems are reported in the JSON documents.
<!-- The 'genre' property should be one of the items declared in the 'literatureGenres' property -->
<sch:rule context="genre">
  <sch:let name="genre" value="text()"/>
  <sch:let name="literatureGenres" value="//literatureGenres/text()"/>
  <sch:assert test="normalize-space($genre) = $literatureGenres">
    Wrong genre: '<sch:value-of select="$genre"/>'. See the 'literatureGenres' property for the permitted ones.
  </sch:assert>
</sch:rule>
https://www.slideshare.net/nottavy/schematron-for-nonxml-languages
The json-schema.org website lists quite a few implementations.