"Unexpected token in JSON at position 0"? - json

I'm writing a node.js module which imports a JSON file:
const distDirPath = "c:/temp/dist/";
const targetPagePath = "c:/temp/index.html";
const cliJsonPath = "C:/CODE/MyApp/.angular-cli.json";
const fs = require('fs');
function deployAot() {
    var version = JSON.parse(fs.readFileSync(cliJsonPath, 'utf8')).version;
}
// export the module
module.exports = {
    DeployAot: deployAot
};
I validated the contents of the JSON file with https://jsonlint.com/ and it's valid JSON, but the line of code in deployAot() throws the following error when I execute the module:
"Unexpected token in JSON at position 0"
Here's the specific json:
https://jsonblob.com/cd6753d2-9e51-11e7-aa97-2f95b001b178
Any idea what the problem might be here?

As @cartant already mentioned in the comments to the question, you most probably have a special character (a byte order mark, or BOM) at the beginning of the file.
I would try to replace this
fs.readFileSync(cliJsonPath, 'utf8')
with this
fs.readFileSync(cliJsonPath, 'utf8').substring(1)
to get rid of the very first character of the string and see what happens.
See the GitHub issue: fs.readFileSync(filename, 'utf8') doesn't strip BOM markers
Recommendation from the issue:
Workaround:
body = body.replace(/^\uFEFF/, '');
Apply this after reading a UTF-8 file when you are uncertain whether it may contain a BOM marker.

Related

FormatException (FormatException: Unexpected character (at character 1)

I am trying to get data from a JSON file from a link and decode it to display data in a calendar. It worked fine until this error appeared on this line:
dynamic jsonAppData = convert.jsonDecode(data.body);
which throws this:
Exception has occurred. FormatException (FormatException: Unexpected
character (at character 1) <!doctype html><base href="https://accou... ^ )
I don't really know what causes it; I searched for solutions but didn't find anything for my case.
I hope you can help me.
Future<List> getDataFromGoogleSheet() async {
  Response data = await http.get(
    Uri.parse(
        "https://script.google.com/macros/s/AKfycbybaFrTEBrxTIni8izFKMQYNNAe7ciVMlqF0OUHyWujjRR2AQ8zDyQzh96tleRKMHSN/exec"),
  );
  dynamic jsonAppData = convert.jsonDecode(data.body);
  final List<Meeting> appointmentData = [];
  for (dynamic data in jsonAppData) {
    var recurrence = data['byday'];
    Meeting meetingData = Meeting(
      eventName: data['subject'],
      from: _convertDateFromString(data['starttime']),
      to: _convertDateFromString(data['endtime']),
      background: Colors.grey.shade800,
      recurrenceRule: 'FREQ=DAILY;INTERVAL=7;BYDAY:$recurrence;COUNT=10',
    );
    appointmentData.add(meetingData);
    String notes = data['notes'];
  }
  return appointmentData;
}
Your response body is not JSON. You should inspect the response before parsing it.
You can't parse the JSON because you have to authenticate with Google first. If you call the page in a browser where you are not logged in with Google, you are redirected to Google's login page. My guess is that this login page is what is being parsed, not the JSON.
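The failure can be caught earlier by checking what came back before parsing. Here is a Node.js sketch of that defensive check (the question's code is Dart, but the idea is identical; the helper name is illustrative):

```javascript
// Verify a response looks like JSON before handing it to JSON.parse,
// so an HTML login-redirect page produces a clear error instead of
// "Unexpected character (at character 1)".
function parseJsonResponse(contentType, body) {
    if (!contentType || !contentType.includes('application/json')) {
        throw new Error('Expected a JSON response but got: ' + contentType);
    }
    // An HTML redirect page typically starts with "<!doctype html"
    if (body.trimStart().startsWith('<')) {
        throw new Error('Response body looks like HTML, not JSON');
    }
    return JSON.parse(body);
}
```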

Flutter - Writing json to file sometimes breaks causing issue when reading back in

I am having an odd issue, whereby my app is writing JSON into a file, but in some cases, it is leaving spurious characters at the end.
It does not happen all the time, making it difficult to work around.
The JSON files are created inside the app, to be used later inside the app (though some are sent to an API, and I occasionally see the same issue there). What I see, after the final closing brace, is part of previously saved data. It is as though writeAsString is not truncating the file, just writing over the top, and if what it writes is shorter, the remainder of the old content is left in the file.
An example...
// Working with my map<String, dynamic> adding or modifying fields
// In this case, my map is called sFormData
await jFile.writeFile("Submitted.json", json.encode(sFormData));
the writeFile routine is...
Future<File> writeFile(String fileName, String content) async {
  final path = await localPath;
  File file = File('$path/' + fileName
      .split('/')
      .last);
  // Write the file.
  return await file.writeAsString(content);
}
which, without an explicit FileMode, should default to FileMode.write, which should truncate the original file during writing.
Mostly, this is fine. However, when it breaks, the issues start either when the file is sent to the API or when it is re-used inside the app. Inside the app, I am getting errors like...
FormatException: Unexpected character (at line x, character y)
when I try
String filejson = await file.readFile(fileName);
// When I look at filejson, I can see the extra characters, which causes the jsonDecode below to break
List<InProgressOrSubmittedItems> formsList = InProgressOrSubmittedList.fromJson(jsonDecode(filejson)).pForms as List<InProgressOrSubmittedItems>;
This leads me to believe that it is something in the writeAsString method that is not clearing down the file before writing.
=== EDIT ===
After trying things, this appears to work, but I think it is more of a hack. Can anyone see any potential issues with this?
Future<File> writeJsonFile(String fileName, Map jsonData) async {
  final path = await localPath;
  File file = File('$path/' + fileName
      .split('/')
      .last);
  String encodedJson = json.encode(jsonData);
  await file.writeAsString(encodedJson);
  try {
    // Test the written json...
    String jsonContent = await file.readAsString();
    jsonData = json.decode(jsonContent);
  } catch (e) {
    // Uh-oh... if we got here, the JSON did not save properly.
    // Let's try again.
    // Try deleting the file first this time.
    if (await file.exists()) {
      await file.delete();
    }
    await file.writeAsString(encodedJson);
  }
  return file;
}

JSON - WebAPI - Unexpected character encountered while parsing value

ANY help will be greatly appreciated
I have a generic class that facilitates WebAPI calls. It's been in place for quite some time with no issues. Today I'm getting an error and am not sure how to track down the problem. The exact error is:
{"Unexpected character encountered while parsing value: [. Path 'PayLoad', line 1, position 12."}
what I'm getting back as the result of the call is
"{\"PayLoad\":[\"file_upload_null20180629155922²AAGUWVP2XUezeM3CiEnSOw.pdf\"],\"Success\":true,\"Message\":\"1 File(s) Uploaded\",\"Exceptions\":[]}"
which looks right and is what I expect back from the service call.
Here is the method I'm calling that suddenly quit working; it's failing on the last line:
public static TR WebApiPost(string serveraddress, string endpoint, object data)
{
    HttpResponseMessage msg;
    var clienthandler = new HttpClientHandler
    {
        UseDefaultCredentials = false,
        Credentials = new NetworkCredential(user, password, domain)
    };
    using (var client = new HttpClient(clienthandler) { BaseAddress = new Uri(serveraddress) })
    {
        client.DefaultRequestHeaders.Accept.Clear();
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
        msg = client.PostAsync(endpoint, new StringContent(new JavaScriptSerializer().Serialize(data), Encoding.UTF8, "application/json")).Result;
    }
    var result = msg.Content.ReadAsStringAsync().Result;
    return JsonConvert.DeserializeObject<TR>(result);
}
And finally, the line that actually makes the call (which should not matter):
returned = CallHelper<ResultStatus<string>>.WebApiPost(serviceurl, sendFileUrl, model);
It's not clear where your web service is getting the value of PayLoad from, so it is very possible that the value has a byte order mark (BOM) at its beginning. This is especially likely if you are returning the content of what was originally a Unicode-encoded file.
Be aware that a BOM is NOT visible when you are viewing a string in the debugger.
On your web service, make sure that you are not returning a BOM in the value of PayLoad. Check for this byte sequence at the beginning of the string:
0xEF,0xBB,0xBF
For more information on Byte Order Mark:
https://en.wikipedia.org/wiki/Byte_order_mark
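A sketch of that check in Node.js, operating on the raw bytes before any decoding (helper names are illustrative):

```javascript
// Detect the UTF-8 BOM byte sequence (0xEF, 0xBB, 0xBF) at the start
// of a raw buffer.
function startsWithUtf8Bom(buf) {
    return buf.length >= 3 &&
        buf[0] === 0xEF && buf[1] === 0xBB && buf[2] === 0xBF;
}

// Strip the three BOM bytes if present, otherwise return the buffer unchanged.
function stripUtf8Bom(buf) {
    return startsWithUtf8Bom(buf) ? buf.slice(3) : buf;
}
```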

JSON report not generating for failed scenarios using protractor

If my scenarios fail, the JSON report is not generated. For passed scenarios, I can see the JSON report.
Please find my config file as below.
In the command prompt console I can see the failure message:
W/launcher - Ignoring uncaught error AssertionError: expected false to equal true
E/launcher - BUG: launcher exited with 1 tasks remaining
You can save the report by using a hook, so don't generate the file from the protractor.conf.js file; use a cucumber hook for it.
The hook can look like this
reportHook.js:
const cucumber = require('cucumber');
const jsonFormatter = cucumber.Listener.JsonFormatter();
const fs = require('fs-extra');
const jsonFile = require('jsonfile');
const path = require('path');
const projectRoot = process.cwd();
module.exports = function reportHook() {
    this.registerListener(jsonFormatter);

    /**
     * Generate and save the report json files
     */
    jsonFormatter.log = function(report) {
        const jsonReport = JSON.parse(report);
        // Generate a feature name without spaces; we're going to use it later
        const featureName = jsonReport[0].name.replace(/\s+/g, '_').replace(/\W/g, '').toLowerCase();
        // Here I defined a base path to which the JSONs are written
        const snapshotPath = path.join(projectRoot, '.tmp/json-output');
        // Think of a name for the JSON file. I added a feature name (each feature
        // will output a file) and a timestamp (if you use multiple browsers, each
        // browser executes each feature file and generates a report)
        const filePath = path.join(snapshotPath, `report.${featureName}.${Date.now()}.json`);
        // Create the path if it doesn't exist
        fs.ensureDirSync(snapshotPath);
        // Save the JSON file
        jsonFile.writeFileSync(filePath, jsonReport, {
            spaces: 2
        });
    };
};
You can save this code to the file reportHook.js and then add it to cucumberOpts.require, so it will look like this in your code:
cucumberOpts: {
    require: [
        '../step_definitions/*.js',
        '../setup/hooks.js',
        '../setup/reportHook.js'
    ],
    ....
}
Even with failed steps/scenarios, it should generate the report file.
Hope it helps.

node.js readfile error with utf8 encoded file on windows

I'm trying to load a UTF8 json file from disk using node.js (0.10.29) on Windows 8.1. The following is the code that runs:
var http = require('http');
var utils = require('util');
var path = require('path');
var fs = require('fs');
var myconfig;
fs.readFile('./myconfig.json', 'utf8', function (err, data) {
    if (err) {
        console.log("ERROR: Configuration load - " + err);
        throw err;
    } else {
        try {
            myconfig = JSON.parse(data);
            console.log("Configuration loaded successfully");
        } catch (ex) {
            console.log("ERROR: Configuration parse - " + ex);
        }
    }
});
I get the following error when I run this:
SyntaxError: Unexpected token ´╗┐
at Object.parse (native)
...
Now, when I change the file encoding (using Notepad++) to ANSI, it works without a problem.
Any ideas why this is the case? Whilst development is being done on Windows the final solution will be deployed to a variety of non-Windows servers, I'm worried that I'll run into issues on the server end if I deploy an ANSI file to Linux, for example.
According to my searches here and via Google the code should work on Windows as I am specifically telling it to expect a UTF-8 file.
Sample config I am reading:
{
    "ListenIP4": "10.10.1.1",
    "ListenPort": 8080
}
Per "fs.readFileSync(filename, 'utf8') doesn't strip BOM markers #1918", fs.readFile is working as designed: the BOM is not stripped from the header of the UTF-8 file, if it exists. It is at the discretion of the developer to handle this.
Possible workarounds:
data = data.replace(/^\uFEFF/, ''); per https://github.com/joyent/node/issues/1918#issuecomment-2480359
Transform the incoming stream to remove the BOM header with the NPM module bomstrip per https://github.com/joyent/node/issues/1918#issuecomment-38491548
What you are getting is the byte order mark (BOM) header of the UTF-8 file. When JSON.parse sees this, it gives a syntax error (read: "unexpected character" error). You must strip the byte order mark from the file contents before passing them to JSON.parse:
fs.readFile('./myconfig.json', 'utf8', function (err, data) {
    // data is already a string here because an encoding was specified;
    // strip a leading BOM before parsing
    myconfig = JSON.parse(data.replace(/^\uFEFF/, ''));
});
To get this to work, I had to change the encoding from "UTF-8" to "UTF-8 without BOM" using Notepad++ (I assume any decent text editor, though not Notepad, has the ability to choose this encoding type).
This solution meant that the deployment guys could deploy to Unix without a hassle, and I could develop without errors during the reading of the file.
In terms of reading the file, another result I sometimes got while trying various encoding options was a question mark prepended to the start of the file contents. Naturally, with a question mark or stray ANSI characters prepended, JSON.parse fails.
Hope this helps someone!
New answer
As I had the same problem with several different formats, I went ahead and made an npm package that tries to read text files and parse them as text, no matter the original encoding (as the original question was about reading a .json file, it fits perfectly). Files without a BOM, or with an unknown BOM, are handled as ASCII/latin1.
https://www.npmjs.com/package/textfilereader
So change the code to
var http = require('http');
var utils = require('util');
var path = require('path');
var fs = require('textfilereader');
var myconfig;
fs.readFile('./myconfig.json', 'utf8', function (err, data) {
    if (err) {
        console.log("ERROR: Configuration load - " + err);
        throw err;
    } else {
        try {
            myconfig = JSON.parse(data);
            console.log("Configuration loaded successfully");
        } catch (ex) {
            console.log("ERROR: Configuration parse - " + ex);
        }
    }
});
Old answer
I ran into this problem today and created a function to take care of it.
It should have a very small footprint; I assume it's better than the accepted replace solution.
function removeBom(input) {
    // All alternatives found on https://en.wikipedia.org/wiki/Byte_order_mark
    // Note: `input` is an already-decoded string, so in practice a UTF-8 BOM
    // surfaces as the single character U+FEFF ('feff' below).
    const fc = input.charCodeAt(0).toString(16);
    switch (fc) {
        case 'efbbbf':   // UTF-8
        case 'feff':     // UTF-16 (BE) + UTF-32 (BE)
        case 'fffe':     // UTF-16 (LE)
        case 'fffe0000': // UTF-32 (LE)
        case '2b2f76':   // UTF-7
        case 'f7644c':   // UTF-1
        case 'dd736673': // UTF-EBCDIC
        case 'efeff':    // SCSU
        case 'fbee28':   // BOCU-1
        case '84319533': // GB-18030
            return input.slice(1);
        default:
            return input;
    }
}

const fileContent = removeBom(fs.readFileSync(filePath, "utf8"));