Custom WP API endpoint NULL body data - json

I have a weird issue that's bashing my head in. It's probably something minor that I am overlooking, but for the life of me I cannot figure it out.
Here's the premise:
I am making a POST request to a custom registered API endpoint in a WordPress environment, to which I am sending JSON data from a form. The content type is set correctly, and if I debug by dumping $request->get_body() it shows the correct data that I've passed on.
However, I also send base64-encoded image data produced by a FileReader. If I add another item to the data being sent, with the base64 string as its value, the dump becomes NULL. Taking the base64 string out of the JSON makes the dump OK again.
I have also tried increasing the max upload size and post size; however, since the file I am using as a test is 20 KB, I do not think this is the issue.
I am hoping somebody can help me see the error of my ways.
Here's a code snippet. Note that the URL is not real here, but it is real in my environment. Also, due to the character limit, I could not post the whole base64 image in the snippet, but rest assured it is correct. I even tried with a 1px by 1px transparent image and had the same problem.
var image = "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/4QDeRXhpZgAASUkqAAgAAAAGABIBAwABAAAAAQAAABoBBQABAAAAVgAAABsBBQABAAAAXgAAACgBAwABAAAAAgAAABMCAwABAAAAAQAAAGmHBAABAAAAZgAAAAAAAAA4YwAA6AMAADhjAADoAwAABwAAkAcABAAAADAyMTABkQcABAAAAAECAwCGkgcAFgAAAMAAAAAAoAcABAAAADAxMDABoAMAAQAAAP//AAACoAQAAQAAABgCAAADoAQAAQAAAGIBAAAAAAAA...";
var data = {
  'test': 'hello world',
  'image': image
};

var myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");

var raw = JSON.stringify(data);

var requestOptions = {
  method: 'POST',
  headers: myHeaders,
  body: raw,
  redirect: 'follow'
};

fetch("/wp-json/test/v1/testapi", requestOptions)
  .then(response => response.text())
  .then(result => console.log(result))
  .catch(error => console.log('error', error));
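For reference, the base64 data URL above is produced with a FileReader, roughly like this minimal sketch (the fileInput element id is hypothetical):

// Sketch: read a chosen file as a base64 data URL before POSTing it.
var reader = new FileReader();
reader.onload = function () {
  var image = reader.result; // "data:image/jpeg;base64,..."
  // build the JSON payload with this string and send the fetch from here
};
reader.readAsDataURL(document.getElementById('fileInput').files[0]);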

I have solved the issue. It was unrelated to JSON / JavaScript / the API.
It was a simple issue of tmp folder ownership. Since the PHP handling the request was not spewing out any errors, I never noticed a PHP notice telling me to check the permissions on the temp folder: PHP was unable to write the temporary upload file there. And indeed, while the permissions were fine, the group ownership was not.
Sorry to have wasted everybody's time. Thank you.

Superagent: Error: Parser is unable to parse the response

I'm using Superagent in my React app, and I'm making some calls to the IPFS API. Specifically, I am uploading files to my IPFS server. Now, everything works: when I upload one or multiple files, the call goes through and the files show up in IPFS no problem.
A problem occurs when I upload multiple files, though: the response seems to come back as plain text instead of JSON, and Superagent throws the error
client.js:399 Uncaught (in promise) Error: Parser is unable to parse the response
at Request.<anonymous> (client.js:399)
at Request.Emitter.emit (index.js:133)
at XMLHttpRequest.xhr.onreadystatechange (client.js:708)
So to be clear: when uploading a single file, I get a nice JSON response, but when I upload multiple files, the response is in plain text.
Can I force Superagent to give me the response back so I can parse it myself? Or can I set something when making the call that forces a JSON parse? Below is my Superagent request function.
add : acceptedFiles => {
  const url = ipfs.getUrl("add")
  const req = request.post(url)
  acceptedFiles.forEach(file => req.attach(file.name, file))
  req.then(res => {
    return console.log(res);
  })
}
I'm still searching for a more elegant solution, but until I find one, I'd like to share my own workaround.
I think this problem is caused by a wrong Content-Type being set on the response, but I have not confirmed this yet.
However, you can try this:
req.catch(function (err) {
  console.log(err.rawResponse)
})
At least, this solves my problem.
According to their docs, you can specify a custom parser that takes precedence over the built-in parsers:
You can set a custom parser (that takes precedence over built-in parsers) with the .buffer(true).parse(fn) method. If response buffering is not enabled (.buffer(false)) then the response event will be emitted without waiting for the body parser to finish, so response.body won't be available.
I tried and it worked well for me.
superagent.get('....')
  .buffer(true)
  .parse(({ text }) => JSON.parse(text))
  .then(...)

New line in json array is getting converted to a comma | nodejs

I am relatively new to Node.js and am running into an issue while parsing a JSON POST request.
Here is the JSON format of the POST request:
{"parameters":{"issuerId":[96409],"source":["'XYZ'"]}}
And here is my code to read it.
function getSearchData(req, res, next) {
  console.log("req is" + req.body);
  try {
    JSON.parse(reqJSON);
  } catch (e) {
    console.log(e);
  }
}
This parsing works fine, and I am able to parse it and do my further logic. However, if I change the format of the POST request (the same request with additional new lines), it fails to parse, because additional commas appear in place of each new line in the request.
{
  "parameters": {
    "issuerId": [96409],
    "source": ["'XYZ'"]
  }
}
Here's the output from the code with the second request.
req is{,"parameters":{"id":[96409],,"source":["'XYZ'"]}}
[SyntaxError: Unexpected token ,]
If you notice, an extra comma gets added at each new line, which was never in the request to begin with.
What am I doing wrong here?
You should never have to parse the JSON yourself, unless you're concatenating the request body stream yourself.
Hint 1: Do you use any framework like Express? Do you use body parser?
Hint 2: How do you create the JSON?
Hint 3: Do you use the correct content type?
Hint 4: How do you create req.body from the request stream?
You didn't include the entire code so it's impossible to give you a specific solution.
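If you are assembling req.body from the request stream yourself (hint 4), here is a minimal sketch of doing it safely. One classic source of this exact symptom is collecting the body as an array of lines or chunks and then string-concatenating the array, which implicitly calls Array.prototype.toString() and joins the elements with commas:

// Sketch: collect the raw body into a single string, not an array.
let body = '';
req.on('data', chunk => { body += chunk; });
req.on('end', () => {
  console.log('req is ' + body); // one string, no commas introduced
  const parsed = JSON.parse(body);
});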
What am I doing wrong here?
Whatever you're doing wrong here, it's not included in the question.
However, if I change my format of post request (same request with additional new lines)
It would be useful if you included more details about how you do that.
I see two potential sources of that problem:
either the commas are introduced during serialization on the client side,
or they are introduced while reading the request on the server side.
You didn't show us either of those two parts: you didn't show the serializing code and the code that sends the data, and you didn't include the code that gets the data, possibly joins it from chunks, and parses the JSON. But the problem is likely in one of those parts.
Update
Here is an example of how to do what you need using Express. You didn't answer whether you use a framework like Express or not, but I think you should if you can't achieve this simple task without it, so here is a working example:
const express = require('express');
const bodyParser = require('body-parser');

const app = express();

function getSearchData(req, res, next) {
  console.log('req body is', req.body);
  console.log('req body JSON is', JSON.stringify(req.body));
  res.json({ ok: true });
}

app.use(bodyParser.json());
app.use(getSearchData);

app.listen(3335, () => console.log('Listening on http://localhost:3335/'));
It shows how to correctly get a parsed JSON request body as an object (req.body), how to print the data with a standard console.log representation, and how to serialize it again as JSON. See how it works and compare it to your own solution. This is all I can do: not having seen your entire solution, I cannot tell you more than the hints I've already given.
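For completeness, a client request that pairs with this server might look like the sketch below (the port matches the example above); the key points are stringifying the object once and setting the JSON content type:

// Sketch: send the sample payload to the Express server above.
fetch('http://localhost:3335/', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ parameters: { issuerId: [96409], source: ["'XYZ'"] } })
})
  .then(res => res.json())
  .then(result => console.log(result));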

Chrome dev tools fails to show response even when the content returned has header Content-Type: text/html; charset=UTF-8

Why does my Chrome developer tools show
Failed to show response data
in response when the content returned is of type text/html?
What is the alternative to see the returned response in developer tools?
I think this only happens when you have 'Preserve log' checked and you are trying to view the response data of a previous request after you have navigated away.
For example, I viewed the Response to loading this Stack Overflow question, and the response data was visible.
The second time, I reloaded this page but didn't look at the Headers or Response, then navigated to a different website. Now when I look at the response, it shows 'Failed to load response data'.
This is a known issue, that's been around for a while, and debated a lot.
As described by Gideon, this is a known issue with Chrome that has been open for more than 5 years with no apparent interest in fixing it.
Unfortunately, in my case, the window.onunload = function() { debugger; } workaround didn't work either. So far the best workaround I've found is to use Firefox, which does display response data even after a navigation. The Firefox devtools also have a lot of nice features missing in Chrome, such as syntax highlighting the response data if it is html and automatically parsing it if it is JSON.
For those who are getting the error while requesting JSON data:
If you are requesting JSON data, the JSON might be too large, and that is what causes the error.
My solution is to copy the request link into a new tab (for GET requests from the browser), then copy the data into an online JSON viewer, where you get automatic parsing and can work on it there.
As described by Gideon, this is a known issue.
You can use window.onunload = function() { debugger; } as a workaround.
Alternatively, you can add a breakpoint in the Sources tab, which also solves the problem.
If you make an AJAX request with fetch, the response isn't shown unless it's read with .text(), .json(), etc.
If you just do:
r = fetch("/some-path");
the response won't be shown in dev tools.
It shows up after you run:
r.then(r => r.text())
"Failed to show response data" can also happen if you are doing crossdomain requests and the remote host is not properly handling the CORS headers. Check your js console for errors.
For those who receive this error while requesting large JSON data: as mentioned by Blauhirn, it is not a solution to just open the request in a new tab if you are using authentication headers and the like.
Fortunately, Chrome has other options, such as Copy -> Copy as cURL.
Running this call from the command line through cURL is an exact replica of the original call.
I added > ~/result.json to the end of the command to save the result to a file; otherwise it is output to the console.
For those coming here from Google, and for whom the previous answers do not solve the mystery...
If you use XHR to make a server call but do not return a response, this error will occur.
Example (from Node.js/React, but it could equally be JS/PHP):
App.tsx
const handleClickEvent = () => {
  fetch('/routeInAppjs?someVar=someValue&nutherVar=summat_else', {
    method: 'GET',
    mode: 'same-origin',
    credentials: 'include',
    headers: {
      'content-type': 'application/json',
      dataType: 'json',
    },
  }).then((response) => {
    console.log(response)
  });
}
App.js
app.route('/routeInAppjs').get(async function (req, res) {
  const { someVar, nutherVar } = req.query;
  console.log('Ending here without a return...')
});
The console.log here will report:
Failed to show response data
To fix it, add a returned response at the bottom of your route (server-side):
res.json('Adding this below the console.log in App.js route will solve it.');
I had the same problem and none of the answers worked. Finally I noticed I had made a huge mistake: in the Network panel's resource-type filter bar I had selected Other instead of All.
Now, this seems like a dumb mistake, but the thing is that even after removing and reinstalling Chrome the problem remained (settings are not uninstalled by default when removing Chrome), so it took me a while until I found this and chose All again.
This happened because my backend doesn't handle the OPTIONS method and because I had clicked on Other by mistake, which caused me to spend a couple of days trying answers!
As long as the body of the Response is not consumed within your code (using .json() or .text(), for instance), it won't be displayed in the preview tab of Chrome dev tools.
Bug still active.
This happens when JS becomes the initiator for a new page (200) or a redirect (301/302).
One possible way to fix it is to disable JavaScript on the request.
E.g., in Puppeteer you can use page.setJavaScriptEnabled(false) while intercepting requests (page.on('request')).
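A minimal sketch of that Puppeteer workaround (the target URL is a placeholder):

// Sketch: disable page JS so it cannot initiate the navigation/redirect
// that clears the response data in dev tools.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setJavaScriptEnabled(false);
  await page.setRequestInterception(true);
  page.on('request', request => request.continue());
  await page.goto('https://example.com');
  // ... inspect the responses here ...
  await browser.close();
})();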
Another possibility is that the server does not handle the OPTIONS request.
One workaround is to use Postman with the same request URL, headers, and payload.
It will give a response for sure.
For me, the issue happens when the returned JSON file is too large.
If you just want to see the response, you can get it with the help of Postman. See the steps below:
Copy the request with all its information (URL, headers, token, etc.) from the Chrome debugger via Developer Tools -> Network tab -> find the request -> right click on it -> Copy -> Copy as cURL.
Open Postman, go to Import -> Raw text, and paste the content. Postman will recreate the same request. Then run the request and you should see the JSON response.
(Screenshot: importing cURL into Postman: https://i.stack.imgur.com/dL9Qo.png)
If you want to reduce the size of the API response, you can return fewer fields in it. With Mongoose, you can easily do this by providing a field name list when calling the find() method.
For example, convert the method from:
const users = await User.find().lean();
To:
const users = await User.find({}, '_id username email role timecreated').lean();
In my case, there is a field called description, which is a large string. After removing it from the field list, the response size was reduced from 6.6 MB to 404 KB.
Use Firefox; it always displays the response and gives you the same tools that Chrome does.

How to update Google Drive file using byte range

I'm trying to understand how the Google API works server-side in order to implement my own type of resumable upload. I understand that I can use the MediaFileUpload or MediaInMemoryUpload mechanisms, but I am looking for something much more raw. For example, I want to deliberately upload 1 kB from a file, then later on (like days later) append another 1 kB of the file. Obviously not real figures here, but hopefully you get the idea. Well, here is where I am with the code:
headers = {
    'range': 'bytes=%d-%d' % (
        offset,
        offset + len(data)
    )
}

body = {
    'title': "MyFile.bin",
    'description': "",
    'modifiedDate': datetime.datetime.now().isoformat(),
    'mimeType': 'application/octet-stream',
    'parents': [{ 'id': parentId }]
}

res = http.request(
    url, method="PUT", body=body, headers=headers
).execute()
So as you can see, it is clear where you specify the parameters for the file (file attributes) and the header specification for the request. But where do you specify the actual data stream to be uploaded in that request? Is it the case that I can just specify a media_body in the request?
You need to implement a multipart HTTP request, which is explained at https://developers.google.com/drive/manage-uploads#multipart
I'd recommend using our JS client library and the existing implementation in the API reference, right under the JavaScript tab.
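As a rough sketch of what such a multipart request looks like on the wire, built by hand against the v2 upload endpoint (accessToken and fileBase64 are placeholders; the client libraries assemble this for you):

// Sketch: a multipart/related upload combining metadata and file content.
const boundary = 'drive_multipart_boundary';
const metadata = { title: 'MyFile.bin', mimeType: 'application/octet-stream' };

const body =
  '--' + boundary + '\r\n' +
  'Content-Type: application/json; charset=UTF-8\r\n\r\n' +
  JSON.stringify(metadata) + '\r\n' +
  '--' + boundary + '\r\n' +
  'Content-Type: application/octet-stream\r\n' +
  'Content-Transfer-Encoding: base64\r\n\r\n' +
  fileBase64 + '\r\n' +
  '--' + boundary + '--';

fetch('https://www.googleapis.com/upload/drive/v2/files?uploadType=multipart', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer ' + accessToken,
    'Content-Type': 'multipart/related; boundary=' + boundary
  },
  body: body
}).then(r => r.json()).then(console.log);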
It is not possible, and it is not formally on Google's roadmap to introduce this functionality. The only way to append to a file is to upload the entire file again from scratch.

UrlFetchApp.fetch Link Length Limit

I'm having some trouble with the UrlFetchApp class's fetch() method. I've singled out the issue, and it seems to be that the actual link I'm fetching is just too long.
When I eliminate some needed data (resulting in ~1900 characters), the fetch request is sent fine.
The length limit is somewhere between 2040 and 2060 characters, as that is where it stops working and I receive a "Bad request" error. I'm assuming it's 2048, as that seems to have been the industry standard some time ago.
I need to fetch data from a link that's upwards of 3400 characters! Is this just too long? 2048 characters might have been understandable a while back, but in this day and age it's a limit that is going to be hit quite often.
My question is this: Is there a way around this? I'm assuming Google set the limit; is there some way to request that this limit be raised?
Thank you!
The restriction is on the size of the URL (2 kB), not on its length in characters.
On March 30, 2018, Google deprecated the URL Shortener service that was used in the accepted answer.
I wrote a script to use the Firebase Dynamic Links Short Links API service.
The docs are here if you want to cook your own.
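A sketch of that approach from Apps Script (API_KEY and the example.page.link domain are placeholders you would take from your Firebase project):

// Sketch: shorten a long URL with the Firebase Dynamic Links
// Short Links API, then fetch the short URL instead.
function shortenUrl(longUrl) {
  var response = UrlFetchApp.fetch(
    'https://firebasedynamiclinks.googleapis.com/v1/shortLinks?key=API_KEY',
    {
      method: 'post',
      contentType: 'application/json',
      payload: JSON.stringify({
        longDynamicLink: 'https://example.page.link/?link=' + encodeURIComponent(longUrl)
      })
    }
  );
  return JSON.parse(response.getContentText()).shortLink;
}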
You can try UrlShortener to shorten the URL and then use UrlFetchApp with the shortened URL.
I used the POST method with payload data instead, as shown here:
Google Apps Script POST request UrlFetchApp
The classic code is:
// Make a POST request with form data.
var resumeBlob = Utilities.newBlob('Hire me!', 'text/plain', 'resume.txt');
var formData = {
  'name': 'Bob Smith',
  'email': 'bob@example.com',
  'resume': resumeBlob
};
// Because payload is a JavaScript object, it is interpreted
// as form data. (No need to specify contentType; it automatically
// defaults to either 'application/x-www-form-urlencoded'
// or 'multipart/form-data'.)
var options = {
  'method' : 'post',
  'payload' : formData
};
UrlFetchApp.fetch('https://httpbin.org/post', options);
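Applied to the original problem, the same idea moves the long query data out of the URL and into the payload (a sketch; the endpoint and parameter name are hypothetical, and the target server must accept POST):

// Sketch: send the ~3400 characters of query data as a POST payload
// instead of packing it into the URL.
var params = {
  'query': veryLongQueryString  // the data that previously lived in the URL
};
var options = {
  'method': 'post',
  'payload': params
};
var response = UrlFetchApp.fetch('https://example.com/api', options);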