How to extract a large JSON object from Firefox's developer console

I'm working on a web app that produces large JSON objects. I log these objects via console.log, and I'm now looking for a way to extract them and store them in a text file.
The actual problem is the size of these objects.
My approach so far has been to store the JSON in a local variable
and then call JSON.stringify(temp0). But afterwards Firefox will not print the whole string.

(I am German and my English is not great, so I don't know the exact terms, but:)
If the JSON comes from a remote source (or locally from a service), i.e. it is not generated inside the script, then don't look at the Console tab.
Look at the Network tab instead, select the request (by URL), and the Response tab there gives you everything you need.
Note that this only works for requests made via GET/POST/PUT etc.
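If the JSON is generated inside the script, so the Network tab will not show it, one workaround is to avoid printing it at all and instead pull the string out of the page. A rough sketch, not from the answer above: bigObject is a placeholder for whatever your app produces, and in the Firefox console you can also try copy(JSON.stringify(temp0)) to put the string on the clipboard.

// Instead of printing, serialize the object and trigger a file download.
// bigObject is a placeholder for the real object your app builds.
function downloadJSON(obj, filename) {
    var blob = new Blob([JSON.stringify(obj, null, 2)], { type: 'application/json' });
    var a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = filename;
    document.body.appendChild(a);
    a.click();
    document.body.removeChild(a);
}

downloadJSON(bigObject, 'dump.json');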

Related

How to connect a JSON file with After Effects for rendering dynamic video

I want to learn how to connect a JSON file with After Effects for rendering dynamic videos.
E.g. I have a form on some web page; the form includes one input where people enter their name.
From this form I then create a JSON file, an array of objects like this:
data = [
{
name: 'John'
},
{
name: 'Mike'
}
]
Using these JSON objects I want to create, for each name, a video of a few seconds in which just that name from the JSON is shown, and render it as an MP4 video.
How do I do that?
Which steps should I follow?
If it is driven by a web form, I think I'll need to connect the JSON file dynamically too, right?
So After Effects will read this JSON file from some URL?
There are many ways to go about doing this, but a single answer on Stack Overflow probably won't give you everything you need.
Network communication can be done using the CEP framework provided by Adobe, which can then execute ExtendScript code that actually manipulates the layers inside the AEP project file. You can use node modules to perform the network communication and then pass the JSON data into your ExtendScript code.
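To give a rough idea of the ExtendScript side, here is an untested sketch. It assumes a comp named NameTemplate with a text layer named NameText; the comp, layer, and output names are placeholders, and the JSON is assumed to have already been handed over by the CEP/node side.

// ExtendScript sketch (runs inside After Effects).
// `data` stands in for the parsed JSON received from the CEP/node layer.
var data = [{ name: 'John' }, { name: 'Mike' }]; // placeholder data

// Find the template comp by name (app.project.item() is 1-based).
var comp = null;
for (var i = 1; i <= app.project.numItems; i++) {
    if (app.project.item(i).name === 'NameTemplate') { comp = app.project.item(i); }
}

for (var j = 0; j < data.length; j++) {
    // Overwrite the text layer's source text with the current name.
    comp.layer('NameText').property('Source Text').setValue(data[j].name);

    // Queue and render this version before the text changes again.
    // The output path is a placeholder; real output-module settings are omitted.
    var rqItem = app.project.renderQueue.items.add(comp);
    rqItem.outputModule(1).file = new File('~/Desktop/' + data[j].name + '.mp4');
    app.project.renderQueue.render();
}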
While not free, you might want to explore Dataclay's Templater extension to help you accomplish what you want. It not only does what you are asking out of the box, but it also has some rules-based AI to reconfigure layers both temporally and spatially. You can point Templater to a URL that responds with an array of JSON objects and have it process that data. In addition, it has event hooks that allow you to execute any script in the shell or with the ExtendScript engine during its versioning process.
Hope this helps!

WP REST API v2 JSON endpoints appear difficult to read

I find the WP REST API very interesting for building custom functionality in WordPress websites. However, I find it hard to read my JSON endpoints' results.
The normal output of a JSON endpoint is wrapped in html and pre tags. The result appears as one long line of compressed text.
I need to integrate my website with a mobile app that will be built by another developer, and I would like the API endpoints (e.g. link) to appear as a regular JSON object.
I'm trying to find a workaround, like a hook or a filter, to make the JSON results appear as desired. Equivalent AJAX-related code would also be nice.
I use the JSON Formatter Chrome extension to view the results; it prints them with readability in mind.
https://github.com/callumlocke/json-formatter
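If you mainly need readable output while developing, a small client-side sketch can also do it without an extension: fetch the endpoint and re-serialize it with indentation. The URL below is a placeholder for your own /wp-json/ route.

// Fetch a WP REST API endpoint and pretty-print the JSON.
// The URL is a placeholder; point it at your own route.
fetch('https://example.com/wp-json/wp/v2/posts')
    .then(function (response) { return response.json(); })
    .then(function (data) {
        // JSON.stringify with an indent of 2 gives the readable form you're after.
        console.log(JSON.stringify(data, null, 2));
    })
    .catch(function (err) { console.error(err); });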

How to export a request JSON string using the Developer Tools / Network tab of FF or Chrome

We want to rewrite a large web project. To make the work safer, we want to cover it with numerous API tests extracted by peeking at the real web calls (and, let's be honest, from code analysis too).
Thus I am trying to extract the JSON strings sent by different requests. The problem is that the tool provided by the browser (it is practically the same for both FF and Chrome) gives me the JSON in a structured form, and I need it as strings.
Rewriting all the large, deeply structured strings from more than a hundred requests by hand would be a horror. How can I copy and paste the string representation of the request parameters?
I have found that in Chrome, near the "Request Payload" header, there is a switch: view source <-> view parsed. The first variant shows the JSON string. BTW, IE has buttons for that, and FF... has nothing?
In Firefox: Right click > Copy > Copy POST Data.
You can also "Copy All As HAR" to get the raw body (of every request and response in the list), and "Edit and Resend" will show you the raw body in the UI.
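If you go the "Copy All As HAR" route, getting the raw request bodies back out is easy, because a HAR file is itself JSON. A minimal Node sketch; requests.har is a placeholder for whatever file you saved from the devtools.

// Node sketch: extract raw request bodies from an exported HAR file.
const fs = require('fs');

const har = JSON.parse(fs.readFileSync('requests.har', 'utf8'));

har.log.entries.forEach((entry) => {
    const body = entry.request.postData && entry.request.postData.text;
    if (body) {
        // body is the JSON string exactly as it was sent on the wire.
        console.log(entry.request.url, body);
    }
});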

JSON and changing datasets

I am fairly new to JSON and I want to create a choropleth example like this one: http://gabrielflor.it/a-half-decade-of-rising-poverty Whenever a year is clicked, it just switches to a different portion of the JSON (I'm assuming). Is this how functionality like this is usually done, to avoid redrawing the whole map and loading another JSON file? If so, can't these JSON files get quite large?
Using a JSON file is just a way to store the values you need for each year. When you switch to another year, the JS parses the JSON for the given year and updates the choropleth. For the example you provided, here is the JSON used:
http://gabrielflor.it/static/data/saipe.json
This is a good approach, since you have only one JSON file with every year you need and you load it only once. However, since d3 needs its data this way, I think you should add another JSON file if you want to provide additional data, as in gabrielflor's example:
http://gabrielflor.it/static/js/d3.poverty-by-county.js?v=121107
He loads JSON like this with d3:
d3.json('../static/data/states.json', function (json) {
    states = json;
});
or
d3.json('../static/data/saipehighlights.json', function (json) {
    saipehighlights = json;
});
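To make the load-once idea concrete, here is a minimal sketch of the pattern. The data layout (values keyed by year and county id), the path.county selector, and the colour scale are all assumptions for illustration, not taken from gabrielflor's actual code.

// Load the year data once, then re-colour in memory when a year is clicked.
var colorScale = d3.scale.linear().domain([0, 50]).range(['#eee', '#b10026']); // assumed scale
var saipe;

d3.json('../static/data/saipe.json', function (json) {
    saipe = json;      // keep the whole file in memory
    showYear('2010');  // initial year
});

// Called from the year buttons: no new request, just a re-colour.
function showYear(year) {
    d3.selectAll('path.county')
        .style('fill', function (d) {
            return colorScale(saipe[year][d.id]); // assumed layout: saipe[year][countyId]
        });
}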
If you look at the network traffic for the example page you gave (e.g. by using the Chrome Developer Tools), you'll see that the file with the poverty data is quite large, and the mapping data file is even larger. You'll notice that it takes longer for the website to load, but afterwards it runs very smoothly in the client without making any server calls.
The site is just about browsing information and nice design; for that purpose I think a longer load time is quite acceptable if the user experience afterwards is smoother (i.e. the user doesn't have to wait for year data to load).

How can I get the json object which represents a Yahoo! pipe

It seems that Yahoo Pipes are represented using JSON. I want to download these JSON objects for research purposes. Usually a Yahoo Pipe is rendered in a browser editor through a URL like this: http://pipes.yahoo.com/pipes/pipe.edit?_id=XgRo96h13BGtJWvS8SvLAg, but that doesn't give you the corresponding JSON object for the pipe. Does anyone know how to get the JSON objects representing Yahoo Pipes and store them in some persistent form?
It is possible to get hold of a JSON description of a Yahoo Pipe using a URL of the form:
http://pipes.yahoo.com/pipes/pipe.info?_out=json&_id=PIPE_ID
The pipe2py python library demonstrates how to grab the JSON description of a pipe and "compile" it to a Python equivalent that can be run on your own server.
The post Exporting Yahoo Pipe Definitions, Compiling Them to Python, and Running Them in Scraperwiki describes how you can use pipe2py in the Scraperwiki environment to compile and execute pipes on Scraperwiki using pipe definitions imported directly from Yahoo Pipes, or exported from Yahoo Pipes and then stored locally in a Scraperwiki database table.
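If you just want to persist those definitions yourself, a small Node sketch can fetch the pipe.info URL for a list of pipe ids and write each response to disk. The ids and file names are placeholders, and of course the Pipes service has to be reachable for this to work.

// Node sketch: download the JSON description of each pipe and save it to disk.
const fs = require('fs');
const http = require('http');

const pipeIds = ['XgRo96h13BGtJWvS8SvLAg']; // placeholder ids

pipeIds.forEach((id) => {
    const url = 'http://pipes.yahoo.com/pipes/pipe.info?_out=json&_id=' + id;
    http.get(url, (res) => {
        let body = '';
        res.on('data', (chunk) => { body += chunk; });
        res.on('end', () => fs.writeFileSync(id + '.json', body));
    });
});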
When I load that page in a browser I can see that it makes an ajax request for:
http://pipes.yahoo.com/pipes/ajax.pipe.load?id=XgRo96h13BGtJWvS8SvLAg&_out=json&modinfo=true&rnd=7560&.crumb=MjvGjpzhPLl
That's your object, but I'm not sure if I'm answering your question of how to "get it". If you need to get it through a program, you would need a script that logs into Pipes and extracts that URL.
A quick way, while not automated, is to use an HTTP analyzer. Here's a process for getting the object using HttpFox (I use v0.8.9) for Firefox. With the analyzer running, load the edit page for a pipe, like the one you linked:
http://pipes.yahoo.com/pipes/pipe.edit?_id=XgRo96h13BGtJWvS8SvLAg
Look at the request with a URL that starts with:
http://pipes.yahoo.com/pipes/ajax.pipe.load?id=....
Next, explore the content of the request (there's a 'Content' tab in HttpFox). That's the JSON object representing the pipe structure.
Use pipe.run?[your pipe id here]&_render=json as opposed to pipe.edit
So in your case to get the json it would be - http://pipes.yahoo.com/pipes/pipe.run?_id=XgRo96h13BGtJWvS8SvLAg&_render=json
I guess how you implement the client depends on what you like writing in and what other functionality you need.
You could also do it the other way around and use the Web Service module to post the data to a script that can extract the JSON and persist it to a database. You could check out json.org.