How to connect a JSON file with After Effects to render dynamic videos

I want to learn how to connect a JSON file with After Effects to render dynamic videos.
For example, I have a form on some webpage:
This form includes one input where people enter their name.
From this form I then create a JSON file, an array of objects like this:
[
  { "name": "John" },
  { "name": "Mike" }
]
For each name in this JSON I want to create a video of a few seconds that shows just the name, and render it as an MP4.
How do I do that?
Which steps should I follow?
If it is a web form, I think I'll need to connect the JSON file dynamically too, right?
So After Effects would read this JSON file from some URL?

There are many ways to go about doing this, but a single answer on Stack Overflow probably won't give you everything you need.
Network communication can be done using the CEP framework provided by Adobe, which can then execute ExtendScript code that actually manipulates the layers inside the AEP project file. You can use Node modules to perform the network communication, and then write ExtendScript code that receives the JSON data and applies it to the project.
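As a very rough ExtendScript sketch of that last step (the comp name, text layer name, and output settings are all assumptions, and the hard-coded data array stands in for the JSON your CEP panel would fetch over the network):

// ExtendScript sketch: fill a text layer from each JSON object and queue a render.
// "NameCard" and "NameText" are assumed names; in a real setup the CEP panel would
// fetch the JSON and pass it to this script (e.g. via evalScript).
var data = [ { name: 'John' }, { name: 'Mike' } ];

function findComp(name) {
    for (var i = 1; i <= app.project.numItems; i++) {
        var item = app.project.item(i);
        if (item instanceof CompItem && item.name === name) {
            return item;
        }
    }
    return null;
}

var comp = findComp('NameCard');
for (var j = 0; j < data.length; j++) {
    // update the text layer with the current name
    comp.layer('NameText').property('Source Text').setValue(data[j].name);

    // queue a render for this name; the container/codec depends on your installed output modules
    var rqItem = app.project.renderQueue.items.add(comp);
    rqItem.outputModule(1).file = new File('~/renders/' + data[j].name + '.mov');
    app.project.renderQueue.render();
}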
While not free, you might want to explore Dataclay's Templater extension to help you accomplish what you want. It not only does what you are asking out of the box, but it also has some rules-based AI to reconfigure layers both temporally and spatially. You can point Templater to a URL that responds with an array of JSON objects and have it process that data. In addition, it has event hooks which allow you to execute any script in the shell or with the ExtendScript engine during its versioning process.
Hope this helps!

Related

Dynamically update the request JSON and send it as multipart form data in Karate [duplicate]

In my Karate tests I need to write response IDs to txt files (or any other format such as JSON). I was wondering if Karate has any capability to do this; I haven't seen it mentioned in the documentation. If not, is there a simple JavaScript function to do so?
Try the karate.write(value, filename) API, but we don't encourage it. Also, the file will be written only to the current "build" directory, which will be target for Maven projects and the stand-alone JAR.
value can be any data type, and Karate will write the bytes (or plain text) out. There is no built-in support for any other format.
Here is an example.
EDIT: for others coming across this answer in the future, the right thing to do is:
Don't write files in the first place; you never need to do this. This question is typically asked by inexperienced folks who for some reason think that the only way to "save" a response before validation is to write it to a file. Please don't waste your time - just match against the response. You can save it (or parts of it) to variables while you make other HTTP requests. And do not write your tests so that scenarios (or features) depend on other scenarios; this is a very bad practice. Also note that by default, Karate will dump all HTTP requests and responses in the log file (typically target/karate.log) and also in the HTML report.
see if karate.write() works for you as per this answer
write a custom Java (or JS function that uses the JVM) to do what you want using Java interop (see the sketch below)
Also note that you can use karate.toCsv() to convert JSON into CSV if needed.
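As a rough sketch of the Java-interop option above (illustrative only; the function name and the path handling are made up, and you would typically def this in a feature or in karate-config.js and pass it the response):

function writeToFile(contents, path) {
  // Java interop from Karate's JS engine: FileWriter is a standard JDK class
  var FileWriter = Java.type('java.io.FileWriter');
  var fw = new FileWriter(path);
  fw.write(contents);
  fw.close();
  return path;
}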
My justification for writing to a file is a different one. I am using Karate explicitly to implement a mock. I want to expose an endpoint to which the upstream system will send some basic data as a JSON payload using a POST/PUT method; Karate will construct the subsequent payload file and store it in a specific folder, and this newly created payload file will then be exposed through another GET call.

Flutter: rendering a UI from JSON and storing / mapping data dynamically

I want to render a UI from a JSON string that has multiple layers.
The user should be able to enter data, which will then be stored and shared, without the overhead of the JSON render structure.
However, assigning the data (mapping it back to its fields) must still be possible.
The app renders a template from a multidimensional JSON string that can capture metrics (user inputs). The measured data are entered by the user via text fields.
There are different windows in the app, which are rendered from different JSON UI render files.
The algorithms stored in the windows on the frontend differ.
The following should be possible:
All windows are created from different JSON strings (this works with build_value now).
The user's input is saved. (Currently this only works by saving the render JSON string with the data under a different name, using the Shared Preferences package.)
The data entered by the user is copied from one window to the other window. (Data binding / data mapping.)
The data entered by the user is sent to a backend.
My only idea so far is to use IDs in the render JSON, which would allow a mapping.
Are there better solutions?
Saving the entered data is possible by saving the whole JSON string under another name.
The goal is to map the data.
It should also be possible for the user to insert another object for measurement data acquisition on the client / device. That entered data must also be saved.
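To illustrate the ID idea, here is a hypothetical split between the render structure and the captured values (all field names are made up); only the values map would need to be stored, shared and sent to the backend:

{
  "render": [
    { "id": "weight", "type": "textfield", "label": "Weight (kg)" },
    { "id": "height", "type": "textfield", "label": "Height (cm)" }
  ],
  "values": {
    "weight": "82",
    "height": "178"
  }
}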
This sounds like a REST API. You would then use a frontend framework like Angular or React to take user input.
I would build this with a .NET Core Web API with a SQL DB as the backend; this would be a web service (REST API solution) on the backend of your solution.
Next I would integrate Swagger with dependency injection and add authentication.
Now it's time to create the frontend that will use your tested web service.
One side of your SPA would POST new JSON to the web service from a form.
The other side would GET the new record, in a new tab or in a table with pagination.
This would be a good PoC for your idea and would let you learn the parts of the solution architecture in your language of choice; the "design" would be the same no matter which language you choose.
The backend web service could also be done in Flask with a MySQL DB, or with any other combo you have skills with.
The frontend could be done in Knockout.js, which is a little easier to learn than Angular or React.
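As a concrete sketch of that POST/GET shape, here is a minimal example (Node/Express is used purely as a stand-in for .NET Core, Flask, or whatever stack you pick; the routes and the in-memory store are made up):

// Minimal Express sketch of the PoC API (illustrative only; an in-memory array
// stands in for the SQL / MySQL database mentioned above).
const express = require('express');
const app = express();
app.use(express.json());

const records = []; // would be a database table in the real backend

// the form side of the SPA POSTs new JSON here
app.post('/api/records', (req, res) => {
  const record = { id: records.length + 1, ...req.body };
  records.push(record);
  res.status(201).json(record);
});

// the other side GETs records back, with simple pagination
app.get('/api/records', (req, res) => {
  const page = Number(req.query.page) || 1;
  const size = Number(req.query.size) || 10;
  res.json(records.slice((page - 1) * size, page * size));
});

app.listen(3000, () => console.log('PoC API listening on :3000'));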
Please create new questions when you choose your software design. I would love to give answers for each:)
I would love to level up with you! ;)

Skipping precaching: Cannot read property 'concat' of null

Here's my question: How might I try to get rid of the 'skipping precaching' and cache everything that comes in from https://laoadventist.info/beta/r as the precache list?
Also, is it correct for me to set precache="https://laoadventist.info/beta/r" or should I be setting that to a function that grabs the data and returns it instead?
Skipping precaching: Cannot read property 'concat' of null
comes out on the console when using my Polymer app, with the cache element configured as:
<platinum-sw-cache default-cache-strategy="fastest" cache-config-file="cache-config.json" precache="https://laoadventist.info/beta/r">
Am I correct in assuming I can precache a URL like this?
I am trying to load a JSON result from Laravel 5.1 to set what my precache should be... I know it's not the most elegant, but I'm new to Polymer, caching, service workers, etc., and am using this app as a learning opportunity. It'll be a bit different at the end of the day, but for now I just want to load everything. :)
I want to precache all of the data so that a user can fully utilize this app when offline (though later I'll set it up so that they don't have to precache loads and loads of JSON requests, only the ones they want, like per book - but that's for later).
If you have an array of resource URLs that you want precached, the cleanest way to specify them is to use the cacheConfigFile option and point to a JSON file that contains your array as its precache property. Take a look at the example in the docs for cacheConfigFile: https://elements.polymer-project.org/elements/platinum-sw?active=platinum-sw-cache
You shouldn't have to use the precache attribute on the element if you're using cacheConfigFile.
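For illustration, a minimal cache-config.json along those lines might look like this (the entries are placeholders; in your case the real list would come from your build or from the data behind /beta/r):

{
  "precache": [
    "/index.html",
    "/elements/elements.html",
    "https://laoadventist.info/beta/r"
  ]
}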
It sounds like you're using Polymer Starter Kit, and that will create the JSON config file and populate it for you with an array corresponding to your local resources. But if you'd like to specify additional resources to be precached, you can modify the build process to append those URLs to the auto-generated list.
You're seeing that error because you're pointing to a JSON config file that is effectively empty and is just meant for the development environment.

HTTP request from XQuery code inside a MarkLogic pipeline

I want to send text using the POST method to another HTTP server and receive a JSON file in response, from XQuery code in a MarkLogic pipeline. The idea is that whenever an XML or JSON document is inserted into MarkLogic, it triggers a pipeline that reads it and sends one element to another web server; in my case I want to send it to a Rosoka server to do natural language processing, and then store the returned data (a JSON file) in MarkLogic.
I would appreciate it if you could help.
The MarkLogic documentation only mentions that this is possible, with no further help.
There are many ways to do this.
A start is to look at "Triggers"
https://docs.marklogic.com/guide/app-dev/triggers
There is also the Content Processing Framework ("CPF"), which is a higher-level workflow built on triggers.
https://docs.marklogic.com/guide/cpf
If you read those guides you should be able to ask more specific questions if needed.
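As a very rough sketch of what a trigger module could do, here is one in MarkLogic's server-side JavaScript (the XQuery equivalents are xdmp:http-post and xdmp:document-insert); the element name, the Rosoka endpoint, and the options shape are all assumptions for illustration:

// Hypothetical trigger module: POST one element of the new document to the NLP
// service and store the JSON response back into MarkLogic.
declareUpdate();

var uri; // URI of the document that fired the trigger (bound by the trigger framework)

const doc = cts.doc(uri);
const text = fn.string(fn.head(doc.xpath('//description'))); // assumed element to analyze

const builder = new NodeBuilder();
builder.addText(text);

// assumed endpoint and header options
const response = xdmp.httpPost(
  'http://rosoka-host:8080/rest/process',
  { headers: { 'content-type': 'text/plain' } },
  builder.toNode()
);

// xdmp.httpPost returns the response metadata first, then the body
const body = fn.head(fn.tail(response));
xdmp.documentInsert(uri + '.rosoka.json', body);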

How can I get the JSON object which represents a Yahoo! pipe

It seems that Yahoo Pipes are represented using JSON. I want to download these JSON objects for research purposes. Usually a Yahoo pipe is rendered in a browser editor through a URL like this: http://pipes.yahoo.com/pipes/pipe.edit?_id=XgRo96h13BGtJWvS8SvLAg, but you can't get the JSON object corresponding to this pipe. Does anyone know how to get the JSON objects representing Yahoo pipes and store them in some persistent form?
It is possible to get hold of a JSON description of a Yahoo Pipe using a URL of the form:
http://pipes.yahoo.com/pipes/pipe.info?_out=json&_id=PIPE_ID
The pipe2py python library demonstrates how to grab the JSON description of a pipe and "compile" it to a Python equivalent that can be run on your own server.
The post Exporting Yahoo Pipe Definitions, Compiling Them to Python, and Running Them in Scraperwiki describes how you can use pipe2py in the Scraperwiki environment to compile and execute pipes on Scraperwiki using pipe definitions imported directly from Yahoo Pipes, or exported from Yahoo Pipes and then stored locally in a Scraperwiki database table.
When I load that page in a browser I can see that it makes an ajax request for:
http://pipes.yahoo.com/pipes/ajax.pipe.load?id=XgRo96h13BGtJWvS8SvLAg&_out=json&modinfo=true&rnd=7560&.crumb=MjvGjpzhPLl
That's your object, but I'm not sure if I'm answering your question of how to "get it". If you need to get it through a program, you would need a script that logs into Pipes and extracts that URL.
A quick way, while not automated, is to use an HTTP analyzer. Here's a process for getting the object using HttpFox (I use v0.8.9) for Firefox. With the analyzer running, load the edit page for a pipe, like the one you linked:
http://pipes.yahoo.com/pipes/pipe.edit?_id=XgRo96h13BGtJWvS8SvLAg
Look at the request with a URL that starts with:
http://pipes.yahoo.com/pipes/ajax.pipe.load?id=....
Next, explore the content of the request (there's a 'Content' tab in HttpFox). That's the JSON object representing the pipe structure.
Use pipe.run?[your pipe id here]&_render=json as opposed to pipe.edit.
So in your case, to get the JSON it would be: http://pipes.yahoo.com/pipes/pipe.run?_id=XgRo96h13BGtJWvS8SvLAg&_render=json
I guess how you implement the client depends on what you like writing in and what other functionality you need.
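For example, a small Node.js sketch of such a client (purely illustrative; it assumes the pipes.yahoo.com endpoint above is still reachable and simply writes the JSON to disk):

// Hypothetical client: download a pipe's JSON definition and persist it to a local file.
const fs = require('fs/promises');

async function savePipeJson(pipeId) {
  const url = 'http://pipes.yahoo.com/pipes/pipe.run?_id=' + pipeId + '&_render=json';
  const res = await fetch(url); // global fetch, Node 18+
  if (!res.ok) {
    throw new Error('Request failed with status ' + res.status);
  }
  const json = await res.json();
  await fs.writeFile(pipeId + '.json', JSON.stringify(json, null, 2));
  return json;
}

savePipeJson('XgRo96h13BGtJWvS8SvLAg').catch(console.error);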
You could also do it the other way around and use the Web Service module to post the data to a script that can extract the JSON and persist it to a database. You could check out json.org.