Using SAS 9.4 M6, is there an easy way to convert a .sas7bdat file into JSON format?
This is needed inside an STP (stored process), so that the JSON can be included in some way in a PROC STREAM.
Either have a function/macro which can print the file, formatted, inside a PROC STREAM, or create a separate STP which only returns a file as JSON, so that it can be loaded in the background from a web application.
SAS can write JSON in several ways - the two easiest are:
PROC JSON - this allows you to write JSON, within certain structural limits. If the JSON you're writing is fairly simple, this is a very easy way to do it - just pass it a dataset and it will make a single-level JSON file from it. It doesn't work as well if you have complicated structures.
See also the PROC JSON Tip sheet.
The data step - if you need something more complicated, just write it with the data step! I've written custom macros before to simplify it - things like %write_array to write an array or %start_object to start a new object; it does get a bit complex, though. This is the only way to build truly complex structures beyond what PROC JSON can do.
PROC JSON is as simple as a PROC EXPORT, though... the example from the documentation says it all:
proc json out="C:\Users\sasabc\JSON\DefaultOutput.json";
   export sashelp.class;
run;
To include it in a stream, you'd either use the _webout fileref, or, if you really need PROC STREAM specifically, you could write it to a file, read it into a big macro variable (or maybe use %include), and then write it back out during the PROC STREAM execution. But I suspect you can use _webout here and skip PROC STREAM entirely.
Here's an example using PROC JSON with _WEBOUT in a stored process.
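A minimal sketch of that pattern (assuming the stored process is defined with streaming output; the header call and sashelp.class stand in for your own setup and data):

/* Tell the client we're sending JSON, then stream it to _WEBOUT */
%let rc=%sysfunc(stpsrv_header(Content-type, application/json));

proc json out=_webout pretty;
   export sashelp.class;   /* replace with your own dataset */
run;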
PROC JSON is great, but it can sometimes cause issues if your data contains invalid characters - in that case you might want to use this macro: https://core.sasjs.io/mp__jsonout_8sas.html
FYI, my team built an entire (open source) framework to enable rapid development of SAS-powered web applications - it's called sasjs.
You can use it to deploy both backend (data) services and even the frontend (as a streamed app). It works the same on both SAS 9 and Viya, so there are no issues with upgrades. Further info here: https://sasjs.io/resources/
I have a CloudFormation template that consists of a Lambda function that reads messages from the SQS Queue.
The Lambda function will read the message from the queue and transform it using a JSON template (which I want to be injected externally).
I will deploy different stacks for different products and for each product I will provide different JSON templates to be used for transformation.
I have different options but couldn't decide which one is better:
I can write all the JSON files under the project, pack them together, and pass the relevant JSON name as a parameter to the Lambda.
I can store the JSON files on S3 and pass the S3 URL to the Lambda so I can read them at runtime.
I can store the JSON files in DynamoDB and read from there, using the same approach as option 2.
The first one seems like the better approach as I don't need to read from an external file on every Lambda execution, but I will need to pack all templates together.
The last two are a cleaner approach but require an external call to read the JSON on every invocation.
Another approach could be (I'm not sure if it is possible) to inject a JSON file into the Lambda on deploy, from an S3 bucket or similar, and have the Lambda function read it like an environment variable.
As you can see from the CloudFormation documentation, Lambda environment variables can only be a map of strings, so the actual value you pass to the function as an environment variable must be a string. You could pass your JSON as a string, but the problem is that the maximum size for all environment variables combined is 4 KB.
If your templates are bigger and you don't want to call S3 or DynamoDB at runtime, you could work around it with a simple shell script that copies the correct template file into the Lambda folder before building and deploying the stack. That way the Lambda gets deployed in a package containing the code and only the desired JSON template.
I decided to go with the S3 setup and also improved efficiency by storing the JSON in a global variable (after reading it the first time), so I read it once and reuse it for the lifetime of the Lambda container.
I'm not sure this is the best solution, but it works well enough for my scenario.
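For reference, here's a minimal sketch of that caching pattern for a Python Lambda (the TEMPLATE_BUCKET/TEMPLATE_KEY environment variable names are hypothetical placeholders):

import json
import os

import boto3

s3 = boto3.client("s3")
_template = None  # module-level: survives across invocations in one container


def _get_template():
    global _template
    if _template is None:  # only hit S3 on the first (cold) invocation
        obj = s3.get_object(Bucket=os.environ["TEMPLATE_BUCKET"],
                            Key=os.environ["TEMPLATE_KEY"])
        _template = json.loads(obj["Body"].read())
    return _template


def handler(event, context):
    template = _get_template()
    for record in event.get("Records", []):  # SQS messages
        payload = json.loads(record["body"])
        # ... transform payload using template ...
    return "ok"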
Tornado 4.5.2 on Python 3 represents the request body as a bytes object instead of a native dictionary. This presents a problem for methods like RequestHandler.get_body_argument(), which will not access the field correctly.
My question is how to correctly have Tornado parse these bodies into more usable dictionaries so the standard library methods will work. I've looked throughout Tornado's documentation and there's next to nothing on even the existence of this problem.
Am I missing something here or will I need to re-implement those methods myself?
Tornado never automatically parses JSON; it only automatically parses HTML-standard form encoding (the data models of form encoding and JSON are different, so it wouldn't make sense to use the same family of get_argument/get_arguments methods for the less-ambiguous JSON format). If you want to handle JSON requests, it's one line to parse it yourself:
args = tornado.escape.json_decode(self.request.body)
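For instance, a minimal handler might look like this (the URL pattern and field name are illustrative):

import tornado.escape
import tornado.web


class JSONHandler(tornado.web.RequestHandler):
    def post(self):
        # self.request.body is bytes; json_decode parses it into a dict
        args = tornado.escape.json_decode(self.request.body)
        # write() serializes a dict back out as a JSON response
        self.write({"hello": args.get("name")})


app = tornado.web.Application([(r"/api", JSONHandler)])  # wire up and listen as usual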
I am loading data from a MongoDB collection into a MySQL table through a Kettle transformation.
First I extract the documents using the MongoDB Input step, and then I use the JSON Input step.
But since the JSON Input step has very low performance, I wanted to replace it with a JavaScript step.
I am a beginner in JavaScript, and even though I tried some things, the Kettle JavaScript step is not recognizing any keywords.
Can anyone give me sample code to convert JSON data into different columns using JavaScript?
To solve your problem you need to look at three aspects:
Reading from MongoDB
Reading from JSON
Reading from (probably) String
Reading from MongoDB: unless you changed the interface, MongoDB returns not JSON but BSON (~binary JSON). You need to check the MongoDB documentation about reading and writing BSON: probably something like BSON.to() and BSON.from(), but I don't know it by heart.
Reading from JSON: once you have your data in JSON form, you can serialize it with JSON.stringify(), which returns a String.
Reading from (probably) String: if you want to use the capabilities of JSON (why else would you use JSON?), you also want JSON.parse(), which turns the String back into a JavaScript object.
My experience is that to send a JSON object from one step to another, using a String is not a bad idea: at the end of one JavaScript step you write your JSON object to a String, and at the beginning of the next JavaScript step (which can be further down the stream) you parse it back into an object to work with.
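As an illustration, a sketch of a Modified Java Script Value step that does this (the incoming field name json_str is hypothetical, and older Kettle releases may need a JSON polyfill such as json2.js, since native JSON support depends on the bundled Rhino version):

// Parse the JSON string arriving on the input field json_str
var obj = JSON.parse(json_str);

// Pull values out into new output columns; declare these
// names in the step's output-fields grid as well.
var first_name = obj.firstName;
var age        = obj.age;

// To hand the structure to a later JavaScript step, serialize it again
var obj_as_string = JSON.stringify(obj);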
I hope this answers your question.
PS: writing JavaScript steps requires you to learn JavaScript. You don't have to be a master, but the basics are required. There is no way around it.
You could use the JSON Input step to get the values of this JSON and put them into regular rows.
It seems that Yahoo Pipes are represented using JSON. I want to download these JSON objects for research purposes. Usually a Yahoo pipe is rendered in a browser editor through a URL like this: http://pipes.yahoo.com/pipes/pipe.edit?_id=XgRo96h13BGtJWvS8SvLAg, but you can't get the corresponding JSON object for the pipe that way. Does anyone know how to get the JSON objects representing Yahoo pipes and store them in some persistent form?
It is possible to get hold of a JSON description of a Yahoo Pipe using a URL of the form:
http://pipes.yahoo.com/pipes/pipe.info?_out=json&_id=PIPE_ID
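A quick sketch of fetching and persisting one of those definitions (Python; the pipe id is the one from the question):

import json
import urllib.request

pipe_id = "XgRo96h13BGtJWvS8SvLAg"
url = "http://pipes.yahoo.com/pipes/pipe.info?_out=json&_id=" + pipe_id

with urllib.request.urlopen(url) as resp:
    pipe_def = json.load(resp)           # the pipe's JSON description

with open(pipe_id + ".json", "w") as f:  # persist it to disk
    json.dump(pipe_def, f, indent=2)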
The pipe2py python library demonstrates how to grab the JSON description of a pipe and "compile" it to a Python equivalent that can be run on your own server.
The post Exporting Yahoo Pipe Definitions, Compiling Them to Python, and Running Them in Scraperwiki describes how you can use pipe2py in the Scraperwiki environment to compile and execute pipes on Scraperwiki using pipe definitions imported directly from Yahoo Pipes, or exported from Yahoo Pipes and then stored locally in a Scraperwiki database table.
When I load that page in a browser I can see that it makes an AJAX request for:
http://pipes.yahoo.com/pipes/ajax.pipe.load?id=XgRo96h13BGtJWvS8SvLAg&_out=json&modinfo=true&rnd=7560&.crumb=MjvGjpzhPLl
That's your object, but I'm not sure I'm answering your question of how to "get it". If you need to get it through a program, you would need a script that logs into Pipes and extracts that URL.
A quick way, while not automated, is to use an HTTP analyzer. Here's a process for getting the object using HttpFox (I use v0.8.9) for Firefox. With the analyzer running, load the edit page for a pipe, like the one you linked:
http://pipes.yahoo.com/pipes/pipe.edit?_id=XgRo96h13BGtJWvS8SvLAg
Look at the request with a URL that starts with:
http://pipes.yahoo.com/pipes/ajax.pipe.load?id=....
Next, explore the content of the request (there's a 'Content' tab in HttpFox). That's the JSON object representing the pipe structure.
Use pipe.run?[your pipe id here]&_render=json as opposed to pipe.edit
So in your case, to get the JSON it would be: http://pipes.yahoo.com/pipes/pipe.run?_id=XgRo96h13BGtJWvS8SvLAg&_render=json
I guess how you implement the client is dependent on what you like writing in/what other functionality you need.
You could also do it the other way around and use the web service module to post the data to a script that can extract the JSON and persist it to a database. You could check out json.org.
Is there a package similar to HTML::Template in Perl which takes a JSON object and maps it onto an HTML template file? I am building a web application using HTML::Template and will be receiving JSON from a web services API; things would be simpler if I could templatize this JSON to HTML instead of doing it the exact way HTML::Template requires.
HTML::Template just takes a data structure consisting of strings, hashes and arrays. JSON maps directly onto that.
$template->param(myData => JSON::Any->new->decode($json_string));
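Fleshed out, a small sketch (the template file name and JSON payload are illustrative):

use strict;
use warnings;
use HTML::Template;
use JSON::Any;

# A JSON array of objects maps directly onto a TMPL_LOOP
my $json_string = '[{"name":"Alice"},{"name":"Bob"}]';

my $template = HTML::Template->new(filename => 'list.tmpl');
$template->param(myData => JSON::Any->new->decode($json_string));
print $template->output;

where list.tmpl might contain:

<TMPL_LOOP myData><p><TMPL_VAR name></p></TMPL_LOOP>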
HTML::Template is a rather 'simple' templating engine - I am using quotes because its simplicity lets you do whatever you need in the View part of the Model-View-Controller architecture.
However, you cannot execute arbitrary Perl code inside an HTML::Template.
Also, because JSON can contain very complex data structures, I doubt there is any straightforward way of using JSON directly in your templates.
The only solution I see is to use a Perl script that parses the JSON, creates some 'objects', and passes them to your templates. You already have that Perl script - it's the one that instantiates your HTML::Template object.
OK, a bit late, but: HTML::Template always wants a hash of arrays of hashes and so on, and you cannot navigate through the parameter stash.
If you want to do that, you might try HTML::Template::Compiled, which allows it:
<tmpl_var some_hash.key.another_key[23] >
or with alternative delimiters:
[%= some_hash.key.another_key[23] %]
But note the documented differences between that module and HTML::Template.
So you decode your JSON string into a data structure, pass it to the template, and then you can access any value, however deep in the structure it sits.
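Putting that together, a sketch (the data and inline template are illustrative, using the dot-path syntax shown above):

use strict;
use warnings;
use HTML::Template::Compiled;
use JSON qw(decode_json);

my $data = decode_json('{"some_hash":{"key":{"another_key":[10,20,30]}}}');

my $htc = HTML::Template::Compiled->new(
    scalarref => \'<tmpl_var some_hash.key.another_key[1] >',
);
$htc->param(%$data);
print $htc->output;   # prints 20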