Can you recommend a JSON viewer application? [closed]

I need one that will indent the JSON for ease of reading.
The well-known simple file editors (EditPlus, Notepad++) don't have a JSON document type.
There are a lot of online sites with a built-in viewer, but I need an offline one.
Thanks.

I have used JSON Viewer; it works nicely and is lightweight. However, it really comes down to your taste and whether you require specific features, so it is probably worth taking a look at screenshots of several different viewers first.

The Notepad++ plugin NPP JSON Viewer worked well for me too. It is a little slow with large amounts of data, but it does a good job of formatting long JSON strings and displays a structured tree of the data for click-through navigation. It would be nice if the tree were searchable, but unfortunately it currently is not.

I have been using the JSMin plugin, which has both a JSON viewer and a JSON formatter. It has no problem opening 20 MB of data. You can highlight a single JSON line and press Alt+Ctrl+M to format just that line, or you can format the entire file.
http://www.sunjw.us/jsminnpp

Searching Google for "notepad++ json plugin" also turns up http://sourceforge.net/projects/nppjsonviewer/ among the results.
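
As an aside, if the goal is just indentation, any language with a JSON library can reformat a file offline. A minimal Ruby sketch using only the standard library (the input file name is hypothetical):

    require 'json'

    # Read compact JSON and print it re-indented for readability.
    raw = File.read('data.json')  # hypothetical input file
    puts JSON.pretty_generate(JSON.parse(raw))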

Related

What are some good data cleanup tools? [closed]

I am parsing large numbers of complex files (mostly CSV, but some are not) and I need to structure/parse them into standard formats. This involves not only row-wise cleanup of the data but also some simple cell-based logic. I also want a tool that a non-programmer can use, so that a business team member can write simple drag-and-drop logic without taking up engineering time. So far I have looked at Google Refine and Data Wrangler, and the latter looks great. Are there any other such tools out there?
ETL tools are oriented more towards relational databases, but also have support for XML and CSV file input/output. Examples:
http://www.talendforge.org/
http://kettle.pentaho.com/
They could easily be too complicated for your requirements, though. Also see this similar question on SO (with additional links): What software is available for data quality checking?

Best web scraping Ruby on Rails library that handles dynamic HTML produced by javascript [closed]

I am using Ruby on Rails with the Mechanize library to scrape store websites. The problem is that many times I can't crawl certain elements, even though I can see them when I 'view source' on the site.
For example, Walmart's category (in the example below it is "Health") is unscrapeable. I believe this is because the HTML is dynamically produced (e.g. by JavaScript). In order to scrape it, I need a browser to process the web request.
http://www.walmart.com/ip/Replacement-Sensor-Module-for-AlcoMate-Prestige-Breathalyzer/10167376
I am also using a Linux machine on Amazon EC2, so it would be tough to install a browser for UI scraping. Is there any Rails gem/plugin that can help me?
Thanks, all!
Your question, rephrased, is: what's an easy way to parse an HTML document's DOM the way a web browser would, and then execute the JavaScript in the document against the parsed DOM, without running an actual web browser?
That's a little tricky.
However, all is not lost. Take a look at Capybara. Though created for acceptance testing, you can also use it for general grokking of documents. To execute JavaScript you'll need to use a driver that supports it, and since you want it to be "headless" (no browser GUI) that probably means using capybara-webkit, Akephalos, or capybara-envjs; a sketch follows.
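For illustration, a minimal sketch of driving Capybara outside a test suite, assuming the capybara-webkit gem is installed; the URL and CSS selector are hypothetical:

    require 'capybara'
    require 'capybara/webkit'

    Capybara.run_server = false           # visiting remote pages, not a local app
    session = Capybara::Session.new(:webkit)

    # JavaScript runs in the headless WebKit engine, so dynamically
    # produced elements are in the DOM by the time we query them.
    session.visit('http://www.example.com/some-product-page')
    puts session.find('.category').text   # hypothetical selector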
Another option might be Harmony, which I know nothing about except that it appears to do what you want but also appears to no longer be maintained, so YMMV.

Debate over JSON and XML [closed]

I am reading the debate over JSON and XML, and I am not sure why JSON seems to have come out of nowhere to threaten precious XML.
One of the points mentioned is:
JSON is a better fit for Web services that power Web mashups and AJAX widgets due to the fact that it is essentially serialized Javascript objects. That’s it. XML will never fit the bill as well for these scenarios without changes to the existing browser ecosystem which I doubt are forthcoming anytime soon.
Could you please tell me the exact meaning of this?
JSON doesn't require HTML-like markup with opening and closing tags (lots of unneeded bytes), it fits easily into JS because it is almost native JS object syntax, and the code is easier to read and maintain. It's also easier to modify at run time. The contrast is easiest to see side by side; see the sketch below.
Anyway, for tree-like structures with deeply nested children and/or complex structure, I think XML is better suited to the job. JSON is better for data transfer/exchange.
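To make the comparison concrete, here is the same record in both formats, parsed in Ruby for illustration (the same contrast holds in JavaScript); the field names are hypothetical:

    require 'json'
    require 'rexml/document'

    json = '{"user":{"name":"Ann","age":30}}'
    xml  = '<user><name>Ann</name><age>30</age></user>'

    # JSON parses straight into native hashes and arrays...
    data = JSON.parse(json)
    puts data['user']['name']            # => Ann

    # ...while XML needs a DOM-style API to walk the tree.
    doc = REXML::Document.new(xml)
    puts doc.elements['user/name'].text  # => Ann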
In short: in a lot of scenarios it is simpler to use JSON than XML (because, as you said, it consists of serialized JS objects), and it also provides faster data transfer over the network, thanks to its less redundant structure compared to XML.

Open Source Data Mining Software [closed]

I was wondering: what is the best open source software that I can use for non-binary association rule generation? I need a non-binary implementation because converting my currently non-binary data to binary would not give the desired results.
Thanks, and I can't wait to hear your comments!
Also take a look at Weka
Check out RapidMiner and R with Rattle.
Try the Orange data mining toolkit.
http://www.ailab.si/orange/
Try Data Mining SDK.
These days I like Knime. See http://knime.org.
You could also try Tanagra: http://eric.univ-lyon2.fr/~ricco/tanagra/en/tanagra.html
It's mainly for research purposes, but it works well and has good tutorials at http://data-mining-tutorials.blogspot.com
I have open-source software named SPMF with more than 130 algorithms related to association rule mining, frequent itemset mining, sequential rule mining, and sequential pattern mining. You can check my webpage for more details and to download it.
It is Java source code with a simple graphical user interface, and it includes many specialized algorithms that you will not find in other data mining software.

Is JSON validation a best practice? [closed]

Is it a best practice to validate JSON?
With both a JSON schema proposal and a JavaScript implementation of a JSON Schema validator, this practice would seem relatively frictionless to implement. So, is it a no-brainer that should be part of any robust application? Or do you employ other preferred strategies to handle bad JSON?
On the server, validation of data coming from outside is a must.
In the browser, it is redundant from a security point of view if you can vouch that the JSON is generated by server code you control and that any data it depends on has been validated on the server. Even so, it can still be useful for debugging.
My 2c on this:
(a) Yes, obviously incoming data should be validated, but
(b) the best place to do this is NOT on the JSON data as-is, but on actual business logic objects, provided data binding is used. JSON validation only makes sense if you handle "raw" JSON, but most services (at least in Java) use data binding first and then operate on business logic objects, not on the data format (which is often almost an implementation detail).
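For the raw-JSON case, a minimal sketch of server-side validation in Ruby, using only the standard library; the required fields are hypothetical:

    require 'json'

    # Reject malformed JSON outright, then check the parsed structure
    # before handing it to business logic.
    def parse_order(raw)
      data = JSON.parse(raw)
    rescue JSON::ParserError => e
      raise ArgumentError, "malformed JSON: #{e.message}"
    else
      %w[id quantity].each do |field|  # hypothetical required fields
        raise ArgumentError, "missing field: #{field}" unless data.key?(field)
      end
      data
    end

    puts parse_order('{"id": 7, "quantity": 2}').inspect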