I have a very large JSON file, several GB in size. I am looking for an efficient JSON viewer that can also display the JSON in tree format.
I understand such a huge file can't be loaded in one go. Is there any software that can view the JSON in parts (i.e. opening ~10k records at a time)?
You can try Sublime Text. It's quite good at handling large files.
If you're using a Mac, try Hex Fiend. It loads large files super fast. Just disable the hex view and you'll get a regular-ish editor.
I've used JSONViewer to view large files (over 100 MB). It's not all that fast at loading them, but it gets there eventually.
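Beyond a dedicated viewer, the file can also be paged through with a small script. Below is a minimal sketch, assuming the file is one top-level JSON array and that the third-party ijson streaming parser is installed (pip install ijson); it prints the records roughly 10k at a time instead of loading the whole file into memory. The file name is a placeholder.

import itertools
import ijson  # third-party streaming JSON parser

CHUNK = 10_000  # how many records to show per page

def view_in_chunks(path):
    # Stream the top-level array item by item; nothing is loaded up front.
    with open(path, "rb") as f:
        records = ijson.items(f, "item")
        while True:
            page = list(itertools.islice(records, CHUNK))
            if not page:
                break
            for record in page:
                print(record)
            input("-- press Enter for the next %d records --" % CHUNK)

view_in_chunks("huge.json")  # placeholder file name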
I have a large JSON log file which contains the results of a web service call. Now I need to search inside it and find elements matching multiple conditions.
Is there any tool for querying and managing a JSON log file?
First open the file in an editor to understand its structure. If the file is huge, as you write, browse it page-by-page with a command such as more or less.
Then use jq to obtain the data that interests you. Start by extracting some top-level elements to get the hang of the process and then gradually refine your invocation to what you actually want to obtain.
The jq tutorial has many invocation examples. You can also see a specific use case in a blog post I wrote on recovering Firefox session data from the browser's JSON session file.
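If jq is not an option, the same kind of multi-condition search can be sketched in plain Python. This is only an illustration: the field names (status, elapsed_ms) are invented, and it assumes the log is a single top-level JSON array small enough to load; for multi-GB logs a streaming parser would be needed instead.

import json

# Load the log and keep only the entries matching several conditions at once.
with open("service_log.json") as f:   # placeholder file name
    entries = json.load(f)

matches = [e for e in entries
           if e.get("status") == 500 and e.get("elapsed_ms", 0) > 1000]

for m in matches:
    print(json.dumps(m, indent=2))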
I need a sample database dump for testing the performance of MySQL's fulltext search feature. I need around 1-10 million rows.
Is such a dump available anywhere? If not, what is the simplest way to generate this database?
I also tested various full text solutions. See Full Text Search Throwdown.
I used the StackOverflow data dump to test.
https://blog.stackoverflow.com/2014/01/stack-exchange-cc-data-now-hosted-by-the-internet-archive/
It's in XML format, but it wasn't too difficult to write a script to turn the XML into SQL to load it into a database for testing.
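To give a rough idea of what such a conversion script can look like, the sketch below walks a Stack Exchange dump file with Python's iterparse and prints INSERT statements. The file name, table name, and the set of attributes kept are assumptions for illustration; it is not the script used for the benchmark above.

import xml.etree.ElementTree as ET

# Each row in the dump files is a <row .../> element whose attributes
# (Id, Title, Body, ...) hold the column values.
def xml_to_sql(xml_path, table="posts", columns=("Id", "Title", "Body")):
    for _, elem in ET.iterparse(xml_path, events=("end",)):
        if elem.tag != "row":
            continue
        values = []
        for col in columns:
            raw = elem.get(col)
            if raw is None:
                values.append("NULL")
            else:
                values.append("'" + raw.replace("\\", "\\\\").replace("'", "''") + "'")
        print("INSERT INTO %s (%s) VALUES (%s);"
              % (table, ", ".join(columns), ", ".join(values)))
        elem.clear()  # keep memory flat on multi-GB dumps

xml_to_sql("Posts.xml")  # placeholder file name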
You can download some real StackOverflow data. Check the following link for more details.
You can check the following link for more details about how to restore it.
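If real data is not required, a few million rows of throwaway text can also be generated locally and bulk-loaded with LOAD DATA INFILE. A minimal sketch (the word list, row count, and file name are arbitrary):

import csv
import random

WORDS = ("database", "index", "search", "query", "table", "engine",
         "fulltext", "performance", "storage", "benchmark")

# Write rows of random sentences to a CSV suitable for LOAD DATA INFILE.
def generate(path="fulltext_test.csv", rows=1_000_000):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for i in range(rows):
            writer.writerow([i, " ".join(random.choices(WORDS, k=20))])

generate()

Keep in mind that synthetic text like this will not behave like natural language for relevance ranking, so the real dump above is still the better test set.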
I need an XSD flattened into a CSV file to build a mapping document. The CSV should list all tags defined in the XSD, in the format path,type,cardinality. Something like:
tag1/tag2/tag3,E,0..n
tag1/tag2/tag3#attr1,A,0..n
tag1/tag2/tag4,E,1..1
The XSD may import other schemas. Is there a tool to accomplish this task? Thanks.
I've posted a possible solution here. If it is something you're willing to try, then download the tool and the sample files; please follow the document for step-by-step guidance. If you run into any issues, send me an email using the contact info (support) on the web site.
The cardinality problem, again, is very tricky. The sample I've prepared for you (all the download links are in the document) is one of the test cases I was using, except that I had to come up with a specific template for your file layout. One issue that seems to be subject to debate is how to calculate the value for XPaths that, from an XML Schema perspective, traverse choice compositors. Another, not so controversial maybe, is how to calculate the cardinality for particles under repeating compositors, etc.
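For simple, self-contained schemas the flattening described in the question can also be scripted. The Python sketch below handles only inline complex types and ignores imports, named type references, and the compositor subtleties discussed above; it is meant purely as an illustration, not a replacement for the tool.

import csv
import sys
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

def card(node):
    # minOccurs/maxOccurs default to 1; "unbounded" becomes "n".
    lo = node.get("minOccurs", "1")
    hi = node.get("maxOccurs", "1")
    return "%s..%s" % (lo, "n" if hi == "unbounded" else hi)

def walk(elem, path, writer):
    name = elem.get("name", "?")
    full = path + "/" + name if path else name
    writer.writerow([full, "E", card(elem)])
    ctype = elem.find(XS + "complexType")
    if ctype is None:
        return
    for attr in ctype.findall(XS + "attribute"):
        use = attr.get("use", "optional")
        writer.writerow([full + "#" + attr.get("name"), "A",
                         "1..1" if use == "required" else "0..1"])
    for group in ("sequence", "choice", "all"):
        for g in ctype.findall(XS + group):
            for child in g.findall(XS + "element"):
                walk(child, full, writer)

root = ET.parse(sys.argv[1]).getroot()
writer = csv.writer(sys.stdout)
for top in root.findall(XS + "element"):
    walk(top, "", writer)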
What is the best free solution for implementing an ETL project in MySQL?
I need to extract a large amount of data for analysis and put the results into other tables.
Regards,
Pedro
Pentaho Kettle (PDI) is open source and has a community version here, which works quite well.
Talend also does an excellent job for ETL and ELT. You can take a look at this page on my website: http://www.hiregion.com/2010/01/data-loading-through-talend-etl-studio.html and related articles. I have also loaded hundreds of thousands to millions of rows through MySQL bulk loading (the LOAD DATA INFILE syntax - dev.mysql.com/doc/refman/5.1/en/load-data.html) and then done some transformations in MySQL. You can do most transformations before the load (ETL) or after the load (ELT), or use a hybrid technique.
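As a rough illustration of that bulk-load-then-transform (ELT) pattern, here is a minimal Python sketch. It assumes the mysql-connector-python package and a server with local_infile enabled; the file path, table names, and credentials are placeholders.

import mysql.connector  # pip install mysql-connector-python

# Connect with local-infile allowed so LOAD DATA LOCAL INFILE can be used.
conn = mysql.connector.connect(
    host="localhost", user="etl_user", password="secret",
    database="analytics", allow_local_infile=True)
cur = conn.cursor()

# 1. Bulk load the raw extract into a staging table.
cur.execute("""
    LOAD DATA LOCAL INFILE '/tmp/extract.csv'
    INTO TABLE staging_sales
    FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'
""")

# 2. Transform inside MySQL and write the result into the target table.
cur.execute("""
    INSERT INTO sales_by_region (region, total_amount)
    SELECT region, SUM(amount) FROM staging_sales GROUP BY region
""")

conn.commit()
cur.close()
conn.close()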
What are some good sources of information on binary file layout structures?
If I wanted to pull in a Btrieve index file, parse MP3 headers, etc., where does one get reliable information?
I'm not sure there's a general source for this kind of information. I usually just search Google or Wikipedia for the particular file type; the binary layout structure is usually documented there.
For example, the MP3 file layout structure: http://en.wikipedia.org/wiki/MP3#File_structure
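As a small example of turning such a published layout into code, the sketch below reads the ID3v2 tag header that many MP3 files begin with (10 bytes: the marker "ID3", a version, flags, and a syncsafe size). It assumes the file actually starts with an ID3v2 tag; the file name is a placeholder.

import struct

def read_id3v2_header(path):
    with open(path, "rb") as f:
        header = f.read(10)
    marker, major, minor, flags = struct.unpack(">3sBBB", header[:6])
    if marker != b"ID3":
        return None  # no ID3v2 tag at the start of this file
    # The tag size is four "syncsafe" bytes: 7 useful bits each.
    size = 0
    for b in header[6:10]:
        size = (size << 7) | (b & 0x7F)
    return {"version": "2.%d.%d" % (major, minor), "flags": flags, "tag_size": size}

print(read_id3v2_header("song.mp3"))  # placeholder file name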