I have both a JSON file and a JL file on my computer, but when I open them in Notepad their structure looks the same. What is the difference between them? Where should I use each one?
Actually, at the time I asked this question I didn't know that "the file type is no guarantee of what is inside it". In other words, I thought that every file extension had its own official specification, and that if a file's name ends in ".something" there must be a unique format behind it. Now I know that I can create a file, write anything I want into it, and name it ".peyman", and there is nothing special about it!
What was that file? It was in the JSON Lines file format.
Where did I find it? In Scrapy: besides writing scrapy crawl name -o file.json, I saw that somebody wrote scrapy crawl name -o file.jl. I tried it, and the resulting file was 99% like a JSON file, so I wondered and asked this question here.
So:
What is the difference between a .json file and a .jl file? Now I know that the better question is "What is the difference between a .json file and a .jl file in Scrapy?"
JSON Lines is like JSON but without the "[" and "]" at the beginning and the end; each line is a separate JSON value. It is used in Scrapy because items can be appended to the file one line at a time.
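For example, two scraped items (item contents invented here purely for illustration) would look roughly like this in the two formats:

file.json:
[
{"name": "item1", "url": "http://example.com/1"},
{"name": "item2", "url": "http://example.com/2"}
]

file.jl:
{"name": "item1", "url": "http://example.com/1"}
{"name": "item2", "url": "http://example.com/2"}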
There are quite a few things that a .jl file extension could be referring to. If I remember correctly, it originally had something to do with the window manager Sawfish.
Sawfish was developed in Lisp, and a .jl file was a Lisp source file for Sawfish. However, I'm guessing (because you said that what's inside looks like JSON) that's not what you're asking about.
In that case, I do recall a few projects on GitHub... JSON lambda and Julia.
Both of those may be the reason why you're seeing JSON in a jl file. Without more information on where you got that file, or what it was part of, though, we won't be able to help you much.
That said, file extensions rarely matter on Linux. In Windows they're far more important, but on Linux you could literally append anything to a file as an "extension" (i.e. thisfile.whatever) and still open it in an editor. The same is true for most editors on Windows.
Most likely, the packager of that file decided on .jl for their own reasons, rather than following the convention of using .json.
I guess the JL extension is used for many purposes, but JL is also one of the few extensions used for JSON Lines (also known as NDJSON or JSONL).
This format can contain multiple JSON values, one compactly formatted JSON value per line, and is useful for e.g. streaming or logging.
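A minimal sketch of reading such a file in Python, one value at a time (the file name items.jl is just a placeholder):

import json

with open("items.jl", "r", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue             # skip blank lines
        item = json.loads(line)  # each non-empty line is a complete JSON value
        print(item)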
For legal reasons I need to let the customer download a CSV file, but they should only be able to read it, not modify it.
What's a common way of handling this use case?
Some kind of signature on the file, so that if it's modified you can see it's not in its original form?
I don't need a solution bound to a specific language; I would just like to know what the best practice is.
If the customer is able to download this file to their computer, then you can't stop them from modifying it.
However, you can easily detect changes. The simplest approach is to compute a cryptographic hash of the file, e.g.:
$ sha256sum data.csv
eea8254c7500ba3de996aa8ad6af399183f04e17d4a8102fde539dbc93a90012 data.csv
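The same check is easy to script, for instance in Python with hashlib (the file name is a placeholder; the expected digest here reuses the value from the sha256sum example above):

import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # read in chunks to handle large files
            h.update(chunk)
    return h.hexdigest()

# The expected digest is the one you recorded when the file was handed over.
expected = "eea8254c7500ba3de996aa8ad6af399183f04e17d4a8102fde539dbc93a90012"
print("unchanged" if sha256_of("data.csv") == expected else "file was modified")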
OK, I have a protobuf formatted data file. I also have a .proto file that describes the schema of the file.
I have found copious libraries that let me extract known messages out of the file. How nice.
However, I don't really know the structure of the file. There may be different top-level "messages" in the file, and what I really want to do is just inspect the file and get a dump of what's in it.
Would love to have a command that lets me do something like:
proto2json <format.proto> <datafile> -o <output.json>
Is this too much to ask for? The Google isn't yielding an obvious answer, so maybe there's something subtle about protobufs I don't get yet.
Ideas?
Thanks to some helpful people on the protocol-buffers google group, I have an answer.
The answer is, "sorry, no".
Well, close. The problem is that it's up to you to know what the "root" message is in the data file. In my case it wasn't obvious, so I was hoping a dump of the file would divulge the root. No luck: the file itself doesn't record what the fields or messages are; it just contains data that you can extract if you have the right .proto file.
In my case, I had a few suspicions as to what the root might be, so I did trial and error until I found the message that accounted for all of the fields in the file.
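A rough sketch of that trial-and-error loop in Python, assuming you have already run protoc on your schema (e.g. protoc --python_out=. format.proto); the module name format_pb2, the candidate message names, and the data file name are all hypothetical:

from google.protobuf.json_format import MessageToJson
from google.protobuf.message import DecodeError

import format_pb2  # hypothetical module generated by protoc from format.proto

# Candidate root messages to try; the names here are invented for illustration.
candidates = [format_pb2.Recording, format_pb2.Session, format_pb2.Archive]

with open("datafile.pb", "rb") as f:
    data = f.read()

for msg_type in candidates:
    msg = msg_type()
    try:
        msg.ParseFromString(data)
    except DecodeError:
        continue  # the bytes definitely don't parse as this message
    # A successful parse is not proof (protobuf parsing is forgiving), so dump
    # the result and eyeball whether the fields look complete and sensible.
    print(msg_type.DESCRIPTOR.full_name)
    print(MessageToJson(msg)[:500])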
It would have been nice if the .proto file indicated what the root message is, in which case I'm sure a tool to do this conversion would exist already.
I hope this helps.
Here's a pass at solving the problem that you posed. An example command line for running the tool:
$ ./proto2json.sh --schema=test/test.proto \
--root=Recording --in=test/test.pb --out=out.json
https://github.com/rohitsaboo/proto2json
Currently, the tool only supports protocol buffer schemas that do not depend on protos from other files. However, it should be possible to extend it to support "dependency_schemas".
Probably a dumb question, but I have a .jsn file that I'm supposed to strip some unnecessary info from in Python, and I wanted to make sure .json and .jsn are the same before I proceed. From what I can tell they are, but I just wanted to check. Thanks!
JavaScript Object Notation i.e. JSON filenames use the extension .json.
There is a .jsn file format attributed to "Shield Now Shield File" (JetSoft Corporation); however, there is no known link to a real company or file format. In all likelihood it is a simple spelling error. Looking at the contents of the file will quickly answer the question.
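A quick sanity check in Python (the file name is a placeholder): if json.load() succeeds, the file is ordinary JSON regardless of its extension.

import json

with open("file.jsn", "r", encoding="utf-8") as f:
    obj = json.load(f)  # raises json.JSONDecodeError if it isn't valid JSON

print(type(obj))  # usually a dict or list at the top level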
I'm writing a Puppet (3.6.2) module that reads data fields from a CSV file via the extlookup function, and I cannot figure out how to tell extlookup that the first line is a header row. Does extlookup support this? If not, can anyone recommend an external function I could import and use?
Thanks,
Brandon
PS: Yes, I know about Hiera and about having the data in YAML or JSON files, but my requirement is CSV files only.
The behavior of extlookup() is pretty well documented. It makes no special provision for column headers, which are by no means an inherent feature of CSV format. Indeed, if your header line is not readable as a data line, then your file is not CSV at all.
Supposing that your file is indeed valid CSV, the absolute simplest solution would be to ignore the issue. It presents a problem only if the first column heading duplicates an actual or potential data name. If it does not, then you will never look up or use the pseudo-value represented by the first row.
If your file in fact is not CSV on account of its first line, or if the first column name conflicts with a real data name, then it seems the next best alternative would be to just remove that line, or to avoid creating it in the first place. I don't see any reason why one of these should not be possible.
I know about Hiera, and having the data in YAML or JSON files, but my requirement is CSV files only.
How sad. Do be aware that extlookup() has long been deprecated, and it was removed in Puppet 4.
I'm inclined to suggest you implement a translator from CSV to Hiera-friendly YAML and use Hiera in your module (a rough sketch follows below). Alternatively, Hiera supports custom backends, and it's not too hard to write one. I am unaware of an existing CSV backend for Hiera, but you could write one yourself. Ignoring a header line would then be under your control, and you would simultaneously gain a measure of future-proofing.
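A minimal sketch of such a CSV-to-YAML translator in Python, assuming PyYAML is available; the file names data.csv and data.yaml and the choice of the first column as the lookup key are assumptions for illustration:

import csv
import yaml  # PyYAML, assumed installed (pip install pyyaml)

data = {}
with open("data.csv", newline="") as f:
    reader = csv.DictReader(f)         # treats the first row as the header
    key_column = reader.fieldnames[0]  # use the first column as the lookup key
    for row in reader:
        key = row.pop(key_column)
        data[key] = row                # remaining columns become the value hash

with open("data.yaml", "w") as f:
    yaml.safe_dump(data, f, default_flow_style=False)

The resulting YAML can then be dropped into your Hiera data directory, with the first-column values becoming the keys you look up.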
I have an A.chm file for my Windows application, which works as expected.
When I decompile it using HTML Help Workshop I get a set of HTML files, an .hhc file, and an .hhk file. I then compile another file, B.chm, from these extracted files without changing any of them. (I want to add more HTML content to this file, but it looks like I am losing some information after decompiling.) The output file I get is 72K, whereas the original file was 75K. B.chm's contents look fine when viewed in the CHM viewer, but the behavior is lost when it is used with the application.
After reading around I found that if the .hhp file can be extracted from a .chm file, then the .chm can be reconstructed as it was, without losing any mappings or aliases. Is that true?
How can I extract the .hhp file from a .chm file?
Thanks,
Sam
No, yes, and no.
The original .hhp can't be guaranteed to be extracted;
however, since CHM is an archive format, the project's author could have added all of the project files to the archive. I assume you would already have found them if that were the case.
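One quick way to check whether the project files were shipped inside the archive is to list its contents, for example from Python via 7-Zip (which can read CHM archives); this assumes 7-Zip is installed and on the PATH, and A.chm is the file from the question:

import subprocess

# List the archive contents and look for HTML Help project/contents files.
listing = subprocess.run(
    ["7z", "l", "A.chm"],
    capture_output=True, text=True, check=True,
).stdout

for line in listing.splitlines():
    if ".hhp" in line.lower() or ".hhc" in line.lower():
        print(line)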
If the decompiler does its bookkeeping properly, it can regenerate the .hhp to a certain degree.
Comments and #define names will probably be lost, though, and maybe more, but that should not cause problems when recompiling.
But of course it could be that the decompiler is limited. You could try another one (search for something from "KeyTools").
If not, then take "chmlib" and start drilling down into the format.