Unexpected end of JSON input with MongoDB Compass - json

I exported my local MongoDB collections as JSON files on my PC. Then I wanted to import these collections on my root server using MongoDB Compass.
Every time I try to import a collection, it throws the error from the title: Unexpected end of JSON input.
This is what my JSON file looks like:
{..."settings":{"inventory":{"crate":{"$numberInt":"0"},"cratekey":{"$numberInt":"0"},"pickaxe":{"$numberInt":"0"},...}
(I don't know if it's relevant to this question, but this line is just 1 of 142,000 in the file.)
How can I fix this error?

TL;DR
You need to have 1 empty line at the very bottom of the json file.
Long version
I don't know if this will help your case, but I ran into a similar issue when trying to import JSON data. I had one document per line, but something was still wrong. I then exported a similar piece of data as JSON and played around with it to see what was causing the issue. It turns out that the JSON must have an empty line at the bottom. So let's say you have one document to import: you place the entire document onto the first line, then just hit Enter to create an empty second line at the bottom. After this, my data was imported without a problem.
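If you want to script that fixup instead of doing it by hand, here is a minimal sketch in Python (the file names are hypothetical, and it assumes the export is a top-level JSON array):

import json

# Rewrite a JSON-array export as one document per line (newline-delimited
# JSON), with a final newline so the file ends in an "empty" last line.
with open("export.json", "r") as f:      # hypothetical input file
    docs = json.load(f)                  # assumes a top-level JSON array

with open("import-ready.json", "w") as f:
    for doc in docs:
        f.write(json.dumps(doc) + "\n")  # "\n" after the last doc too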

So the problem is that you need to minify your JSON document; that is, it should be on one line. Here's a link to a website where you can paste your JSON document on the left and get the minified document on the right. It worked for me, and I hope it also helps you.
https://codebeautify.org/jsonminifier
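If you prefer not to paste your data into a website, the same minification can be done locally; a short sketch in Python (file names hypothetical):

import json

# Minify: re-serialize the document onto a single line, no extra whitespace.
with open("input.json", "r") as f:
    doc = json.load(f)

with open("input.min.json", "w") as f:
    json.dump(doc, f, separators=(",", ":"))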

I had the exact same problem; apparently there should be one document per line.
Alternatively, use https://studio3t.com/ and it'll work fine.

Related

Trying to parse a JSON file but it seems the format is different or something is wrong with the JSON file

Hi, I'm trying to parse any of the files from the link below. I've tried reaching out to the owner of the data dumps, but nothing works when trying to parse the files as proper JSON. No program we use (Power BI, Jupyter, Excel, anything really) wants to recognise the files as JSON, and we can't figure out why. I was wondering if anyone could help figure out what the issue is here, as this dataset is very interesting to me and my co-students. I hope I'm using the word 'parsing' correctly.
The data dumps are linked underneath:
https://files.pushshift.io/reddit/comments/
The file I downloaded (I just tried one at random) was handled just fine by jq, my preferred command-line tool for processing JSON files.
jq accepts an input consisting of a sequence of JSON objects, which is what I found when I decompressed the test file. This format is commonly known as JSON lines, and many tools can handle it. The Wikipedia article on JSON streaming contains more information and a (possibly outdated) list of tools.
If your tools aren't capable of handling more than one JSON object in an input, you could turn the files into something which you can handle by adding a comma to the end of every line except the last one (since each JSON object is a single line) and then surrounding the whole input inside a pair of brackets to turn the sequence into a JSON list. Since JSON does not actually care about newlines, it would be sufficient to add a line containing [ at the beginning and a line containing ] at the end. I don't know what command-line tools you have available and are comfortable with, but the task shouldn't be too difficult.
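If you'd rather script that than hand-edit a multi-gigabyte file, here's a sketch in Python that streams line by line (file names hypothetical):

import json

# Convert a JSON Lines file (one object per line) into a single JSON array.
with open("comments.jsonl", "r") as fin, open("comments.json", "w") as fout:
    fout.write("[\n")
    first = True
    for line in fin:
        line = line.strip()
        if not line:
            continue                 # skip blank lines
        json.loads(line)             # validate the line before copying it through
        if not first:
            fout.write(",\n")
        fout.write(line)
        first = False
    fout.write("\n]\n")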

How to view contents of JSON file? Data type error

I am trying to view the contents of a JSON file, but nothing I use will load it.
I tried many sites that claimed to be JSON viewers, browser extensions, applications, and even editing in Notepad++, but the file seems to be completely unreadable. I am not sure if it's obfuscated/encrypted or if I am just doing the wrong thing.
I have tried googling many sources about this and have come to the conclusion that I don't know what "type" of JSON file this is or how to read it. Every application I have used gives a data type error, which suggests it can only be viewed/compiled in a specific program or by a specific method.
I am wondering if anybody can help point me in the right direction! Below I will attach the json file contents
Warning: Large file may take a while to load in browser window
Example of code:
¦¦p­ÛâTÅbØ*«‹c—¤Î`¯²ÆSú0ÒEX…ÕÊh QDN ‘ùîó/8dzҩݾ 4Ý(úk48–v¹Ôì¯úÓ„é…ƒº¯ÈŸ k"l¾NüžÏ¹úá¾Oð¹ )yà]ŒZš[>øÜáRÜ>¼ksÎÞT,èJ×Àåÿ;+\ LÙ ¯Ki5Uù×]åÁgp
I am sorry if this is not enough information; I am not sure what information is needed from me, but I hope I provided enough. If anybody needs to view the full contents of the JSON file to inspect this further, I would be happy to share :) It is too large to upload to a site like Pastebin.

BE_HTTP_GET_File returning text

I am trying to pull a CSV file into my FM15 database from FTP using BE_HTTP_GET_File.
By using BE_CurlTrace at the end of the script, I can see that the connection to the FTP server is always successful. However, instead of grabbing the file and returning it to a container field or documents path (I have tried both, unsuccessfully), the BE_CurlTrace output simply lists all the text in the file :/
Am I doing something wrong? I know my syntax for BE_HTTP_GET_File is correct, as I have used it before, and I'm also getting a successful connection; I just can't seem to get the actual file into a container or local file path.
Any suggestions welcome.
Thanks
Use the BE_HTTP_GET function directly in the Set Field line. Don't use the trace function like that; it's for debugging.

Edit a large JSON file

How can I edit a large JSON manually?
I have a large JSON file, about 100 MB. I'd like to manually inspect some attributes, and then add more attributes to some of the objects.
I'd start off by looking at a subset of the file, say the first 100 objects, and gradually scale up to maybe 250, then a thousand, etc.
Can someone suggest a language or software (I'm running Windows) that excels at this task?
Some previous suggestions that aren't working or can't work:
Sublime - Could never load the file. Loading bar forever. Had to kill.
NotePad++ - Could never load. Froze. Had to kill.
Anything online - The data is confidential.
More Python and Jupyter information.
import json

with open(path, 'r') as f:
    data = json.load(f)

for i, (k, v) in enumerate(data.items()):
    print(i, k, v)
    if i == 2:
        break
Causes an error. I think it has to do with Jupyter, but I'm not sure.
IOPub data rate exceeded.
The notebook server will temporarily stop sending output
to the client in order to avoid crashing it.
To change this limit, set the config variable
`--NotebookApp.iopub_data_rate_limit`.
Current values:
NotebookApp.iopub_data_rate_limit=1000000.0 (bytes/sec)
NotebookApp.rate_limit_window=3.0 (secs)
That makes me wonder if going about it this way is just dumb.
Possible Solutions
Build a custom app using TKinter
Just don't use a Jupyter Notebook
What you can do is write a simple GUI program. Use TKinter to create a window with a text area to show the JSON, a text box where you input how many objects you want to see, a button named Next (or something like it) to page forward, and one more button to save. The functionality for each item would be as follows:
First, read the complete JSON in Python and make it a dict.
Next button - keeps iterating based on the value in the text box. You could write a custom generator that yields based on the number of values required.
Save button - saves the current JSON into a new file, or, if you can, write a function to update the current JSON directly.
Text area - takes the dictionary from the Next button's generator, converts it to JSON, and shows the output.
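A minimal sketch of that idea (the path is hypothetical, it assumes the file is a single top-level JSON object, and the Save button just re-dumps the in-memory dict):

import json
import tkinter as tk

PATH = "data.json"   # hypothetical path to the large JSON file

# First, read the complete JSON in Python and make it a dict.
with open(PATH, "r") as f:
    data = json.load(f)

def pages(d, size):
    # Custom generator: yields `size` (key, value) pairs at a time.
    batch = {}
    for k, v in d.items():
        batch[k] = v
        if len(batch) == size:
            yield batch
            batch = {}
    if batch:
        yield batch

root = tk.Tk()
root.title("JSON pager")

count_box = tk.Entry(root)           # how many objects to show per page
count_box.insert(0, "100")
count_box.pack()

text = tk.Text(root, width=100, height=40)
text.pack()

gen = None

def show_next():
    global gen
    if gen is None:
        gen = pages(data, int(count_box.get()))
    text.delete("1.0", tk.END)
    try:
        text.insert(tk.END, json.dumps(next(gen), indent=2))
    except StopIteration:
        text.insert(tk.END, "-- end of file --")

def save():
    # Save the current (in-memory) JSON into a new file.
    with open(PATH + ".edited", "w") as f:
        json.dump(data, f, indent=2)

tk.Button(root, text="Next", command=show_next).pack()
tk.Button(root, text="Save", command=save).pack()
root.mainloop()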
If you are using Linux (or can transfer the file to a *nix machine), you might wish to check the number of lines in the file via
wc -l myfile.json
Let's say, for simplicity, that your file has 2530000 lines and you wish to split it into chunks of 100k lines each. You can use any of the commands available in your distro (split, for instance) to break the file into the desired chunks and then edit them one by one.
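If you'd rather do the splitting in Python than with shell commands, a rough sketch (file names hypothetical; like the wc -l approach above, it assumes the JSON is line-oriented):

# Split a huge line-oriented JSON file into 100,000-line chunks.
CHUNK = 100_000

out = None
with open("myfile.json", "r") as f:
    for i, line in enumerate(f):
        if i % CHUNK == 0:
            if out:
                out.close()
            out = open("myfile.part%03d.json" % (i // CHUNK), "w")
        out.write(line)
if out:
    out.close()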
If you are comfortable with going the "Linux way", check out some of the hints given in other topics, e.g.
edit multi-GB file when vi editor doesn't work
I hope it helps!
The only viewer I have used that works on large files (I had up to 250MB size files) is Dadroit. It is fast to view and comes with search.
Now, to edit, I use vi: I search for the location and make local edits. Vim or another simple editor should work on Windows. Have you tried VS Code? 100 MB shouldn't be too large for it.
The other awesome terminal tool for viewing and editing data is Visidata. I have had mixed luck with it working on json files.
Not the best answer, but the problem with reading the JSON seems limited to Jupyter Notebooks (or possibly to the limitations of my laptop).
Working in Spyder or running from the command line circumvents the Jupyter error mentioned in the original question.
It'd be great if someone knew how to tweak Jupyter to avoid this problem (sorry, I'm not sure how yet).
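One possible tweak, based on the config variable the error message itself names (this goes in ~/.jupyter/jupyter_notebook_config.py; the new value is arbitrary, just much larger than the default):

# Raise the notebook's output rate limit (default is 1e6 bytes/sec).
c.NotebookApp.iopub_data_rate_limit = 1.0e10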
For an editor, try Notepad++.
For a language, try Python.
Since you haven't given your data structure, I can't give a more specific answer.

How to automate getting a CSV file from this website?

I've never worked with web pages before and I'd like to know how best to automate the following through programming/scripting:
1. go to http://financials.morningstar.com/ratios/r.html?t=GMCR&region=USA&culture=en_US
2. invoke the 'Export to CSV' button near the top right
3. save this file into a local directory
4. parse the file
Part 4 doesn't need to use the same language as parts 1-3, but ideally I would like to do everything in one shot using one language.
I noticed that if I hover my mouse over the button it says: javascript:exportKeyStat2CSV(); Is this a java function I could call somehow?
Any suggestions are appreciated.
It's a JavaScript function, which is not Java!
At first glance, it may seem like you need to execute JavaScript to get this done, but if you look at the source of the document, you can see the function is simply implemented like this:
function exportKeyStat2CSV() {
    var orderby = SRT_keyStuts.getOrderFromCookie("order");
    var urlstr = "//financials.morningstar.com/ajax/exportKR2CSV.html?&callback=?&t=XNAS:GMCR&region=usa&culture=en-US&cur=&order=" + orderby;
    document.location = urlstr;
}
So, it builds a URL which is completely fixed except for the order-by part, which is taken from a cookie. Then it simply navigates to that URL by setting document.location. A small test shows you even get a CSV file if you leave the order-by part empty, so you can probably just download the CSV from the base URL that is in the code.
Downloading can be done using various tools, for instance WGet for Windows. See SuperUser for more possibilities. Anyway, steps 1 to 3 are actually just a single command.
After that, you just need to parse the file. Parsing CSV files can be done using batch, and there are several examples available. I won't get into details, since you didn't provide any in your question.
PS. I'd check their terms of use before you actually implement this.
The button directs me to this link:
http://financials.morningstar.com/ajax/exportKR2CSV.html?&callback=?&t=XNAS:GMCR&region=usa&culture=en-US&cur=&order=asc
You could use the Python 3 module urllib to fetch the file, save it using the os or shutil modules, and then parse it using one of the many CSV parsing modules, or by writing your own.
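A sketch of that approach, using only the standard library (the URL is the one quoted above; it assumes the endpoint still answers without a browser session, and the local file name is hypothetical):

import csv
import urllib.request

URL = ("http://financials.morningstar.com/ajax/exportKR2CSV.html"
       "?&callback=?&t=XNAS:GMCR&region=usa&culture=en-US&cur=&order=asc")

# Steps 1-3: fetch the CSV and save it into a local directory.
with urllib.request.urlopen(URL) as resp:
    raw = resp.read()

with open("GMCR_key_ratios.csv", "wb") as f:
    f.write(raw)

# Step 4: parse the file.
with open("GMCR_key_ratios.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        print(row)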