Firebase: Exporting JSON fails with "Unable to export: The size of data exported at a single location cannot exceed 256 MB"

I used to download a node of my Firebase Realtime Database every day to monitor some outputs by exporting the .json file for that node. The JSON file itself is about 8 MB.
Recently, I started receiving an error:
"Exporting JSON Unable to export The size of data exported at a single location cannot exceed 256 MB.Navigate to a smaller part of the database or use backups. Read more about limits"
Can someone please explain why I keep getting this error, since the JSON file I exported just yesterday was only 8.1 MB?

I probably solved it! I disabled a CORS add-on in Chrome and suddenly the export worked :)

To work around this, you can use Postman's Import feature, since downloading a large JSON file from the Firebase dashboard in a browser sometimes fails partway through. You can paste a traditional cURL command into it and simply save the response once it arrives. To avoid the authentication complexity, you can temporarily set the database rule to read: true until the download is complete, though you need to keep the security implications in mind. Postman may also freeze its UI while previewing the JSON, but you don't need to be bothered by that.
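As a rough sketch of the same idea without Postman (the database URL and node path below are placeholders, and it assumes reads are temporarily allowed or an auth token is passed), the node can be streamed straight to disk through the Realtime Database REST endpoint:

```python
import requests

# Placeholder values: use your own database URL and node path.
DB_URL = "https://your-project-id-default-rtdb.firebaseio.com"
NODE_PATH = "/your/node"
PARAMS = {}  # e.g. {"auth": "<token>"} if reads are not public

url = f"{DB_URL}{NODE_PATH}.json"

# Stream the response straight to a file so the browser UI is never involved.
with requests.get(url, params=PARAMS, stream=True, timeout=300) as resp:
    resp.raise_for_status()
    with open("export.json", "wb") as fh:
        for chunk in resp.iter_content(chunk_size=1024 * 1024):
            fh.write(chunk)
```

Remember to revert the read: true rule as soon as the download is done.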

Related

Azure Synapse Dedicated Pool COPY INTO function fails due to base64-encoded image in CSV file

I am using Azure Synapse Link for Dynamics 365. It automatically exports data from Dynamics 365 in CSV format into blob storage/data lake. I use the COPY INTO function to load the data into a Dedicated Pool instance. However, the contact model has recently started failing.
I investigated the issue and found that the cause was due to a field that has an image encoded as text. I only copy selected fields from the CSV files and this is not one of them, but it still causes the copy to fail. I manually updated the CSV file to exclude this data from the one row where it was found and it worked fine.
The error message associated with the error is:
The column is too long in the data file for row 1328, column 32.
This is supposed to be an automated process so I do not want to be manually editing CSV files when this occurs. Are there any parameters that I can add to the COPY INTO function to prevent this error? I tried using MAXERRORS but that made no difference.
The only other thing that I could think of is to write a script (maybe an Azure Function?) that checks the file for this issue and corrects it. Maybe there is a simpler approach though?
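Since no COPY INTO parameter for skipping oversized values is identified here, a minimal sketch of the scripted clean-up the question suggests might look like the following; the file name, column index, and length threshold are hypothetical and depend on the contact model's export. It blanks out the offending field before the load runs:

```python
import csv

# Hypothetical values: adjust for your export. The error reports column 32,
# which is index 31 when counting from zero.
OVERSIZED_COLUMN_INDEX = 31
MAX_FIELD_LENGTH = 8000       # anything longer is assumed to be image data

csv.field_size_limit(10**8)   # allow very large fields to be read at all

with open("contact.csv", newline="", encoding="utf-8") as src, \
     open("contact_clean.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:
        # Blank out the base64 image so the row fits the target column width.
        if len(row) > OVERSIZED_COLUMN_INDEX and \
           len(row[OVERSIZED_COLUMN_INDEX]) > MAX_FIELD_LENGTH:
            row[OVERSIZED_COLUMN_INDEX] = ""
        writer.writerow(row)
```

Run as an Azure Function or pipeline step, this would keep the process automated while leaving the selected columns untouched.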

Failing to Upload Large JSON file to Firebase Real Time Database

I have a 1 GB JSON file to upload to the Firebase RTDB, but when I press Import, it loads for a while and then I get this error:
There was a problem contacting the server. Try uploading your file again.
I have tried uploading a 30 MB file and everything is OK.
It sounds like your file is too big to upload to Firebase in one go. There are no parameters to tweak here, and you'll have to use another means of getting the data into the database.
You might want to give the Firebase-Import library a go, try the Firebase CLI's database:set command, or write your own import for your file format using the Firebase API.
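For the "write your own import" option, here is a minimal sketch; it assumes the top level of the file is a JSON object and uses placeholder values for the database URL, target path, and auth. It splits the data into batches and writes each one with a PATCH so the children are merged under the node rather than replacing it:

```python
import json
import requests

# Placeholder values: replace with your own database URL, path, and auth.
DB_URL = "https://your-project-id-default-rtdb.firebaseio.com"
TARGET_PATH = "/imported"
PARAMS = {}              # e.g. {"auth": "<token>"}
KEYS_PER_REQUEST = 200   # arbitrary batch size

# Note: json.load reads the whole file into memory, so a 1 GB file needs
# enough RAM; a streaming JSON parser would avoid that.
with open("big.json", encoding="utf-8") as fh:
    data = json.load(fh)

keys = list(data)
for start in range(0, len(keys), KEYS_PER_REQUEST):
    batch = {k: data[k] for k in keys[start:start + KEYS_PER_REQUEST]}
    # PATCH merges the batch under TARGET_PATH instead of overwriting it.
    resp = requests.patch(f"{DB_URL}{TARGET_PATH}.json",
                          params=PARAMS, json=batch, timeout=300)
    resp.raise_for_status()
```

The Firebase-Import library and the CLI are usually simpler; this is only a sketch of what a hand-rolled import over the REST API could look like.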

CKAN: Upload to datastore failed; Resource too large to download

When I try to upload a large CSV file to the CKAN DataStore, it fails and shows the following message:
Error: Resource too large to download: 5158278929 > max (10485760).
I changed the maximum resource upload size (in megabytes) to
ckan.max_resource_size = 5120
in
/etc/ckan/production.ini
What else do I need to change to upload a large CSV to CKAN?
That error message comes from the DataPusher, not from CKAN itself: https://github.com/ckan/datapusher/blob/master/datapusher/jobs.py#L250. Unfortunately it looks like the DataPusher's maximum file size is hard-coded to 10MB: https://github.com/ckan/datapusher/blob/master/datapusher/jobs.py#L28. Pushing larger files into the DataStore is not supported.
Two possible workarounds might be:
Use the DataStore API to add the data yourself (see the sketch after this list).
Change the MAX_CONTENT_LENGTH on the line in the DataPusher source code that I linked to above, to something bigger.
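A minimal sketch of the first workaround, assuming a hypothetical CKAN URL, API key, and resource ID: it creates the DataStore table from the CSV header and then pushes the rows in batches through the datastore_create and datastore_upsert API actions:

```python
import csv
import requests

# Hypothetical values: adjust the CKAN URL, API key, and resource ID.
CKAN_URL = "https://ckan.example.org"
API_KEY = "<your-api-key>"
RESOURCE_ID = "<resource-id>"
BATCH_SIZE = 5000


def ckan_action(name, payload):
    """Call a CKAN API action and fail loudly on HTTP errors."""
    resp = requests.post(f"{CKAN_URL}/api/3/action/{name}",
                         headers={"Authorization": API_KEY},
                         json=payload)
    resp.raise_for_status()
    return resp.json()


with open("large.csv", newline="", encoding="utf-8") as fh:
    reader = csv.DictReader(fh)

    # Create the DataStore table with text columns named after the CSV header.
    ckan_action("datastore_create", {
        "resource_id": RESOURCE_ID,
        "fields": [{"id": name, "type": "text"} for name in reader.fieldnames],
        "force": True,
    })

    # Insert the rows in batches so no single request grows too large.
    batch = []
    for row in reader:
        batch.append(row)
        if len(batch) >= BATCH_SIZE:
            ckan_action("datastore_upsert", {"resource_id": RESOURCE_ID,
                                             "records": batch,
                                             "method": "insert",
                                             "force": True})
            batch = []
    if batch:
        ckan_action("datastore_upsert", {"resource_id": RESOURCE_ID,
                                         "records": batch,
                                         "method": "insert",
                                         "force": True})
```

This bypasses the DataPusher entirely, so its hard-coded 10 MB limit no longer applies; typing every column as text is a simplification you may want to refine.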

insecure string pickle error when uploading and downloading to MKS Integrity

I am getting the exception "ValueError: insecure string pickle" when attempting to run my program after creating a sandbox from MKS.
Hopefully you're still interested in helping by the time you've read this far; here's the full story.
I created an application in Python that analyzes data. When saving specific data from my program, I pickle it to a file. I read and write it correctly in binary, and everything works on my computer.
I then used py2exe to wrap everything into an .exe. However, in order to get the pickled files to continue to work, I have to physically copy them into the folder that py2exe creates. So my pickle files are inside the .exe folder, and everything works correctly when I run the .exe.
Next, I upload everything to MKS (an ALM, here is the Wikipedia page http://en.wikipedia.org/wiki/MKS_Integrity).
When I proceed to create a sandbox of my files and run the program, I get the dreaded "insecure string pickle" error. I am wondering whether MKS mangled something or added an end-of-line character to my pickle files, but when I compare the contents of the MKS pickle file and the one I created before uploading the program to MKS, there are no differences.
I hope this is enough detail to describe my problem.
Please help!
Thanks
Have you tried adding your pickled files to your Integrity sandbox as binaries and not text?
When adding the file, on the Create Archive interface, select the options button, and change data type to "Binary" from "Auto". This will maintain any non-text formatting within the file.
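As a quick way to confirm whether text-mode handling mangled the checked-out copy (the file paths below are hypothetical), comparing the original pickle with the sandbox copy byte for byte will show any added carriage returns or size change:

```python
import pickle

# Hypothetical paths: the original pickle and the copy from the MKS sandbox.
ORIGINAL = "data_original.pkl"
FROM_SANDBOX = "data_sandbox.pkl"

with open(ORIGINAL, "rb") as fh:
    original_bytes = fh.read()
with open(FROM_SANDBOX, "rb") as fh:
    sandbox_bytes = fh.read()

# Text-mode archiving typically shows up as extra 0x0D bytes (CRLF
# translation) or a changed length, either of which breaks the pickle stream.
print("identical:", original_bytes == sandbox_bytes)
print("size delta:", len(sandbox_bytes) - len(original_bytes))
print("CR bytes:", original_bytes.count(b"\r"), "->", sandbox_bytes.count(b"\r"))

# Loading in binary mode confirms whether the checked-out copy is usable.
with open(FROM_SANDBOX, "rb") as fh:
    data = pickle.load(fh)
```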

AIR/AS3: Upload portion of a File without creating a new File chunk

I'm currently building an AIR file uploader designed to handle multiple and very large files. I've played around with different methods of breaking a file up into chunks (100 MB) and progressively uploading each one so that I can guard against a failed upload, disconnection, etc.
I have managed to break the file up into smaller files, which I then write to a scratch area on the disc; however, I'm finding that the actual process of writing the files is quite slow and chews up a lot of processing power. My UI basically grinds to a halt while it's writing, not to mention that I'm effectively doubling the local disc space used by every file.
The other method I used was to read the original file in 100 MB chunks and store that data in a ByteArray, which I can then upload as POST data using the URLLoader class. The problem is that this way I can't keep track of the upload progress, because ProgressEvent.PROGRESS does not work properly for POST requests.
What I would like to know is whether it's possible to read the file in 100 MB chunks and upload that data without having to create a new file, while still using the FileReference.upload() method in order to listen to all the events that method gives me. I.e. create a File() that is made up of bytes 0-100 MB of the original file, then call upload() on that new File.
I can post my code for both methods if that helps.
Cheers, much appreciated
I had the same problem, but we solved it another way: we wrote a socket connector that connects to the server (e.g. FTP/HTTP) and writes the ByteArray to the socket. We also did it in chunks of roughly the same size, and the biggest file we had to upload was a Blu-ray movie of around ~150 GB.
I hope this gives you some interesting ideas; if you'd like, I can share some of the code.
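Since the answer describes the approach rather than showing code, here is a language-agnostic illustration of the same chunking idea, written in Python with a hypothetical upload endpoint and parameters; it reads fixed-size chunks straight from the original file, so no temporary chunk files ever hit the disk:

```python
import os
import requests

# Hypothetical values: the endpoint and its parameters stand in for whatever
# your server-side receiver expects.
UPLOAD_URL = "https://upload.example.com/chunk"
CHUNK_SIZE = 100 * 1024 * 1024  # 100 MB
SOURCE = "big_video.mkv"

total_size = os.path.getsize(SOURCE)

with open(SOURCE, "rb") as fh:
    index = 0
    while True:
        chunk = fh.read(CHUNK_SIZE)   # read straight from the original file
        if not chunk:
            break
        resp = requests.post(
            UPLOAD_URL,
            params={"name": os.path.basename(SOURCE),
                    "index": index,
                    "total": total_size},
            data=chunk,
            timeout=600,
        )
        resp.raise_for_status()       # retry/resume logic would hook in here
        index += 1
```

In AS3 the equivalent would be reading the chunk into a ByteArray and writing it to a Socket or URLStream, as the answer describes; the point of the sketch is only the read-and-send loop over fixed offsets.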