How to save a very long string into the Firebase Firestore database?

I'm trying to insert a very long string into Firebase Firestore, and I receive this error message:
Exception has occurred.
PlatformException (PlatformException(Error performing setData,
INVALID_ARGUMENT: The value of property "json" is longer than 1048487 bytes., null))
My code is:
Future<void> addREportToDB(String addedCount, String deletCount, String updatedCount, String json) {
  // Map decodedCloud = jsonDecode(json);
  return Firestore.instance.collection("reports").add({
    "dateCreate": new DateTime.now(),
    "addedCount": addedCount,
    "deletCount": deletCount,
    "updatedCount": updatedCount,
    "json": json,
    // "json": decodedCloud,
  }).then((doc) {
    print(doc.documentID.toString());
  });
}
The json variable holds text I get from another API; it contains the data of more than 600 employees as a JSON string, and all I need is to save it as is.
Any help will be appreciated.

There is no way to add data beyond that limitation to a single document. There are hard limits on how much data you can put into a single document. According to the official documentation regarding usage and limits:
Maximum size for a document: 1 MiB (1,048,576 bytes)
As you can see, you are limited to 1 MiB of total data in a single document. For larger amounts of data you will need an alternative solution; you should try using Firebase Storage.
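As a rough sketch (this follows recent versions of the firebase_storage plugin, whose API names differ in older versions; the file naming is made up), you could put the long string in Cloud Storage and keep only its path in the Firestore document:

import 'dart:convert';
import 'dart:typed_data';
import 'package:firebase_storage/firebase_storage.dart';

Future<String> uploadReportJson(String json) async {
  // Storage objects are not subject to Firestore's 1 MiB document limit.
  final path = 'reports/${DateTime.now().millisecondsSinceEpoch}.json';
  final ref = FirebaseStorage.instance.ref().child(path);
  await ref.putData(Uint8List.fromList(utf8.encode(json)));
  // Store only this path (or a download URL) in the Firestore document.
  return path;
}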

The maximum size of a document in Cloud Firestore is 1MB. If you want to store more data, consider either splitting it over more documents or (more likely in this case) storing it in Cloud Storage (for which a Flutter SDK also exists).

Just split the long string and save the parts separately to Firestore; when you want to use them in your app, use string operations to join the separate strings you saved back into the original long string, as sketched below.
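A sketch of that approach, using the same pre-1.0 cloud_firestore API as the question (the chunk size, field names, and "chunks" subcollection are arbitrary choices for illustration):

import 'package:cloud_firestore/cloud_firestore.dart';

// Chunk size in UTF-16 code units; each code unit encodes to at most
// 3 UTF-8 bytes, so 200,000 units stays well under the 1 MiB limit.
const int chunkSize = 200000;

Future<void> addJsonInChunks(String reportId, String json) async {
  for (var i = 0; i * chunkSize < json.length; i++) {
    final start = i * chunkSize;
    final end = start + chunkSize > json.length ? json.length : start + chunkSize;
    await Firestore.instance
        .collection('reports')
        .document(reportId)
        .collection('chunks')
        .document('$i')
        .setData({'index': i, 'data': json.substring(start, end)});
  }
}

// Rebuild the original string by reading the chunks in order and joining them.
Future<String> readJsonChunks(String reportId) async {
  final snapshot = await Firestore.instance
      .collection('reports')
      .document(reportId)
      .collection('chunks')
      .orderBy('index')
      .getDocuments();
  return snapshot.documents.map((d) => d.data['data'] as String).join();
}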

Related

Writing data from MATLAB to firebase database

I am using MATLAB to write data to Firebase, with the following lines of code:
thingSpeakURL = 'https://hybrid-cabinet-265907.firebaseio.com/Ship A/Time Stamp.json';
lat = num2str(42);
lon = num2str(42);
data = struct('lat',lat,'lon',lon);
webwrite(thingSpeakURL,data)
Data is successfully written to Firebase, but it makes my original JSON data a child of a random string generated at run time.
For example, my JSON string is {lat: '40', lon: '40'}, but instead Firebase creates a random string, say "Mxkkllslsll-1112", makes that random string the parent, and writes something like {"Mxkkllslsll-1112": {lat: '40', lon: '40'}} to the database.
Please have a look at the following image. It shows that the data I wrote from MATLAB for Ship A is not structured properly (the problem I discussed above); I want it to look like the data written for Ship B.
I want to write the data without any random string as a parent. Kindly assist me with that.
This is because webwrite uses the HTTP POST method by default.
As shown in the Firebase Realtime Database REST API documentation, a POST pushes the data and therefore automatically generates a unique key every time a new child is added to the specified Firebase reference (the random value we can see in your question).
You need to use the PUT method instead.
I don't know MATLAB, but a quick look at the documentation shows that you need to use the RequestMethod option with a put value in the weboptions object.
The above pushed me in the right direction (thanks!), and I had success with the following.
CAUTION: The following will overwrite everything in your database!
url = 'https://***.firebaseio.com/.json';
data.users(1) = struct('first','John','last','Locke');
data.users(2) = struct('first','Thomas','last','Hobbes');
data.users(3) = struct('first','Rene','last','Descartes');
headers = {'Content-Type' 'application/json'; 'Accept' 'application/json'};
options = weboptions('RequestMethod', 'put', 'HeaderFields', headers, 'ArrayFormat', 'json');
response = webwrite(url, data, options);
If your data is stored in a .json file (i.e., you don't want to create structures manually in MATLAB), you can read it using fileread and pass the data in as a string (instead of a structure).
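For example, reusing the url and options from the snippet above (the file name here is made up):

% Read the raw JSON text and PUT it as-is, instead of building structs.
jsonText = fileread('users.json');
response = webwrite(url, jsonText, options);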

Handling big JSONs in Azure Data Factory

I'm trying to use ADF for the following scenario:
a JSON is uploaded to an Azure Storage Blob, containing an array of similar objects
this JSON is read by ADF with a Lookup Activity and uploaded via a Web Activity to an external sink
I cannot use the Copy Activity, because I need to create a JSON payload for the Web Activity, so I have to look up the array and paste it in like this (payload of the Web Activity):
{
    "some field": "value",
    "some more fields": "value",
    ...
    "items": #{activity('GetJsonLookupActivity').output.value}
}
The Lookup Activity has a known limitation: an upper limit of 5000 rows at a time. If the JSON is larger, only the top 5000 rows will be read and everything else will be ignored.
I know this, so I have a system that chops payloads into chunks of 5000 rows before uploading to storage. But I'm not the only user, so there's a valid concern that someone else will try uploading bigger files and the pipeline will silently pass with a partial upload, while the user would obviously expect all rows to be uploaded.
I've come up with two concepts for a workaround, but I don't see how to implement either:
Is there any way for me to check if the JSON file is too large and fail the pipeline if so? The Lookup Activity doesn't seem to allow row counting, and the Get Metadata Activity only returns the size in bytes.
Alternatively, the MSDN docs propose a workaround of copying data in a foreach loop. But I cannot figure out how I'd use the Lookup to first get rows 1-5000 and then 5001-10000 etc. from a JSON. It's easy enough in SQL using OFFSET N FETCH NEXT 5000 ROWS ONLY, but how do I do it with a JSON file?
You can't set an index range (1-5,000, 5,001-10,000) when you use the Lookup Activity. In my opinion, the workaround mentioned in the doc doesn't mean you can use the Lookup Activity with pagination.
My workaround is to write an Azure Function that gets the total length of the JSON array before the data transfer. Inside the Azure Function, divide the data into separate temporary sub-files with pagination, like sub1.json, sub2.json, ..., then output an array containing the file names.
Grab that array with a ForEach Activity and execute a Lookup Activity inside the loop; the file path can be set as a dynamic value. Then run the next Web Activity.
Surely my idea could be improved. For example, if the total length of the JSON array is under the 5000-row limitation, you could just return {"NeedIterate": false}. Evaluate that response with an If Condition Activity to decide which way to go next: if the value is false, execute the Lookup Activity directly. Everything can be divided between the branches.
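To make the idea concrete, here is a rough Python sketch of such an Azure Function; every name in it (query parameters, the connection-string app setting, the sub-file naming) is an assumption for illustration:

import json
import os

import azure.functions as func
from azure.storage.blob import BlobServiceClient

CHUNK_ROWS = 5000  # the Lookup Activity row limit

def main(req: func.HttpRequest) -> func.HttpResponse:
    container = req.params.get("container")
    blob_name = req.params.get("blob")

    service = BlobServiceClient.from_connection_string(os.environ["STORAGE_CONNECTION"])
    source = service.get_blob_client(container, blob_name)
    items = json.loads(source.download_blob().readall())

    if len(items) <= CHUNK_ROWS:
        # Small file: the pipeline can run the Lookup Activity directly.
        body = {"NeedIterate": False, "files": [blob_name]}
        return func.HttpResponse(json.dumps(body), mimetype="application/json")

    # Split the array into sub-files of at most 5000 rows each.
    names = []
    for i in range(0, len(items), CHUNK_ROWS):
        name = f"sub{i // CHUNK_ROWS + 1}.json"
        service.get_blob_client(container, name).upload_blob(
            json.dumps(items[i:i + CHUNK_ROWS]), overwrite=True)
        names.append(name)

    body = {"NeedIterate": True, "files": names}
    return func.HttpResponse(json.dumps(body), mimetype="application/json")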

What does it mean when a KinematicBody2D is stored in a JSON file - Godot?

After writing a few entries for saving into my JSON file in Godot, I saved the information in a variable called LData, and it is working. LData looks like this:
{
    "ingredients": [
        "[KinematicBody2D:1370]"
    ],
    "collected": [
        {
            "iname": "Pineapple",
            "collected": true
        },
        {
            "iname": "Banana",
            "collected": false
        }
    ]
}
What does it mean when the file says KinematicBody2D:1370? I understand that it is saving the node in the file - or is it just saving a string? Is it saving the node's properties as well?
This is what happened when I tried retrieving the data - a variable that is assigned to the saved KinematicBody2D.
Code:
for ingredient in LData.ingredients:
    print(ingredient.iname)
Error:
Invalid get index name 'iname' (on base: 'String')
I am assuming that the data is stored as a String, and that I need to add some code to get back the exact node it saved. Using get_node is also throwing an error.
Code:
for ingredient in LData.ingredients:
    print(get_node(ingredient).iname)
Error:
Invalid get index 'iname' (on base: 'null instance')
What information is it exactly storing when it says [KinematicBody2D:1370]? How do I access the variable iname and any other variables - variables that are assigned to the node when the game is loaded - and is not changed through the entire game?
[KinematicBody2D:1370] is just the string representation of a Node, which comes from Object.to_string:
Returns a String representing the object. If not overridden, defaults to "[ClassName:RID]".
If you truly want to serialize an entire Object, you could use Marshalls.variant_to_base64 and put that string in your JSON file. However, this will likely bloat your JSON file and contain much more information than you actually need to save a game. Do you really need to save an entire KinematicBody, or can you figure out the few properties that need to be saved (position, type of object, etc.) and reconstruct the rest at runtime?
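If you go the second route, a minimal sketch (the scene path and the position handling are assumptions; only iname comes from the question):

# Persist only what is needed to rebuild the node later.
func save_ingredient(node):
    return {
        "iname": node.iname,
        "position": [node.position.x, node.position.y]
    }

# Reconstruct the node from the saved dictionary on load.
func load_ingredient(data):
    var node = preload("res://Ingredient.tscn").instance()  # hypothetical scene
    node.iname = data["iname"]
    node.position = Vector2(data["position"][0], data["position"][1])
    return node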
You can also save objects as Resources, which is more powerful and flexible than a JSON file, but tends to be better suited to game assets than save games. However, you could read the Resource docs and see if saving Resources seems like a more appropriate solution to you.

Storing GBs of JSON-like data in MongoDB

I'm using MongoDB because Meteor doesn't officially support anything else.
The main goal is to upload CSV files, parse them in Meteor and import the data to database.
The inserted data can be 50-60 GB or maybe more per file, but I can't even insert anything bigger than 16 MB due to the document size limit. Also, even 1/10 of the insertion takes a lot of time.
I'm using CollectionFS for CSV file upload on the client. Therefore, I tried using CollectionFS for the data itself as well but it gives me an "unsupported data" error.
What can I do about this?
Edit: Since my question has created some confusion about data storage techniques, I want to make something clear: I'm not interested in uploading the CSV file itself; I'm interested in storing the data that is in the file. I want to collect all the users' data in one place, and I want to fetch that data with the lowest resource usage.
You could insert the CSV file as a collection (the filename can become the collection name), with each row of the CSV as a document, as sketched below. This will get around the 16 MB per-document size limit. You may end up with a lot of collections, but that is okay. Another collection could keep track of the filename-to-collection-name mapping.
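A rough sketch of that idea in Meteor server code (the CSV parser and the function name are assumptions, not part of the original answer):

import { Mongo } from 'meteor/mongo';
import Papa from 'papaparse'; // assumed CSV parsing package

// Insert each CSV row as its own document, so no single document
// approaches MongoDB's 16 MB limit.
function importCsv(fileName, csvText) {
  const rows = Papa.parse(csvText, { header: true }).data;
  const collection = new Mongo.Collection(fileName); // filename becomes the collection name
  rows.forEach(row => collection.insert(row));
  return rows.length;
}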
With CollectionFS you can save files directly to the filesystem; just add the proper package and create your collection like this:
Csv = new FS.Collection("csv", {
  stores: [
    new FS.Store.FileSystem("csv", "/home/username/csv")
  ],
  filter: {
    allow: {
      extensions: ['csv']
    }
  }
});

Getting an inserted file from MySQL byte array to actual file content

I have inserted a file into a MySQL database as a byte array:
preparedStatment.setBytes(10, inpStream.toString().getBytes());
I have already retrieved the file from the database as a byte array, but I'm stuck on getting the actual file's contents. What would be the correct way of doing this?
This is wrong:
preparedStatment.setBytes(10, inpStream.toString().getBytes());
You are storing the result of InputStream#toString() as bytes in the DB. InputStream#toString() does not return the file's contents as you seem to think; instead, it returns the default classname#hashcode representation inherited from Object#toString().
You need PreparedStatement#setBinaryStream() instead:
preparedStatement.setBinaryStream(10, inpStream);
Then you can retrieve it by ResultSet#getBinaryStream():
InputStream inpStream = resultSet.getBinaryStream("columnname");
Or if you really need PreparedStatement#setBytes(), then you would need to write the InputStream to a ByteArrayOutputStream the usual way first and then get the bytes by its toByteArray() method. But this is not memory efficient per se.
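A short sketch of that last approach, reusing the names from the question:

import java.io.ByteArrayOutputStream;

// Copy the stream into memory, then hand the accumulated bytes to setBytes().
ByteArrayOutputStream output = new ByteArrayOutputStream();
byte[] buffer = new byte[8192];
for (int length; (length = inpStream.read(buffer)) != -1; ) {
    output.write(buffer, 0, length);
}
preparedStatement.setBytes(10, output.toByteArray());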