How do I invoke the encryption API when there are multiple pieces of data to encrypt at the same time? Suppose we have 10 records and the requirement is to call the encryption API only once. How can that be done?
Take a look at the KMS API request body reference:
{
  "plaintext": string,
  "additionalAuthenticatedData": string,
  "plaintextCrc32c": string,
  "additionalAuthenticatedDataCrc32c": string
}
This API takes only one plaintext value per call. You can concatenate multiple records, but they will all become a single ciphertext and will have to be decrypted together.
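So if the goal is to keep the records independently decryptable, you either loop over them or accept one combined ciphertext. A minimal sketch of both options, assuming this is the Google Cloud KMS encrypt API and using placeholder project and key names:

from google.cloud import kms

client = kms.KeyManagementServiceClient()
# Placeholder project/location/key-ring/key names.
key_name = client.crypto_key_path("my-project", "global", "my-keyring", "my-key")
records = [f"record-{i}".encode() for i in range(10)]

# Option 1: one API call per record, so each record stays independently decryptable.
ciphertexts = [
    client.encrypt(request={"name": key_name, "plaintext": record}).ciphertext
    for record in records
]

# Option 2: a single API call, at the cost of decrypting everything together.
# The newline delimiter is an assumption; pick one that cannot occur in your data.
combined = b"\n".join(records)
single_ciphertext = client.encrypt(
    request={"name": key_name, "plaintext": combined}
).ciphertext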
I have a big JSON payload of around 12K characters. AWS EventBridge is not letting me create the event because there is a payload limit of 8192 characters.
How would I resolve this?
Thanks
According to the API docs, this is a hard limit at the API level. A workaround would be to split the payload across two targets.
Another way to handle larger payloads on EventBridge is to put the payload in S3 and pass a reference (bucket name + key) as the event payload.
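A rough sketch of that pattern with boto3; the bucket name, event source, and detail type below are placeholders:

import json
import uuid
import boto3

s3 = boto3.client("s3")
events = boto3.client("events")
BUCKET = "my-large-payload-bucket"  # placeholder

def publish_large_event(payload: dict) -> None:
    key = f"events/{uuid.uuid4()}.json"
    # Store the full payload in S3...
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(payload).encode())
    # ...and send only a small reference through EventBridge.
    events.put_events(Entries=[{
        "Source": "my.app",
        "DetailType": "LargePayloadReference",
        "Detail": json.dumps({"bucket": BUCKET, "key": key}),
    }])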
I'm trying to use ADF for the following scenario:
a JSON file containing an array of similar objects is uploaded to an Azure Storage Blob
this JSON is read by ADF with a Lookup Activity and uploaded via a Web Activity to an external sink
I cannot use the Copy Activity, because I need to create a JSON payload for the Web Activity, so I have to look up the array and paste it in like this (payload of the Web Activity):
{
  "some field": "value",
  "some more fields": "value",
  ...
  "items": @{activity('GetJsonLookupActivity').output.value}
}
The Lookup Activity has a known limitation: an upper limit of 5000 rows at a time. If the JSON is larger, only the top 5000 rows will be read and everything else will be ignored.
I know this, so I have a system that chops payloads into chunks of 5000 rows before uploading to storage. But I'm not the only user, so there's a valid concern that someone else will try uploading bigger files and the pipeline will silently pass with a partial upload, while the user would obviously expect all rows to be uploaded.
I've come up with two concepts for a workaround, but I don't see how to implement either:
Is there any way for me to check if the JSON file is too large and fail the pipeline if so? The Lookup Activity doesn't seem to allow row counting, and the Get Metadata Activity only returns the size in bytes.
Alternatively, the MSDN docs propose a workaround of copying data in a ForEach loop. But I cannot figure out how I'd use Lookup to first get rows 1-5000 and then 5001-10000, etc., from a JSON file. It's easy enough with SQL using OFFSET N ROWS FETCH NEXT 5000 ROWS ONLY, but how do you do it with a JSON file?
You can't set an index range (1-5,000, 5,000-10,000) when you use the Lookup Activity. In my opinion, the workaround mentioned in the doc doesn't mean you can use the Lookup Activity with pagination.
My workaround is to write an Azure Function that gets the total length of the JSON array before the data transfer. Inside the Azure Function, divide the data into separate temporary sub-files, paginated as sub1.json, sub2.json, and so on, then output an array containing the file names.
Grab that array with a ForEach Activity and execute the Lookup Activity inside the loop, with the file path set as a dynamic value. Then run the next Web Activity.
Surely my idea could be improved. For example, if you get the total length of the JSON array and it is under the 5000-row limit, you could just return {"NeedIterate": false}. Evaluate that response with an If Condition Activity to decide which way to go next: if the value is false, execute the Lookup Activity directly. Everything can be split across the two branches.
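A sketch of the chunking logic such an Azure Function could run; the file names, the row-limit constant, and the blob-upload helper are assumptions, not working bindings:

import json

ROW_LIMIT = 5000  # the Lookup Activity limit

def split_payload(raw_json: str) -> dict:
    rows = json.loads(raw_json)
    if len(rows) <= ROW_LIMIT:
        # Small enough for a single Lookup; no iteration needed.
        return {"NeedIterate": False}
    file_names = []
    for start in range(0, len(rows), ROW_LIMIT):
        name = f"sub{start // ROW_LIMIT + 1}.json"
        upload_blob(name, json.dumps(rows[start:start + ROW_LIMIT]))
        file_names.append(name)
    return {"NeedIterate": True, "Files": file_names}

def upload_blob(name: str, content: str) -> None:
    # Placeholder: the real function would write the chunk back to Blob storage here.
    raise NotImplementedError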
I'm trying to insert a very long string into Firebase Firestore, and I receive this error message:
Exception has occurred.
PlatformException (PlatformException(Error performing setData,
INVALID_ARGUMENT: The value of property "json" is longer than 1048487 bytes., null))
My code is:
Future<void> addREportToDB(String addedCount, String deletCount, String updatedCount, String json) {
  // Map decodedCloud = jsonDecode(json);
  return Firestore.instance.collection("reports").add({
    "dateCreate": new DateTime.now(),
    "addedCount": addedCount,
    "deletCount": deletCount,
    "updatedCount": updatedCount,
    "json": json, // the very long JSON string that triggers the size error
    // "json": decodedCloud,
  }).then((doc) {
    print(doc.documentID.toString());
  });
}
The json variable is text I get from another API; it contains the data of more than 600 employees as a JSON string.
All I need is to save it as-is.
Any help will be appreciated.
There is no way you can add data above that limit to a single document; there are limits on how much data you can put into a single document. According to the official documentation regarding usage and limits:
Maximum size for a document: 1 MiB (1,048,576 bytes)
As you can see, you are limited to 1 MiB of data in total in a single document. For larger amounts of data you need an alternative solution; you should try using Firebase Storage.
The maximum size of a document in Cloud Firestore is 1MB. If you want to store more data, consider either splitting it over more documents or (more likely in this case) storing it in Cloud Storage (for which a Flutter SDK also exists).
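A small sketch of that Cloud Storage approach from a server-side Python script (the original code is Flutter/Dart; the bucket name and object path here are placeholders), storing only a pointer in Firestore:

from google.cloud import firestore, storage

db = firestore.Client()
bucket = storage.Client().bucket("my-app-reports")  # placeholder bucket

def save_report(report_id: str, long_json: str) -> None:
    blob = bucket.blob(f"reports/{report_id}.json")
    blob.upload_from_string(long_json, content_type="application/json")
    # The Firestore document only keeps a small reference to the object.
    db.collection("reports").document(report_id).set({"jsonPath": blob.name})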
Just save the pieces separately to Firestore, and when you want to use them in your app, use string operations to join the separate strings you saved back into the original long string.
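A minimal sketch of that chunk-and-rejoin idea, again in Python for illustration (the chunk size and document naming scheme are assumptions):

from google.cloud import firestore

CHUNK_SIZE = 900_000  # stay safely under the ~1 MiB document limit

def save_long_string(db: firestore.Client, report_id: str, long_json: str) -> int:
    chunks = [long_json[i:i + CHUNK_SIZE] for i in range(0, len(long_json), CHUNK_SIZE)]
    for index, chunk in enumerate(chunks):
        db.collection("reports").document(f"{report_id}_{index}").set(
            {"part": index, "json": chunk}
        )
    return len(chunks)

def load_long_string(db: firestore.Client, report_id: str, part_count: int) -> str:
    parts = [
        db.collection("reports").document(f"{report_id}_{index}").get().to_dict()["json"]
        for index in range(part_count)
    ]
    return "".join(parts)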
I want to use the query string as JSON, for example /api/uri?{"key":"value"} instead of /api/uri?key=value. The advantages, from my point of view, are:
JSON keeps the types of parameters, such as booleans, ints, floats and strings. A standard query string treats all parameters as strings.
JSON needs fewer symbols for deeply nested structures. For example, ?{"a":{"b":{"c":[1,2,3]}}} vs ?a[b][c][]=1&a[b][c][]=2&a[b][c][]=3
It is easier to build on the client side.
What disadvantages could there be in using JSON this way?
It looks good if it's
/api/uri?{"key":"value"}
as stated in your example, but since it's part of the URL it gets encoded to:
/api/uri?%7B%22key%22%3A%22value%22%7D
or something similar, which makes /api/uri?key=value simpler than the encoded one, particularly when debugging outbound calls (i.e. when you want to inspect the actual request via Wireshark, etc.). Also notice that it takes up more characters when encoded to a valid URL (see browser URL-length limits).
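A quick way to see the difference is to encode the nested example from the question with Python's standard library (the endpoint path is just a placeholder):

import json
from urllib.parse import quote, urlencode

payload = {"a": {"b": {"c": [1, 2, 3]}}}

json_style = "/api/uri?" + quote(json.dumps(payload, separators=(",", ":")))
flat_style = "/api/uri?" + urlencode([("a[b][c][]", v) for v in payload["a"]["b"]["c"]])

print(json_style)  # /api/uri?%7B%22a%22%3A%7B%22b%22%3A%7B%22c%22%3A%5B1%2C2%2C3%5D%7D%7D%7D
print(flat_style)  # /api/uri?a%5Bb%5D%5Bc%5D%5B%5D=1&a%5Bb%5D%5Bc%5D%5B%5D=2&a%5Bb%5D%5Bc%5D%5B%5D=3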
For the case of 'fewer symbols for nested structures', it might be more appropriate to create a new resource for your sub-resource, where you handle the filtering through query parameters;
api/a?{"b":1}
to
api/b?bfield1=1
api/a?aBfield1=1 // or something similar
Lastly, for 'easier to build on the client side', I think it depends on what you use to create your client; query params are usually represented as maps, so it is still simple.
Also, if you need a collection for a single param, then:
/uri/resource?param1=value1,value2,value3
The Box v2 REST API appears to contain methods that return "full objects". That is, they return all the fields and properties of the object requested with one "simple" call.
When trying the official .Net SDK, it appears that if you don't specify fields by name in the "FoldersManager" or "FilesManager" (for example), you get minimal details of the objects returned.
Is there a way to make the request return all fields/properties? I realize ItemsCollection may be one you'd want to request explicitly, but the rest should really be included in one call (like the REST capability).
Thanks for any ideas!
-AJ
If no fields are specified in the request, the default fields are returned in the response (i.e. what the API decides are the most commonly used fields). If fields are specified, the API returns the required fields (usually type, id, and etag) along with the specified fields.
There is currently no simple flag that will return all fields, as this would likely be abused out of convenience. The only way to return all fields is to manually specify every field you are looking for. If you are using any of the official SDKs, these field names can usually be found in the object models.
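For reference, this is roughly what explicitly listing fields looks like against the raw REST API (a Python sketch; the field list shown is only an example, not exhaustive, and the token is a placeholder). The SDK fields parameters map to the same fields query string:

import requests

FOLDER_ID = "0"  # root folder, as an example
FIELDS = "type,id,etag,name,description,size,created_at,modified_at,item_collection"

response = requests.get(
    f"https://api.box.com/2.0/folders/{FOLDER_ID}",
    params={"fields": FIELDS},
    headers={"Authorization": "Bearer <ACCESS_TOKEN>"},
)
response.raise_for_status()
print(response.json())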
HTH