I want to create DynamoDB tables on localhost. I have downloaded my remote DynamoDB tables using this script:
https://github.com/bchew/dynamodump
I got the script from this answer:
How to export an existing dynamo table schema to json?
That gave me a local backup of all the tables on my machine. Now I want to create those tables in my local DynamoDB, so I am loading each table into the local DB using this command:
sudo aws dynamodb create-table --cli-input-json file:///home/evbooth/Desktop/dynamo/table/dynamodump/dump/admin/schema.json --endpoint-url http://localhost:8000
But I am getting an error like this.
Parameter validation failed:
Missing required parameter in input: "AttributeDefinitions"
Missing required parameter in input: "TableName"
Missing required parameter in input: "KeySchema"
Missing required parameter in input: "ProvisionedThroughput"
Unknown parameter in input: "Table", must be one of: AttributeDefinitions, TableName, KeySchema, LocalSecondaryIndexes, GlobalSecondaryIndexes, ProvisionedThroughput, StreamSpecification, SSESpecification
The downloaded JSON file looks like this:
{
  "Table": {
    "TableArn": "arn:aws:dynamodb:us-west-2:xxxx:table/admin",
    "AttributeDefinitions": [
      {
        "AttributeName": "userid",
        "AttributeType": "S"
      }
    ],
    "ProvisionedThroughput": {
      "NumberOfDecreasesToday": 0,
      "WriteCapacityUnits": 1,
      "ReadCapacityUnits": 1
    },
    "TableSizeBytes": 0,
    "TableName": "admin",
    "TableStatus": "ACTIVE",
    "TableId": "fd21aaab-52fe-4f86-aba6-1cc9a7b17417",
    "KeySchema": [
      {
        "KeyType": "HASH",
        "AttributeName": "userid"
      }
    ],
    "ItemCount": 0,
    "CreationDateTime": 1403367027.739
  }
}
How can I fix this? I'm getting really frustrated with AWS, and I don't have much experience with DynamoDB either.
@wolfson, thank you for your suggestion. After working on it for a while, removing the following fields from the schema let me create the table.
I removed:
1) the outer "Table": { ... } wrapper, along with "TableArn": "arn:aws:dynamodb:us-west-2:xxxx:table/admin",
2) "NumberOfDecreasesToday": 0,
3) "ItemCount": 0, and "CreationDateTime": 1403367027.739
4) "TableSizeBytes": 0,
5) "TableStatus": "ACTIVE", and "TableId": "fd21aaab-52fe-4f86-aba6-1cc9a7b17417",
The resulting JSON looks like this, but I was forced to do this by hand for every table, and then run the create-table operation once for each of the n tables.
{
  "AttributeDefinitions": [
    {
      "AttributeName": "userid",
      "AttributeType": "S"
    }
  ],
  "ProvisionedThroughput": {
    "WriteCapacityUnits": 1,
    "ReadCapacityUnits": 1
  },
  "TableName": "admin",
  "KeySchema": [
    {
      "KeyType": "HASH",
      "AttributeName": "userid"
    }
  ]
}
Thank you anyway.
I was able to create the table locally in DynamoDB by removing this completely:
"BillingModeSummary": {
"BillingMode": "PROVISIONED",
"LastUpdateToPayPerRequestDateTime": 0
}
Once the table was created I looked at the table's meta information; strangely, it contains this:
"BillingModeSummary": {
"BillingMode": "PROVISIONED",
"LastUpdateToPayPerRequestDateTime": "1970-01-01T00:00:00.000Z"
}
I went back and tried to create the table using the format shown above, but it failed. Bottom line: just remove "BillingModeSummary" and voila, the table gets created! :-)
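Putting the two answers above together, here is a rough sketch (not from the original posts, and untested) that strips the read-only fields from every dumped schema.json and creates the tables against DynamoDB Local in one pass. The dump/<table>/schema.json layout and the dummy credentials are assumptions:
# Rough sketch: keep only the keys CreateTable accepts and create each table
# against DynamoDB Local. Adjust DUMP_DIR if your dynamodump layout differs.
import json
import os

import boto3

DUMP_DIR = "dump"
ALLOWED_KEYS = {
    "AttributeDefinitions", "TableName", "KeySchema", "LocalSecondaryIndexes",
    "GlobalSecondaryIndexes", "ProvisionedThroughput", "StreamSpecification",
    "SSESpecification",
}

# DynamoDB Local ignores credentials, but boto3 still needs something to sign with.
client = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:8000",
    region_name="us-west-2",
    aws_access_key_id="local",
    aws_secret_access_key="local",
)

for table_dir in sorted(os.listdir(DUMP_DIR)):
    schema_path = os.path.join(DUMP_DIR, table_dir, "schema.json")
    if not os.path.isfile(schema_path):
        continue
    with open(schema_path) as f:
        schema = json.load(f)["Table"]  # unwrap the top-level "Table" object
    # Drops TableArn, TableStatus, TableId, ItemCount, CreationDateTime,
    # TableSizeBytes, BillingModeSummary, and anything else CreateTable rejects.
    params = {k: v for k, v in schema.items() if k in ALLOWED_KEYS}
    # ProvisionedThroughput also carries read-only counters like NumberOfDecreasesToday.
    params["ProvisionedThroughput"] = {
        k: params["ProvisionedThroughput"][k]
        for k in ("ReadCapacityUnits", "WriteCapacityUnits")
    }
    # Note: dumped GSI/LSI definitions may contain read-only fields of their own
    # (IndexArn, IndexStatus, IndexSizeBytes, ItemCount, throughput counters)
    # that would need the same kind of cleanup.
    client.create_table(**params)
    print("created", params["TableName"])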
In an existing project I need to parse some JSON into a dataset using TRESTResponseDataSetAdapter, but I can't get nested fields to work.
As a simplified example, let's say the data is structured like this:
[
  {
    "category": {
      "name": "Animals",
      "display": true
    },
    "label": "Horse"
  },
  {
    "category": {
      "name": "Animals",
      "display": true
    },
    "label": "Elephant"
  },
  ...and so on...
]
The dataset has fields with the following field names: label, category.name and category.display.
Only the label gets successfully retrieved from the JSON, but the nested properties in the category JSON object do not. The TRESTResponseDataSetAdapter has NestedElements = true and NestedElementsDepth = 2 (I also tried 0 and 1).
I'm pretty sure this has worked before upgrading to RAD Studio 10.4, but I'm not 100% sure if it started failing before upgrading to 10.4 or as a result of the upgrade.
I can't really find any good info on how to use nested fields, but I seem to remember reading somewhere that you're supposed to separate the path with dots. Any ideas why it is not working?
I want to take the data from here: https://raw.githubusercontent.com/usnistgov/oscal-content/master/examples/ssp/json/ssp-example.json
which I've pulled into a MySQL database table called "ssp_models", into a JSON column called 'json_data'. I need to add a new 'name' and 'type' entry to the 'parties' node, with a new uuid in the same format as the example.
So in my MySQL table "ssp_models" I have this entry, noting that I should be able to write the data by referencing "66c2a1c8-5830-48bd-8fdd-55a1c3a52888" as the record to modify.
All the examples I've seen online seem to force me to read the entire JSON out into a variable, make the addition, and then cram it back into the json_data column, which seems costly, especially with large JSON datasets.
Isn't there a simple way I can say
"INSERT INTO ssp_models JSON_INSERT <somehow burrow down to 'system-security-plan'.metadata.parties (name, type) VALUES ('Raytheon', 'organization') WHERE uuid = '66c2a1c8-5830-48bd-8fdd-55a1c3a52888'
I was looking at this other stackoverflow example for inserting into JSON:
How to create and insert a JSON object using MySQL queries?
However, that's basically useful when you are starting from scratch, vs. needing to add JSON data to data that already exists.
You may want to read https://dev.mysql.com/doc/refman/8.0/en/json-function-reference.html and explore each of the functions, and try them out one by one, if you're going to continue working with JSON data in MySQL.
I was able to do what you describe this way:
update ssp_models set json_data = json_array_append(
    json_data,
    '$."system-security-plan".metadata.parties',
    json_object('name', 'Bingo', 'type', 'farmer')
)
where uuid = '66c2a1c8-5830-48bd-8fdd-55a1c3a52888';
Then I checked the data:
mysql> select uuid, json_pretty(json_data) from ssp_models\G
*************************** 1. row ***************************
uuid: 66c2a1c8-5830-48bd-8fdd-55a1c3a52888
json_pretty(json_data): {
  "system-security-plan": {
    "uuid": "66c2a1c8-5830-48bd-8fdd-55a1c3a52888",
    "metadata": {
      "roles": [
        {
          "id": "legal-officer",
          "title": "Legal Officer"
        }
      ],
      "title": "Enterprise Logging and Auditing System Security Plan",
      "parties": [
        {
          "name": "Enterprise Asset Owners",
          "type": "organization",
          "uuid": "3b2a5599-cc37-403f-ae36-5708fa804b27"
        },
        {
          "name": "Enterprise Asset Administrators",
          "type": "organization",
          "uuid": "833ac398-5c9a-4e6b-acba-2a9c11399da0"
        },
        {
          "name": "Bingo",
          "type": "farmer"
        }
      ]
    }
  }
}
I started with data like yours, but for this test, I truncated everything after the parties array.
Background: I work for a company that basically sells passes. Every order placed by a customer contains N passes.
Issue: I have JSON event-transaction files coming into an S3 bucket on a daily basis from DocumentDB (MongoDB). Each JSON file is associated with the relevant type of event (insert, modify or delete) for a document key (which is an order in my case). The example below illustrates an "insert" type of event that came through to the S3 bucket:
{
  "_id": {
    "_data": "11111111111111"
  },
  "operationType": "insert",
  "clusterTime": {
    "$timestamp": {
      "t": 11111111,
      "i": 1
    }
  },
  "ns": {
    "db": "abc",
    "coll": "abc"
  },
  "documentKey": {
    "_id": {
      "$uuid": "abcabcabcabcabcabc"
    }
  },
  "fullDocument": {
    "_id": {
      "$uuid": "abcabcabcabcabcabc"
    },
    "orderNumber": "1234567",
    "externalOrderId": "12345678",
    "orderDateTime": "2020-09-11T08:06:26Z[UTC]",
    "attraction": "abc",
    "entryDate": {
      "$date": 2020-09-13
    },
    "entryTime": {
      "$date": 04000000
    },
    "requestId": "abc",
    "ticketUrl": "abc",
    "tickets": [
      {
        "passId": "1111111",
        "externalTicketId": "1234567"
      },
      {
        "passId": "222222222",
        "externalTicketId": "122442492"
      }
    ],
    "_class": "abc"
  }
}
As seen above, every JSON file might contain N passes, and every pass is, in turn, associated with an external ticket id, which is a separate field. I want to use Pentaho Kettle to read these JSON files and load the data into the DW. I am aware of the JSON Input step and the Row Normalizer step, which could transpose the "PassID 1", "PassID 2", "PassID 3"..."PassID N" columns into one "Pass" column, and I would have to apply similar logic to the "External ticket id" column. The problem with that approach is that it is quite static: I need to "tell" Pentaho in advance, in the JSON Input step, how many passes are coming. But what if tomorrow I have an order with 10 different passes? How can I do this dynamically to ensure the job will not break?
If you want a tabular output like
TicketUrl   Pass            ExternalTicketID
---------   -------------   ----------------
abc         PassID1Value1   ExTicketIDvalue1
abc         PassID1Value2   ExTicketIDvalue2
abc         PassID1Value3   ExTicketIDvalue3
and want the incoming values to be handled dynamically based on the values in the JSON input file, then you can download this transformation: Updated Link
I found that everything works dynamically in the JSON Input step.
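In other words, the JSON Input step lets you use JSONPath wildcards (something like $.fullDocument.tickets[*].passId, if I recall the Path syntax correctly), so it emits one row per element of the tickets array instead of needing a fixed set of PassID columns. As a rough illustration of that flattening outside Kettle, here is a plain Python sketch (the file name is made up):
# Plain-Python illustration (not a Kettle step) of the row explosion the
# JSON Input step performs with wildcard paths: one output row per ticket,
# however many tickets an order has.
import json

with open("event.json") as f:      # hypothetical file holding one event document
    event = json.load(f)

doc = event["fullDocument"]
for ticket in doc["tickets"]:
    row = {
        "ticketUrl": doc["ticketUrl"],
        "passId": ticket["passId"],
        "externalTicketId": ticket["externalTicketId"],
    }
    print(row)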
I'm having trouble using the AWS CLI to delete Route 53 records. I have a list of hundreds of domains and each one needs both 'A' records deleted. I wanted to do this using the CLI to save time, but I can't get the functionality working.
For example, let's say I have the following domain and I want to delete both 'A' records:
I'm using boto3 here, but it's the same API as the AWS CLI that I can't get working (https://docs.aws.amazon.com/cli/latest/reference/route53/change-resource-record-sets.html). My issue is somewhere in the JSON payload for this API call:
HostedZoneId='ABC123DEF456',
ChangeBatch={
    'Comment': 'deleteing A records for domains',
    'Changes': [
        {
            'Action': 'DELETE',
            'ResourceRecordSet': {
                'Name': 'example.com',
                'Type': 'A',
                'Region': 'us-east-1',
                'ResourceRecords': [
                    {
                        "Value": "1.2.3.4"
                    }
                ],
                'AliasTarget': {
                    'HostedZoneId': 'ABC123DEF456',
                    'DNSName': 'example.com',
                    'EvaluateTargetHealth': False
                }
            }
        }
    ]
}
The error I am getting is:
InvalidInput: An error occurred (InvalidInput) when calling the ChangeResourceRecordSets operation: Invalid request: Expected exactly one of [AliasTarget, all of [TTL, and ResourceRecords], or TrafficPolicyInstanceId], but found more than one in Change with [Action=DELETE, Name=example.com, Type=A, SetIdentifier=null]
I think there is some confusion between a simple record of type A and a simple alias record of type A. Namely, a simple alias record should not have ResourceRecords.
To check how they are described in your case, you can use the following command:
aws route53 list-resource-record-sets --hosted-zone-id <your-zone-id>
The output of the above command should be helpful in constructing your DELETE.
Below are examples of outputs from my route53:
simple record
{
    "Name": "<simple-a.example.com.>",
    "Type": "A",
    "TTL": 300,
    "ResourceRecords": [
        {
            "Value": "1.2.3.4"
        }
    ]
}
simple record with alias
{
    "Name": "<simple-alias.example.com.>",
    "Type": "A",
    "AliasTarget": {
        "HostedZoneId": "Z06990762X86XLR2ZGTK4",
        "DNSName": "<example>.",
        "EvaluateTargetHealth": true
    }
},
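If your records turn out to be plain (non-alias) A records, a minimal boto3 sketch of the DELETE might look like the following. The TTL and value here are assumptions; for DELETE they must match exactly what list-resource-record-sets returns, and AliasTarget and Region are left out (Region only applies to latency-based routing records):
# Minimal sketch: delete a plain A record with TTL + ResourceRecords only.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="ABC123DEF456",  # placeholder zone id from the question
    ChangeBatch={
        "Comment": "deleting A records for domains",
        "Changes": [
            {
                "Action": "DELETE",
                "ResourceRecordSet": {
                    "Name": "example.com.",
                    "Type": "A",
                    "TTL": 300,  # assumed; use the TTL shown by list-resource-record-sets
                    "ResourceRecords": [{"Value": "1.2.3.4"}],
                },
            }
        ],
    },
)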
I know this is complex but please bear with me..........
I am building a RESTful application. I have a specific requirement: there is a column named handle in my database, and I need to store values like (1,2,3,4) in that column.
I am using Spring MVC and Hibernate, with Eclipse as my IDE, and I am also using the Postman client. I read about the Hibernate Spatial data types and decided to go with that; however, I am not able to store the data in the database.
I have been sending a JSON request containing the data to my application through the Postman client in order to save that data to the database, but it is not working. What is the problem?
This is my JSON:
{
    "operations": [
        {
            "operationType": 0,
            "objectType": 2,
            "data": {
                "idNum": 212632034672,
                "data": {
                    "radius": 80.11865321153431,
                    "strokeWidth": 5,
                    "stroke": "green"
                },
                "handle": {
                    "x": 223,
                    "y": 102
                },
                "parentId": 1
            }
        }
    ],
    "projectGuid": "ff8081814521f6ce014521f7756f0000"
}
I know the problem lies in the handle part, but I have not been able to solve it.
The following is part of my model:
@Column(name = "handle", columnDefinition = "Geometry")
@Type(type = "org.hibernate.spatial.GeometryType")
private Point handle;
P.S. I am using org.hibernatespatial.mysql.MySQLSpatialInnoDBDialect in my jpaContext.xml file, and I already have hibernate-spatial-4.0.jar on my classpath.