I'm having trouble using the AWS CLI to delete Route 53 records. I have a list of hundreds of domains and each one needs both 'A' records deleted. I wanted to do this using the CLI to save time, but I can't get the functionality working.
For example, let's say I have the following domain and I want to delete both 'A' records:
I'm using boto3 here, but it's the same AWS CLI API that I can't get working (https://docs.aws.amazon.com/cli/latest/reference/route53/change-resource-record-sets.html). My issue is somewhere in the JSON change batch for this API call:
import boto3

client = boto3.client('route53')

response = client.change_resource_record_sets(
    HostedZoneId='ABC123DEF456',
    ChangeBatch={
        'Comment': 'deleting A records for domains',
        'Changes': [
            {
                'Action': 'DELETE',
                'ResourceRecordSet': {
                    'Name': 'example.com',
                    'Type': 'A',
                    'Region': 'us-east-1',
                    'ResourceRecords': [
                        {
                            'Value': '1.2.3.4'
                        }
                    ],
                    'AliasTarget': {
                        'HostedZoneId': 'ABC123DEF456',
                        'DNSName': 'example.com',
                        'EvaluateTargetHealth': False
                    }
                }
            }
        ]
    }
)
The error I am getting is:
InvalidInput: An error occurred (InvalidInput) when calling the ChangeResourceRecordSets operation: Invalid request: Expected exactly one of [AliasTarget, all of [TTL, and ResourceRecords], or TrafficPolicyInstanceId], but found more than one in Change with [Action=DELETE, Name=example.com, Type=A, SetIdentifier=null]
I think there is some confusion between a simple record of type A and a simple alias record of type A. Namely, a simple alias record should not have ResourceRecords.
To check how they are described in your case, you can use the following command:
aws route53 list-resource-record-sets --hosted-zone-id <your-zone-id>
The output of the above command should be helpful in constructing your DELETE.
Below are examples of outputs from my route53:
simple record
{
"Name": "<simple-a.example.com.>",
"Type": "A",
"TTL": 300,
"ResourceRecords": [
{
"Value": "1.2.3.4"
}
]
}
simple record with alias
{
"Name": "<simple-alias.example.com.>",
"Type": "A",
"AliasTarget": {
"HostedZoneId": "Z06990762X86XLR2ZGTK4",
"DNSName": "<example>.",
"EvaluateTargetHealth": true
}
}
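Putting that together: to DELETE an alias A record you send only AliasTarget (no TTL, ResourceRecords or Region), and to DELETE a plain A record you send TTL plus ResourceRecords and no AliasTarget. A rough boto3 sketch (the zone ID is a placeholder; it reuses whatever list-resource-record-sets reports so the DELETE matches the existing record exactly):
import boto3

client = boto3.client('route53')

def delete_a_record(zone_id, record):
    # echo back exactly what list_resource_record_sets returned:
    # a DELETE must match the existing record set exactly
    rrset = {'Name': record['Name'], 'Type': record['Type']}
    if 'AliasTarget' in record:
        # alias A record: AliasTarget only, no TTL/ResourceRecords
        rrset['AliasTarget'] = record['AliasTarget']
    else:
        # plain A record: TTL + ResourceRecords, no AliasTarget
        rrset['TTL'] = record['TTL']
        rrset['ResourceRecords'] = record['ResourceRecords']
    client.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={'Changes': [{'Action': 'DELETE', 'ResourceRecordSet': rrset}]}
    )

zone_id = 'ABC123DEF456'  # placeholder hosted zone id
paginator = client.get_paginator('list_resource_record_sets')
for page in paginator.paginate(HostedZoneId=zone_id):
    for record in page['ResourceRecordSets']:
        if record['Type'] == 'A':
            delete_a_record(zone_id, record)
Because the DELETE has to match the existing record set exactly, reusing the listed values is the safest way to build the change batch.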
Background: I work for a company that basically sells passes. Every order placed by a customer will contain N passes.
Issue: I have these JSON event-transaction files coming into an S3 bucket on a daily basis from DocumentDB (MongoDB). Each JSON file is associated with the relevant type of event (insert, modify or delete) for every document key (which is an order in my case). The example below illustrates an "insert" type of event that came through to the S3 bucket:
{
"_id": {
"_data": "11111111111111"
},
"operationType": "insert",
"clusterTime": {
"$timestamp": {
"t": 11111111,
"i": 1
}
},
"ns": {
"db": "abc",
"coll": "abc"
},
"documentKey": {
"_id": {
"$uuid": "abcabcabcabcabcabc"
}
},
"fullDocument": {
"_id": {
"$uuid": "abcabcabcabcabcabc"
},
"orderNumber": "1234567",
"externalOrderId": "12345678",
"orderDateTime": "2020-09-11T08:06:26Z[UTC]",
"attraction": "abc",
"entryDate": {
"$date": 2020-09-13
},
"entryTime": {
"$date": 04000000
},
"requestId": "abc",
"ticketUrl": "abc",
"tickets": [
{
"passId": "1111111",
"externalTicketId": "1234567"
},
{
"passId": "222222222",
"externalTicketId": "122442492"
}
],
"_class": "abc"
}
}
As we see above, every JSON file may contain N passes, and every pass is, in turn, associated with an external ticket id, which is a separate column (as seen above). I want to use Pentaho Kettle to read these JSON files and load the data into the DW. I am aware of the Json input step and the Row Normalizer step, which could transpose the "PassID 1", "PassID 2", "PassID 3"..."PassID N" columns into one "Pass" column, and I would have to apply similar logic to the "External ticket id" column. The problem with that approach is that it is quite static: I need to tell Pentaho in advance, in the Json input step, how many passes are coming. But what if tomorrow I have an order with 10 different passes? How can I do this dynamically to ensure the job will not break?
If you want a tabular output like
TicketUrl   Pass            ExternalTicketID
----------  --------------  ----------------
abc         PassID1Value1   ExTicketIDvalue1
abc         PassID1Value2   ExTicketIDvalue2
abc         PassID1Value3   ExTicketIDvalue3
and want the incoming values to be dynamic based on the JSON input file, then you can download this transformation: Updated Link
I found that everything works dynamically with the Json input step.
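Pentaho aside, the flattening you describe is easiest to reason about if you drive it from the tickets array rather than from a fixed number of PassID columns. A minimal Python sketch of the target output, purely as an illustration (field names come from the sample JSON above; the file path is made up):
import json

# illustrative path; in practice this would be each event file landing in the S3 bucket
with open('event.json') as f:
    event = json.load(f)

doc = event['fullDocument']
rows = [
    {
        'orderNumber': doc['orderNumber'],
        'ticketUrl': doc['ticketUrl'],
        'passId': ticket['passId'],
        'externalTicketId': ticket['externalTicketId'],
    }
    for ticket in doc['tickets']  # one output row per pass, however many there are
]
print(rows)
The same idea applies in the Json input step: point the field paths at the array itself (something like $.fullDocument.tickets[*].passId and $.fullDocument.tickets[*].externalTicketId), so the step emits one row per array element no matter how many passes an order has.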
I want to send different JSONs to an endpoint:
{{URL_API}}/products/{sku}
I need to update several pieces of information related to different products, so I need to specify the product within the endpoint, e.g.:
If you access this particular endpoint: {{URL_API}}/products/ you will get all the products, but I need to specify the product that I want to update:
{{URL_API}}/products/99RE345GT
Take a look at this; I want to send a JSON like this:
{
"sku": "99RE345GT",
"price": "56665.0000",
"status": 1,
"group_prices": [
{
"group": "CLASS A",
"price": 145198.794
},
{
"group": "CLASS B",
"price": 145198.794
},
{
"group": "CLASS C",
"price": 145198.794
}
]
}
AND another one like this (both JSONs share the same structure BUT with different information):
{
"sku": "98PA345GT",
"price": "17534.0000",
"status": 1,
"group_prices": [
{
"group": "CLASS A",
"price": 145198.794
},
{
"group": "CLASS B",
"price": 145198.794
},
{
"group": "CLASS C",
"price": 145198.794
}
]
}
How can I do that? I have already generated one JSON per product, more than 200 in total.
So, I have to update 200 products, and I generated one JSON for every product, if that makes sense.
Following my example, I would need to edit (somehow) the endpoint for every product and send the corresponding JSON, i.e.:
since the first JSON has the SKU 99RE345GT, it should perform an HTTP PUT against this endpoint:
{{URL_API}}/products/99RE345GT
Then, since the second JSON has the SKU 98PA345GT, it should perform an HTTP PUT against this endpoint:
{{URL_API}}/products/98PA345GT
I have never done something like this before. I read something about CSV + Postman Runner but I did not understand the approach.
EDIT
I was working on a file (an Excel file) and I did this:
So now I have all the different JSONs, one for every product.
EDIT #2: It fails when it validates the Request_URL.
I did this:
1) I created a new collection
2) I put this Request_url: {{URL_API}}/products/{{sku}}
3) I saved the changes and then went to the Collection Runner
4) After clicking the Run button, I got this error message:
Invalid URL:
Have you tried adding those data sets to a CSV?
https://learning.postman.com/docs/postman/collection-runs/working-with-data-files/
If you have 2 column headers in a CSV file, one named sku and the other named requestBody, you can add that variable value to the request body of the PUT request instead of the hard-coded JSON.
sku,requestBody
99RE345GT, {JSON Payload}
98PA345GT, {...}
Add a couple of values under those headings to start with, until you prove that it works in the Collection Runner.
Once you're happy, add the rest into the file. You may need to do some parsing of the JSON in the Pre-request Script but it should work.
Alternatively, use this template in the PUT request body and create a CSV with the same column headings as the values in the {{...}} syntax. The values in the data file will resolve to the values in the request body.
{
"sku": "{{sku}}",
"price": "{{price}}",
"status": {{status}},
"group_prices": [
{
"group": "{{groupA}}",
"price": {{groupAPrice}}
},
{
"group": "{{groupB}}",
"price": {{groupBPrice}}
},
{
"group": "{{groupC}}",
"price": {{groupCPrice}}
}
]
}
The CSV might look like this:
sku,price,status,groupA,groupAPrice,...
99RE345GT,1234,1,Group A, 555
98PA345GT,1235,1,Group A, 666
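Since you already have one JSON per product, you could also generate that CSV from those files instead of filling it in by hand. A small Python sketch, purely illustrative (the products/ folder and the three fixed group_prices entries are assumptions matching the template above):
import csv
import glob
import json

rows = []
for path in glob.glob('products/*.json'):  # assumed folder holding the ~200 generated JSONs
    with open(path) as f:
        product = json.load(f)
    groups = product['group_prices']
    rows.append({
        'sku': product['sku'],
        'price': product['price'],
        'status': product['status'],
        'groupA': groups[0]['group'], 'groupAPrice': groups[0]['price'],
        'groupB': groups[1]['group'], 'groupBPrice': groups[1]['price'],
        'groupC': groups[2]['group'], 'groupCPrice': groups[2]['price'],
    })

with open('products.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
With {{URL_API}}/products/{{sku}} as the request URL and the template above as the body, the Collection Runner sends one PUT per CSV row.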
I am reverse engineering an app that sends queries to
SOMESERVERNAME.analysis.windows.net/public/reports/querydata via an HTTP POST of a JSON-structured query.
Some initial lines of a sample query are at the end of this message.
I can't find any documentation on this anywhere. I don't know if this is some secret API or what. I ultimately would like to just ignore the aggregations altogether and just dump the raw data, which seems to sit in some flat-file type container on the back-end, but without some API documentation I'm stuck with just re-running the super basic handful of queries I've been able to intercept.
Note: this app is an embedded analytics page created with PowerBI, but the only REST API I can find for PowerBI has nothing to do with querying, but just basic object management.
Thanks!
{
"version": "1.0.0",
"queries": [
{
"Query": {
"Commands": [
{
"SemanticQueryDataShapeCommand": {
"Query": {
"Version": 2,
"From": [
{
"Name": "s",
"Entity": "Sheet1"
}
],
"Select": [
{
"Aggregation": {
"Expression": {
"Column": {
"Expression": {
"SourceRef": {
"Source": "s"
}
},
"Property": "Total"
}
},
"Function": 0
},
"Name": "Sum(Sheet1.Total)"
}
],
"Where": [
{
"Condition": {
"In": {
"Expressions": [
{
"Column": {
"Expression": {
"SourceRef": {
"Source": "s"
}
},
"Property": "Year"
}
}
],
"Values": [
[
{
"Literal": {
"Value": "'2018'"
}
}
]
]
}
}
},
............
I have built a client that scrapes data off a specific Power BI report using the same API, but probably you'll be able to adapt it to your use case. Maybe we can even abstract the code into a more generalized Power BI client!
Having tinkered with the API for two days, I realised that there are many ways the data can be formatted:
"nested"/multidimensional data can be unflattened, flattened by 1 degree, etc.
a primary "table" of a result dataset (in data.PH) can reference others (in data.SH)
The basics are as follows:
A dataset is structured like a multidimensional table, with cells containing values.
In a set of cells, the first always has a field S that contains the schema of itself and all subsequent cells.
The schema maps a field of each cell's object to a selection from your query, e.g. the G0 field to the queried column age.
My client seems to work only with a specific type of query (SemanticQueryDataShapeCommand), a specific number of dimensions and a specific column marked as primary (via Binding.Primary). But maybe that helps! https://github.com/derhuerst/fetch-bvg-occupancy/blob/1ebb864b1ff7130f9d2f0ab031c6d78bcabdd633/lib/parse-dataset.js
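This is not a documented API, so anything below is an assumption based on intercepted traffic and the description above. A rough Python sketch of re-sending a captured query and inspecting the result (the URL, headers and the data.PH/S/G0 layout are assumptions, not guarantees):
import json
import requests

# assumed: the captured endpoint; copy the auth headers/cookies (e.g. any
# resource key) verbatim from the intercepted browser request
URL = 'https://SOMESERVERNAME.analysis.windows.net/public/reports/querydata'
HEADERS = {}  # fill in from the intercepted request

with open('query.json') as f:  # the captured JSON body, e.g. the sample query above
    body = json.load(f)

resp = requests.post(URL, headers=HEADERS, json=body)
resp.raise_for_status()
result = resp.json()

# Per the description above: the primary table of a result sits under a
# data.PH-like key, cells carry fields such as G0/G1/..., and the first cell's
# S field is the schema mapping those fields to the columns selected in the
# query. Dump the raw result and walk it from there.
print(json.dumps(result, indent=2))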
The only documented way to use this API is through the ADOMD.NET or OleDb provider.
If you want to send a DAX/MDX query and retrieve data programmatically, there's a sample of how to front-end the service with a simple REST API here.
I have routes.json and db.json
Route
"/api/*/_search?*=:searchstring": "/$1/?$2_like=:searchstring",
"/api/*": "/$1"
DB.json
{
"cats": {
"cats": []
},
"bats": [],
"recordList": {
"records": [
{ "id": 1, "name": "abc" },
{ "id": 2, "name": "def" },
{ "id": 3, "name": "ghi" }
]
}
}
Fetching the record list works absolutely fine with the above configuration.
I need to understand how to mock the search filter call below:
http://localhost:3001/api/_search?name=abc
Updated the routes to:
{
"/api/*": "/$1",
"/api/_search?name_like": "/$1"
}
Following this link: https://github.com/typicode/json-server/issues/654#issuecomment-339098881
But the request is not hitting the route I defined. What am I doing wrong? Am I missing something here? The search term is dynamic, so the value passed should come from a variable, but in that comment it is static. Kindly assist if anyone has had a similar issue and resolved it.
If 'abc' is searched, it should return
{
"records": [{ "id": 1, "name": "abc" }]
}
You need to write your search route like this:
{
"/api/records/_search?name=:searchstring": "/records/?name_like=:searchstring"
}
Or even better, you can parametrize with * to $1 replacement, thus you will be able to search for any parameter in query, and in any dataset, records or other:
{
"/api/*/_search?*=:searchstring": "/$1/?$2_like=:searchstring",
"/api/*": "/$1"
}
Afterwards, your request to http://localhost:3001/api/records/_search?name=ab will return:
[
{
"id": 1,
"name": "abc"
}
]
Additional docs on routing.
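Note that the routes file only takes effect if json-server is started with it; assuming the classic json-server v0.x CLI (where the --routes flag is available), something like:
json-server --watch db.json --routes routes.json --port 3001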
I want to create DynamoDB tables on localhost. I have downloaded my remote DynamoDB tables using this script:
https://github.com/bchew/dynamodump
I got this script from this answer:
How to export an existing dynamo table schema to json?
That gave me a local backup of all the tables on my machine. Now I want to create those tables in my local DynamoDB instance, so I am loading each table schema into the local DB using this command:
sudo aws dynamodb create-table --cli-input-json file:///home/evbooth/Desktop/dynamo/table/dynamodump/dump/admin/schema.json --endpoint-url http://localhost:8000
But I am getting an error like this.
Parameter validation failed:
Missing required parameter in input: "AttributeDefinitions"
Missing required parameter in input: "TableName"
Missing required parameter in input: "KeySchema"
Missing required parameter in input: "ProvisionedThroughput"
Unknown parameter in input: "Table", must be one of: AttributeDefinitions, TableName, KeySchema, LocalSecondaryIndexes, GlobalSecondaryIndexes, ProvisionedThroughput, StreamSpecification, SSESpecification
The downloaded json file is like this.
{
"Table": {
"TableArn": "arn:aws:dynamodb:us-west-2:xxxx:table/admin",
"AttributeDefinitions": [
{
"AttributeName": "userid",
"AttributeType": "S"
}
],
"ProvisionedThroughput": {
"NumberOfDecreasesToday": 0,
"WriteCapacityUnits": 1,
"ReadCapacityUnits": 1
},
"TableSizeBytes": 0,
"TableName": "admin",
"TableStatus": "ACTIVE",
"TableId": "fd21aaab-52fe-4f86-aba6-1cc9a7b17417",
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "userid"
}
],
"ItemCount": 0,
"CreationDateTime": 1403367027.739
}
}
How can I fix this? I'm getting pretty frustrated with AWS, and I don't have much experience with DynamoDB either.
@wolfson, thank you for your suggestion. After working on it for a while, removing the following from the schema let me create the table anyway.
I removed:
1) the outer "Table": { ... } wrapper and "TableArn": "arn:aws:dynamodb:us-west-2:xxxx:table/admin"
2) "NumberOfDecreasesToday": 0
3) "ItemCount": 0 and "CreationDateTime": 1403367027.739
4) "TableSizeBytes": 0
5) "TableStatus": "ACTIVE" and "TableId": "fd21aaab-52fe-4f86-aba6-1cc9a7b17417"
The resulting JSON is below, but I had to do this by hand for every table and then run the create-table operation once per table.
{
"AttributeDefinitions": [
{
"AttributeName": "userid",
"AttributeType": "S"
}
],
"ProvisionedThroughput": {
"WriteCapacityUnits": 1,
"ReadCapacityUnits": 1
},
"TableName": "admin",
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "userid"
}
]
}
Thank you anyway.
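If you have to do this for many tables, it is easy to script instead of editing by hand. A hedged Python sketch (the dump/*/schema.json layout is an assumption based on the dynamodump output path above; the whitelist comes straight from the validation error):
import glob
import json

# keys create-table actually accepts, per the validation error above
ALLOWED = {
    'AttributeDefinitions', 'TableName', 'KeySchema', 'LocalSecondaryIndexes',
    'GlobalSecondaryIndexes', 'ProvisionedThroughput', 'StreamSpecification',
    'SSESpecification',
}

for path in glob.glob('dump/*/schema.json'):   # assumed layout of the dynamodump output
    with open(path) as f:
        table = json.load(f)['Table']          # drop the outer "Table" wrapper
    cleaned = {k: v for k, v in table.items() if k in ALLOWED}
    # ProvisionedThroughput from describe-table carries read-only counters too
    # (GSIs, if any, would need the same treatment)
    cleaned['ProvisionedThroughput'] = {
        k: table['ProvisionedThroughput'][k]
        for k in ('ReadCapacityUnits', 'WriteCapacityUnits')
    }
    with open(path.replace('schema.json', 'create.json'), 'w') as f:
        json.dump(cleaned, f, indent=2)
Each create.json can then be passed to aws dynamodb create-table --cli-input-json file://... --endpoint-url http://localhost:8000 exactly as before.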
I was able to create the table locally in DynamoDB by removing this completely:
"BillingModeSummary": {
"BillingMode": "PROVISIONED",
"LastUpdateToPayPerRequestDateTime": 0
}
Once the table was created, I looked at the table's meta information; strangely, it contains this:
"BillingModeSummary": {
"BillingMode": "PROVISIONED",
"LastUpdateToPayPerRequestDateTime": "1970-01-01T00:00:00.000Z"
}
I went back and tried to create the table using the format shown above; however, it failed. Bottom line: just remove "BillingModeSummary" and voilà, the table gets created! :-)