AWS Price List API in Golang - JSON

I'm trying to use the AWS Price List API to get the price of a particular instance type using GetProducts, but it's been really frustrating to even extract the price from the response given by GetProductsOutput. Everything is an aws.JSONValue, and when I try to index into the aws.JSONValue I run into a bunch of map[string]interface{} issues that I have to continuously type-assert my way through, so the expression ends up really long and ugly, and definitely not reproducible.
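For illustration, since an aws.JSONValue is just a map[string]interface{}, the expression I end up with looks roughly like this (a sketch, not my exact code; sku and rateCode stand in for the dynamic keys you would have to look up or range over):

price := out.PriceList[0]["terms"].(map[string]interface{})["OnDemand"].(map[string]interface{})[sku].(map[string]interface{})["priceDimensions"].(map[string]interface{})[rateCode].(map[string]interface{})["pricePerUnit"].(map[string]interface{})["USD"].(string)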
Online, there seems to be no way to get the price from this output because of the map[string]interface{} that the aws.JSONValue unwraps to, and I can't even index into the particular values.
Has anyone had any success getting the price from the GetProducts API call in the AWS Price List service using Golang? If so, can you please show me, because I've been playing around with it and I feel like it shouldn't be this complicated.
Is there any way I can parse the aws.JSONValue via a marshalling function of some sort?
Simply put, I want something like this (but in Golang, not python): Use boto3 to get current price for given EC2 instance type
Thanks!
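For what it's worth, the direction I'm hoping for is re-marshalling each PriceList entry back to JSON bytes and unmarshalling it into a small typed struct. Something like this untested sketch, assuming aws-sdk-go v1 (which is where aws.JSONValue comes from); the filter values and the struct fields are just my guesses at what's needed:

package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/pricing"
)

// priceItem mirrors only the parts of the price list document I care about.
type priceItem struct {
	Terms struct {
		OnDemand map[string]struct {
			PriceDimensions map[string]struct {
				Unit         string            `json:"unit"`
				PricePerUnit map[string]string `json:"pricePerUnit"`
			} `json:"priceDimensions"`
		} `json:"OnDemand"`
	} `json:"terms"`
}

func main() {
	// The Price List API is only served from a couple of regions, e.g. us-east-1.
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := pricing.New(sess)

	out, err := svc.GetProducts(&pricing.GetProductsInput{
		ServiceCode: aws.String("AmazonEC2"),
		Filters: []*pricing.Filter{
			{Type: aws.String("TERM_MATCH"), Field: aws.String("instanceType"), Value: aws.String("t3.micro")},
			{Type: aws.String("TERM_MATCH"), Field: aws.String("location"), Value: aws.String("US East (N. Virginia)")},
			{Type: aws.String("TERM_MATCH"), Field: aws.String("operatingSystem"), Value: aws.String("Linux")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, entry := range out.PriceList {
		// aws.JSONValue is a map[string]interface{}; round-trip it through
		// encoding/json so we get typed fields instead of chains of type assertions.
		raw, err := json.Marshal(entry)
		if err != nil {
			log.Fatal(err)
		}
		var item priceItem
		if err := json.Unmarshal(raw, &item); err != nil {
			log.Fatal(err)
		}
		for _, term := range item.Terms.OnDemand {
			for _, dim := range term.PriceDimensions {
				fmt.Printf("%s USD per %s\n", dim.PricePerUnit["USD"], dim.Unit)
			}
		}
	}
}

If that re-marshal round trip is the idiomatic way to deal with aws.JSONValue, that would already be good enough for me.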

Related

Get data from array in JSON API object in Cypress

Relative newbie to Cypress and JSON data. I have an API online that I can access. The API has data similar to this:
{"record":[{"account":"acount_1","team":"Test 1","req_id":12345},{"account":"acount_2","team":"Test 2","req_id":23456}],"metadata":{"id":"abcde","private":false,"createdAt":"2022-12-21T00:00:00.000Z"}}
I am attempting to find a way to get the number of records returned by the API, as well as the first team name.
The closest I have come to getting any kind of data is by using something like this:
cy.get('#testing').then((data) => {
  for (let index in data) {
    cy.log(data[index])
  }
})
However, all that does is show me what is in the API, not the data in the array itself. I have attempted dozens of different options, none of which has worked. I hope someone can please help me!
Assuming your intercept was waited on with the alias and the data is nested as you say in the response, you can access the response JSON data:
cy.get('@testing')
  // get the records array from the intercepted response and check its length
  .its('response.body.record')
  .should('have.length', 2)
  // get the first team name
  .its('0.team')
  .should('eq', 'Test 1')
Here is a working example.

Is Athena a viable/sensible choice for infrequently searching *unstructured* JSON?

I'm recording request and response headers and bodies for all traffic to our API, and from our API to 3rd-party services, into S3 as many tiny objects.
I want to be able to query this data infrequently. For example (pseudo-code):
select $.cars[0].color from "objects" where object_path in (....);
Other info:
Many "objects" in S3 won't have a valid path to $.cars[0].color (it's just one example).
I hope to not use Glue.
Cost is important - this is something that will be queried very infrequently. Configuring some ElasticSearch/similar solution is terribly out of budget for the use case.
I hope to not define my own set of schemas (this is simply not feasible).
Athena says it can search unstructured JSON. I'm having trouble creating a proof-of-concept to show this is true.
Is Athena right for me? Am I missing a better solution?
I think Athena will work for your case.
Athena handles missing properties in JSON objects. For example, if you define the cars column as array<struct<color:string>>:
the property can be missing ⇒ SELECT cars … will be NULL
it can be an empty list ⇒ SELECT cars[1] … (Athena arrays start at 1) will result in an error, but element_at(cars, 1) and try(cars[1]) will return NULL
the object may be missing the color property ⇒ SELECT cars[1].color … will be NULL
For completely free-form JSON, define the column as a string and use Athena's JSON functions (e.g. json_extract, json_extract_scalar) to query it.
Glue is not necessary. Create the table manually, from your application, or with CloudFormation, and configure it to use partition projection and you will not have to think about using Glue crawlers at all.
Athena doesn't cost anything when you aren't running queries, which is key if you will only query infrequently. Make sure to compress your data, and partition it in a way that supports your query patterns (e.g. by date or month if you will most often query recent data).
Not sure what you mean by having to define your "own set of schemas", so perhaps you can clarify that part?

How can I pass the document sub-collections along with the document JSON map in Flutter using Firebase?

I'm trying to get all the documents in the "businesses" collection from Firebase together with their sub-collections.
The problem is that when I query Firebase like this:
Stream<List<Business>> getBusinesses() {
  return _db.collection('businesses').snapshots().map((snapshot) => snapshot
      .docs
      .map((document) => Business.fromJson(document.data()))
      .toList());
}
the sub-collections aren't included in the JSON object document.data(), so in my code the Business object isn't fully populated: the fields that should come from the sub-collections (Appointments, ServiceProviders, Services) are empty.
Hopefully I've explained the problem well. My question is: how can I fetch all of a document's data, including its sub-collections, and parse it into a Business object?
Thanks.
What seems to be "the problem" is actually the point of Firestore: keeping documents shallow so you only fetch the data you need. It's then up to you to structure your data the way it will likely be used in the future.
Mind you, subcollections are not fields.
What you can do here is add a query that fetches the documents in the subcollections (Appointments, ServiceProviders, Services) for each business, using the business document ID in the query.
It would typically look something like:
_db.collection('businesses').document(documentId).collection('Appointments')
Mind you, this is potentially too much data. It might be better to fetch the docs in those subcollections only when needed/requested by the user.

AWS Lambda output format - JSON

I'm trying to format the output from a Lambda function as JSON. The Lambda function queries my Amazon Aurora RDS instance and returns an array of rows in the following format:
[[name,age,town,postcode]]
which gives an example output of:
[["James", 23, "Maidenhead","sl72qw"]]
I understand that mapping templates are designed to translate one format to another, but I don't understand how I can take the output above and map it into JSON using these mapping templates.
I have checked the documentation and it only covers converting one JSON format to another.
Without seeing the code you're specifically using it's difficult to give a definitively correct answer, but I suspect what you're after is returning the data from Python as a dictionary and then converting that to JSON.
It looks like this thread contains the relevant details on how to do that.
More specifically, use the DictCursor:
cursor = connection.cursor(pymysql.cursors.DictCursor)

Accessing horizontal JSON data in Javascript

For currency exchange rates, Yahoo has a very rapid alternative to YQL in: http://finance.yahoo.com/webservice/v1/symbols/allcurrencies/quote?format=json
This is great, except that the JSON format is a little unfamiliar to me. It seems I have to do something like
parsedjson.list.resources[3].resource.fields.price
to pull out the USD/VND exchange rate. Instead of accessing by index, I would like to search by the name field, i.e. some function that, when I input "USD/VND", returns the price just as above, but without me having to look up and hardcode the index.
Is this possible?
Thanks in advance for helping a newbie.