How to read/interpret a JSON file to define a MySQL schema

I have been tasked with mapping a JSON file to a MySQL database and I am trying to define the appropriate schema. A sample of the JSON file is below:
"configurationItems":[
{
"ARN":"",
"availabilityZone":"",
"awsAccountId":"hidden from sight ",
"awsRegion":"",
"configuration":{
"amiLaunchIndex":,
"architecture":"",
"blockDeviceMappings":[
{
"deviceName":"",
"ebs":{
"attachTime":"",
"deleteOnTermination":true,
"status":"attached",
"volumeId":""
}
}
],
"clientToken":"",
"ebsOptimized":,
"hypervisor":"",
"imageId":"",
"instanceId":"",
"instanceType":"",
"kernelId":"aki-",
"keyName":"",
"launchTime":"",
"monitoring":{
"state":""
},
"networkInterfaces":[
{ etc
Am I right in thinking that the way to do this is essentially that wherever there is a bracket/child element there would be a new table? E.g. configurationItems down to awsRegion would be in one table, then configuration through architecture, followed by blockDeviceMappings, etc. If that is the case, then where would clientToken through launchTime belong? Many thanks in advance, folks.

That certainly is a way to do it.
It gives a more parent-child relational approach to the setup.
E.g.
"blockDeviceMappings":[
{
"deviceName":"/dev/sda1",
"ebs":{
"attachTime":"2014-01-06T10:37:40.000Z",
"deleteOnTermination":true,
"status":"attached",
"volumeId":""
}
}
]
You could probably have more than one device, so it would be a one-to-many relation.
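One possible MySQL sketch of that parent-child layout, just for illustration (table and column names are my own, taken from the sample; adjust types as needed). Note that under this approach the scalar fields clientToken through launchTime are plain columns of configuration, so they would sit in the configurations table rather than in a child table of their own:
-- One row per configuration item (top-level scalars).
CREATE TABLE configuration_items (
  id                INT AUTO_INCREMENT PRIMARY KEY,
  arn               VARCHAR(255),
  availability_zone VARCHAR(64),
  aws_account_id    VARCHAR(32),
  aws_region        VARCHAR(32)
);

-- One row per configuration, linked to its parent configuration item.
CREATE TABLE configurations (
  id                    INT AUTO_INCREMENT PRIMARY KEY,
  configuration_item_id INT NOT NULL,
  ami_launch_index      INT,
  architecture          VARCHAR(32),
  client_token          VARCHAR(64),
  ebs_optimized         BOOLEAN,
  hypervisor            VARCHAR(32),
  image_id              VARCHAR(32),
  instance_id           VARCHAR(32),
  instance_type         VARCHAR(32),
  kernel_id             VARCHAR(32),
  key_name              VARCHAR(255),
  launch_time           DATETIME,
  FOREIGN KEY (configuration_item_id) REFERENCES configuration_items(id)
);

-- One-to-many: a configuration can have several block device mappings.
CREATE TABLE block_device_mappings (
  id                        INT AUTO_INCREMENT PRIMARY KEY,
  configuration_id          INT NOT NULL,
  device_name               VARCHAR(64),
  ebs_attach_time           DATETIME,
  ebs_delete_on_termination BOOLEAN,
  ebs_status                VARCHAR(32),
  ebs_volume_id             VARCHAR(32),
  FOREIGN KEY (configuration_id) REFERENCES configurations(id)
);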

How can I create an EMR cluster resource that uses spot instances without hardcoding the bid_price variable?

I'm using Terraform to create an AWS EMR cluster that uses spot instances as core instances.
I know I can use the bid_price variable within the core_instance_group block on an aws_emr_cluster resource, but I don't want to hardcode prices as I'd have to change them manually every time the instance type changes.
Using the AWS Web UI, I'm able to choose the "Use on-demand as max price" option. That's exactly what I'm trying to reproduce, but in Terraform.
Right now I am trying to solve my problem using the aws_pricing_product data source. You can see what I have so far below:
data "aws_pricing_product" "m4_large_price" {
service_code = "AmazonEC2"
filters {
field = "instanceType"
value = "m4.large"
}
filters {
field = "operatingSystem"
value = "Linux"
}
filters {
field = "tenancy"
value = "Shared"
}
filters {
field = "usagetype"
value = "BoxUsage:m4.large"
}
filters {
field = "preInstalledSw"
value = "NA"
}
filters {
field = "location"
value = "US East (N. Virginia)"
}
}
data.aws_pricing_product.m4_large_price.result returns a JSON document containing the details of a single product. The actual on-demand price is buried somewhere inside this JSON, but I don't know how I can get at it.
I know I might be able to solve this by using an external data source and piping the output of an AWS CLI call to something like jq, e.g.:
aws pricing get-products --filters "Type=TERM_MATCH,Field=sku,Value=8VCNEHQMSCQS4P39" --format-version aws_v1 --service-code AmazonEC2 | jq [........]
But I'd like to know if there is any way to accomplish what I'm trying to do with pure Terraform. Thanks in advance!
Unfortunately, the aws_pricing_product data source docs don't expand on how it should be used effectively, but the discussion in the pull request that added it adds some insight.
In Terraform 0.12 you should be able to use the jsondecode function to get at what you want; the following is given as an example in the linked pull request:
data "aws_pricing_product" "example" {
service_code = "AmazonRedshift"
filters = [
{
field = "instanceType"
value = "ds1.xlarge"
},
{
field = "location"
value = "US East (N. Virginia)"
},
]
}
# Potential Terraform 0.12 syntax - may change during implementation
# Also, not sure about the exact attribute reference architecture myself :)
output "example" {
values = jsondecode(data.json_query.example.value).terms.OnDemand.*.priceDimensions.*.pricePerUnit.USD
}
If you are stuck on Terraform <0.12 you might struggle to do this natively in Terraform other than the external data source approach you've already suggested.
@cfelipe: put that ${jsondecode(data.aws_pricing_product.m4_large_price.result).terms.OnDemand.*.priceDimensions.*.pricePerUnit.USD} in a locals block.
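In Terraform 0.12+ that locals approach could look roughly like this (a sketch only; the path follows the pricing JSON structure referenced above, and values() is used because terms.OnDemand and priceDimensions are maps keyed by SKU/term codes rather than lists):
locals {
  # All on-demand USD prices found in the pricing result (usually just one).
  m4_large_usd_prices = flatten([
    for term in values(jsondecode(data.aws_pricing_product.m4_large_price.result).terms.OnDemand) : [
      for dim in values(term.priceDimensions) : dim.pricePerUnit.USD
    ]
  ])
}

# e.g. bid_price = local.m4_large_usd_prices[0] in the core_instance_group block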

How to add children data through a user exit class in Maximo inbound integration?

I have an object structure with 3 objects. location > lochierarchy > customtable.
In the original source XML (erData), I get only details for the location object. I have derived the information for lochierarchy and the customtable.
If I have at least one column value for lochierarchy and customtable, I am able to use the following code to fill in the derived values.
XML:
<LOCATIONS>
<location>1000</location>
<siteid>xyg</siteid>
<LOCHIERARCHY>
<SYSTEMID>abdc</SYSTEMID>
<PARENT></PARENT>
<CUSTOMTABLE>
<DEPT>MECHANICAL</DEPT>
<OWNER></OWNER>
</CUSTOMTABLE>
</LOCHIERARCHY>
</LOCATIONS>
List locHierarchyList = irData.getChildrenData("LOCHIERARCHY");
int locHrSize = locHierarchyList.size();
for (int i = 0; i < locHrSize; i++)
{
    irData.setAsCurrent(locHierarchyList, i);
    irData.setCurrentData("PARENT", "xyyyyg");
    List customTableList = irData.getChildrenData("CUSTOMTABLE");
    int custSize = customTableList.size();
    for (int j = 0; j < custSize; j++)
    {
        // set values
    }
}
But now I am getting the source XML with only the location data, as below, and I'm trying to build the children data myself. I am missing something here.
Incoming XML
<LOCATIONS>
<location>1000</location>
<siteid>xyg</siteid>
</LOCATIONS>
My Code
irData.createChildrenData("LOCHIERARCHY");
irData.setAsCurrent();
irData.setCurrentData("SYSTEMID", SYSTEM);
irData.setCurrentData("PARENT", parentLoc);
irData.createChildrenData("CUSTOMTABLE");
irData.setAsCurrent();
but this is not working. Can anyone help me out?
Got it, I just had to use another overload of createChildrenData:
irData.createChildrenData("LOCHIERARCHY",true);
This one did the trick. It creates the child set and makes it current.
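Putting it together, a minimal sketch of the corrected sequence (SYSTEM and parentLoc come from the question; dept and owner stand in for the derived custom-table values):
// createChildrenData(name, true) creates the child set and makes it current.
irData.createChildrenData("LOCHIERARCHY", true);
irData.setCurrentData("SYSTEMID", SYSTEM);
irData.setCurrentData("PARENT", parentLoc);

// Nested child of the current LOCHIERARCHY record.
irData.createChildrenData("CUSTOMTABLE", true);
irData.setCurrentData("DEPT", dept);
irData.setCurrentData("OWNER", owner);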

How to parse IP within fields in ELK

I am trying to automate/ease a procedure to review firewall rules within ELK (ElasticSearch, Logstash, Kibana).
I have some data obtained from a CSV, which is structured like this:
Source;Destination;Service;Action;Comment
10.0.0.0/8 172.16.0.0/16 192.168.0.0/24 23.2.20.6;10.0.0.1 10.0.0.2 10.0.0.3;udp:53
tcp:53;accept;No.10: ID: INC0000000001
My objective is to import this data into ELK by parsing each field (for subnet and/or IP address) and, if possible, adding a sequential field (IP_Source1, IP_Destination2, etc.) containing each one.
Is this possible, to your knowledge? How?
Thanks for any hint you may be able to provide
You can create a Logstash configuration with a file input. Then use a csv filter first. The csv filter should look like this:
filter {
  csv {
    columns => ["source", "destination", "service", "action", "comment"]
    separator => ";"
  }
}
The next filter will need to be a ruby filter.
filter {
  ruby {
    code => "
      arr = event.get('source').split(' ')
      arr.each.with_index(1) do |a, index|
        event.set('ip_source' + index.to_s, a)
      end
    "
  }
}
Finally, output to Elasticsearch.
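A minimal sketch of that output section (the host and index name are assumptions):
output {
  elasticsearch {
    hosts => ["localhost:9200"]   # assumed local Elasticsearch endpoint
    index => "firewall-rules"     # hypothetical index name
  }
}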
I have not tested the code, but I am hoping this should give you good hints.

Format for storing json in Amazon DynamoDB

I've got a JSON file that looks like this:
{
"alliance":{
"name_part_1":[
"Ab",
"Aen",
"Zancl"
],
"name_part_2":[
"aca",
"acia",
"ythrae",
"ytos"
],
"name_part_3":[
"Alliance",
"Bond"
]
}
}
I want to store it in dynamoDB.
The thing is that I want a generator that would take random elements from fields like name_part_1, name_part_2 and others (the number of name_part_x fields is unlimited and the overall number of items in each part might be several hundred) and join them to create a complete word, like:
name_part_1[1] + name_part_2[10] + name_part[3]
My question is: what format should I use to do that effectively? Or should NoSQL not be used for that? Should I refactor the JSON into something like:
{
"name": "alliance",
"parts": [ "name_part_1", "name_part_2", "name_part_3" ],
"values": [
{ "name_part_1" : [ "Ab ... ] }, { "name_part_2": [ "aca" ... ] }
]
}
This is a perfect use case for DynamoDB.
You can structure it like this:
NameParts (table)
namepart (partition key)
range (sort key)
Example item: namepart: name_part_1, range: 0, value: "Ab"
This way each name_part will have its own range of items and it scales; you can extend it to thousands or even millions of entries.
You can do a BatchGetItem from the SDK of your choice and join those values, as sketched below.
REST API reference:
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html
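For illustration, a hedged sketch in Python with boto3, following the NameParts layout above (the per-part item counts are assumed to be known to the application):
import random
import boto3

# Assumes the NameParts table above: "namepart" partition key, "range" numeric sort key.
dynamodb = boto3.resource("dynamodb")

keys = [
    {"namepart": "name_part_1", "range": random.randint(0, 99)},  # 100 items per part assumed
    {"namepart": "name_part_2", "range": random.randint(0, 99)},
    {"namepart": "name_part_3", "range": random.randint(0, 99)},
]

response = dynamodb.batch_get_item(RequestItems={"NameParts": {"Keys": keys}})

# Join the returned parts into one generated word.
parts = {item["namepart"]: item["value"] for item in response["Responses"]["NameParts"]}
print(parts["name_part_1"] + parts["name_part_2"] + parts["name_part_3"])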
Hope it helps.
You can just put the whole document as it is in DynamoDB and then use document path to access the elements you want.
Document Paths
In an expression, you use a document path to tell DynamoDB where to find an attribute. For a top-level attribute, the document path is simply the attribute name. For a nested attribute, you construct the document path using dereference operators.
The following are some examples of document paths. (Refer to the item shown in Specifying Item Attributes.)
A top-level scalar attribute: ProductDescription
A top-level list attribute (this will return the entire list, not just some of the elements): RelatedItems
The third element from the RelatedItems list (remember that list elements are zero-based): RelatedItems[2]
The front-view picture of the product: Pictures.FrontView
All of the five-star reviews: ProductReviews.FiveStar
The first of the five-star reviews: ProductReviews.FiveStar[0]
Note: The maximum depth for a document path is 32. Therefore, the number of dereference operators in a path cannot exceed this limit.
Note that each document requires a unique Partition Key.
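For example, a hedged boto3 sketch of reading one element back with a document path, assuming the JSON above is stored as a single item in a hypothetical Names table whose partition key is pk = "names":
import boto3

table = boto3.resource("dynamodb").Table("Names")    # hypothetical table

resp = table.get_item(
    Key={"pk": "names"},                             # hypothetical partition key
    ProjectionExpression="alliance.name_part_1[0]",  # document path into the stored JSON
)
print(resp.get("Item"))   # e.g. {'alliance': {'name_part_1': ['Ab']}}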

reactivemongo - merging two BSONDocuments

I am looking for the most efficient and easiest way to merge two BSON documents. In case of collisions I already have handlers: for example, if both documents include an Integer I will sum them, the same for a String, and if it's an array I will add the elements of the other one, etc.
However, due to BSONDocument's immutable nature it is almost impossible to do anything with it. What would be the easiest and fastest way to do the merging?
I need to merge the following for example:
{
"2013": {
"09": {
value: 23
}
}
}
{
"2013": {
"09": {
value: 13
},
"08": {
value: 1
}
}
}
And the final document would be:
{
"2013": {
"09": {
value: 36
},
"08": {
value: 1
}
}
}
There is a BSONDocument.add method, however it doesn't check uniqueness, which means I would end up with two "2013" root keys in the resulting document, etc.
Thank you!
If I understand your inquiry, you are looking to aggregate field data via a composite id. MongoDB has a fairly slick aggregation framework. Part of that framework is the $group pipeline stage. This will allow you to specify an _id to group by, which could be defined as a field or a document as in your example, as well as perform aggregation using accumulators such as $sum.
Here is a link to the manual for the operators you will probably need to use.
http://docs.mongodb.org/manual/reference/operator/aggregation/group/
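For example, a generic sketch of that stage in the mongo shell (collection and field names are hypothetical and assume the values are stored one measurement per document rather than nested as in the question):
db.measurements.aggregate([
  { $group: {
      _id: { year: "$year", month: "$month" },   // composite id to group on
      value: { $sum: "$value" }                   // accumulate with $sum
  } }
])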
Also, please remove the "merge" tag from your original inquiry to reduce confusion. Many MongoDB drivers include a Merge function as part of the BsonDocument representation as a way to consolidate two BsonDocuments into a single BsonDocument linearly or via element overwrites and it has no relation to aggregation.
Hope this helps.
ndh