Converting text to python list - json

I have this text-
text = """<?xml version="1.0"?><mainmodule><module1><heading>Lesson 01: Design Authorization</heading><subheading><item1>Learning Objectives</item1><item2>Choosing an Authorization Approach</item2><item3>Access Management Solution</item3></subheading></module1><module2><heading>Lesson 02: Design a Solution for Logging and Monitoring</heading><subheading><item1>Learning Objectives</item1><item2>Monitoring Tools</item2><item3>Azure Monitor Health and Availability Monitoring</item3><item4>Initiating Automated Response Using Action Groups</item4><item5>Configure and Manage Alerts</item5><item6>Demo Azure Logging and Monitoring</item6><item7>Demo Azure Alerts</item7><item8>Recap</item8></subheading></module2><module3><heading>Lesson 03: Design for High Availability</heading><subheading><item1>Learning Objectives</item1><item2>Architecture Best Practices for Reliability into Categories</item2><item3>Solution for Recovery in Different Regions</item3><item4>Solution for Azure Backup Management</item4><item5>Solution for Data Archiving and Retention</item5></subheading></module3></mainmodule>"""
I would like the output in this format-
output = [{
'heading': 'Lesson 01: Design Authorization',
'subheading': [{'subheading': 'Learning Objectives'},
{'subheading': 'Choosing an Authorization Approach'},
{'subheading': 'Access Management Solution'}]},
{
'heading': 'Lesson 02: Design a Solution for Logging and Monitoring',
'subheading': [{'subheading': 'Learning Objectives'},
{'subheading': 'Monitoring Tools'},
{'subheading': 'Azure Monitor Health and Availability Monitoring'},
{'subheading': 'Initiating Automated Response Using Action Groups'},
{'subheading': 'Configure and Manage Alerts'},
{'subheading': 'Demo Azure Logging and Monitoring'},
{'subheading': 'Demo Azure Alerts'},
{'subheading': 'Recap'}]},
{
'heading': 'Lesson 03: Design for High Availability',
'subheading': [{'subheading': 'Learning Objectives'},
{'subheading': 'Architecture Best Practices for Reliability into Categories'},
{'subheading': 'Solution for Recovery in Different Regions'},
{'subheading': 'Solution for Azure Backup Management'},
{'subheading': 'Solution for Data Archiving and Retention'}]}
]
The text under the <heading> tag needs to go under "heading" in the output, and the text under the "subheading" -> "item" tags needs to go under "subheading" for the respective "heading".
I am trying to solve this by creating a list of lists.
This is what I have so far, but I am stuck.
output = [
{
'heading':'',
'subheading':[
{
'subheading':''
}
]
}
]
import re
heading_list = re.findall(r'<heading>.+?</heading>', text)
subheading_list = re.findall(r'<item\d+?>.+?</item\d+?>', text)
no_of_items = 0
count_item = [[]]*len(heading_list)
for num, sub in enumerate(subheading_list):
    if '<item1>' in sub and num != 0:
        no_of_items += 1
        count_item[no_of_items].append(sub)
    else:
        count_item[no_of_items].append(sub)
I want to append the items into count_item's sub-lists, but somehow every item is getting appended to every list.
How can I solve this?

Solution using BeautifulSoup:
from bs4 import BeautifulSoup
with open("your_file.xml", "r") as f_in:
    soup = BeautifulSoup(f_in.read(), "xml")
out = []
for module in soup.find_all(lambda tag: tag.name.startswith("module")):
    out.append({"heading": module.find("heading").text, "subheading": []})
    for item in module.find_all(lambda tag: tag.name.startswith("item")):
        out[-1]["subheading"].append({"subheading": item.text})
print(out)
Prints:
[
{
"heading": "Lesson 01: Design Authorization",
"subheading": [
{"subheading": "Learning Objectives"},
{"subheading": "Choosing an Authorization Approach"},
{"subheading": "Access Management Solution"},
],
},
{
"heading": "Lesson 02: Design a Solution for Logging and Monitoring",
"subheading": [
{"subheading": "Learning Objectives"},
{"subheading": "Monitoring Tools"},
{"subheading": "Azure Monitor Health and Availability Monitoring"},
{"subheading": "Initiating Automated Response Using Action Groups"},
{"subheading": "Configure and Manage Alerts"},
{"subheading": "Demo Azure Logging and Monitoring"},
{"subheading": "Demo Azure Alerts"},
{"subheading": "Recap"},
],
},
{
"heading": "Lesson 03: Design for High Availability",
"subheading": [
{"subheading": "Learning Objectives"},
{
"subheading": "Architecture Best Practices for Reliability into Categories"
},
{"subheading": "Solution for Recovery in Different Regions"},
{"subheading": "Solution for Azure Backup Management"},
{"subheading": "Solution for Data Archiving and Retention"},
],
},
]
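Note that BeautifulSoup's "xml" mode is backed by the lxml parser, so both beautifulsoup4 and lxml need to be installed (e.g. pip install beautifulsoup4 lxml) for the snippet above to run.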

In your code, count_item = [[]]*len(heading_list) ends up creating len(heading_list) references to the same empty list. This means that whichever element of count_item you append to, you'll end up appending to the same list object.
Try something like count_item = [[] for _ in heading_list] instead.
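For completeness, here is a minimal sketch of the question's regex approach with only that change applied; the tag-stripping at the end is my own assumption about how you want to clean the matched strings before building the output:
import re
heading_list = re.findall(r'<heading>.+?</heading>', text)
subheading_list = re.findall(r'<item\d+?>.+?</item\d+?>', text)
count_item = [[] for _ in heading_list]   # independent lists, not N references to one list
no_of_items = 0
for num, sub in enumerate(subheading_list):
    if '<item1>' in sub and num != 0:
        no_of_items += 1
    count_item[no_of_items].append(sub)
# Strip the tags and build the requested structure (assumed clean-up step)
output = [
    {'heading': re.sub(r'</?heading>', '', head),
     'subheading': [{'subheading': re.sub(r'</?item\d+>', '', s)} for s in subs]}
    for head, subs in zip(heading_list, count_item)
]
print(output)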

Have you tried the library xmltodict? It is a neat library that converts XML data to a Python dictionary.
# !pip install xmltodict
import xmltodict
text = """<?xml version="1.0"?><mainmodule><module1><heading>Lesson 01: Design Authorization</heading><subheading><item1>Learning Objectives</item1><item2>Choosing an Authorization Approach</item2><item3>Access Management Solution</item3></subheading></module1><module2><heading>Lesson 02: Design a Solution for Logging and Monitoring</heading><subheading><item1>Learning Objectives</item1><item2>Monitoring Tools</item2><item3>Azure Monitor Health and Availability Monitoring</item3><item4>Initiating Automated Response Using Action Groups</item4><item5>Configure and Manage Alerts</item5><item6>Demo Azure Logging and Monitoring</item6><item7>Demo Azure Alerts</item7><item8>Recap</item8></subheading></module2><module3><heading>Lesson 03: Design for High Availability</heading><subheading><item1>Learning Objectives</item1><item2>Architecture Best Practices for Reliability into Categories</item2><item3>Solution for Recovery in Different Regions</item3><item4>Solution for Azure Backup Management</item4><item5>Solution for Data Archiving and Retention</item5></subheading></module3></mainmodule>"""
d = xmltodict.parse(text)
After that you can process the data
output = []
for module in d['mainmodule'].keys():
    dic = {}
    dic['heading'] = d['mainmodule'][module]['heading']
    lis = []
    for item in d['mainmodule'][module]['subheading']:
        lis.append({'subheading': d['mainmodule'][module]['subheading'][item]})
    dic['subheading'] = lis
    output.append(dic)
print(output)
Output:
[{'heading': 'Lesson 01: Design Authorization',
'subheading': [{'subheading': 'Learning Objectives'},
{'subheading': 'Choosing an Authorization Approach'},
{'subheading': 'Access Management Solution'}]},
{'heading': 'Lesson 02: Design a Solution for Logging and Monitoring',
'subheading': [{'subheading': 'Learning Objectives'},
{'subheading': 'Monitoring Tools'},
{'subheading': 'Azure Monitor Health and Availability Monitoring'},
{'subheading': 'Initiating Automated Response Using Action Groups'},
{'subheading': 'Configure and Manage Alerts'},
{'subheading': 'Demo Azure Logging and Monitoring'},
{'subheading': 'Demo Azure Alerts'},
{'subheading': 'Recap'}]},
{'heading': 'Lesson 03: Design for High Availability',
'subheading': [{'subheading': 'Learning Objectives'},
{'subheading': 'Architecture Best Practices for Reliability into Categories'},
{'subheading': 'Solution for Recovery in Different Regions'},
{'subheading': 'Solution for Azure Backup Management'},
{'subheading': 'Solution for Data Archiving and Retention'}]}]
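Since the question title also mentions JSON: once you have output as a list of dicts (from either answer), the standard library json module can serialize it, e.g.:
import json
json_text = json.dumps(output, indent=2)   # list of dicts -> JSON string
print(json_text)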

Related

packer build of Debian arm64 image fails with "no root file system is defined"

Problem
I need to build a qcow2 image of Debian (arm64) with Packer from the Debian (arm64) netinst ISO. The community examples for Packer are all amd64. I modified a community amd64 HCL example so that it starts a QEMU arm64 virtual machine, loads the preseed file, and enters the automated installation.
However, "no root file system is defined" appears at the "Partition disks" step. I checked the QEMU disk (qcow2), and partman-auto doesn't work. Here is the relevant code; please point out the problems and possible solutions.
Related code
packer.hcl
The .hcl file is also for x86, but after pointing it at the arm64 QEMU binary, the boot_command and qemuargs below let the automated installation start.
The modifications are as follows:
boot_command = [
  "<wait10>c<wait5><wait10>",
  "linux /install.a64/vmlinuz --quiet",
  " auto=true ",
  " url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/${var.preseed_file} ",
  " hostname=${var.vm_name} ", " domain=${var.domain} ", " interface=auto ",
  " ---",
  "<enter><wait>",
  "initrd /install.a64/initrd.gz",
  "<enter><wait>",
  "boot<enter><wait>"
]
qemuargs = [
  [ "-m", "8192" ],
  [ "-cpu", "max" ],
  [ "-smp", "4" ],
  [ "-M", "virt" ],
  # iso
  [ "-device", "nec-usb-xhci" ],
  [ "-device", "usb-storage,drive=install" ],
  [ "-drive", "file=/root/packer-build-master/source/debian/debian-11.2.0-arm64-netinst.iso,if=none,id=install,media=cdrom,readonly=on" ],
  # hd
  [ "-drive", "if=none,file=build/test/arm64/base-bullseye,id=hd,cache=writeback,discard=ignore,format=qcow2" ],
  [ "-device", "virtio-blk-device,drive=hd" ],
  # [ "-bios", "edk2-aarch64-code.fd" ],
  [ "-bios", "/usr/share/qemu-efi-aarch64/QEMU_EFI.fd" ],
  [ "-boot", "strict=off" ],
  [ "-monitor", "none" ]
]
The preseed file is taken directly from the x86 example, because the community examples have no arm64 variant. Another pitfall here: the community example uses a Packer variable inside the preseed file, but that feature has not been implemented yet, so my local preseed replaces the variable with its actual value.
preseed code
Results and error output
Connecting to the QEMU virtual machine over VNC to watch the disk-partitioning step, "no root file system is defined" then appears.
What I have tried
I looked for an arm64 partitioning example for the Debian preseed, but could not find one.
Official links:
https://www.debian.org/releases/stable/arm64/apbs04.en.html#preseed-partman (preseed partitioning section)
https://salsa.debian.org/installer-team/partman-auto (the repository has partitioning recipes for various architectures, but none for arm64)
What I want to achieve
Find an arm64 partitioning preseed for Debian so that the automated Packer installation can continue.
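No arm64-specific recipe seems to be published, but for reference this is the generic whole-disk partman-auto block from Debian's standard example preseed. Whether the guided "atomic" recipe also creates the EFI system partition that an arm64/UEFI install needs is something to verify, and /dev/vda is an assumption about the virtio disk name under QEMU:
d-i partman-auto/disk string /dev/vda
d-i partman-auto/method string regular
d-i partman-auto/choose_recipe select atomic
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true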

The connection to the server master.ctwifi.cn:8443 was refused - did you specify the right host or port?

Description
CentOS 7.4
I used Ansible to install OpenShift, and I must use HTTP_PROXY.
I have this problem: "stderr": "The connection to the server master.ctwifi.cn:8443 was refused - did you specify the right host or port?\n",
Nothing is listening on port 8443 on the master host, and only 5 containers are running there. I think the Kubernetes master didn't start, but I don't know why.
When I use the same configuration (apart from the proxy configuration) to install on a public cloud, there is no problem. So I think this problem is related to my proxy configuration, but I haven't found a solution.
ansible-playbook ~/openshift-ansible/playbooks/prerequisites.yml -vvv runs OK.
Problem
ansible-playbook ~/openshift-ansible/playbooks/deploy_cluster.yml -vvv
FAILED - RETRYING: Wait for control plane pods to appear (57 retries left).Result was: {
"attempts": 4,
"changed": false,
"invocation": {
"module_args": {
"all_namespaces": null,
"content": null,
"debug": false,
"delete_after": false,
"field_selector": null,
"files": null,
"force": false,
"kind": "pod",
"kubeconfig": "/etc/origin/master/admin.kubeconfig",
"name": "master-etcd-master.ctwifi.cn",
"namespace": "kube-system",
"selector": null,
"state": "list"
}
},
"msg": {
"cmd": "/usr/bin/oc get pod master-etcd-master.ctwifi.cn -o json -n kube-system",
"results": [
{}
],
"returncode": 1,
**"stderr": "The connection to the server master.ctwifi.cn:8443 was refused - did you specify the right host or port?\n",**
"stdout": ""
},
"retries": 61
Version
ansible 2.6.4
#git describe
openshift-ansible-3.10.51-1-16-g1a911c5
#rpm -q openshift-ansible
openshift-ansible-3.10.51-1.git.0.44a646c.el7.noarch
My hosts file (communication between all hosts is fine):
10.199.27.49 master.ctwifi.cn
10.199.27.48 node01.ctwifi.cn
10.199.27.46 node02.ctwifi.cn
10.199.27.28 infra-node1.ctwifi.cn
My /etc/sysconfig/docker file:
HTTP_PROXY='http://'username:password'#proxy.ctwifi.cn:8080'
HTTPS_PROXY='http://'username:password'#proxy.ctwifi.cn:8080' NO_PROXY=10.199.27.49,10.199.27.46,10.199.27.48,10.199.27.28,master.ctwifi.com,node01.ctwifi.cn,node02.ctwifi.cn,infra-node1.ctwifi.cn,.cluster.local,.svc,localhost,127.0.0.1,172.30.0.1
export HTTP_PROXY HTTPS_PROXY NO_PROXY
OPTIONS=' --selinux-enabled --signature-verification=False'
if [ -z "${DOCKER_CERT_PATH}" ]; then
DOCKER_CERT_PATH=/etc/docker
fi
My operating system and version: CentOS 7.4
My inventory file
#Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd
#Set variables common for all OSEv3 hosts
[OSEv3:vars]
#SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root
#If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true
openshift_deployment_type=origin
#uncomment the following to enable htpasswd authentication; defaults to AllowAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
#openshift_release=3.10
openshift_disable_check=memory_availability,docker_image_availability,package_availability,package_version
#host group for masters
[masters]
master.ctwifi.cn
#host group for etcd
[etcd]
master.ctwifi.cn
#host group for nodes, includes region info
[nodes]
master.ctwifi.cn openshift_node_group_name='node-config-master'
#node01.ctwifi.cn openshift_node_group_name='node-config-compute'
node02.ctwifi.cn openshift_node_group_name='node-config-compute'
infra-node1.ctwifi.cn openshift_node_group_name='node-config-infra'
The FAILED - RETRYING: Wait for control plane pods... message means there is a problem with the API server. The possible causes vary. I don't have enough points to comment, but I'll try to answer.
Sometimes OpenShift can't find the route to the pods; if that is the case,
add openshift_set_node_ip=true in [OSEv3:vars] and put openshift_ip=xxx.xxx.xxx.xxx on every node entry (e.g. osnode1 openshift_ip=xxx.xxx.xxx.xxx ...).
Sometimes there will be a problem if no users have been created, in which case you have to add
openshift_master_htpasswd_users={'admin': '$apr1$RbOvaj8r$LEqJqG6V/O/i7Pf...'}
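The value is an Apache htpasswd (apr1) hash; assuming the htpasswd tool is available (the httpd-tools package on CentOS), one way to generate a value to paste in is:
htpasswd -nb admin 'YourPasswordHere'
# prints something like  admin:$apr1$....  - use the part after "admin:" in the inventory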
Also, it is better to put the API server and etcd on different hosts.
Below is your inventory file, which I edited a bit; please take a look:
#Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd
#Set variables common for all OSEv3 hosts
[OSEv3:vars]
#SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root
#If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true
openshift_deployment_type=origin
openshift_set_node_ip=true
#uncomment the following to enable htpasswd authentication; defaults to AllowAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_htpasswd_users={'admin': '$apr1$RbOvaj8r$LEqJqG6V/O/i7Pf...'}
#openshift_release=3.10
openshift_check_min_host_memory_gb=8
openshift_check_min_host_disk_gb=10
openshift_disable_check=docker_image_availability
#host group for masters
[masters]
master.ctwifi.cn
#host group for etcd
[etcd]
node02.ctwifi.cn
#host group for nodes, includes region info
[nodes]
master.ctwifi.cn openshift_ip=10.10.0.231 openshift_schedulable=true openshift_node_group_name='node-config-master'
#node01.ctwifi.cn openshift_ip=10.10.0.232 openshift_schedulable=true openshift_node_group_name='node-config-compute'
node02.ctwifi.cn openshift_ip=10.10.0.233 openshift_schedulable=true openshift_node_group_name='node-config-compute'
infra-node1.ctwifi.cn openshift_ip=10.10.0.234 openshift_schedulable=true openshift_node_group_name='node-config-infra'
I'm using CentOS 7.5 and the release version of openshift-ansible from
https://github.com/openshift/openshift-ansible/archive/openshift-ansible-3.10.52-1.tar.gz. It is running without problems on my VM.
And this is my minimum inventory file, in case you want to compare it:
[OSEv3:children]
masters
nodes
etcd
[masters]
osnode1
[etcd]
osnode3
[nodes]
osnode1 openshift_ip=10.10.0.231 openshift_schedulable=true openshift_node_group_name='node-config-master'
osnode2 openshift_ip=10.10.0.232 openshift_schedulable=true openshift_node_group_name='node-config-infra'
osnode3 openshift_ip=10.10.0.233 openshift_schedulable=true openshift_node_group_name='node-config-compute'
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_release="3.10"
openshift_image_tag="v3.10"
openshift_set_node_ip=true
openshift_master_default_subdomain=apps.k8s.labs
openshift_docker_selinux_enabled=True
openshift_check_min_host_memory_gb=16
openshift_check_min_host_disk_gb=50
openshift_disable_check=docker_image_availability
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true','challenge':'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_htpasswd_users={'admin': '$apr1$RbOvaj8r$LEqJqG6V/O/i7Pfyyyyyy.', 'user': '$apr1$MfsFK97I$enQjqHCh2LL8w4EBwNrrrr'}
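As a follow-up, since the symptom is that nothing answers on 8443, it can also help to check directly on the master whether the API server ever came up. A few generic checks (the hostname is the one from the question; the rest is standard tooling, adapt as needed):
ss -tlnp | grep 8443                               # is anything listening on 8443?
curl -k https://master.ctwifi.cn:8443/healthz      # does the API server answer?
docker ps | grep -E 'api|etcd|controllers'         # are the control-plane containers running?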

Autoscaling Deployment with custom metrics on Openshift 1.5.0

Is there any possibility to autoscale a deployment with OpenShift Origin 1.5.0 (Kubernetes 1.5.2) using custom metrics?
The Kubernetes documentation states that autoscaling with custom metrics has been supported since version 1.2. That looks true, because the OpenShift horizontal pod autoscaler (HPA) does try to gather some metrics and calculate the desired value. But my configuration fails to make this work. Please help me find what I am doing wrong.
So, what happens:
I have set up metrics as recommended in the latest Origin docs (all steps passed): https://docs.openshift.org/latest/install_config/cluster_metrics.html;
I have an app, which is deployed with a Deployment object;
this app exposes custom metrics via an HTTP JSON endpoint;
the custom metrics are collected and stored - this is shown in the OpenShift Origin UI in the Metrics tab of the corresponding pod;
after I create the HPA, a warning about collecting custom metrics appears; it says something like 'Failed collecting custom metrics, did not recieve metrics for any ready pods';
I create the HPA with API version 1 and include the annotation alpha/target.custom-metrics.podautoscaler.kubernetes.io: '{"items":[{"name":"requests_count", "value": "10"}]}';
if I request the deployed Heapster app through the master proxy, I receive something like this:
{
"metadata": {},
"items": [
{
"metadata": {
"name": "resty-1722683747-kmbw0",
"namespace": "availability-demo",
"creationTimestamp": "2017-05-24T09:50:24Z"
},
"timestamp": "2017-05-24T09:50:00Z",
"window": "1m0s",
"containers": [
{
"name": "resty",
"usage": {
"cpu": "0",
"memory": "2372Ki"
}
}
]
}
]
}
as you can see, there are really no custom metrics there, and my custom metric is named requests_count.
What steps should I take to make custom-metrics autoscaling work?
Screenshot with custom metrics being collected and exposed via Openshift Console UI
UPDATE:
In the OpenShift master log the warning looks like this:
I0524 10:17:47.537985 1 panics.go:76] GET /apis/extensions/v1beta1/namespaces/availability-demo/deployments/resty/scale: (3.379724ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:hpa-controller] 10.105.8.81:33945]
I0524 10:17:47.543354 1 panics.go:76] GET /api/v1/proxy/namespaces/openshift-infra/services/https:heapster:/apis/metrics/v1alpha1/namespaces/availability-demo/pods?labelSelector=app%3Dresty: (4.830135ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:hpa-controller] 10.105.8.81:33945]
I0524 10:17:47.553255 1 panics.go:76] GET /api/v1/namespaces/availability-demo/pods?labelSelector=app%3Dresty: (8.864864ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:hpa-controller] 10.105.8.81:33945]
I0524 10:17:47.559909 1 panics.go:76] GET /api/v1/namespaces/availability-demo/pods?labelSelector=app%3Dresty: (5.725342ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:hpa-controller] 10.105.8.81:33945]
I0524 10:17:47.560977 1 panics.go:76] PATCH /api/v1/namespaces/availability-demo/events/resty.14c14bbf8b89534c: (6.385846ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:hpa-controller] 10.105.8.81:33945]
I0524 10:17:47.565418 1 panics.go:76] GET /api/v1/proxy/namespaces/openshift-infra/services/https:heapster:/api/v1/model/namespaces/availability-demo/pod-list/resty-1722683747-kmbw0/metrics/custom/requests_count?start=2017-05-24T10%3A12%3A47Z: (5.015336ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:hpa-controller] 10.105.8.81:33945]
I0524 10:17:47.569843 1 panics.go:76] GET /api/v1/namespaces/availability-demo/pods?labelSelector=app%3Dresty: (4.040029ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:hpa-controller] 10.105.8.81:33945]
I0524 10:17:47.575530 1 panics.go:76] PUT /apis/autoscaling/v1/namespaces/availability-demo/horizontalpodautoscalers/resty/status: (4.894835ms) 200 [[openshift/v1.5.2+43a9be4 (linux/amd64) kubernetes/43a9be4 system:serviceaccount:openshift-infra:hpa-controller] 10.105.8.81:33945]
I0524 10:17:47.575856 1 horizontal.go:438] Successfully updated status for resty
W0524 10:17:47.575890 1 horizontal.go:104] Failed to reconcile resty: failed to compute desired number of replicas based on Custom Metrics for Deployment/availability-demo/resty: failed to get custom metric value: did not recieve metrics for any ready pods
UPDATE: I found the request that the HPA issues to Heapster through the proxy to gather custom metrics. This request always returns an empty metrics array:
GET /api/v1/proxy/namespaces/openshift-infra/services/https:heapster:/api/v1/model/namespaces/availability-demo/pod-list/availability-example-1694583826-55hqh/metrics/custom/requests_count?start=2017-05-25T13%3A14%3A24Z HTTP/1.1
Host: kubernetes-master:8443
Authorization: Bearer hpa-agent-token
And it returns
{"items":[{"metrics":[],"latestTimestamp":"0001-01-01T00:00:00Z"}]}
UPDATE: It turns out that the HPA queries Heapster through the proxy, and Heapster in turn queries the Kubernetes "summary" API. So the question is: why does the Kubernetes "summary" API not answer with metrics for the above request, even though the metrics exist?
This might be a wild guess, but I had this issue myself on a self-made cluster. The two things I ran into were token issues, where the certificate of my HA master setup was not set up correctly, and another issue regarding my kube-dns. Not sure if this is applicable to OpenShift.

MalformedPolicyDocument error while creating an IAM Policy

I'm trying to create a managed policy with the AWS CLI:
POLICY='
{
"Version":"2012-10-17",
"Statement":
[{
"Effect":"Allow",
"Action":
[
"cloudformation:*"
],
"Resource":"*"
},
{
"Effect":"Deny",
"Action":
[
"cloudformation:UpdateStack",
"cloudformation:DeleteStack"
],
"Resource": "'${arn}'"
}]
}'
# Create policy if not already created
[ $(aws iam list-policies | grep -ce CloudFormation-policy-${StackName}) -eq 0 ] && (aws iam create-policy --policy-name CloudFormation-policy-${StackName} --policy-document "'${POLICY}'")
When I run the script I get this error:
An error occurred (MalformedPolicyDocument) when calling the CreatePolicy operation: Syntax errors in policy.
I can't figure out where the error is.
Any idea?
Each operating system has its own way of treating single-quote vs double-quote escaping, and as per the AWS CLI documentation:
When passing in large blocks of data, you might find it easier to save
the JSON to a file and reference it from the command line. JSON data
in a file is easier to read, edit, and share with others.
The quoting-strings approach might not be the best choice for passing JSON data; instead, use the loading-parameters-from-a-file approach.
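In your command the immediate cause of the syntax error appears to be the extra single quotes in --policy-document "'${POLICY}'", which wrap the JSON in literal ' characters. A minimal sketch of the file-based approach (the policy name and the ${arn}/${StackName} variables are the ones from your script; the file name is arbitrary):
cat > policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": ["cloudformation:*"], "Resource": "*" },
    { "Effect": "Deny",
      "Action": ["cloudformation:UpdateStack", "cloudformation:DeleteStack"],
      "Resource": "${arn}" }
  ]
}
EOF

aws iam create-policy \
    --policy-name CloudFormation-policy-${StackName} \
    --policy-document file://policy.json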

Create entity in a service using IDAS and ContextBroker

So I'm having some problems connecting virtual devices to the ContextBroker, and I think it's because of the Fiware-Service. I don't want to use the OpenIoT service (even though that didn't work for me either). I didn't manage to find any documentation about service creation, and maybe I'm creating it wrong.
I ran Python CreateService bus_auto 4jggokgpepnvsb2uv4s40d59ov and I'm not sure it returned 201. I updated the config.ini file to work with MY service, but when I send the observations it doesn't change the value of the entity in the ContextBroker.
I'm now running it in
My config.ini file:
[user]
# Please, configure here your username at FIWARE Cloud and a valid Oauth2.0 TOKEN for your user (you can use get_token.py to obtain a valid TOKEN).
username=
token=NULL
[contextbroker]
host=127.0.0.1
port=1026
OAuth=no
# Here you need to specify the ContextBroker database you are querying.
# Leave it blank if you want the general database or the IDAS service if you are looking for IoT devices connected by you.
fiware_service=bus_auto
[idas]
host=130.206.80.40
adminport=5371
ul20port=5371
OAuth=no
# Here you need to configure the IDAS service your devices will be sending data to.
# By default the OpenIoT service is provided.
fiware-service=bus_auto
fiware-service-path=/
apikey=4jggokgpepnvsb2uv4s40d59ov
[local]
#Choose here your System type. Examples: RaspberryPI, MACOSX, Linux, ...
host_type=CentOS
# Here please add a unique identifier for you. Suggestion: the 3 lower hexa bytes of your Ethernet MAC. E.g. 79:ed:af
# Also you may use your e-mail address.
host_id=db:00:ff
I'm using the python script GetEntity.py:
python2.7 GetEntity.py bus_auto_2
I also tried using a Python script that I created:
import json
import urllib
import urllib2
BASE_URL = 'http://127.0.0.1:1026'
QUERY_URL = BASE_URL+'/v1/queryContext'
HEADERS = {
    'Content-Type': 'application/json',
    'Accept': 'application/json'
}
QUERY_EXAMPLE = {
    "entities": [
        {
            "type": "bus_auto_2",
            "isPattern": "false",
            "id": "Room1"
        }
    ]
}

def post(url, data):
    """Send a POST request and return the decoded JSON response."""
    req = urllib2.Request(url, data, HEADERS)
    f = urllib2.urlopen(req)
    result = json.loads(f.read())
    f.close()
    return result

if __name__ == "__main__":
    # UPDATE_URL and UPDATE_EXAMPLE are defined elsewhere in my script (not shown here)
    print post(UPDATE_URL, json.dumps(UPDATE_EXAMPLE))
    print post(QUERY_URL, json.dumps(QUERY_EXAMPLE))
I see the service is correctly created, and I actually see one device defined within it.
I have even successfully sent an observation (t|23) for the bus_auto_2 device.
Later, I checked this entity in the ContextBroker: "thing:bus_auto_2", and I see the latest observation I sent.
Did you update the FIWARE service in the config.ini file in both the ContextBroker and IDAS sections?
Cheers,
Looking at your script, it seems you are not including the Fiware-Service header in your queryContext request. Thus, the query is resolved against the "default service" and not against the bus_auto service.
Changing the HEADERS map in the following way will probably solve the issue:
HEADERS = {
    'Content-Type': 'application/json',
    'Accept': 'application/json',
    'Fiware-Service': 'bus_auto'
}
EDIT: In addition to the above change, note that BASE_URL is pointing to a local Orion instance, not the one connected with IDAS (which runs on the same machine as IDAS). Thus, I think you also need to modify BASE_URL in the following way:
BASE_URL = 'http://130.206.80.40:1026'
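Putting the two suggested changes together, the relevant part of the script would look roughly like this (the service name and the IDAS/Orion host are the ones already used in the question's config.ini; whether a Fiware-ServicePath header is also needed depends on your setup, so treat this as a sketch):
BASE_URL = 'http://130.206.80.40:1026'
QUERY_URL = BASE_URL + '/v1/queryContext'
HEADERS = {
    'Content-Type': 'application/json',
    'Accept': 'application/json',
    'Fiware-Service': 'bus_auto'
}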