qemu-system-aarch64 can be used to emulate an AArch64 machine; the specific command is as follows:
qemu-system-aarch64 -M virt -cpu cortex-a53 ...(other options)
and we can use -M virt,dumpdtb=DTBFILE to get the internal device tree blob.
My question is: how can we get the PERIPHBASE of the virt virtual machine?
Can we do that from the device tree blob file using the dtc tool?
The dtc command would be:
dtc -I dtb -O dts virt.dtb > virt.dts
The node you are looking for should be /intc:
intc {
    phandle = <0x8001>;
    reg = <0x0 0x8000000 0x0 0x10000 0x0 0x8010000 0x0 0x10000>;
    compatible = "arm,cortex-a15-gic";
    ranges;
    #size-cells = <0x2>;
    #address-cells = <0x2>;
    interrupt-controller;
    #interrupt-cells = <0x3>;

    v2m {
        phandle = <0x8002>;
        reg = <0x0 0x8020000 0x0 0x1000>;
        msi-controller;
        compatible = "arm,gic-v2m-frame";
    };
};
A more straightforward option would be to use fdtget:
fdtget -t i -t x virt.dtb /intc reg
0 8000000 0 10000 0 8010000 0 10000
Since #address-cells and #size-cells are both 2, this reads as two address/size pairs: the GIC distributor at 0x8000000 (size 0x10000), followed by the CPU interface at 0x8010000 (size 0x10000).
I agree with Peter Maydell that the DTB should preferably be used at run-time for retrieving the addresses for the GIC CPU and distributor interfaces if you are running Linux in QEMU.
But the non-DTB approach is still easier to implement in an emulated bare-metal environment - in my humble opinion.
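For example, in a bare-metal guest the two banks listed in the reg property above can simply be hard-coded. A minimal sketch, assuming a GICv2 and its standard register offsets:

/* GICv2 bring-up for qemu -M virt (sketch; addresses taken from /intc reg) */
#include <stdint.h>

#define GICD_BASE 0x08000000UL /* distributor base (PERIPHBASE) */
#define GICC_BASE 0x08010000UL /* CPU interface base */

#define GICD_CTLR (*(volatile uint32_t *)(GICD_BASE + 0x000))
#define GICC_CTLR (*(volatile uint32_t *)(GICC_BASE + 0x000))
#define GICC_PMR  (*(volatile uint32_t *)(GICC_BASE + 0x004))

void gic_enable(void)
{
    GICD_CTLR = 1;    /* enable forwarding of group-0 interrupts */
    GICC_PMR  = 0xff; /* accept interrupts of any priority */
    GICC_CTLR = 1;    /* enable this CPU interface */
}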
Maybe this is what you want. https://github.com/qemu/qemu/blob/master/hw/arm/virt.c
static const MemMapEntry a15memmap[] = {
    /* Space up to 0x8000000 is reserved for a boot ROM */
    [VIRT_FLASH] =              {          0, 0x08000000 },
    [VIRT_CPUPERIPHS] =         { 0x08000000, 0x00020000 },
    /* GIC distributor and CPU interfaces sit inside the CPU peripheral space */
    [VIRT_GIC_DIST] =           { 0x08000000, 0x00010000 },
    [VIRT_GIC_CPU] =            { 0x08010000, 0x00010000 },
    [VIRT_GIC_V2M] =            { 0x08020000, 0x00001000 },
    /* The space in between here is reserved for GICv3 CPU/vCPU/HYP */
    [VIRT_GIC_ITS] =            { 0x08080000, 0x00020000 },
    /* This redistributor space allows up to 2*64kB*123 CPUs */
    [VIRT_GIC_REDIST] =         { 0x080A0000, 0x00F60000 },
    [VIRT_UART] =               { 0x09000000, 0x00001000 },
    [VIRT_RTC] =                { 0x09010000, 0x00001000 },
    [VIRT_FW_CFG] =             { 0x09020000, 0x00000018 },
    [VIRT_GPIO] =               { 0x09030000, 0x00001000 },
    [VIRT_SECURE_UART] =        { 0x09040000, 0x00001000 },
    [VIRT_SMMU] =               { 0x09050000, 0x00020000 },
    [VIRT_MMIO] =               { 0x0a000000, 0x00000200 },
    /* ...repeating for a total of NUM_VIRTIO_TRANSPORTS, each of that size */
    [VIRT_PLATFORM_BUS] =       { 0x0c000000, 0x02000000 },
    [VIRT_SECURE_MEM] =         { 0x0e000000, 0x01000000 },
    [VIRT_PCIE_MMIO] =          { 0x10000000, 0x2eff0000 },
    [VIRT_PCIE_PIO] =           { 0x3eff0000, 0x00010000 },
    [VIRT_PCIE_ECAM] =          { 0x3f000000, 0x01000000 },
    [VIRT_MEM] =                { 0x40000000, RAMLIMIT_BYTES },
    /* Additional 64 MB redist region (can contain up to 512 redistributors) */
    [VIRT_GIC_REDIST2] =        { 0x4000000000ULL, 0x4000000 },
    [VIRT_PCIE_ECAM_HIGH] =     { 0x4010000000ULL, 0x10000000 },
    /* Second PCIe window, 512GB wide at the 512GB boundary */
    [VIRT_PCIE_MMIO_HIGH] =     { 0x8000000000ULL, 0x8000000000ULL },
};
PERIPHBASE will be the address of the GIC distributor register bank in the device tree blob, i.e. 0x08000000 in the map above.
That said, I'm not sure why you want to know this information. Guest code for the 'virt' board should only hard code the base address of RAM, and should get all other information from the dtb at runtime. Some day in the future we may rearrange the virt memory map, and if you have hardcoded addresses from it then your guest will stop working...
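For guests written in C, that runtime lookup can be done with libfdt. A sketch, assuming the guest has a pointer to the DTB (on virt, QEMU places the blob at the base of RAM) and the /intc node path from the dts above:

#include <stdint.h>
#include <libfdt.h>

/* Return the GIC distributor base (first address cell pair of /intc reg),
 * or 0 on error. Assumes #address-cells = 2, as in the dts above. */
uint64_t gic_dist_base(const void *fdt)
{
    int node, len;
    const fdt32_t *reg;

    if (fdt_check_header(fdt) != 0)
        return 0;
    node = fdt_path_offset(fdt, "/intc");
    if (node < 0)
        return 0;
    reg = fdt_getprop(fdt, node, "reg", &len);
    if (reg == NULL || len < 2 * (int)sizeof(fdt32_t))
        return 0;
    return ((uint64_t)fdt32_to_cpu(reg[0]) << 32) | fdt32_to_cpu(reg[1]);
}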
I have been trying to convert the AWS public IP ranges into a format that can be used with the Terraform external data provider, so I can create a security group rule based on the AWS public CIDRs. The provider requires a single JSON object with this format:
{"string": "string"}
Here is a snippet of the public ranges JSON document:
{
  "syncToken": "1589917992",
  "createDate": "2020-05-19-19-53-12",
  "prefixes": [
    {
      "ip_prefix": "35.180.0.0/16",
      "region": "eu-west-3",
      "service": "AMAZON",
      "network_border_group": "eu-west-3"
    },
    {
      "ip_prefix": "52.94.76.0/22",
      "region": "us-west-2",
      "service": "AMAZON",
      "network_border_group": "us-west-2"
    },
    // ...
  ]
}
I can successfully extract the ranges I care about with [.prefixes[] | select(.region == "us-west-2") | .ip_prefix] | sort | unique, which gives me this:
[
  "100.20.0.0/14",
  "108.166.224.0/21",
  "108.166.240.0/21",
  "13.248.112.0/24",
  ...
]
I can't figure out how to convert this to an arbitrarily-keyed object with jq. In order to properly use the array object, I need to convert it to a dictionary, something like {"arbitrary-key": "100.20.0.0/14"}, so that I can use it in Terraform like this:
data "external" "amazon-ranges" {
program = [
"cat",
"${path.cwd}/aws-ranges.json"
]
}
resource "aws_default_security_group" "allow-mysql" {
vpc_id = aws_vpc.main.id
ingress {
description = "MySQL"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = [
values(data.external.amazon-ranges.result)
]
}
}
What is the most effective way to convert the AWS public IP ranges document into a single object with arbitrary keys?
The following script uses the .ip_prefix as the key, thus perhaps avoiding the need for the sort|unique. It yields:
{
  "35.180.0.0/16": "35.180.0.0/16",
  "52.94.76.0/22": "52.94.76.0/22"
}
Script
#!/bin/bash

function data {
  cat <<EOF
{
  "syncToken": "1589917992",
  "createDate": "2020-05-19-19-53-12",
  "prefixes": [
    {
      "ip_prefix": "35.180.0.0/16",
      "region": "eu-west-3",
      "service": "AMAZON",
      "network_border_group": "eu-west-3"
    },
    {
      "ip_prefix": "52.94.76.0/22",
      "region": "us-west-2",
      "service": "AMAZON",
      "network_border_group": "us-west-2"
    }
  ]
}
EOF
}

data | jq '
  .prefixes
  | map(select(.region | test("west"))
        | {(.ip_prefix): .ip_prefix} )
  | add '
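Against the live ranges document this collapses to a one-liner, using the exact region match from the question (the published URL of the document is assumed here):

# fetch the ranges and write the {"cidr": "cidr"} object consumed by the
# external data source above
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json |
  jq '.prefixes | map(select(.region == "us-west-2") | {(.ip_prefix): .ip_prefix}) | add' > aws-ranges.json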
There's a better option to get at the AWS IP ranges data in Terraform, which is to use the aws_ip_ranges data source, instead of trying to mangle things with the external data source and jq.
The example in the above linked documentation shows a similar, but also slightly more complex, thing to what you're trying to do here:
data "aws_ip_ranges" "european_ec2" {
regions = ["eu-west-1", "eu-central-1"]
services = ["ec2"]
}
resource "aws_security_group" "from_europe" {
name = "from_europe"
ingress {
from_port = "443"
to_port = "443"
protocol = "tcp"
cidr_blocks = data.aws_ip_ranges.european_ec2.cidr_blocks
ipv6_cidr_blocks = data.aws_ip_ranges.european_ec2.ipv6_cidr_blocks
}
tags = {
CreateDate = data.aws_ip_ranges.european_ec2.create_date
SyncToken = data.aws_ip_ranges.european_ec2.sync_token
}
}
To do your exact thing you would do something like this:
data "aws_ip_ranges" "us_west_2_amazon" {
regions = ["us_west_2"]
services = ["amazon"]
}
resource "aws_default_security_group" "allow-mysql" {
vpc_id = aws_vpc.main.id
ingress {
description = "MySQL"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = data.aws_ip_ranges.us_west_2_amazon.cidr_blocks
}
}
However, there are two things that are bad here.
The first, and most important, is that you're allowing access to your database from every IP address that AWS has in US-West-2 across all services. That means that anyone in the world is able to spin up an EC2 instance or Lambda function in US-West-2 and then have network access to your database. This is a very bad idea.
The second is that if the data source returns more than 60 CIDR blocks, you are going to end up with more than 60 rules in your security group. AWS security groups have a limit of 60 rules per IP address type (IPv4 vs IPv6) and per direction (ingress/egress):
You can have 60 inbound and 60 outbound rules per security group (making a total of 120 rules). This quota is enforced separately for IPv4 rules and IPv6 rules; for example, a security group can have 60 inbound rules for IPv4 traffic and 60 inbound rules for IPv6 traffic. A rule that references a security group or prefix list ID counts as one rule for IPv4 and one rule for IPv6.
From https://docs.aws.amazon.com/vpc/latest/userguide/amazon-vpc-limits.html#vpc-limits-security-groups
This is technically a soft cap: you can ask AWS to raise the limit in exchange for reducing the number of security groups that can be applied to a network interface, keeping the maximum number of security group rules at or below 1000 per network interface. It's probably not something you want to mess around with though.
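If you did need the full list despite that, one hedged sketch (resource names hypothetical) is to split the CIDRs into chunks of 60 with Terraform 0.12's chunklist function and create one group per chunk:

locals {
  cidr_chunks = chunklist(data.aws_ip_ranges.us_west_2_amazon.cidr_blocks, 60)
}

resource "aws_security_group" "allow-mysql" {
  count  = length(local.cidr_chunks)
  name   = "allow-mysql-${count.index}"
  vpc_id = aws_vpc.main.id

  ingress {
    description = "MySQL"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = local.cidr_chunks[count.index]
  }
}

Note that the number of security groups per network interface is itself capped, so this only stretches the limit, and the access concern above still applies.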
Hello, I'm trying to use the JSON from my washer with Lua, to visualize the Samsung washer in Domoticz.
Part of the JSON that I get from https://api.smartthings.com/v1/devices/abcd-1234-abcd is:
"main": {
"washerJobState": {
"value": "wash"
},
"mnhw": {
"value": "1.0"
},
"data": {
"value": "{
\"payload\":{
\"x.com.samsung.da.state\":\"Run\",\"x.com.samsung.da.delayEndTime\":\"00:00:00\",\"x.com.samsung.da.remainingTime\":\"01:34:00\",\"if\":[\"oic.if.baseline\",\"oic.if.a\"],\"x.com.samsung.da.progressPercentage\":\"2\",\"x.com.samsung.da.supportedProgress\":[\"None\",\"Wash\",\"Rinse\",\"Spin\",\"Finish\"],\"x.com.samsung.da.progress\":\"Wash\",\"rt\":[\"x.com.samsung.da.operation\"]}}"
},
"washerRinseCycles": {
"value": "3"
},
"switch": {
"value": "on"
},
If I use this in my script:
local switch = item.json.main.switch.value
I get the value on or off, and I can use it to show the status of the washer.
I'm trying to find out how to get the "data" value in my script; there are more items with dots and backslashes:
local remainingTime = rt.data.value.payload['x.com.samsung.da.remainingTime']
or
local remainingTime = rt.data.value['\payload']['\x.com.samsung.da.remainingTime']
I tried some more options with ', //, and "", but always got a nil value.
Can someone explain to me how to get:
\"x.com.samsung.da.remainingTime\":\"01:34:00\"
\"x.com.samsung.da.progressPercentage\":\"2\",
All the " , \, x., ar confusing me
Below is my script to test where i only left the Json log (Dzvents Lua Based) i get an error:
dzVents/generated_scripts/Samsung_v3.lua:53: attempt to index a nil value (global 'json') i don't heave any idea how te use/adjust my code for decode the string.
local json = require"json" -- the JSON library
local outer = json.decode(your_JSON_string)
local rt = outer.main
local inner = json.decode(rt.data.value)
local remainingTime = inner.payload['x.com.samsung.da.remainingTime']
local API = 'API'
local Device = 'Device'
local LOGGING = true

--Define dz Switches
local WM_STATUS = 'WM Status' --Domoticz virtual switch ON/Off state Washer

return
{
    on =
    {
        timer =
        {
            'every 1 minutes', -- just an example to trigger the request
        },
        httpResponses =
        {
            'trigger', -- must match with the callback passed to the openURL command
        },
    },
    logging =
    {
        level = domoticz.LOG_DEBUG,
    },
    execute = function(dz, item)
        local wm_status = dz.devices(WM_STATUS)
        if item.isTimer then
            dz.openURL({
                url = 'https://api.smartthings.com/v1/devices/' .. Device .. '/states',
                headers = { ['Authorization'] = 'Bearer ' .. API },
                method = 'GET',
                callback = 'trigger', -- see httpResponses above.
            })
        end
        if (item.isHTTPResponse) then
            if item.ok then
                if (item.isJSON) then
                    rt = item.json.main
                    -- outer = json.decode'{"payload":{"x.com.samsung.da.state":"Run","x.com.samsung.da.delayEndTime":"00:00:00","x.com.samsung.da.remainingTime":"00:40:00","if":["oic.if.baseline","oic.if.a"],"x.com.samsung.da.progressPercentage":"81","x.com.samsung.da.supportedProgress":["None","Weightsensing","Wash","Rinse","Spin","Finish"],"x.com.samsung.da.progress":"Rinse","rt":["x.com.samsung.da.operation"]}}
                    inner = json.decode(rt.data.value)
                    -- local remainingTime = inner.payload['x.com.samsung.da.remainingTime']
                    dz.utils.dumpTable(rt) -- this will show how the table is structured
                    -- dz.utils.dumpTable(inner)
                    local washerSpinLevel = rt.washerSpinLevel.value
                    -- local remainingTime = inner.payload['x.com.samsung.da.remainingTime']
                    dz.log('Debuggg washerSpinLevel:' .. washerSpinLevel, dz.LOG_DEBUG)
                    dz.log('Debuggg remainingTime:' .. remainingTime, dz.LOG_DEBUG)
                    -- dz.log('Resterende tijd:' .. remainingTime, dz.LOG_INFO)
                    -- dz.log(dz.utils.fromJSON(item.data))
                    -- end
                elseif LOGGING == true then
                    dz.log('There was a problem handling the request', dz.LOG_ERROR)
                    dz.log(item, dz.LOG_ERROR)
                end
            end
        end
    end
}
This is a weird construction: a serialized JSON inside a normal JSON.
This means you have to invoke deserialization twice:
local json = require"json" -- the JSON library
local outer = json.decode(your_JSON_string)
local rt = outer.main
local inner = json.decode(rt.data.value)
local remainingTime = inner.payload['x.com.samsung.da.remainingTime']
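In a dzVents script the standalone json module may not be on the Lua package path, which is what the attempt to index a nil value (global 'json') error suggests. A sketch of the same two-step decode using the JSON helper bundled with dzVents, assuming the response shape shown in the question:

-- inside execute = function(dz, item), after checking item.isJSON:
local rt = item.json.main                       -- outer document, already parsed by dzVents
local inner = dz.utils.fromJSON(rt.data.value)  -- second decode of the embedded string
local remainingTime = inner.payload['x.com.samsung.da.remainingTime']
local progress = inner.payload['x.com.samsung.da.progressPercentage']
dz.log('Remaining time: ' .. remainingTime .. ', progress: ' .. progress .. '%', dz.LOG_INFO)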
Hello everyone,
I am using Terraform to create a Glue job. AWS Glue now supports the ability to run ETL jobs on Apache Spark 2.4.3 (with Python 3).
I want to use this feature, but whenever I make the change it throws an error.
I am using:
aws-cli/1.16.184
Terraform v0.12.6
AWS provider 2.29
resource "aws_glue_job" "aws_glue_job_foo" {
glue_version = "1"
name = "job-name"
description = "job-desc"
role_arn = data.aws_iam_role.aws_glue_iam_role.arn
max_capacity = 1
max_retries = 1
connections = [aws_glue_connection.connection.name]
timeout = 5
command {
name = "pythonshell"
script_location = "s3://bucket/script.py"
python_version = "3"
}
default_arguments = {
"--job-language" = "python"
"--ENV" = "env"
"--ROLE_ARN" = data.aws_iam_role.aws_glue_iam_role.arn
}
execution_property {
max_concurrent_runs = 1
}
}
But it throws this error:
Error: Unsupported argument
An argument named "glue_version" is not expected here.
This Terraform issue has been resolved.
Terraform aws_glue_job now accepts a glue_version argument.
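With a provider version that includes the fix (2.34.0, per the note below), the job from the question can set the version directly; a sketch, assuming Glue 1.0 is the target:

resource "aws_glue_job" "aws_glue_job_foo" {
  glue_version = "1.0" # note "1.0", not "1"
  name         = "job-name"
  role_arn     = data.aws_iam_role.aws_glue_iam_role.arn
  max_capacity = 1

  command {
    name            = "pythonshell"
    script_location = "s3://bucket/script.py"
    python_version  = "3"
  }
}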
Previous Answer
With or without python_version in the Terraform command block, I must go to the AWS console to edit the job and set "Glue version". My job fails without this manual step.
Workaround #1
This issue has been reported and debated and includes a workaround.
resource "aws_glue_job" "etl" {
name = "${var.job_name}"
role_arn = "${var.iam_role_arn}"
command {
script_location = "s3://${var.bucket_name}/${aws_s3_bucket_object.script.key}"
}
default_arguments = {
"--enable-metrics" = ""
"--job-language" = "python"
"--TempDir" = "s3://${var.bucket_name}/TEMP"
}
# Manually set python 3 and glue 1.0
provisioner "local-exec" {
command = "aws glue update-job --job-name ${var.job_name} --job-update 'Command={ScriptLocation=s3://${var.bucket_name}/${aws_s3_bucket_object.script.key},PythonVersion=3,Name=glueetl},GlueVersion=1.0,Role=${var.iam_role_arn},DefaultArguments={--enable-metrics=\"\",--job-language=python,--TempDir=\"s3://${var.bucket_name}/TEMP\"}'"
}
}
Workaround #2
Here is a different workaround.
resource "aws_cloudformation_stack" "network" {
name = "${local.name}-glue-job"
template_body = <<STACK
{
"Resources" : {
"MyJob": {
"Type": "AWS::Glue::Job",
"Properties": {
"Command": {
"Name": "glueetl",
"ScriptLocation": "s3://${local.bucket_name}/jobs/${var.job}"
},
"ExecutionProperty": {
"MaxConcurrentRuns": 2
},
"MaxRetries": 0,
"Name": "${local.name}",
"Role": "${var.role}"
}
}
}
}
STACK
}
This has been released in version 2.34.0 of the Terraform AWS provider.
It looks like Terraform uses python_version instead of glue_version.
By using python_version = "3", you should get Glue version 1.0; Glue version 0.9 doesn't support Python 3.
There is some similarity between my question and How to measure common coverage for Polymer components + .js files?. Nevertheless, the accepted answer there is to "split to .js files and include it to components" in order to use wct-istanbul, while all my web components and tests are in .html files (the JavaScript is inside each .html file).
My straight question is: can I still use wct-istanbul to check how much of my code is covered by tests? If so, what is wrong in the configuration described below? If not, is wct-istanbub planned to replace wct-istanbul for Polymer projects?
package.json
"polyserve": "^0.18.0",
"web-component-tester": "^6.0.0",
"web-component-tester-istanbul": "^0.10.0",
...
wct.conf.js
var path = require('path');

var ret = {
  'suites': ['test'],
  'webserver': {
    'pathMappings': []
  },
  'plugins': {
    'local': {
      'browsers': ['chrome']
    },
    'sauce': {
      'disabled': true
    },
    "istanbul": {
      "dir": "./coverage",
      "reporters": ["text-summary", "lcov"],
      "include": [
        "/*.html"
      ],
      "exclude": [
      ],
      thresholds: {
        global: {
          statements: 100
        }
      }
    }
  }
};

var mapping = {};
var rootPath = (__dirname).split(path.sep).slice(-1)[0];
mapping['/components/' + rootPath + '/bower_components'] = 'bower_components';
ret.webserver.pathMappings.push(mapping);

module.exports = ret;
Well, I tried wct-istanbub (https://github.com/Bubbit/wct-istanbub), which seems to be a temporary workaround (Code coverage of Polymer Application with WCT), and it works.
wct.conf.js
"istanbub": {
"dir": "./coverage",
"reporters": ["text-summary", "lcov"],
"include": [
"**/*.html"
],
"exclude": [
"**/test/**",
"*/*.js"
],
thresholds: {
global: {
statements: 100
}
}
}
...
and the result is
...
chrome 66 RESPONSE quit()
chrome 66 BrowserRunner complete
Test run ended with great success
chrome 66 (2/0/0)
=============================== Coverage summary ===============================
Statements : 21.18% ( 2011/9495 )
Branches : 15.15% ( 933/6160 )
Functions : 18.08% ( 367/2030 )
Lines : 21.14% ( 2001/9464 )
================================================================================
Coverage for statements (21.18%) does not meet configured threshold (100%)
Error: Coverage failed
I am using the qemu emulator to emulate a MIPS system. I wrote a very simple boot code and main function. However, when I used the following linker script, the qemu emulator gave the message "qemu-system-mipsel: Could not load MIPS bios 'bin/img.bin', and no -kernel argument was specified":
ENTRY(_Reset)
SECTIONS
{
    .boottext 0xBFC00000 : { obj/startup.o(.text) }
    .text 0xA0000000 : { *(.text) }
    .data : { *(.data) }
    .bss : { *(.bss) }
    . = . + 0x1000; /* 4kB of stack memory */
    .stack ALIGN(16) : { *(.stack) }
    _stacktop = ALIGN(16);
}
When I changed the linker script to the following, qemu ran the code perfectly:
ENTRY(_Reset)
SECTIONS
{
    .text 0xA0000000 : { *(.text) }
    .data : { *(.data) }
    .bss : { *(.bss) }
    . = . + 0x1000; /* 4kB of stack memory */
    .stack ALIGN(16) : { *(.stack) }
    _stacktop = ALIGN(16);
    .boottext 0xBFC00000 : { obj/startup.o(.text) }
}
So, my question is: what is the impact of the order of the sections in the script on the final ELF and binary files? Why does qemu run one but not the other?
Thank you in advance.
PS: I ran qemu with "qemu-system-mipsel -s -M malta -m 512M -bios bin/img.bin"
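One way to investigate is to compare the section headers of the two ELFs and the size of the flat binary objcopy produces from each (a sketch; the cross-toolchain prefix is an assumption):

mips-linux-gnu-objdump -h img.elf         # VMA/LMA and file offset of every section, in link order
mips-linux-gnu-objcopy -O binary img.elf img.bin
ls -l img.bin                             # a flat binary spans lowest..highest LMA, gaps padded with zeros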