I am using the QEMU emulator to emulate a MIPS system. I wrote a very simple boot code and main function. However, when I used the following linker script, QEMU gave the message "qemu-system-mipsel: Could not load MIPS bios 'bin/img.bin', and no -kernel argument was specified":
ENTRY(_Reset)
SECTIONS
{
    .boottext 0xBFC00000 : { obj/startup.o(.text) }
    .text 0xA0000000 : { *(.text) }
    .data : { *(.data) }
    .bss : { *(.bss) }
    . = . + 0x1000; /* 4kB of stack memory */
    .stack ALIGN(16) : { *(.stack) }
    _stacktop = ALIGN(16);
}
When I changed the linker script to the following, QEMU ran the code perfectly:
ENTRY(_Reset)
SECTIONS
{
    .text 0xA0000000 : { *(.text) }
    .data : { *(.data) }
    .bss : { *(.bss) }
    . = . + 0x1000; /* 4kB of stack memory */
    .stack ALIGN(16) : { *(.stack) }
    _stacktop = ALIGN(16);
    .boottext 0xBFC00000 : { obj/startup.o(.text) }
}
So, my question is: what is the impact of the order of the sections in the script on the final ELF and binary files? Why does QEMU run one but not the other?
Thank you in advance
PS: ran qemu with "qemu-system-mipsel -s -M malta -m 512M -bios bin/img.bin"
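PS2: a useful first step when investigating this is to compare the section layout and the size of the flat binary in both cases. A minimal sketch, assuming a mipsel-unknown-elf- toolchain prefix and that bin/img.bin is produced with objcopy -O binary:
# show the VMA/LMA of every section in the linked ELF
mipsel-unknown-elf-objdump -h bin/img.elf
# objcopy -O binary lays sections out by load address and pads the gaps,
# so load addresses far apart (0xA0000000 vs 0xBFC00000) can inflate the
# image far beyond what the Malta boot flash (4 MiB, as far as I know) holds
mipsel-unknown-elf-objcopy -O binary bin/img.elf bin/img.bin
ls -l bin/img.bin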
My current working directory has the following sub-directories
My Bash script
Hi there
I have written the above Bash script to do the following tasks:
rename the sub-directories (barcode01-12) using information from the metadata.csv
concatenate the individual reads within a sub-directory and move them up into $PWD
I then use these concatenated reads (one per barcode) in my Nextflow script below:
Query:
How can I add the above pre-processing tasks (renaming and concatenating), or the Bash script itself, to the beginning of my Nextflow script below?
In my experience, FASTQ files can get quite large. Without knowing too much of the specifics, my recommendation would be to move the concatenation (and renaming) to a separate process. In this way, all of the 'work' can be done inside Nextflow's working directory. Here's a solution that uses the new DSL 2. It uses the splitCsv operator to parse the metadata and identify the FASTQ files. The collection can then be passed into our 'concat_reads' process. To handle optionally gzipped files, you could try the following:
params.metadata = './metadata.csv'
params.outdir = './results'
process concat_reads {

    tag { sample_name }

    publishDir "${params.outdir}/concat_reads", mode: 'copy'

    input:
    tuple val(sample_name), path(fastq_files)

    output:
    tuple val(sample_name), path("${sample_name}.${extn}")

    script:
    if( fastq_files.every { it.name.endsWith('.fastq.gz') } )
        extn = 'fastq.gz'
    else if( fastq_files.every { it.name.endsWith('.fastq') } )
        extn = 'fastq'
    else
        error "Concatenation of mixed filetypes is unsupported"

    """
    cat ${fastq_files} > "${sample_name}.${extn}"
    """
}
process pomoxis {

    tag { sample_name }

    publishDir "${params.outdir}/pomoxis", mode: 'copy'

    cpus 18

    input:
    tuple val(sample_name), path(fastq)

    """
    mini_assemble \\
        -t ${task.cpus} \\
        -i "${fastq}" \\
        -o results \\
        -p "${sample_name}"
    """
}
workflow {

    fastq_extns = [ '.fastq', '.fastq.gz' ]

    Channel.fromPath( params.metadata )
        | splitCsv()
        | map { dir, sample_name ->
            all_files = file(dir).listFiles()
            fastq_files = all_files.findAll { fn ->
                fastq_extns.find { fn.name.endsWith( it ) }
            }
            tuple( sample_name, fastq_files )
        }
        | concat_reads
        | pomoxis
}
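For reference, a minimal sketch of the inputs this expects: a headerless metadata.csv whose first column is the barcode directory and second column is the sample name, as implied by the destructuring in map { dir, sample_name -> ... } (the sample values below are placeholders):
barcode01,sampleA
barcode02,sampleB
Then run it the usual way:
nextflow run main.nf --metadata ./metadata.csv --outdir ./results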
Hello everyone,
I am using Terraform to create a Glue job. AWS Glue now supports the ability to run ETL jobs on Apache Spark 2.4.3 (with Python 3).
I want to use this feature, but whenever I make this change Terraform throws an error.
I am using:
aws-cli/1.16.184.
Terraform v0.12.6
aws provider 2.29
resource "aws_glue_job" "aws_glue_job_foo" {
glue_version = "1"
name = "job-name"
description = "job-desc"
role_arn = data.aws_iam_role.aws_glue_iam_role.arn
max_capacity = 1
max_retries = 1
connections = [aws_glue_connection.connection.name]
timeout = 5
command {
name = "pythonshell"
script_location = "s3://bucket/script.py"
python_version = "3"
}
default_arguments = {
"--job-language" = "python"
"--ENV" = "env"
"--ROLE_ARN" = data.aws_iam_role.aws_glue_iam_role.arn
}
execution_property {
max_concurrent_runs = 1
}
}
But it throws the following error:
Error: Unsupported argument
An argument named "glue_version" is not expected here.
This Terraform issue has been resolved.
Terraform aws_glue_job now accepts a glue_version argument.
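A minimal sketch of what that looks like once you are on provider 2.34.0 or later (note that Spark 2.4.3 implies a glueetl job rather than pythonshell; the bucket and role references are taken from the question):
provider "aws" {
  version = ">= 2.34.0"
}

resource "aws_glue_job" "aws_glue_job_foo" {
  name         = "job-name"
  role_arn     = data.aws_iam_role.aws_glue_iam_role.arn
  glue_version = "1.0"

  command {
    name            = "glueetl"
    script_location = "s3://bucket/script.py"
    python_version  = "3"
  }
}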
Previous Answer
With or without python_version in the Terraform command block, I must go to the AWS console to edit the job and set "Glue version". My job fails without this manual step.
Workaround #1
This issue has been reported and debated, and the thread includes a workaround.
resource "aws_glue_job" "etl" {
name = "${var.job_name}"
role_arn = "${var.iam_role_arn}"
command {
script_location = "s3://${var.bucket_name}/${aws_s3_bucket_object.script.key}"
}
default_arguments = {
"--enable-metrics" = ""
"--job-language" = "python"
"--TempDir" = "s3://${var.bucket_name}/TEMP"
}
# Manually set python 3 and glue 1.0
provisioner "local-exec" {
command = "aws glue update-job --job-name ${var.job_name} --job-update 'Command={ScriptLocation=s3://${var.bucket_name}/${aws_s3_bucket_object.script.key},PythonVersion=3,Name=glueetl},GlueVersion=1.0,Role=${var.iam_role_arn},DefaultArguments={--enable-metrics=\"\",--job-language=python,--TempDir=\"s3://${var.bucket_name}/TEMP\"}'"
}
}
Workaround #2
Here is a different workaround.
resource "aws_cloudformation_stack" "network" {
name = "${local.name}-glue-job"
template_body = <<STACK
{
"Resources" : {
"MyJob": {
"Type": "AWS::Glue::Job",
"Properties": {
"Command": {
"Name": "glueetl",
"ScriptLocation": "s3://${local.bucket_name}/jobs/${var.job}"
},
"ExecutionProperty": {
"MaxConcurrentRuns": 2
},
"MaxRetries": 0,
"Name": "${local.name}",
"Role": "${var.role}"
}
}
}
}
STACK
}
This has been released in version 2.34.0 of the Terraform AWS provider.
It looks like Terraform uses python_version instead of glue_version.
By using python_version = "3", you should be using Glue version 1.0. Glue version 0.9 doesn't support Python 3.
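If you want to verify what actually got applied, the AWS CLI can show the job's Glue version (the job name here is taken from the question):
aws glue get-job --job-name job-name --query 'Job.GlueVersion'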
There is some similarity between my question and How to measure common coverage for Polymer components + .js files?. Nevertheless, the accepted answer there is to split the JavaScript into .js files and include them in the components in order to use wct-istanbul, whereas all my web components and tests are in .html files (the JavaScript is inside each .html file).
My straight question is: can I still use wct-istanbul to check how much of my code is covered by tests? If so, what is wrong in the configuration described below? If not, is wct-istanbub planned to replace wct-istanbul for Polymer projects?
package.json
"polyserve": "^0.18.0",
"web-component-tester": "^6.0.0",
"web-component-tester-istanbul": "^0.10.0",
...
wct.conf.js
var path = require('path');

var ret = {
  'suites': ['test'],
  'webserver': {
    'pathMappings': []
  },
  'plugins': {
    'local': {
      'browsers': ['chrome']
    },
    'sauce': {
      'disabled': true
    },
    "istanbul": {
      "dir": "./coverage",
      "reporters": ["text-summary", "lcov"],
      "include": [
        "/*.html"
      ],
      "exclude": [
      ],
      thresholds: {
        global: {
          statements: 100
        }
      }
    }
  }
};

var mapping = {};
var rootPath = (__dirname).split(path.sep).slice(-1)[0];
mapping['/components/' + rootPath + '/bower_components'] = 'bower_components';
ret.webserver.pathMappings.push(mapping);

module.exports = ret;
Well, I tried wct-istanbub (https://github.com/Bubbit/wct-istanbub), which seems to be a temporary workaround (Code coverage of Polymer Application with WCT), and it works.
wct.conf.js
"istanbub": {
"dir": "./coverage",
"reporters": ["text-summary", "lcov"],
"include": [
"**/*.html"
],
"exclude": [
"**/test/**",
"*/*.js"
],
thresholds: {
global: {
statements: 100
}
}
}
...
and the result is
...
chrome 66 RESPONSE quit()
chrome 66 BrowserRunner complete
Test run ended with great success
chrome 66 (2/0/0)
=============================== Coverage summary ===============================
Statements : 21.18% ( 2011/9495 )
Branches : 15.15% ( 933/6160 )
Functions : 18.08% ( 367/2030 )
Lines : 21.14% ( 2001/9464 )
================================================================================
Coverage for statements (21.18%) does not meet configured threshold (100%)
Error: Coverage failed
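If you just want the report while coverage is still being built up, one option is to lower the configured floor so the run stops failing (a sketch; pick whatever threshold suits you):
"istanbub": {
  ...
  thresholds: {
    global: {
      statements: 20
    }
  }
}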
qemu-system-aarch64 can be used to emulate an AArch64 machine; the specific command is as follows:
qemu-system-aarch64 -M virt -cpu cortex-a53 ...(other options)
and we can use -M virt,dumpdtb=DTBFILE to get the internal device tree blob.
My question is: how can we get the PERIPHBASE of the virtual machine virt?
Can we do that from the device tree blob file using the dtc tool?
The dtc command would be:
dtc -I dtb -O dts virt.dtb > virt.dts
The node you are looking for should be /intc:
intc {
    phandle = <0x8001>;
    reg = <0x0 0x8000000 0x0 0x10000 0x0 0x8010000 0x0 0x10000>;
    compatible = "arm,cortex-a15-gic";
    ranges;
    #size-cells = <0x2>;
    #address-cells = <0x2>;
    interrupt-controller;
    #interrupt-cells = <0x3>;

    v2m {
        phandle = <0x8002>;
        reg = <0x0 0x8020000 0x0 0x1000>;
        msi-controller;
        compatible = "arm,gic-v2m-frame";
    };
};
A more straightforward option would be to use fdtget:
fdtget -t i -t x virt.dtb /intc reg
0 8000000 0 10000 0 8010000 0 10000
I agree with Peter Maydell that the DTB should preferably be used at run-time for retrieving the addresses for the GIC CPU and distributor interfaces if you are running Linux in QEMU.
But the non-DTB approach is still easier to implement in an emulated bare-metal environment, in my humble opinion.
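To illustrate, a minimal bare-metal sketch that hard-codes the GICv2 addresses from the virt memory map (distributor at 0x08000000, CPU interface at 0x08010000; valid only for as long as QEMU keeps this layout):
#include <stdint.h>

/* Addresses from QEMU's hw/arm/virt.c memory map */
#define GICD_BASE 0x08000000UL /* VIRT_GIC_DIST: distributor */
#define GICC_BASE 0x08010000UL /* VIRT_GIC_CPU: CPU interface */

/* Standard GICv2 register offsets */
#define GICD_CTLR (*(volatile uint32_t *)(GICD_BASE + 0x000))
#define GICC_CTLR (*(volatile uint32_t *)(GICC_BASE + 0x000))
#define GICC_PMR  (*(volatile uint32_t *)(GICC_BASE + 0x004))

static void gic_enable(void)
{
    GICD_CTLR = 1;    /* enable the distributor */
    GICC_PMR  = 0xff; /* accept all interrupt priorities */
    GICC_CTLR = 1;    /* enable this CPU's interface */
}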
Maybe this is what you want. https://github.com/qemu/qemu/blob/master/hw/arm/virt.c
static const MemMapEntry a15memmap[] = {
    /* Space up to 0x8000000 is reserved for a boot ROM */
    [VIRT_FLASH] = { 0, 0x08000000 },
    [VIRT_CPUPERIPHS] = { 0x08000000, 0x00020000 },
    /* GIC distributor and CPU interfaces sit inside the CPU peripheral space */
    [VIRT_GIC_DIST] = { 0x08000000, 0x00010000 },
    [VIRT_GIC_CPU] = { 0x08010000, 0x00010000 },
    [VIRT_GIC_V2M] = { 0x08020000, 0x00001000 },
    /* The space in between here is reserved for GICv3 CPU/vCPU/HYP */
    [VIRT_GIC_ITS] = { 0x08080000, 0x00020000 },
    /* This redistributor space allows up to 2*64kB*123 CPUs */
    [VIRT_GIC_REDIST] = { 0x080A0000, 0x00F60000 },
    [VIRT_UART] = { 0x09000000, 0x00001000 },
    [VIRT_RTC] = { 0x09010000, 0x00001000 },
    [VIRT_FW_CFG] = { 0x09020000, 0x00000018 },
    [VIRT_GPIO] = { 0x09030000, 0x00001000 },
    [VIRT_SECURE_UART] = { 0x09040000, 0x00001000 },
    [VIRT_SMMU] = { 0x09050000, 0x00020000 },
    [VIRT_MMIO] = { 0x0a000000, 0x00000200 },
    /* ...repeating for a total of NUM_VIRTIO_TRANSPORTS, each of that size */
    [VIRT_PLATFORM_BUS] = { 0x0c000000, 0x02000000 },
    [VIRT_SECURE_MEM] = { 0x0e000000, 0x01000000 },
    [VIRT_PCIE_MMIO] = { 0x10000000, 0x2eff0000 },
    [VIRT_PCIE_PIO] = { 0x3eff0000, 0x00010000 },
    [VIRT_PCIE_ECAM] = { 0x3f000000, 0x01000000 },
    [VIRT_MEM] = { 0x40000000, RAMLIMIT_BYTES },
    /* Additional 64 MB redist region (can contain up to 512 redistributors) */
    [VIRT_GIC_REDIST2] = { 0x4000000000ULL, 0x4000000 },
    [VIRT_PCIE_ECAM_HIGH] = { 0x4010000000ULL, 0x10000000 },
    /* Second PCIe window, 512GB wide at the 512GB boundary */
    [VIRT_PCIE_MMIO_HIGH] = { 0x8000000000ULL, 0x8000000000ULL },
};
PERIPHBASE will be the address of the GIC distributor register bank in the device tree blob.
That said, I'm not sure why you want to know this information. Guest code for the 'virt' board should only hard code the base address of RAM, and should get all other information from the dtb at runtime. Some day in the future we may rearrange the virt memory map, and if you have hardcoded addresses from it then your guest will stop working...
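For completeness, a sketch of what "get it from the dtb at runtime" can look like with libfdt, assuming the dtb pointer is available (with the arm64 Linux boot protocol it arrives in x0) and that the node path matches the dump above:
#include <stdint.h>
#include <libfdt.h>

/* Returns the GIC distributor base from the dtb, or 0 on failure */
uint64_t gic_dist_base(const void *fdt)
{
    int node = fdt_path_offset(fdt, "/intc");
    if (node < 0)
        return 0;

    int len;
    const fdt32_t *reg = fdt_getprop(fdt, node, "reg", &len);
    if (!reg || len < 4 * (int)sizeof(fdt32_t))
        return 0;

    /* #address-cells = 2: the first reg entry starts <addr-hi addr-lo ...> */
    return ((uint64_t)fdt32_to_cpu(reg[0]) << 32) | fdt32_to_cpu(reg[1]);
}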
I created the following application.conf:
akka {
  actor {
    prio-dispatcher {
      type = "Dispatcher"
      mailbox-type = "my.package.PrioritizedMailbox"
    }
  }
}
When dumping the configuration with
actorSystem = ActorSystem.create()
println(actorSystem.settings)
I'm getting the output:
# application.conf: 5
"prio-dispatcher" : {
# application.conf: 7
"mailbox-type" : "my.package.PrioritizedMailbox",
# application.conf: 6
"type" : "Dispatcher"
},
and later on:
[WARN] [08/30/2012 22:44:54.362] [default-akka.actor.default-dispatcher-3] [Dispatchers] Dispatcher [prio-dispatcher] not configured, using default-dispatcher
What am I missing here?
UPD: Found the solution here; I had to use the full name "akka.actor.prio-dispatcher".
The configuration above dictates that the full name of the dispatcher is akka.actor.prio-dispatcher.
Description of the problem: http://groups.google.com/group/akka-user/browse_thread/thread/678f2ae1c068e0fa
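For reference, a minimal sketch of attaching an actor to that dispatcher by its full config path (MyActor is a placeholder):
import akka.actor.{ActorSystem, Props}

val system = ActorSystem.create()
// The dispatcher id is the full config path, not just "prio-dispatcher"
val worker = system.actorOf(
  Props[MyActor].withDispatcher("akka.actor.prio-dispatcher"),
  "worker")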