QEMU hangs after displaying "Booting from Hard Disk..."

I'm using a Debian image with QEMU. Installation works well with the
command below.
ISO_FILE="debian-buster-DI-alpha5-amd64-xfce-CD-1.iso"
DISK_IMAGE="debian.img"
SPICE_PORT=5924
qemu-system-x86_64 \
  -cdrom "${ISO_FILE}" \
  -drive format=raw,if=pflash,file=/usr/share/ovmf/OVMF.fd,readonly \
  -drive file=${DISK_IMAGE:?} \
  -enable-kvm \
  -m 2G \
  -smp 2 \
  -cpu host \
  -vga qxl \
  -serial mon:stdio \
  -net user,hostfwd=tcp::2222-:22 \
  -net nic \
  -spice port=${SPICE_PORT:?},disable-ticketing \
  -device virtio-serial-pci \
  -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 \
  -chardev spicevmc,id=spicechannel0,name=vdagent
But when I start the image again with a command like
DISK_IMAGE="debian.img"
SPICE_PORT=5924
qemu-system-x86_64 \
  -drive format=raw,if=pflash,file=/usr/share/ovmf/OVMF.fd,readonly \
  -drive file=${DISK_IMAGE:?} \
  -enable-kvm \
  -m 2G \
  -smp 2 \
  -cpu host \
  -vga qxl \
  -serial mon:stdio \
  -net user,hostfwd=tcp::2222-:22 \
  -net nic \
  -spice port=${SPICE_PORT:?},disable-ticketing \
  -device virtio-serial-pci \
  -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 \
  -chardev spicevmc,id=spicechannel0,name=vdagent \
  -snapshot
QEMU gets stuck at "Booting from Hard Disk...". Any thoughts on how to resolve
this and move forward?
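For what it's worth, one common cause of this symptom (an assumption about your setup, since it depends on how OVMF is packaged on your host): the single OVMF.fd firmware image is attached read-only, so the NVRAM boot entry the Debian installer wrote is never persisted, and on the next start the firmware finds nothing to boot. The usual arrangement is split firmware, read-only code plus a writable per-VM vars file. A sketch, using the typical Debian paths (adjust to your host), with the other options unchanged:

```
# One-time: give this VM its own writable NVRAM copy.
cp /usr/share/OVMF/OVMF_VARS.fd debian_VARS.fd

qemu-system-x86_64 \
  -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=debian_VARS.fd \
  -drive file=${DISK_IMAGE:?} \
  ...   # remaining options as in your command
```

Note that the installation itself also has to run with these pflash drives, so the boot entry gets written into debian_VARS.fd in the first place.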

Related

Error "unable to resolve docker endpoint: open /root/.docker/ca.pem: no such file or directory" when deploying to Elastic Beanstalk through Bitbucket

I am trying to deploy to Elastic Beanstalk with Bitbucket, using the following YAML file:
image: atlassian/default-image:2

pipelines:
  branches:
    development:
      - step:
          name: "Install Server"
          image: node:10.19.0
          caches:
            - node
          script:
            - npm install
      - step:
          name: "Install and Build Client"
          image: node:14.17.3
          caches:
            - node
          script:
            - cd ./client && npm install
            - npm run build
      - step:
          name: "Build zip"
          script:
            - cd ./client
            - shopt -s extglob
            - rm -rf !(build)
            - ls
            - cd ..
            - apt-get update && apt-get install -y zip
            - zip -r application.zip . -x "node_modules/**"
      - step:
          name: "Deployment to Development"
          deployment: staging
          script:
            - ls
            - pipe: atlassian/aws-elasticbeanstalk-deploy:1.0.2
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_REGION
                APPLICATION_NAME: $APPLICATION_NAME
                ENVIRONMENT_NAME: $ENVIRONMENT_NAME
                ZIP_FILE: "application.zip"
All goes well until I reach the AWS deployment and I get this error:
+ docker container run \
--volume=/opt/atlassian/pipelines/agent/build:/opt/atlassian/pipelines/agent/build \
--volume=/opt/atlassian/pipelines/agent/ssh:/opt/atlassian/pipelines/agent/ssh:ro \
--volume=/usr/local/bin/docker:/usr/local/bin/docker:ro \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy \
--workdir=$(pwd) \
--label=org.bitbucket.pipelines.system=true \
--env=BITBUCKET_STEP_TRIGGERER_UUID="$BITBUCKET_STEP_TRIGGERER_UUID" \
--env=BITBUCKET_REPO_FULL_NAME="$BITBUCKET_REPO_FULL_NAME" \
--env=BITBUCKET_GIT_HTTP_ORIGIN="$BITBUCKET_GIT_HTTP_ORIGIN" \
--env=BITBUCKET_PROJECT_UUID="$BITBUCKET_PROJECT_UUID" \
--env=BITBUCKET_REPO_IS_PRIVATE="$BITBUCKET_REPO_IS_PRIVATE" \
--env=BITBUCKET_WORKSPACE="$BITBUCKET_WORKSPACE" \
--env=BITBUCKET_DEPLOYMENT_ENVIRONMENT_UUID="$BITBUCKET_DEPLOYMENT_ENVIRONMENT_UUID" \
--env=BITBUCKET_SSH_KEY_FILE="$BITBUCKET_SSH_KEY_FILE" \
--env=BITBUCKET_REPO_OWNER_UUID="$BITBUCKET_REPO_OWNER_UUID" \
--env=BITBUCKET_BRANCH="$BITBUCKET_BRANCH" \
--env=BITBUCKET_REPO_UUID="$BITBUCKET_REPO_UUID" \
--env=BITBUCKET_PROJECT_KEY="$BITBUCKET_PROJECT_KEY" \
--env=BITBUCKET_DEPLOYMENT_ENVIRONMENT="$BITBUCKET_DEPLOYMENT_ENVIRONMENT" \
--env=BITBUCKET_REPO_SLUG="$BITBUCKET_REPO_SLUG" \
--env=CI="$CI" \
--env=BITBUCKET_REPO_OWNER="$BITBUCKET_REPO_OWNER" \
--env=BITBUCKET_STEP_RUN_NUMBER="$BITBUCKET_STEP_RUN_NUMBER" \
--env=BITBUCKET_BUILD_NUMBER="$BITBUCKET_BUILD_NUMBER" \
--env=BITBUCKET_GIT_SSH_ORIGIN="$BITBUCKET_GIT_SSH_ORIGIN" \
--env=BITBUCKET_PIPELINE_UUID="$BITBUCKET_PIPELINE_UUID" \
--env=BITBUCKET_COMMIT="$BITBUCKET_COMMIT" \
--env=BITBUCKET_CLONE_DIR="$BITBUCKET_CLONE_DIR" \
--env=PIPELINES_JWT_TOKEN="$PIPELINES_JWT_TOKEN" \
--env=BITBUCKET_STEP_UUID="$BITBUCKET_STEP_UUID" \
--env=BITBUCKET_DOCKER_HOST_INTERNAL="$BITBUCKET_DOCKER_HOST_INTERNAL" \
--env=DOCKER_HOST="tcp://host.docker.internal:2375" \
--env=BITBUCKET_PIPE_SHARED_STORAGE_DIR="/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes" \
--env=BITBUCKET_PIPE_STORAGE_DIR="/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy" \
--env=APPLICATION_NAME="$APPLICATION_NAME" \
--env=AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
--env=AWS_DEFAULT_REGION="$AWS_REGION" \
--env=AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
--env=ENVIRONMENT_NAME="$ENVIRONMENT_NAME" \
--env=ZIP_FILE="application.zip" \
--add-host="host.docker.internal:$BITBUCKET_DOCKER_HOST_INTERNAL" \
bitbucketpipelines/aws-elasticbeanstalk-deploy:1.0.2
unable to resolve docker endpoint: open /root/.docker/ca.pem: no such file or directory
I'm unsure how to approach this, as I've followed the documentation Bitbucket lays out exactly, and it doesn't look like there's any place to add a .pem file.
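The missing ca.pem is not a file you are expected to supply: it is the pipe's bundled Docker client failing to set up its connection to the Docker host that Pipelines injects, before your deployment even starts. This class of failure has been tied to old releases of the pipe, so the first thing worth trying (an assumption; check the pipe's repository for the actual current tag and changelog) is simply bumping the pipe version in the last step:

```yaml
- step:
    name: "Deployment to Development"
    deployment: staging
    script:
      - pipe: atlassian/aws-elasticbeanstalk-deploy:<latest-tag>  # newer than 1.0.2
        variables:
          AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
          AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
          AWS_DEFAULT_REGION: $AWS_REGION
          APPLICATION_NAME: $APPLICATION_NAME
          ENVIRONMENT_NAME: $ENVIRONMENT_NAME
          ZIP_FILE: "application.zip"
```

Everything else in the step stays the same; only the version tag changes.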

How to add two NICs in KVM/QEMU parameters

I can work with one NIC using the QEMU script below:
qemu-system-x86_64 \
  -name Android11 \
  -enable-kvm \
  -cpu host,-hypervisor \
  -smp 8,sockets=1,cores=8,threads=1 \
  -m 4096 \
  -net nic,macaddr=52:54:aa:12:35:02,model=virtio \
  -net tap,ifname=tap1,script=no,downscript=no,vhost=on \
  --display gtk,gl=on \
  -full-screen \
  -usb \
  -device usb-tablet \
  -drive file=bliss.qcow2,format=qcow2,if=virtio \
  ...
But now I want to add another NIC to the VM. When I use the script below, it does not work:
qemu-system-x86_64 \
  -name Android11 \
  -enable-kvm \
  -cpu host,-hypervisor \
  -smp 8,sockets=1,cores=8,threads=1 \
  -m 4096 \
  -net nic,macaddr=52:54:aa:12:35:02,model=virtio \
  -net tap,ifname=tap1,script=no,downscript=no,vhost=on \
  -net nic,macaddr=52:54:aa:12:35:03,model=e1000 \
  -net tap,ifname=tap2,script=no,downscript=no \
  --display gtk,gl=on \
  -full-screen \
  -usb \
  -device usb-tablet \
  -drive file=bliss.qcow2,format=qcow2,if=virtio \
  ...
What should I do to get both NICs working? Is there something wrong in the QEMU parameters? Thanks.
PS: I created tap1 and tap2 in a Linux bridge before running the above command.
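One common pitfall with the legacy -net option is that every -net device and backend attaches to the same emulated hub ("vlan" 0), so the two NIC/tap pairs above end up interconnected through one hub instead of paired one-to-one. The modern -netdev/-device syntax binds each NIC to its own backend explicitly. A sketch of just the network part of the command, other options unchanged:

```
qemu-system-x86_64 \
  ... \
  -netdev tap,id=net0,ifname=tap1,script=no,downscript=no,vhost=on \
  -device virtio-net-pci,netdev=net0,mac=52:54:aa:12:35:02 \
  -netdev tap,id=net1,ifname=tap2,script=no,downscript=no \
  -device e1000,netdev=net1,mac=52:54:aa:12:35:03 \
  ...
```

The id values (net0, net1) are arbitrary labels; each -device line must name the id of the backend it should use.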

Sawtooth transaction not committed

I want to set up Sawtooth with multiple validators and PoET engines.
Initially I am trying a minimal setup: one validator, one PoET engine, one PoET validator registry, and the intkey transaction processor (built with the NodeJS SDK). When I launch the network and connect the TP, it registers. But when I submit a transaction to the REST API, the status URL reports PENDING; the validator logs say block validation passed, but the block is never created.
version: "2.1"

volumes:
  poet-shared:

services:

  shell:
    image: hyperledger/sawtooth-all:1.1
    container_name: sawtooth-shell-default
    entrypoint: "bash -c \"\
        sawtooth keygen && \
        tail -f /dev/null \
        \""

  validator-0:
    image: hyperledger/sawtooth-validator:1.1
    container_name: sawtooth-validator-default-0
    expose:
      - 4004
      - 5050
      - 8800
    ports:
      - "4004:4004"
    volumes:
      - poet-shared:/poet-shared
    command: "bash -c \"\
        sawadm keygen --force && \
        mkdir -p /poet-shared/validator-0 || true && \
        cp -a /etc/sawtooth/keys /poet-shared/validator-0/ && \
        while [ ! -f /poet-shared/poet-enclave-measurement ]; do sleep 1; done && \
        while [ ! -f /poet-shared/poet-enclave-basename ]; do sleep 1; done && \
        while [ ! -f /poet-shared/poet.batch ]; do sleep 1; done && \
        cp /poet-shared/poet.batch / && \
        sawset genesis \
          -k /etc/sawtooth/keys/validator.priv \
          -o config-genesis.batch && \
        sawset proposal create \
          -k /etc/sawtooth/keys/validator.priv \
          sawtooth.consensus.algorithm=poet \
          sawtooth.poet.report_public_key_pem=\
          \\\"$$(cat /poet-shared/simulator_rk_pub.pem)\\\" \
          sawtooth.poet.valid_enclave_measurements=$$(cat /poet-shared/poet-enclave-measurement) \
          sawtooth.poet.valid_enclave_basenames=$$(cat /poet-shared/poet-enclave-basename) \
          -o config.batch && \
        sawset proposal create \
          -k /etc/sawtooth/keys/validator.priv \
          sawtooth.poet.target_wait_time=5 \
          sawtooth.poet.initial_wait_time=25 \
          sawtooth.publisher.max_batches_per_block=100 \
          -o poet-settings.batch && \
        sawadm genesis \
          config-genesis.batch config.batch poet.batch poet-settings.batch && \
        sawtooth-validator -v \
          --bind network:tcp://eth0:8800 \
          --bind component:tcp://eth0:4004 \
          --bind consensus:tcp://eth0:5050 \
          --peering dynamic \
          --endpoint tcp://validator-0:8800 \
          --scheduler serial \
          --network-auth trust
        \""
    environment:
      PYTHONPATH: "/project/sawtooth-core/consensus/poet/common:\
        /project/sawtooth-core/consensus/poet/simulator:\
        /project/sawtooth-core/consensus/poet/core"
    stop_signal: SIGKILL

  rest-api-0:
    image: hyperledger/sawtooth-rest-api:1.1
    container_name: sawtooth-rest-api-default-0
    expose:
      - 8008
    ports:
      - "8008:8008"
    command: |
      bash -c "
        sawtooth-rest-api \
          --connect tcp://validator-0:4004 \
          --bind rest-api-0:8008
      "
    stop_signal: SIGKILL

  settings-tp-0:
    image: hyperledger/sawtooth-settings-tp:1.1
    container_name: sawtooth-settings-tp-default-0
    expose:
      - 4004
    command: settings-tp -C tcp://validator-0:4004
    stop_signal: SIGKILL

  poet-engine-0:
    image: hyperledger/sawtooth-poet-engine:1.1
    container_name: sawtooth-poet-engine-0
    volumes:
      - poet-shared:/poet-shared
    command: "bash -c \"\
        if [ ! -f /poet-shared/poet-enclave-measurement ]; then \
          poet enclave measurement >> /poet-shared/poet-enclave-measurement; \
        fi && \
        if [ ! -f /poet-shared/poet-enclave-basename ]; then \
          poet enclave basename >> /poet-shared/poet-enclave-basename; \
        fi && \
        if [ ! -f /poet-shared/simulator_rk_pub.pem ]; then \
          cp /etc/sawtooth/simulator_rk_pub.pem /poet-shared; \
        fi && \
        while [ ! -f /poet-shared/validator-0/keys/validator.priv ]; do sleep 1; done && \
        cp -a /poet-shared/validator-0/keys /etc/sawtooth && \
        poet registration create -k /etc/sawtooth/keys/validator.priv -o /poet-shared/poet.batch && \
        poet-engine -C tcp://validator-0:5050 --component tcp://validator-0:4004 \
        \""

  poet-validator-registry-tp-0:
    image: hyperledger/sawtooth-poet-validator-registry-tp:1.1
    container_name: sawtooth-poet-validator-registry-tp-0
    expose:
      - 4004
    command: poet-validator-registry-tp -C tcp://validator-0:4004
    environment:
      PYTHONPATH: /project/sawtooth-core/consensus/poet/common
    stop_signal: SIGKILL
I found some errors here, but how do I fix them?
You will need a minimum of two transaction processors per node to make PoET work.
I have the same type of setup, but I am running the TPs in Kubernetes with three replica sets per node in the network.
Besides that, look at validator.toml and make sure the network bind is 0.0.0.0. I have tried binding to a specific NIC IP and it does not work; only 0.0.0.0 (all IPs) seems to work fine.
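To see whether the consensus settings actually made it on-chain, and where your batch is stuck, you can query through the REST API published on port 8008 in the compose file above. A sketch with the standard Sawtooth CLI (the batch id is whatever your submission returned):

```
# sawtooth.consensus.algorithm should come back as "poet"
sawtooth settings list --url http://localhost:8008

# A batch that stays PENDING here was accepted but never made it into a block
sawtooth batch status --url http://localhost:8008 <batch-id>
```

If the algorithm setting is missing, the genesis batches never applied and the validator falls back to dev-mode behavior, which would match the "validated but never committed" symptom.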

Sqoop import-all-tables into Hive gets stuck with the statement below

By default the tables end up in my HDFS home directory, not in the warehouse directory (/user/hive/warehouse).
sqoop import-all-tables \
  --num-mappers 1 \
  --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
  --username=retail_dba \
  --password=cloudera \
  --hive-import \
  --hive-overwrite \
  --create-hive-table \
  --compress \
  --compression-codec org.apache.hadoop.io.compress.SnappyCodec \
  --outdir java_files
I tried --hive-home, overriding $HIVE_HOME, with no luck.
Can anyone suggest the reason?
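With --hive-import, Sqoop first stages each table in an HDFS directory and only then runs the Hive load step, so data sitting in your HDFS home directory usually means the staging finished but the Hive step never ran (for example, because the hive client is not on the PATH of the user running Sqoop). Two things worth checking, sketched under those assumptions (the staging path below is hypothetical):

```
# 1. Make sure the hive client is reachable from the shell running Sqoop.
which hive

# 2. Use an explicit staging directory and rerun; --warehouse-dir only controls
#    the intermediate HDFS location, the tables still land in the Hive
#    warehouse once the load step succeeds.
sqoop import-all-tables \
  --num-mappers 1 \
  --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
  --username=retail_dba \
  --password=cloudera \
  --warehouse-dir /user/cloudera/sqoop_staging \
  --hive-import \
  --hive-overwrite \
  --create-hive-table \
  --compress \
  --compression-codec org.apache.hadoop.io.compress.SnappyCodec \
  --outdir java_files
```

If the files appear under the staging directory but never move, the Hive side of the job is the part to debug.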

How to query the Parse REST API without --data-urlencode?

For the Parse REST API, we can run
curl -X GET \
  -H "X-Parse-Application-Id: Y6i5v9PQOAAGlnKnULJJu5odT72ffSCpOnqqPhx9" \
  -H "X-Parse-REST-API-Key: T6STkwY6XqVMySTbqeSZfmli3naZZK9KoxnAcEhR" \
  -G \
  --data-urlencode 'where={"username":"someUser"}' \
  https://api.parse.com/1/users
Now I'm trying to send the request without --data-urlencode, appending the query directly to the URL https://api.parse.com/1/users instead. What should I do?
I tried
curl -X GET \
  -H "X-Parse-Application-Id: Y6i5v9PQOAAGlnKnULJJu5odT72ffSCpOnqqPhx9" \
  -H "X-Parse-REST-API-Key: T6STkwY6XqVMySTbqeSZfmli3naZZK9KoxnAcEhR" \
  -G \
  https://api.parse.com/1/users?where={"username":"someUser"}
but it doesn't work.
Thank you.
First percent-encode the value {"username":"someUser"} as %7B%22username%22%3A%22someUser%22%7D, keeping the = separator literal, then
curl -X GET \
  -H "X-Parse-Application-Id: Y6i5v9PQOAAGlnKnULJJu5odT72ffSCpOnqqPhx9" \
  -H "X-Parse-REST-API-Key: T6STkwY6XqVMySTbqeSZfmli3naZZK9KoxnAcEhR" \
  -G \
  'https://api.parse.com/1/users?where=%7B%22username%22%3A%22someUser%22%7D'
works. Only the value gets encoded: if you also encode the = as %3D, the server sees one parameter named where={"username":"someUser"} with no value, and the filter is ignored.
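If you want to build the encoded query string yourself rather than lean on curl, only the bytes in the value need escaping. A minimal bash sketch, assuming ASCII input; the urlencode helper is a name of my own, not part of curl or Parse:

```shell
#!/usr/bin/env bash
# Percent-encode every byte that is not RFC 3986 "unreserved".
urlencode() {
  local s=$1 out= c i
  for (( i = 0; i < ${#s}; i++ )); do
    c=${s:i:1}
    case "$c" in
      [a-zA-Z0-9._~-]) out+=$c ;;                # unreserved: copy as-is
      *) printf -v c '%%%02X' "'$c"; out+=$c ;;  # everything else: %XX
    esac
  done
  printf '%s\n' "$out"
}

urlencode '{"username":"someUser"}'
# → %7B%22username%22%3A%22someUser%22%7D
```

The request is then `curl -G "https://api.parse.com/1/users?where=$(urlencode '{\"username\":\"someUser\"}')"` with the usual headers. Note that curl's own `-G --data-urlencode` combination does exactly this for you: it encodes the data and appends it to the URL.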