How to add two NICs in KVM/QEMU parameters - qemu

I can work with one NIC using the script below in QEMU:
qemu-system-x86_64 \
-name Android11 \
-enable-kvm \
-cpu host,-hypervisor \
-smp 8,sockets=1,cores=8,threads=1 \
-m 4096 \
-net nic,macaddr=52:54:aa:12:35:02,model=virtio \
-net tap,ifname=tap1,script=no,downscript=no,vhost=on \
--display gtk,gl=on \
-full-screen \
-usb \
-device usb-tablet \
-drive file=bliss.qcow2,format=qcow2,if=virtio \
...
But now I want to add another NIC on the VM for some purpose. When I use the script below, it does not work:
qemu-system-x86_64 \
-name Android11 \
-enable-kvm \
-cpu host,-hypervisor \
-smp 8,sockets=1,cores=8,threads=1 \
-m 4096 \
-net nic,macaddr=52:54:aa:12:35:02,model=virtio \
-net tap,ifname=tap1,script=no,downscript=no,vhost=on \
-net nic,macaddr=52:54:aa:12:35:03,model=e1000 \
-net tap,ifname=tap2,script=no,downscript=no \
--display gtk,gl=on \
-full-screen \
-usb \
-device usb-tablet \
-drive file=bliss.qcow2,format=qcow2,if=virtio \
...
What can I do to work with the two NICs? Is there something wrong in the QEMU parameters? Thanks.
PS: I have created tap1 and tap2 on a Linux bridge before running the above command.
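
For what it's worth, with the legacy -net option every -net nic / -net tap pair joins the same emulated hub, so the two NICs' traffic gets mixed together. A minimal sketch of the modern -netdev/-device pairing, reusing the tap interfaces and MAC addresses from the question (the other options are elided):

```shell
qemu-system-x86_64 \
  -name Android11 \
  -enable-kvm \
  -m 4096 \
  -netdev tap,id=net0,ifname=tap1,script=no,downscript=no,vhost=on \
  -device virtio-net-pci,netdev=net0,mac=52:54:aa:12:35:02 \
  -netdev tap,id=net1,ifname=tap2,script=no,downscript=no \
  -device e1000,netdev=net1,mac=52:54:aa:12:35:03 \
  -drive file=bliss.qcow2,format=qcow2,if=virtio
```

Each -netdev backend is bound to exactly one NIC through its id, so the two interfaces stay separate instead of sharing a hub.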
Error "unable to resolve docker endpoint: open /root/.docker/ca.pem: no such file or directory" when deploying to elastic beanstalk through bitbucket

I am trying to upload to Elastic Beanstalk with Bitbucket, and I am using the following YAML file:
image: atlassian/default-image:2
pipelines:
  branches:
    development:
      - step:
          name: "Install Server"
          image: node:10.19.0
          caches:
            - node
          script:
            - npm install
      - step:
          name: "Install and Build Client"
          image: node:14.17.3
          caches:
            - node
          script:
            - cd ./client && npm install
            - npm run build
      - step:
          name: "Build zip"
          script:
            - cd ./client
            - shopt -s extglob
            - rm -rf !(build)
            - ls
            - cd ..
            - apt-get update && apt-get install -y zip
            - zip -r application.zip . -x "node_modules/**"
      - step:
          name: "Deployment to Development"
          deployment: staging
          script:
            - ls
            - pipe: atlassian/aws-elasticbeanstalk-deploy:1.0.2
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_REGION
                APPLICATION_NAME: $APPLICATION_NAME
                ENVIRONMENT_NAME: $ENVIRONMENT_NAME
                ZIP_FILE: "application.zip"
All goes well until I reach the AWS deployment and I get this error:
+ docker container run \
--volume=/opt/atlassian/pipelines/agent/build:/opt/atlassian/pipelines/agent/build \
--volume=/opt/atlassian/pipelines/agent/ssh:/opt/atlassian/pipelines/agent/ssh:ro \
--volume=/usr/local/bin/docker:/usr/local/bin/docker:ro \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy \
--workdir=$(pwd) \
--label=org.bitbucket.pipelines.system=true \
--env=BITBUCKET_STEP_TRIGGERER_UUID="$BITBUCKET_STEP_TRIGGERER_UUID" \
--env=BITBUCKET_REPO_FULL_NAME="$BITBUCKET_REPO_FULL_NAME" \
--env=BITBUCKET_GIT_HTTP_ORIGIN="$BITBUCKET_GIT_HTTP_ORIGIN" \
--env=BITBUCKET_PROJECT_UUID="$BITBUCKET_PROJECT_UUID" \
--env=BITBUCKET_REPO_IS_PRIVATE="$BITBUCKET_REPO_IS_PRIVATE" \
--env=BITBUCKET_WORKSPACE="$BITBUCKET_WORKSPACE" \
--env=BITBUCKET_DEPLOYMENT_ENVIRONMENT_UUID="$BITBUCKET_DEPLOYMENT_ENVIRONMENT_UUID" \
--env=BITBUCKET_SSH_KEY_FILE="$BITBUCKET_SSH_KEY_FILE" \
--env=BITBUCKET_REPO_OWNER_UUID="$BITBUCKET_REPO_OWNER_UUID" \
--env=BITBUCKET_BRANCH="$BITBUCKET_BRANCH" \
--env=BITBUCKET_REPO_UUID="$BITBUCKET_REPO_UUID" \
--env=BITBUCKET_PROJECT_KEY="$BITBUCKET_PROJECT_KEY" \
--env=BITBUCKET_DEPLOYMENT_ENVIRONMENT="$BITBUCKET_DEPLOYMENT_ENVIRONMENT" \
--env=BITBUCKET_REPO_SLUG="$BITBUCKET_REPO_SLUG" \
--env=CI="$CI" \
--env=BITBUCKET_REPO_OWNER="$BITBUCKET_REPO_OWNER" \
--env=BITBUCKET_STEP_RUN_NUMBER="$BITBUCKET_STEP_RUN_NUMBER" \
--env=BITBUCKET_BUILD_NUMBER="$BITBUCKET_BUILD_NUMBER" \
--env=BITBUCKET_GIT_SSH_ORIGIN="$BITBUCKET_GIT_SSH_ORIGIN" \
--env=BITBUCKET_PIPELINE_UUID="$BITBUCKET_PIPELINE_UUID" \
--env=BITBUCKET_COMMIT="$BITBUCKET_COMMIT" \
--env=BITBUCKET_CLONE_DIR="$BITBUCKET_CLONE_DIR" \
--env=PIPELINES_JWT_TOKEN="$PIPELINES_JWT_TOKEN" \
--env=BITBUCKET_STEP_UUID="$BITBUCKET_STEP_UUID" \
--env=BITBUCKET_DOCKER_HOST_INTERNAL="$BITBUCKET_DOCKER_HOST_INTERNAL" \
--env=DOCKER_HOST="tcp://host.docker.internal:2375" \
--env=BITBUCKET_PIPE_SHARED_STORAGE_DIR="/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes" \
--env=BITBUCKET_PIPE_STORAGE_DIR="/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy" \
--env=APPLICATION_NAME="$APPLICATION_NAME" \
--env=AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
--env=AWS_DEFAULT_REGION="$AWS_REGION" \
--env=AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
--env=ENVIRONMENT_NAME="$ENVIRONMENT_NAME" \
--env=ZIP_FILE="application.zip" \
--add-host="host.docker.internal:$BITBUCKET_DOCKER_HOST_INTERNAL" \
bitbucketpipelines/aws-elasticbeanstalk-deploy:1.0.2
unable to resolve docker endpoint: open /root/.docker/ca.pem: no such file or directory
I'm unsure how to approach this, as I've followed the documentation Bitbucket lays out exactly, and it doesn't look like there's any place to add a .pem file.

QEMU displays and hangs with message 'booting from harddisk'

I'm using a Debian image with QEMU. Installation works well with the command below.
ISO_FILE="debian-buster-DI-alpha5-amd64-xfce-CD-1.iso"
DISK_IMAGE="debian.img"
SPICE_PORT=5924
qemu-system-x86_64 \
-cdrom "${ISO_FILE}" \
-drive format=raw,if=pflash,file=/usr/share/ovmf/OVMF.fd,readonly \
-drive file=${DISK_IMAGE:?} \
-enable-kvm \
-m 2G \
-smp 2 \
-cpu host \
-vga qxl \
-serial mon:stdio \
-net user,hostfwd=tcp::2222-:22 \
-net nic \
-spice port=${SPICE_PORT:?},disable-ticketing \
-device virtio-serial-pci \
-device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 \
-chardev spicevmc,id=spicechannel0,name=vdagent
But when I start the image again with a command like
DISK_IMAGE="debian.img"
SPICE_PORT=5924
qemu-system-x86_64 \
-drive format=raw,if=pflash,file=/usr/share/ovmf/OVMF.fd,readonly \
-drive file=${DISK_IMAGE:?} \
-enable-kvm \
-m 2G \
-smp 2 \
-cpu host \
-vga qxl \
-serial mon:stdio \
-net user,hostfwd=tcp::2222-:22 \
-net nic \
-spice port=${SPICE_PORT:?},disable-ticketing \
-device virtio-serial-pci \
-device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 \
-chardev spicevmc,id=spicechannel0,name=vdagent \
-snapshot
QEMU gets stuck at 'booting from harddisk'. Any thoughts on how to resolve this and move forward?
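
One hedged guess worth checking: with a single OVMF.fd passed read-only, the UEFI boot entries the installer wrote to NVRAM are discarded, so a later boot can fall through to the disk with no matching entry. The usual split CODE/VARS layout keeps a writable per-VM variable store instead; a sketch, assuming the Debian ovmf package's paths (they vary by distribution):

```shell
# Make a writable per-VM copy of the UEFI variable store once.
cp /usr/share/OVMF/OVMF_VARS.fd my_vars.fd

qemu-system-x86_64 \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=my_vars.fd \
  -drive file=debian.img \
  -enable-kvm -m 2G -smp 2 -cpu host
```

Note also that -snapshot makes all writes throwaway, including any boot entries the firmware tries to save during that run.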

Forge request fails on Postman and curl

Hello, I'm trying the simple introduction to Design Automation for Inventor, but I'm stuck on the upload parameters.
This is the article:
https://forge.autodesk.com/blog/simple-introduction-design-automation-inventor
My last curl request:
curl -X POST \
https://dasprod-store.s3.amazonaws.com \
-H 'content-type: application/octet-stream' \
-F key=apps/Hf3jB7SzAGmZnHhBdRFvVHJEaCa7xPzN/ThumbnailBundle/1 \
-F policy=eyJleHBpcmF0aW9uIjoiMjAxOS0wMi0xMlQxNzowNDozMi43NTIxNDc0WiIsImNvbmRpdGlvbnMiOlt7ImtleSI6ImFwcHMvSGYzakI3U3pBR21abkhoQmRSRnZWSEpFYUNhN3hQek4vVGh1bWJuYWlsQnVuZGxlLzEifSx7ImJ1Y2tldCI6ImRhc3Byb2Qtc3RvcmUifSx7InN1Y2Nlc3NfYWN0aW9uX3N0YXR1cyI6IjIwMCJ9LFsic3RhcnRzLXdpdGgiLCIkc3VjY2Vzc19hY3Rpb25fcmVkaXJlY3QiLCIiXSxbInN0YXJ0cy13aXRoIiwiJGNvbnRlbnQtVHlwZSIsImFwcGxpY2F0aW9uL29jdGV0LXN0cmVhbSJdLHsieC1hbXotc2VydmVyLXNpZGUtZW5jcnlwdGlvbiI6IkFFUzI1NiJ9LFsiY29udGVudC1sZW5ndGgtcmFuZ2UiLCIwIiwiMTA0ODU3NjAwIl0seyJ4LWFtei1jcmVkZW50aWFsIjoiQVNJQVRHVkpaS00zQUtFWk9ER0gvMjAxOTAyMTIvdXMtZWFzdC0xL3MzL2F3czRfcmVxdWVzdC8ifSx7IngtYW16LWFsZ29yaXRobSI6IkFXUzQtSE1BQy1TSEEyNTYifSx7IngtYW16LWRhdGUiOiIyMDE5MDIxMlQxNjA0MzJaIn0seyJ4LWFtei1zZWN1cml0eS10b2tlbiI6IkZRb0daWEl2WVhkekVHa2FERS9kN1ZqY09aQWU2UXBrdVNMdkFWeUxPUkJaditZSHhGUGZUZi85THVBMnQ4NlZYYlhYeGhDV2QxZTgzbGNVSnhmSGQ4bjNkRmlzc291dGpIWG5pcGxwcW5rR2l2bXdFK0w1NkJMQ1JCcDRzYnBXVkVVNVFZcEQ0MkE1VC81Nlgvc2Z5UU5WRzFwMlY5VDBpMTZqWS9ybVpWZ2FLeXU2a0Job2duRGlia0dxb2EvK3FDYmZGcklTVEVJNGdYOWtNTC8wS2xZTFZ0endsbTN0MFpMZlV6Wit1cHlBMGhBaEh3aVlkcHk0TWFkTEU0N245Qi91VHpPMUo0cVdiY1dLaXMrbkMvTy96SlhuTnVhZ2JnV3FoK1NuZjJRM2hyaGlWQTQySVVxZG0xZmtoMVhsdHRCOEZrMDU3dnlUaVFsT1FjY21zczh4cEUrbUNHR0FEMFRYS0tuVmkrTUYifV19 \
-F content-type=application/octet-stream \
-F success_action_status=200 \
-F success_action_redirect= \
-F x-amz-signature=2583b27e19fdb5ff23950d6866dc3765c1f1bcd9c9b43d2fb2cbeb252e9e6507 \
-F x-amz-credential=ASIATGVJZKM3AKEZODGH/20190212/us-east-1/s3/aws4_request/ \
-F x-amz-algorithm=AWS4-HMAC-SHA256 \
-F x-amz-date=20190212T160432Z \
-F x-amz-server-side-encryption=AES256 \
-F x-amz-security-token=FQoGZXIvYXdzEGkaDE/d7VjcOZAe6QpkuSLvAVyLORBZv+YHxFPfTf/9LuA2t86VXbXXxhCWd1e83lcUJxfHd8n3dFissoutjHXniplpqnkGivmwE+L56BLCRBp4sbpWVEU5QYpD42A5T/56X/sfyQNVG1p2V9T0i16jY/rmZVgaKyu6kBhognDibkGqoa/+qCbfFrISTEI4gX9kML/0KlYLVtzwlm3t0ZLfUzZ+upyA0hAhHwiYdpy4MadLE47n9B/uTzO1J4qWbcWKis+nC/O/zJXnNuagbgWqh+Snf2Q3hrhiVA42IUqdm1fkh1XlttB8Fk057vyTiQlOQccmss8xpE+mCGGAD0TXKKnVi+MF \
-F file=#/C:\Users\sejjilali\Desktop\InventorForgeAddInServer.zip
From your Postman request, it looks like you have a space at the end of your x-amz-algorithm form-data value. Try removing it after "AWS4-HMAC-SHA256".
You are using a policy to specify an authorization token; the token needs to be a header in both your Postman and cURL commands: -H 'Authorization: Bearer [token]'.
Use -H 'Content-Type: application/x-www-form-urlencoded' instead of the octet-stream.

Sqoop import-all-tables into Hive gets stuck with the statement below

By default, the tables are moving to HDFS, not to the warehouse directory (/user/hive/warehouse).
sqoop import-all-tables \
--num-mappers 1 \
--connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
--username=retail_dba \
--password=cloudera \
--hive-import \
--hive-overwrite \
--create-hive-table \
--compress \
--compression-codec org.apache.hadoop.io.compress.SnappyCodec \
--outdir java_files
Tried with --hive-home, overriding $HIVE_HOME - no use.
Can anyone suggest the reason?

How to interpret the Parse REST API without --data-urlencode?

For the Parse REST API, we can have
curl -X GET \
-H "X-Parse-Application-Id: Y6i5v9PQOAAGlnKnULJJu5odT72ffSCpOnqqPhx9" \
-H "X-Parse-REST-API-Key: T6STkwY6XqVMySTbqeSZfmli3naZZK9KoxnAcEhR" \
-G \
--data-urlencode 'where={"username":"someUser"}' \
https://api.parse.com/1/users
Now I'm trying to send the request without --data-urlencode and instead append the query directly to the URL https://api.parse.com/1/users. What shall I do?
I tried
curl -X GET \
-H "X-Parse-Application-Id: Y6i5v9PQOAAGlnKnULJJu5odT72ffSCpOnqqPhx9" \
-H "X-Parse-REST-API-Key: T6STkwY6XqVMySTbqeSZfmli3naZZK9KoxnAcEhR" \
-G \
https://api.parse.com/1/users?where={"username":"someUser"}
but it doesn't work.
Thank you.
First percent-encode only the value, so where={"username":"someUser"} becomes where=%7B%22username%22%3A%22someUser%22%7D (the = separator must stay unencoded, or the server will not see a parameter named where), then
curl -X GET \
-H "X-Parse-Application-Id: Y6i5v9PQOAAGlnKnULJJu5odT72ffSCpOnqqPhx9" \
-H "X-Parse-REST-API-Key: T6STkwY6XqVMySTbqeSZfmli3naZZK9KoxnAcEhR" \
-G \
'https://api.parse.com/1/users?where=%7B%22username%22%3A%22someUser%22%7D'
works
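
If you'd rather not build the encoded string by hand, Python's standard urllib.parse is one way to generate the same query string (any URL encoder works):

```python
from urllib.parse import urlencode

# Percent-encode only the value; the "where=" separator stays literal.
query = urlencode({"where": '{"username":"someUser"}'})
url = "https://api.parse.com/1/users?" + query
print(query)  # where=%7B%22username%22%3A%22someUser%22%7D
```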