Cross-compile Qt 5.4 for i.MX6 - configuration

I am trying to cross-compile qt-everywhere-opensource-5.4.0 for an i.MX6 board.
The following is my configuration file (config.imx6):
./configure --prefix=/tools/rootfs/usr/local/qt-5.4.0 -examplesdir /tools/rootfs/usr/local/qt-5.4.0/examples -verbose -opensource -confirm-license -make libs -make examples -device imx6 \
-device-option CROSS_COMPILE=\
/home/acsia/Desktop/imx6-Qt5/arm-tool-chain/freescale/usr/local/gcc-4.6.2-glibc-2.13-linaro-multilib-2011.12/fsl-linaro-toolchain/bin/arm-fsl-linux-gnueabi- \
-no-pch -no-opengl -no-icu -no-xcb -no-c++11 \
-opengl es2 \
-eglfs \
-compile-examples \
-glib -gstreamer -pkg-config -no-directfb
When I run ./config.imx6 I get the following error:
-gstreamer: invalid command-line switch
But the same configuration file runs fine with qt-everywhere-opensource-5.1.1.
The platform I am using is Ubuntu 14.04.
How do I resolve this?
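One way to narrow this down (a sketch, not from the thread): compare the saved option file against what the 5.4 configure actually advertises, since switches come and go between releases. `HELP_TEXT` below is a stand-in; in the real source tree you would fill it with `./configure -help 2>&1`.

```shell
# Stand-in for the real help output; in the Qt source tree use:
#   HELP_TEXT=$(./configure -help 2>&1)
HELP_TEXT="-glib ............ Enable GLib support
-eglfs ........... Enable EGLFS support"

# Flag any switch from the old 5.1-era option file that 5.4 no longer lists.
for opt in -gstreamer -glib -eglfs; do
  case "$HELP_TEXT" in
    *"$opt"*) echo "$opt: listed" ;;
    *)        echo "$opt: not listed - check the 5.4 configure changes" ;;
  esac
done
```

Any switch reported as "not listed" is a candidate for the "invalid command-line switch" failure.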


Error "unable to resolve docker endpoint: open /root/.docker/ca.pem: no such file or directory" when deploying to elastic beanstalk through bitbucket

I am trying to deploy to Elastic Beanstalk from Bitbucket, and I am using the following YAML file:
image: atlassian/default-image:2
pipelines:
  branches:
    development:
      - step:
          name: "Install Server"
          image: node:10.19.0
          caches:
            - node
          script:
            - npm install
      - step:
          name: "Install and Build Client"
          image: node:14.17.3
          caches:
            - node
          script:
            - cd ./client && npm install
            - npm run build
      - step:
          name: "Build zip"
          script:
            - cd ./client
            - shopt -s extglob
            - rm -rf !(build)
            - ls
            - cd ..
            - apt-get update && apt-get install -y zip
            - zip -r application.zip . -x "node_modules/**"
      - step:
          name: "Deployment to Development"
          deployment: staging
          script:
            - ls
            - pipe: atlassian/aws-elasticbeanstalk-deploy:1.0.2
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_REGION
                APPLICATION_NAME: $APPLICATION_NAME
                ENVIRONMENT_NAME: $ENVIRONMENT_NAME
                ZIP_FILE: "application.zip"
All goes well until I reach the AWS deployment step, where I get this error:
+ docker container run \
--volume=/opt/atlassian/pipelines/agent/build:/opt/atlassian/pipelines/agent/build \
--volume=/opt/atlassian/pipelines/agent/ssh:/opt/atlassian/pipelines/agent/ssh:ro \
--volume=/usr/local/bin/docker:/usr/local/bin/docker:ro \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy \
--workdir=$(pwd) \
--label=org.bitbucket.pipelines.system=true \
--env=BITBUCKET_STEP_TRIGGERER_UUID="$BITBUCKET_STEP_TRIGGERER_UUID" \
--env=BITBUCKET_REPO_FULL_NAME="$BITBUCKET_REPO_FULL_NAME" \
--env=BITBUCKET_GIT_HTTP_ORIGIN="$BITBUCKET_GIT_HTTP_ORIGIN" \
--env=BITBUCKET_PROJECT_UUID="$BITBUCKET_PROJECT_UUID" \
--env=BITBUCKET_REPO_IS_PRIVATE="$BITBUCKET_REPO_IS_PRIVATE" \
--env=BITBUCKET_WORKSPACE="$BITBUCKET_WORKSPACE" \
--env=BITBUCKET_DEPLOYMENT_ENVIRONMENT_UUID="$BITBUCKET_DEPLOYMENT_ENVIRONMENT_UUID" \
--env=BITBUCKET_SSH_KEY_FILE="$BITBUCKET_SSH_KEY_FILE" \
--env=BITBUCKET_REPO_OWNER_UUID="$BITBUCKET_REPO_OWNER_UUID" \
--env=BITBUCKET_BRANCH="$BITBUCKET_BRANCH" \
--env=BITBUCKET_REPO_UUID="$BITBUCKET_REPO_UUID" \
--env=BITBUCKET_PROJECT_KEY="$BITBUCKET_PROJECT_KEY" \
--env=BITBUCKET_DEPLOYMENT_ENVIRONMENT="$BITBUCKET_DEPLOYMENT_ENVIRONMENT" \
--env=BITBUCKET_REPO_SLUG="$BITBUCKET_REPO_SLUG" \
--env=CI="$CI" \
--env=BITBUCKET_REPO_OWNER="$BITBUCKET_REPO_OWNER" \
--env=BITBUCKET_STEP_RUN_NUMBER="$BITBUCKET_STEP_RUN_NUMBER" \
--env=BITBUCKET_BUILD_NUMBER="$BITBUCKET_BUILD_NUMBER" \
--env=BITBUCKET_GIT_SSH_ORIGIN="$BITBUCKET_GIT_SSH_ORIGIN" \
--env=BITBUCKET_PIPELINE_UUID="$BITBUCKET_PIPELINE_UUID" \
--env=BITBUCKET_COMMIT="$BITBUCKET_COMMIT" \
--env=BITBUCKET_CLONE_DIR="$BITBUCKET_CLONE_DIR" \
--env=PIPELINES_JWT_TOKEN="$PIPELINES_JWT_TOKEN" \
--env=BITBUCKET_STEP_UUID="$BITBUCKET_STEP_UUID" \
--env=BITBUCKET_DOCKER_HOST_INTERNAL="$BITBUCKET_DOCKER_HOST_INTERNAL" \
--env=DOCKER_HOST="tcp://host.docker.internal:2375" \
--env=BITBUCKET_PIPE_SHARED_STORAGE_DIR="/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes" \
--env=BITBUCKET_PIPE_STORAGE_DIR="/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/aws-elasticbeanstalk-deploy" \
--env=APPLICATION_NAME="$APPLICATION_NAME" \
--env=AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
--env=AWS_DEFAULT_REGION="$AWS_REGION" \
--env=AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
--env=ENVIRONMENT_NAME="$ENVIRONMENT_NAME" \
--env=ZIP_FILE="application.zip" \
--add-host="host.docker.internal:$BITBUCKET_DOCKER_HOST_INTERNAL" \
bitbucketpipelines/aws-elasticbeanstalk-deploy:1.0.2
unable to resolve docker endpoint: open /root/.docker/ca.pem: no such file or directory
I'm unsure how to approach this, as I've followed the documentation Bitbucket lays out exactly, and it doesn't look like there's any place to add a .pem file.
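One hedged diagnostic (not from the thread): the Docker client only goes looking for `ca.pem` when TLS verification is requested, i.e. when `DOCKER_TLS_VERIFY` is set, with certificates resolved under `DOCKER_CERT_PATH` (default `~/.docker`). A script line before the pipe can show whether a stray TLS setting is what sends the client hunting for `/root/.docker/ca.pem`:

```shell
# Summarize the Docker TLS-related environment the step sees. A set
# DOCKER_TLS_VERIFY with no certificates under DOCKER_CERT_PATH (default
# ~/.docker) reproduces "unable to resolve docker endpoint: open .../ca.pem".
summary=""
for v in DOCKER_HOST DOCKER_TLS_VERIFY DOCKER_CERT_PATH; do
  eval "val=\${$v-}"
  summary="$summary$v=${val:-unset} "
done
echo "$summary"
```

If `DOCKER_TLS_VERIFY` turns out to be set, unsetting it (or providing the certificates) is the direction to investigate.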

Azure CLI virtual machine scale sets tutorial fails with "Parameter 'osProfile' is not allowed."?

Following https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/tutorial-use-custom-image-cli fails here with the error:
...
$ az vmss create --resource-group myResourceGroup --name myScaleSet --image /subscriptions/.../myGallery/images/myImageDefinition
Deployment failed. Correlation ID: 6c5f031b-aa0e-42a8-a1d9-faba9b11b208. {
  "error": {
    "code": "InvalidParameter",
    "message": "Parameter 'osProfile' is not allowed.",
    "target": "osProfile"
  }
}
Any suggestions? You can reproduce this easily using the script https://github.com/dankegel/azure-scripts/blob/main/ss-demo.sh
I can reproduce the error with your script. The problem is a missing "\" after the parameter --image $IDID in your script:
az vmss create \
--resource-group myResourceGroup \
--name myScaleSet \
--image $IDID
--specialized
It should be
az vmss create \
--resource-group myResourceGroup \
--name myScaleSet \
--image $IDID \
--specialized
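The shell mechanics behind this can be shown in isolation (a sketch): without the trailing backslash, the command ends at `--image $IDID`, so `az` runs without `--specialized` (and presumably builds a default osProfile that the specialized image rejects), while the next line is executed as a command of its own.

```shell
# Without a trailing backslash, the next line starts a NEW command:
echo az vmss create --image myImageId    # echo stands in for the real az call
--specialized 2>/dev/null \
  || echo "'--specialized' ran as its own command and failed"
```

The second "command" fails with a command-not-found error, which is easy to miss in CI logs next to the louder deployment failure.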

ERROR >> undefined method 'new' for nil:NilClass

I'm trying to perform a Tungsten Replicator installation using tpm by typing:
./tools/tpm install alpha \
--install-directory=/opt/continuent \
--master=tungsten1 \
--members=tungsten1,tungsten2 \
--enable-heterogeneous-master=true \
--enable-batch-service=true \
--replication-password=tungsten \
--replication-user=tungstenmysql \
--skip-validation-check=HostsFileCheck \
--skip-validation-check=ReplicationServicePipelines \
--start-and-report=true
However, I keep getting "ERROR >> undefined method 'new' for nil:NilClass". Does anybody know how to fix this? Thanks.

Galen framework NullPointerException and FileNotFoundException

This is my Galen command for the gelya1.gspec file:
D:\Galen\Project1>galen check gelya1.gspec \ --url http://samples.galenframework.com/tutorial1/tutorial1.html \ --size 640x480 \ --htmlreport .
This is my gelya1.gspec file:
#objects
    header    id    header

= Main section =
    header:
        height 40to 120px
These are my error logs:
Test: gelya1.gspec
check gelya1.gspec \ --url http://samples.galenframework.com/tutorial1/tutorial1.html \ --size 640x480 \ --htmlreport .
java.lang.NullPointerException
Test: \
check gelya1.gspec \ --url http://samples.galenframework.com/tutorial1/tutorial1.html \ --size 640x480 \ --htmlreport .
java.io.FileNotFoundException: \ (The system cannot find the path specified)
Suite status: FAIL
Total tests: 4
Total failed tests: 3
Total failures: 3
There were failures in galen tests
D:\Galen\Project1>
The Firefox browser launches when I run this command; I'm not sure why these exceptions are thrown.
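A hedged guess at the cause: the command uses Unix-style `\` line continuations, but typed on a single line (and in Windows `cmd.exe`, where `\` is never a continuation character) each `\` reaches Galen as a literal argument, which Galen then treats as another test file - matching the `Test: \` and `java.io.FileNotFoundException: \` entries in the log. A POSIX shell reproduces the argument splitting:

```shell
# A backslash that is not at end-of-line is just another argument (here \\
# yields a literal backslash, much as cmd.exe would pass it through):
printf '[%s]\n' check gelya1.gspec \\ --url http://example.com
```

Putting the whole command on one line with the backslashes removed should leave Galen with only the real arguments.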

Failed to create Console with embed VFS due to "Call to a possibly undefined method"

First I've created an embedded Virtual File System, as described here.
It generates this AS code:
package C_Run {}
package com.adobe.flascc.vfs {
    import com.adobe.flascc.vfs.*;
    import com.adobe.flascc.BinaryData
    public class myvfs extends InMemoryBackingStore {
        public function myvfs() {
            addDirectory("/data")
            addFile("/data/localization.en.afgpack", new C_Run.ALC_FS_6D79766673202F646174612F6C6F63616C697A6174696F6E2E656E2E6166677061636B)
            addFile("/data/dataAudio.afgpack", new C_Run.ALC_FS_6D79766673202F646174612F64617461417564696F2E6166677061636B)
            addFile("/data/data.afgpack", new C_Run.ALC_FS_6D79766673202F646174612F646174612E6166677061636B)
        }
    }
}
It is compiled into myvfs.abc.
Then I'm trying to create a custom console with this VFS.
I've imported myvfs in Console.as:
import com.adobe.flascc.vfs.myvfs;
And created a VFS object:
var my_vfs_embedded:InMemoryBackingStore = new myvfs();
So, the problem is that compiling Console.abc sometimes fails with the error "Call to a possibly undefined method myvfs" and sometimes builds successfully from the same code. How can this be?
Console.abc is built by this command:
cd ./../../Engine/library/baselib/sources/flash && \
java -jar $(FLASCC_FOR_EXT)/usr/lib/asc2.jar -merge -md -AS3 -strict -optimize \
-import $(FLASCC_FOR_EXT)/usr/lib/builtin.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/playerglobal.abc \
-import $(GLS3D_ABS)/install/usr/lib/libGL.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/ISpecialFile.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/IBackingStore.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/IVFS.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/InMemoryBackingStore.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/AlcVFSZip.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/CModule.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/C_Run.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/BinaryData.abc \
-import $(FLASCC_FOR_EXT)/usr/lib/PlayerKernel.abc \
-import $(BUILD_FULL_PATH)/myvfs.abc \
Console.as -outdir $(BUILD_FULL_PATH) -out Console
myvfs.abc is located at BUILD_FULL_PATH, hinting that it might be built at the same time as Console.as. If the build order is not fully predictable, the myvfs.abc binary might be in an undetermined state when Console.as is compiled. This can happen if, for instance, you build myvfs.as and Console.as as separate independent targets and are using make's parallel option (-j).
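Under that assumption (independent targets racing under -j), the usual fix is to declare the dependency so make serializes the two. A toy sketch, with echo standing in for the real asc2.jar invocations and target names mirroring the build above:

```shell
# Toy Makefile: Console.abc depends on myvfs.abc, so even "make -j" must
# finish myvfs.abc before compiling Console.abc.
printf 'all: Console.abc\nConsole.abc: myvfs.abc\n\t@echo Console.abc built after $<\nmyvfs.abc:\n\t@echo myvfs.abc built\n' > Makefile.demo
make -f Makefile.demo -j8
```

With the prerequisite declared, the intermittent "possibly undefined method" failure should disappear regardless of the -j level.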
It seems my VFS was too big for the compiler. When I used less data, everything was OK. So I suppose it was a compiler bug.