I'm trying to run Packer (1.7) in an Azure DevOps pipeline.
The pkr.hcl file passes validation on my PC running Packer 1.7.3. The pipeline runs Packer 1.7.2.
The YAML task in the pipeline reads like this:
- task: PackerBuild@1
  inputs:
    templateType: 'custom'
    customTemplateLocation: 'ComboBoxes.pkr.hcl'
    imageUri: 'ssi-dev-combobox'
    imageId: <full resource ID>
When the task runs in the pipeline, the log reads:
Current installed packer version is 1.7.2.
Running packer fix command
/usr/local/bin/packer fix -validate=false /home/vsts/work/1/s/ComboBoxes.pkr.hcl
Error parsing template: invalid character '#' looking for beginning of value
##[error]Packer fix command failed with error : ''. This could happen if task does not support packer version.
The # is the first character in the .pkr.hcl file, and changing the beginning of the file changes which character is reported as invalid.
Why is it trying to run "packer fix" instead of "packer build"?
So it turns out that the Packer task in Azure Pipelines doesn't work with current versions of Packer.
Run Packer as part of a script task instead.
- task: PowerShell@2
  displayName: 'Packer build'
  inputs:
    targetType: 'inline'
    script: 'packer build $(build.artifactstagingdirectory)/ComboBoxes.pkr.hcl'
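For reference, a slightly fuller sketch of the same script-task approach that validates before building (the display name and the -force flag are just illustrative; you may also need a packer init step if the template declares required_plugins):
- task: PowerShell@2
  displayName: 'Packer validate and build'
  inputs:
    targetType: 'inline'
    script: |
      packer validate "$(build.artifactstagingdirectory)/ComboBoxes.pkr.hcl"
      packer build -force "$(build.artifactstagingdirectory)/ComboBoxes.pkr.hcl"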
I'm trying to use the Amazon Elastic Beanstalk Tools for .NET Core applications (4.2.2) to publish a net6.0 app to AWS EB (windows). At the time of writing I need to include the net6.0 runtime since net6.0 is not supported on EB yet.
I can successfully publish my app to AWS using the AWS Toolkit for Visual Studio.
The toolkit calls dotnet publish with the following parameters:
Executing: dotnet publish "[my project path]" --output "[my project path]\bin\Release\net6.0\publish" --configuration "Release" --framework "net6.0" --runtime win-x64 --self-contained true
The toolkit creates this config file (aws-beanstalk-tools-defaults.json) following a successful publish:
{
  "additional-options" : "",
  "application" : "myApp",
  "app-path" : "/",
  "configuration" : "Release",
  "enable-xray" : false,
  "enhanced-health-type" : "enhanced",
  "environment" : "myApp-test",
  "framework" : "net6.0",
  "iis-website" : "Default Web Site",
  "region" : "eu-west-1",
  "self-contained" : true,
  "runtime" : "win-x64"
}
However when I try to use the command line utility with the command:
dotnet eb deploy-environment -cfg myConfFile.json
the self-contained and runtime parameters are not passed to the dotnet publish call, resulting in this call:
dotnet publish "[my project path]" --output "[my project path]\bin\Release\net6.0\publish" --configuration "Release" --framework "net6.0"
I have tried passing the parameters without using the config file as
dotnet eb deploy-environment --profile XXX -c Release -env myApp-test -po --runtime "win-x64"
only to trigger this exception:
System.InvalidOperationException: Required argument missing for option: --runtime
Is there any way to use this utility to publish a self-contained net6.0 bundle to a Windows-based EB instance?
This is a bug/limitation in version 4.2.2 of AWS Beanstalk Tools for .NET Core.
The utility only reads this parameter for non-Windows environments.
There is however a workaround.
It is possible to pass the win-x64 parameter using the --publish-options parameter like this:
dotnet eb deploy-environment -c Release -cfg myConfFile --publish-options "--runtime win-x64" "--self-contained true"
This will actually result in a warning:
warning NETSDK1179: One of '--self-contained' or '--no-self-contained' options are required when '--runtime' is used.
But the self-contained image will still get published. You can actually skip the --self-contained parameter; the result will be the same.
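If you prefer a single quoted value, the same options can likely be combined in one string. This is only a sketch, assuming --publish-options simply passes its whole value through to dotnet publish; the two-string form shown above is the one confirmed to work:
# Assumption: --publish-options passes its whole value through to dotnet publish.
dotnet eb deploy-environment -c Release -cfg myConfFile.json --publish-options "--runtime win-x64 --self-contained true"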
A Jenkins pipeline is building Docker images; the OpenShift plugin is used for this.
An example command:
openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
While this works smoothly most of the time, whenever this command fails due to some underlying platform issues, almost no information is seen in the Jenkins build job console:
[Pipeline] }
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] ............................................................
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Uploading finished
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
[Pipeline] }
ERROR: Error running start-build on at least one item: [buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd];
{err=, verb=start-build, cmd=oc --server=https://api.scp-west-zone02-z01.net:6443 --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --namespace=sb-1166-amld5-car-service-se --token=XXXXX start-build buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd --from-dir=./build/libs --wait --follow -o=name , out=Uploading directory "build/libs" as binary input for the build ...
............................................................
Uploading finished
Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
, status=1}
[Pipeline] // catchError
I need more verbosity and detailed error information. I checked the start-build command reference and thought --build-loglevel [0-5] might help here. When I used it, I got a warning that because the BuildConfig uses the 'Binary' source type, logging isn't supported (seriously???).
NOTE: the selector returned when -F/--follow is supplied to startBuild() will be inoperative for the various selector operations.
Consider removing those options from startBuild and using the logs() command to follow the build output.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying --build-loglevel with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying environment variables with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] Uploading directory "build/libs" as binary input for the build ...
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] ..
How do I get more logs, info. while executing the start-build command?
I was facing the same problem, I just used something like:
def build = openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
build.logs('-f')
And so far it seems to work: I get the logs from my OpenShift build in my Jenkins pipeline. Now I'll try to fetch the logs only if the build does not Complete, to reduce the overall log volume.
(for future searchers like me ^^)
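Following up on the idea of printing the logs only when the build fails, a rough sketch along these lines may work with the same plugin. The try/catch wiring is my assumption; --follow is dropped so the selector stays usable, per the plugin warning quoted above:
def bc = openshift.selector(BUILD_CONFIG_NAME, "${appBcName}")
try {
    // --wait makes the step fail if the build does not complete
    bc.startBuild("--from-dir=${artifactPath}", '--wait')
} catch (err) {
    // dump the logs of this BuildConfig's builds only when something went wrong
    bc.related('builds').logs()
    throw err
}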
I tried to create and deploy Oracle Cloud Functions by following the official documentation instructions. I can create and deploy using the Java runtime, but deploying the Go runtime always returns an error.
I tried to init a Go function using this command in Oracle Cloud Shell:
fn init --runtime go hello-go
then I tried to deploy it
fn -v deploy --app test
but it returned an error like the one below:
Deploying hello-go to app: test
Bumped to version 0.0.7
Building image bom.ocir.io/bmptwl2psusa/repo/hello-go:0.0.7
FN_REGISTRY: bom.ocir.io/bmptwl2psusa/repo
Current Context: ap-mumbai-1
Sending build context to Docker daemon 5.632kB
Step 1/10 : FROM fnproject/go:dev as build-stage
---> 96c8fb94a8e1
Step 2/10 : WORKDIR /function
---> Using cache
---> 8961dd299ec1
Step 3/10 : WORKDIR /go/src/func/
---> Using cache
---> 5a4c2c6e13f1
Step 4/10 : ENV GO111MODULE=on
---> Using cache
---> 22022ff2fcf8
Step 5/10 : COPY . .
---> 714622a6ff03
Step 6/10 : RUN cd /go/src/func/ && go build -o func
---> Running in 39fedbc476f4
build func: cannot find module for path github.com/fnproject/fdk-go
The command '/bin/sh -c cd /go/src/func/ && go build -o func' returned a non-zero code: 1
Fn: error running docker build: exit status 1
When I use the Java runtime with the fn init --runtime java hello-java command, it deploys successfully. Why does it always fail when using Go?
I tried to run go build -o func in the hello-go directory, but it returned:
go: finding module for package github.com/fnproject/fdk-go
go: writing stat cache: mkdir /usr/share/gocode/pkg: permission denied
go: downloading github.com/fnproject/fdk-go v0.0.3
func.go:10:2: mkdir /usr/share/gocode/pkg: permission denied
I know this happens because the /usr/share/gocode/ directory is owned by root, but I don't know how to change the permissions on that folder because Oracle Cloud Shell cannot use the root user or sudo (based on this answer).
Maybe I could do it with a real VM shell or a local shell/terminal, but I want to use Oracle Cloud Shell because the official instructions suggest using it. So how do I deploy Oracle Cloud Functions with the Go runtime using Oracle Cloud Shell?
Mostly the official documentation only gives examples using the Java runtime, which makes me paranoid about using Go.
This is a bug in cloudshell that we are figuring out the best way to solve.
As a short-term workaround you can do this once:
mkdir ${HOME}/gopath
Then set this in your terminal:
export GOPATH=${HOME}/gopath
You should probably edit your ~/.bashrc to set the GOPATH variable automatically so you don't forget.
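Putting the workaround together, a minimal sequence in Cloud Shell would look like this (the gopath location is arbitrary; hello-go and test are just the names from the question):
# one-time setup of a writable GOPATH in Cloud Shell
mkdir -p ${HOME}/gopath
echo 'export GOPATH=${HOME}/gopath' >> ~/.bashrc
export GOPATH=${HOME}/gopath
# retry the deployment
cd hello-go
fn -v deploy --app test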
I am unable to deploy a Cloud Function through Google Cloud Build, receiving the error:
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
My git repo structure is
myrepo/cloudbuild.yaml
myrepo/new-user/index.js
myrepo/new-user/package.json
And my cloudbuild.yaml is as follows
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  id: 'newUser'
  args: ['functions',
         'deploy',
         'newUser',
         '--source=./new-user/.',
         '--trigger-event=providers/cloud.firestore/eventTypes/document.create',
         '--trigger-resource=projects/myproject/databases/default/documents/userLocations/{user}',
         '--runtime=nodejs8']
I thought for cloud functions, only the cloudbuild.yaml is required, which is why the Dockerfile error is confusing.
Running the following on the command line works fine.
gcloud functions deploy newUser --runtime=nodejs8 --trigger-event=providers/cloud.firestore/eventTypes/document.create --trigger-resource=projects/myproject/databases/default/documents/userLocations/{user} --source=./new-user/.
Thanks.
Your repository has no Dockerfile, so the build cannot use a non-existent Dockerfile. You are trying to make a serverless container, which needs a Docker image as input.
I just ran into this too. For me the problem was that I was working in a monorepo and had to redefine the working directory Cloud Build uses to find the Dockerfile, by adding a dir field to my cloudbuild file.
My Cloud Build trigger was set up to look for a cloudbuild file located under: <root>/apps/<subfolder>/cloudbuild.yaml
The cloudbuild file was properly picked up by Cloud Build and the build would start, but it then errored because the Dockerfile was not found.
YAML example:
steps:
- name: 'gcr.io/cloud-builders/docker'
  dir: 'apps/<subfolder>'
  args: [ 'build', '-t', 'us-west2-docker.pkg.dev/$PROJECT_ID/quickstart-docker-repo/quickstart-image:tag1', '.' ]
images:
- 'us-west2-docker.pkg.dev/$PROJECT_ID/quickstart-docker-repo/quickstart-image:tag1'
Reference: Docs
Is it possible to deploy a Single Page App project built using grunt to IIS using MSDeploy from TeamCity? The project is not any kind of Visual Studio solution and doesn't get built using MSBuild.
My command parameters, which are not working, are:
-source:package='%teamcity.build.checkoutDir%\Dist.%build.number%.zip' -dest:auto,computerName="%system.MsDeployServiceUrl%",userName="%system.UserName%",password="%system.Password%",authtype="basic",includeAcls="False"
-verb:sync -setParamFile:"%teamcity.build.checkoutDir%\Dist.%build.number%.zip.SetParameters.xml"
-AllowUntrusted -setParam:"IIS Web Application Name"="%system.WebSiteName%" -verbose
The error I am getting is:
[11:47:31][Step 3/3] Error Code: ERROR_EXCEPTION_WHILE_CREATING_OBJECT
[11:47:31][Step 3/3] More Information: Object of type 'package' and path 'D:\TeamCity\buildAgent\work\e2b0015b49d87e90\Dist.30.zip' cannot be created. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_EXCEPTION_WHILE_CREATING_OBJECT.
[11:47:31][Step 3/3] Error: The Zip package 'D:\TeamCity\buildAgent\work\e2b0015b49d87e90\Dist.30.zip' could not be loaded.
[11:47:31][Step 3/3] Error: Could not find file 'D:\TeamCity\buildAgent\work\e2b0015b49d87e90\Dist.30.zip'.
[11:47:31][Step 3/3] Error count: 1.
[11:47:31][Step 3/3] Process exited with code -1
[11:47:31][Step 3/3] Step Deploy (Command Line) failed
My build process is working, as I end up with the correct artefacts; I just don't seem to be able to deploy the generated artefacts using MSDeploy.
I managed to get this working by changing my parameters to the following:
-source:iisapp='%teamcity.build.checkoutDir%\dist' -dest:iisapp='C:\www\xxxx-website',computerName="%system.MsDeployServiceUrl%",userName="%system.UserName%",password="%system.Password%",authtype="basic",includeAcls="False"
-verb:sync -AllowUntrusted -verbose
I also changed my user to an admin user rather than an IIS user. Note the use of iisapp; the key was to read the MSDeploy API using msdeploy -help.
FYI, a good test is to run the intended command against msdeploy.exe in a console and check the output for errors, then push the command into TeamCity once it works.
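For example, a local smoke test of the same sync might look like this (the service URL, credentials and paths are placeholders, not values from the question):
rem Placeholder values: adjust the dist path, site path, server URL and credentials.
msdeploy.exe -verb:sync ^
  -source:iisapp="C:\path\to\dist" ^
  -dest:iisapp="C:\www\xxxx-website",computerName="https://myserver:8172/msdeploy.axd",userName="deployUser",password="*****",authtype="basic",includeAcls="False" ^
  -AllowUntrusted -verbose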
I created a grunt and a gulp plugin to do just what you are looking to do. gulp-mswebdeploy-package and grunt-mswebdeploy-package will create an MS WebDeploy package from any folder and do not require your build to be running on Windows.
https://www.npmjs.com/package/gulp-mswebdeploy-package
https://www.npmjs.com/package/grunt-mswebdeploy-package