Unpacking tar.gz into root dir with Gradle - extract

My project contains a couple of tar.gz files that I need to extract to the root directory of the project.
I made this as a test:
task untar(type: Copy) {
    from tarTree(resources.gzip('model.tar.gz'))
    into getProjectDir()
}
When I run it, it throws this exception:
org.gradle.api.UncheckedIOException: java.io.IOException: The process cannot access the file because another process has locked a portion of the file.
I'm using Gradle 1.1 on Windows 7.
Thanks for the help.

I was able to extract it using this:
task test {
    doLast {
        copy {
            from tarTree(resources.gzip('model.tar.gz'))
            into getProjectDir()
        }
    }
}
My only guess is that the directory, the tgz file, or both are locked during the configuration phase and released during the execution phase.
If someone has a solution using the Copy task rather than the copy method, I would appreciate it.
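One pattern that might satisfy this (a sketch only, not verified on Gradle 1.1/Windows 7; untarLazy is an illustrative name): a CopySpec's from accepts a closure, which defers resolving the archive until the task actually runs, so the file is not opened during the configuration phase:
task untarLazy(type: Copy) {
    // The closure delays tarTree() until execution time,
    // so the archive is not opened (and locked) during configuration.
    from { tarTree(resources.gzip('model.tar.gz')) }
    into projectDir
}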

Related

Preserving a folder from being removed at build by GitHub Actions

AdonisJS cleans the build folder every time you trigger a new build; as a result, the public upload folder is removed. This has created a lot of issues for me, and I'm trying different methods to solve it.
I'm not currently using GitHub Actions to build my project, and I was wondering whether it could help me here by doing something like the following on every commit (see the sketch after this list):
Copy the build/tmp folder
Build the project by running the yarn build command
Paste the folder copied in step 1 back to build/tmp
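For illustration, a minimal workflow implementing those three steps might look like the sketch below. The build/tmp folder name comes from the question; the job name, runner, backup location, and install step are assumptions, and the whole thing is untested:
name: keep-uploads
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # 1. copy the build/tmp folder out of the way (assumes it exists in the checkout)
      - run: cp -r build/tmp /tmp/uploads-backup
      # install dependencies before building
      - run: yarn install
      # 2. build the project
      - run: yarn build
      # 3. paste the copied folder back to build/tmp
      - run: cp -r /tmp/uploads-backup build/tmp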
For those who came here like me: I found the configuration. You can change the local disk configuration in /config/drive.ts:
{
  local: {
    root: Env.get('STORAGE_ROOT'),
  }
}
In this case I created a variable in the .env file holding the absolute path.
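For example, the .env entry might look like this (the path itself is hypothetical):
STORAGE_ROOT=/var/www/my-app/storage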

Cypress BDD - Unable to populate log.json file & messages.ndjson using the latest boilerplate code

I'm using the new boilerplate code present here - https://github.com/JoanEsquivel/cypress-cucumber-boilerplate - on a Windows machine to generate a log.json file, which in turn makes use of cucumber-json-formatter.exe to format the JSON file and generate a cucumber-html report. I seem to have followed all the steps correctly, but the log.json file is not getting populated with any data, and in turn there is no cucumber-html report.
Steps followed:
Cloned the project
Performed npm commands to install all the latest packages (not required, but as a double-check)
Downloaded cucumber-json-formatter-windows-386 from https://github.com/cucumber/json-formatter/releases/tag/v19.0.0, renamed it to cucumber-json-formatter.exe, and included it in the project folder
Performed the "npm run cypress:execution" command - this comes from the scripts in the package.json file. I was able to see the feature files being executed in the terminal. This creates the JSON logs folder with the 2 JSON files (log.json, messages.ndjson)
Performed the "node .\cucumber-html-report.js" command. This generates the cucumber-html report, which is empty, because it should be the formatted version of the log.json file. The formatting is done by cucumber-json-formatter.exe.
Reaching out in case anyone else has come across the same issue. If so, I'd appreciate some guidance here.
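For reference, in the older versions of @badeball/cypress-cucumber-preprocessor that this boilerplate builds on, the report wiring typically sits in package.json and looks roughly like the sketch below. The key names should be verified against the installed version (newer releases dropped the external formatter entirely), so treat this as an assumption, not the boilerplate's exact config:
"cypress-cucumber-preprocessor": {
  "messages": {
    "enabled": true,
    "output": "jsonlogs/messages.ndjson"
  },
  "json": {
    "enabled": true,
    "formatter": "cucumber-json-formatter.exe",
    "output": "jsonlogs/log.json"
  }
}
If messages.ndjson is populated but log.json stays empty, the formatter invocation is the first place to look.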

Why does GitLab CI not find my JUnit report artifact?

I am trying to upload JUnit reports on GitLab CI (these are test results from my Cypress automation framework). I am using junit-merge. Due to the architecture of Cypress (each test runs in isolation), an extra 'merge' is needed to get the reports into one file. Locally everything works fine:
JUnit generates a single report for each test, named with a hash code
After all reports have been generated, I run a script (shown below) that merges all the reports into one single .xml file and outputs it under the 'results' folder.
I tried to debug it locally, but locally everything just works fine. Possibilities I could think of: either the merge script is not handled properly, or GitLab does not accept the relative path to the .xml file.
{
  "baseUrl": "https://www-acc.anwb.nl/",
  "reporter": "mocha-junit-reporter",
  "reporterOptions": {
    "mochaFile": "results/resultsreport.[hash].xml",
    "testsuiteTitle": "true"
  }
}
This is the cypress.json file, where I configured the JUnit reporter and let it output the individual test files into the results folder.
cypress-e2e:
  image: cypress/base:10
  stage: test
  script:
    - npm run cy:run:staging
    - npx junit-merge -d results -o results/results.xml
  artifacts:
    paths:
      - results/results.xml
    reports:
      junit: results/results.xml
    expire_in: 1 week
This is part of the .yml file. The npx junit-merge command makes sure all .xml files in the results folder are merged into results.xml.
Again, locally everything works as expected. The error I get from GitLab CI is:
Uploading artifacts...
WARNING: results/results.xml: no matching files
ERROR: No files to upload
ERROR: Job failed: exit code 1
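One way to narrow this down (my suggestion, not part of the original post) is to list the results folder inside the job right after the merge, to confirm that results.xml is actually created relative to the build directory:
script:
  - npm run cy:run:staging
  - npx junit-merge -d results -o results/results.xml
  # confirm results.xml exists where the artifacts paths expect it
  - ls -la results/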
Artifacts can only exist in directories relative to the build directory, and specifying paths which don't comply with this rule triggers an unintuitive and illogical error message (an enhancement is discussed at gitlab-ce#15530). Artifacts need to be uploaded to the GitLab instance (not only the GitLab runner) before the next stage's job(s) can start, so you need to evaluate carefully whether your bandwidth allows you to profit from parallelization with stages and shared artifacts before investing time in changes to the setup.
https://gitlab.com/gitlab-org/gitlab-ee/tree/master/doc/ci/caching
This means the following configuration should fix the problem:
artifacts:
  reports:
    junit: <testing-repo>/results/results.xml
  expire_in: 1 week

xxx.mex: failed to load: No such file or directory

For some time, I've been using some .mex files I created. Now I have a new computer. I copied all the files over and reinstalled Cygwin and Octave. When I try to execute any of the .mex routines I get a message like:
error: testm: /cygdrive/c/A/Cwin/...../quad.mex: failed to load: No such file or directory
The file is definitely there and I'm having no trouble loading .m files from the same directory. It does not say there is anything wrong with the file. I'm guessing this is some sort of configuration problem. I am running Octave 4.2.1. When I start it, I get the following message:
Octave was configured for "x86_64-unknown-cygwin".
Could that have something to do with it? I think I'm developing x64 paranoia, since all my Excel .dll macros no longer work either. Thanks.
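Not from the original thread, but a hedged pointer: when a loader reports "failed to load: No such file or directory" for a file that is clearly present, the missing file is often a dependent DLL rather than the .mex itself. On Cygwin, cygcheck can list those dependencies (the path below is the question's, abbreviated):
cygcheck /cygdrive/c/A/Cwin/...../quad.mex
A .mex built with the old 32-bit toolchain would also fail to load under an "x86_64-unknown-cygwin" Octave, which fits the x64 suspicion; rebuilding the .mex files with mkoctfile --mex on the new installation is the usual remedy in that case.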

Azure AppService deploy.cmd using the wrong file

I am trying to configure continuous deployment to a test server on Azure. The app is an ASP.NET application, but in this case that shouldn't really matter.
My build process (TeamCity) produces a folder that has everything needed to deploy (minus some connection string info). If you point IIS at that directory it works great. If you FTP that directory up to Azure it also works.
I am tracking each of these builds in git and pushing them up to GitHub, so I am trying to use the Azure deployment option to deploy from GitHub. Everything is in git, the /bin folder included.
Kudu shouldn't need to do anything but pull from git and copy all the files to wwwroot.
So I've set my .deployment file to be this:
[config]
project = .
Every time I do that, though, the deployment gives me the message:
Using cached version of deployment script (command: 'azure -y --no-dot-deployment -r "D:\home\site\repository" -o "D:\home\site\deployments\tools" --aspWAP "D:\home\site\repository\MyProj.csproj" --no-solution').
And it runs some generic autogenerated deploy.cmd.
If I delete the deploy.cmd from the cache, it regenerates some generic one.
And, most importantly, in doing all this, the WRONG ASSEMBLY IS BEING DEPLOYED!!
My app depends on System.Web.Helpers.dll. The correct version of this DLL is in github. I've verified this multiple times.
Kudu, however, is grabbing an OLDER one from NuGet and deploying that. And, of course, I get the dreaded YSOD error about not being able to load that file.
What do I need to do to make Kudu just copy the files from my github repository to wwwroot and nothing else?
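As an aside - this is an assumption on my part, not something from this thread: Kudu's script generator is supposed to have a "basic" mode that skips the NuGet restore and build steps and only runs KuduSync, selected via an app setting such as:
SCM_SCRIPT_GENERATOR_ARGS=--basic
If that works on your instance, it would achieve the "just copy the repository to wwwroot" behavior without the hand edits described below.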
I wound up getting it to deploy by hand-editing the autogenerated deploy.cmd file that lives at \home\site\deployments\tools\deploy.cmd in Kudu.
I commented out the two autogenerated sections:
:: 1. Restore NuGet packages
:: 2. Build to the temporary path
(commented out all the code underneath them, too)
And then hand-edited the third section to run KuduSync from the DEPLOYMENT_SOURCE instead of the temporary path, like this:
:: 3. KuduSync
IF /I "%IN_PLACE_DEPLOYMENT%" NEQ "1" (
  call :ExecuteCmd "%KUDU_SYNC_CMD%" -v 50 -f "%DEPLOYMENT_SOURCE%" -t "%DEPLOYMENT_TARGET%" -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.hg;.deployment;deploy.cmd"
  IF !ERRORLEVEL! NEQ 0 goto error
)