I'm starting to use Cloud Build for a project, and I'm having the following issue:
Using this cloudbuild.yaml file:
steps:
- name: 'gcr.io/cloud-builders/npm'
  args: ['install']
- name: 'gcr.io/cloud-builders/npm'
  args: ['test']
The build runs okay on the first step because it is able to install the dependencies and everything, but the second step fails because it needs to connect to a MySQL database.
I saw in another SO post that you can use an extra build step to run the Cloud SQL Proxy and connect that way. Check it out and let us know if that solution works for you.
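A minimal sketch of that approach, starting the proxy detached on Cloud Build's shared cloudbuild Docker network so later steps can reach it by container name. The instance connection name, container name, image tag, and the environment variable names your tests read are all placeholders, not values from the question:
steps:
# Start the Cloud SQL Proxy in the background; later steps can reach it
# at the hostname 'cloudsql-proxy' on the shared 'cloudbuild' network.
- name: 'gcr.io/cloud-builders/docker'
  args: ['run', '-d', '--network=cloudbuild', '--name=cloudsql-proxy',
         'gcr.io/cloudsql-docker/gce-proxy:1.33.2', '/cloud_sql_proxy',
         '-instances=my-project:us-central1:my-instance=tcp:0.0.0.0:3306']
- name: 'gcr.io/cloud-builders/npm'
  args: ['install']
# Point the tests at the proxy container instead of localhost.
- name: 'gcr.io/cloud-builders/npm'
  args: ['test']
  env: ['MYSQL_HOST=cloudsql-proxy', 'MYSQL_PORT=3306']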
I'm trying to set up a pipeline on Azure DevOps with Cypress tests.
Locally, the test output file is created correctly.
I'm using the npx cypress run command.
I'm getting an error/warning for Publish Test Results:
##[warning]No test result files matching **/test-output-*.xml were found.
Here is my cypress.json file:
{
  "reporter": "junit",
  "reporterOptions": {
    "mochaFile": "tests/test-output-[hash].xml",
    "toConsole": true,
    "attachments": true
  },
  "video": false,
  "pluginsFile": "cypress/plugins/index.js",
  "supportFile": "cypress/support/index.js"
}
Here is azure-pipelines.yml:
# Node.js
# Build a general Node.js project with npm.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/javascript
trigger:
- develop
pool:
  vmImage: 'ubuntu-latest'
steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'
  displayName: 'Install Node.js'
- script: |
    npm install
  displayName: 'npm install'
- script: npx cypress run
  displayName: 'Execute cypress tests'
- task: PublishTestResults@2
  displayName: "Publish Test Results"
  condition: always()
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '**/test-output-*.xml'
    searchFolder: '$(System.DefaultWorkingDirectory)'
- task: PublishBuildArtifacts@1
  condition: always()
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
    publishLocation: 'Container'
I tried all sorts of things, but nothing helps.
Checked all StackOverflow topics like those below:
Azure DevOps test -xml not found after running the Cypress tests
Is there any way to show Cypress Test Results in Azure DevOps Test Results Tab?
No test result files were found using search pattern '...\**\TEST-*.xml
Cypress Integration with DevOps
Everything looks to be set up correctly according to the Cypress documentation, blogs, etc.
Maybe the test output file is not created on Azure?
Does anyone have a clue?
EDIT:
I checked using the ls -al command that the tests folder is not created.
But even if I create it using mkdir tests before starting Cypress, the folder is empty after the Cypress job.
So Cypress is not creating the test output report. Why is the file created locally but not on Azure?
Please check with the following steps:
Set the pipeline variable system.debug to true, and run the pipeline again (one way to set it is shown in the snippet after these steps).
After the step "Execute cypress tests" is completed, check if you can get more details for troubleshooting from the debug logs on the console window.
You mentioned that the same npx cypress run command works fine on your local machine; please try installing a self-hosted agent on your local machine to run the pipeline and see if the problem still exists.
If the problem still exists, please share the complete logs of the test step with us so we can investigate further.
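For reference, a minimal way to set that variable in the pipeline YAML itself (you can also set it in the pipeline's UI variables):
variables:
  system.debug: 'true'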
Just had the same issue. I had my artifact download task pinned to a specific build, so I never got the new build with the correct cypress.config file. I updated the build target and now everything is working. So thanks @DuduA; I thought I'd post this as an answer so it's a bit easier to find for anyone who has the same issue.
Please add a command line task to check which folder the cypress run command is running in.
I think you might be running the Cypress command in the wrong folder. The issue will be resolved if you use the correct folder structure and run the cypress run command where the cypress.json file exists.
Also, in the cypress.json file, check the paths of the plugins file and the support file.
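A minimal sketch of such a check, inserted before the Cypress step (the displayName is arbitrary):
- script: |
    pwd
    ls -la
    ls -la cypress || true
  displayName: 'Show working directory contents'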
I want to test a CLI that should connect to PostgreSQL and MySQL servers using GitHub Actions, on all platforms if possible: Linux, Windows and macOS.
I found instructions on how to run a Postgres service and how to run a MySQL service and combined these into a workflow:
name: Test
on: [push]
jobs:
  init_flow:
    name: 'Run MySQL and Postgres on ${{ matrix.os }}'
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, windows-latest, macOS-latest]
    # via https://github.com/actions/example-services/blob/master/.github/workflows/postgres-service.yml
    services:
      postgres:
        image: postgres:10.8
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: postgres
        ports:
          # will assign a random free host port
          - 5432/tcp
        # needed because the postgres container does not provide a healthcheck
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
      mysql:
        image: mysql:5.7
        env:
          MYSQL_ROOT_PASSWORD: root
        ports:
          - 3306
        options: --health-cmd="mysqladmin ping" --health-interval=10s --health-timeout=5s --health-retries=3
    steps:
      - uses: actions/checkout@v1
      - run: node -v
        env:
          # use localhost for the host here because we are running the job on the VM.
          # If we were running the job in a container, this would be postgres
          POSTGRES_HOST: localhost
          POSTGRES_PORT: ${{ job.services.postgres.ports[5432] }} # get randomly assigned published port
          MYSQL_PORT: ${{ job.services.mysql.ports[3306] }}
But this only seems to work on Linux, not Windows or macOS; see the results of the action on GitHub:
Linux ✔
Windows ❌
macOS ❌
Windows fails during Initialize Containers with ##[error]Container operation is only supported on Linux, and macOS fails even earlier, during Set up job, with ##[error]File not found: 'docker'.
The GitHub Actions services docs do not mention that this only works on Linux, but I also do not know much about containers or Docker, so I might be missing something obvious.
(It is not important that MySQL and PostgreSQL run on the same operating system by the way - they only have to be accessible by the main job.)
Is it possible to run MySQL and PostgreSQL for GitHub Actions using Windows and macOS?
If not, what is the best workaround here?
Well, normally it's supported only on Linux. I was wondering whether it would be supported on other VMs, so I asked GitHub. Here is the answer:
Currently, Docker container actions can only execute in the GitHub-hosted Linux environment, and is not supported on other environments (such as Windows and MacOS).
More details please reference here: https://help.github.com/en/actions/automating-your-workflow-with-github-actions/about-actions#types-...
We notice that some other users also had reported the same question, we had reported this as a feature request to the appropriate engineering team.
Ref: https://github.community/
I'm not sure that it's possible yet. I know that container actions only work on Linux virtual machines at the moment, as you can see from the documentation here.
https://help.github.com/en/articles/about-actions#types-of-actions
Services use containers, so it would make sense that they don't work on Windows and macOS yet.
An alternative workaround is, of course, to use an external database.
A simple way to do this might be the free tiers of Heroku's offering:
For Postgres they have Heroku Postgres
For MySQL you can use the ClearDB or JawsDB add-on
They all give you tiny storage and limits, but in my case this will probably be enough for now.
For MySQL, I found a (temporary[1]) workaround:
Per Software in virtual environments for GitHub Actions I learned that all operating systems currently have a local installation of MySQL 5.7 running on port 3306 with credentials root:root. You can use this MySQL instance in your jobs.
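A minimal sketch of using that instance in a job, in case the preinstalled service is not already running on your runner image (the systemctl service name is an assumption for ubuntu-latest):
steps:
  - name: Start the preinstalled MySQL
    run: sudo systemctl start mysql.service
  - name: Smoke-test the connection with the root:root credentials
    run: mysql -uroot -proot -e 'SELECT VERSION();'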
Unfortunately for me PostgreSQL is not installed.
[1] I recall a product manager of GitHub Actions telling people that the installed software might change and that, unfortunately, the databases in particular might go away soon (I can't recall or find the link, though; it was somewhere in the GitHub Community forums for GitHub Actions).
It turns out the MySQL credentials root:root also only work on Linux; I could not find working ones for Windows and macOS.
I'm using the Hibernate framework along with Maven in IntelliJ. I'm creating a MySQL database, I have some ORM classes that map the MySQL database, and I'm running some JUnit tests to make sure everything works.
Where I'm having trouble is in two places, which are related to each other:
When I run mvn test, sometimes my JUnit tests work fine and are able to query the simulated database, establish a connection (even though it's just with the simulated database), execute a statement, etc. However, sometimes, if I run mvn clean before running mvn test, while the JUnit tests still execute, the tests output with failures (not errors, just failures, though this is still bad, of course).
The problem outlined in #1 is essentially duplicated when I upload to GitHub and run CircleCI (which isn't surprising, since CircleCI runs mvn test when doing its integration testing). Most of my uploads failed, but one of them finally worked. However, I'm not exactly sure why the "final" upload was successful while the others weren't.
The error messages I'm getting either from mvn test or the CircleCI builds are typically as follows. These errors are from my penultimate upload, the one I did just prior to the upload that actually worked:
java.sql.SQLNonTransientConnectionException: Could not create connection to database server. Attempted reconnect 3 times. Giving up.
com.mysql.cj.exceptions.CJException: Public Key Retrieval is not allowed
java.sql.SQLNonTransientConnectionException: Could not create connection to database server
I should also note that my intention is to run mvn clean first and then upload to CircleCI; however, running mvn clean seems to be somehow involved in perpetuating these errors.
As far as different resources I'm using go, here they are. If I'm forgetting something, please let me know and I should be able to include it.
In my hibernate.cfg.xml file, I have the following lines:
<property name="connection.url">jdbc:mysql://localhost:3306/stocks</property>
<property name="connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
At the end of the word "stocks" on the first line, I have sometimes appended any of the following (sometimes I only appended one of the following, other times I combined them, depending on the error(s) from either Maven or CircleCI). Appending some combination of these lines seemed to help get things to work, but running mvn clean seemed to halt any effect these additions were having:
autoReconnect=true
useSSL=false
allowPublicKeyRetrieval=true
Running the JUnit tests from within IntelliJ usually works, but if I run mvn clean first, then IntelliJ usually won't work unless I go back into this file and append ?autoReconnect=true&useSSL=false. If I do that, then IntelliJ will run the JUnit tests fine.
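One detail worth double-checking when appending these parameters inside hibernate.cfg.xml: the & separator must be escaped as &amp; in XML, or the file may fail to parse. For example, with all three options combined:
<property name="connection.url">jdbc:mysql://localhost:3306/stocks?autoReconnect=true&amp;useSSL=false&amp;allowPublicKeyRetrieval=true</property>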
In my config.yml file for CircleCI, I have the following code. Certain statements were added in MAVEN_OPTS based on other research I did to try to counteract the errors I was getting, but I don't know if these statements are having any impact one way or the other:
# Java Maven CircleCI 2.0 configuration file
#
# Check https://circleci.com/docs/2.0/language-java/ for more details
#
version: 2
jobs:
  build:
    docker:
      # specify the version you desire here
      - image: circleci/openjdk:8-jdk
      # Specify service dependencies here if necessary
      # CircleCI maintains a library of pre-built images
      # documented at https://circleci.com/docs/2.0/circleci-images/
      # - image: circleci/postgres:9.4
      - image: circleci/mysql:latest-ram
        environment:
          - MYSQL_ROOT_PASSWORD: (my real password goes here)
          - MYSQL_DATABASE: stocks
          - MYSQL_USER: bob
          - MYSQL_PASSWORD: (the real password goes here)
    working_directory: ~/repo
    environment:
      # Customize the JVM maximum heap limit
      MAVEN_OPTS: -Xmx3200m -Dmaven.wagon.http.ssl.insecure=true -Dmaven.wagon.http.ssl.allowall=true -Dmaven.wagon.http.ssl.ignore.validity.dates=true
    steps:
      - checkout
      - run: sudo apt install -y mysql-client
      # Download and cache dependencies
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "pom.xml" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-
      - run: mvn dependency:go-offline
      - save_cache:
          paths:
            - ~/.m2
          key: v1-dependencies-{{ checksum "pom.xml" }}
      # run tests!
      - run: mvn integration-test
If anyone has any idea what's going on, I'd appreciate the help. My goal is to be able to upload to CircleCI by first running mvn clean so only the src files, the pom.xml file, and the .circleci folder are included in the upload. Also, not to belabor the point, but my most recent upload to CircleCI did in fact work; I'm just not sure what made that build work while all the other ones did not.
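One possibility worth ruling out: intermittent "Could not create connection" failures on CI often happen when the tests start before the MySQL service container is ready to accept connections. A sketch of a wait step that could go before mvn integration-test, reusing the mysql-client installed earlier in the config (the step name is arbitrary):
- run:
    name: Wait for MySQL to accept connections
    command: |
      for i in $(seq 1 30); do
        mysqladmin ping -h 127.0.0.1 --silent && exit 0
        sleep 2
      done
      echo "MySQL did not become ready in time"; exit 1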
I am a complete beginner at hosting applications but am trying to get the hang of it.
- I have a MySQL database running locally on my PC. How exactly should I host it somewhere online?
- When I tried to deploy my Go server on Heroku, I got the following error and couldn't find a solution for it anywhere online.
-----> App not compatible with buildpack: https://codon-buildpacks.s3.amazonaws.com/buildpacks/heroku/go.tgz
More info: https://devcenter.heroku.com/articles/buildpacks#detection-failure
! Push failed
Any help in this regard would be appreciated!
To fix this problem you need to use one of the vendoring tools, e.g. Godep, dep, etc.
Heroku can't deploy a Golang app without a 'vendor' folder.
Just run this command:
go mod init [app-name]
This will create a go.mod file and a go.sum for you.
Then run:
go get
to install the packages you use in your Go app.
Developers usually use go mod init github.com/name/repo, but go mod init [app-name] will do the job!
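For illustration, as I understand it the Heroku Go buildpack detects a module-based app from a go.mod at the repo root (so the vendor folder from the answer above isn't strictly required with modules). A hypothetical go.mod produced this way might look like (module name, Go version, and dependency are placeholders):
module myapp

go 1.13

require github.com/go-sql-driver/mysql v1.5.0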
I am having a problem deploying an EB instance with a custom .ebextensions file. This is the relevant part in that file:
container_commands:
  01_migrate:
    command: 'python db_migrate.py'
  02_npm_build:
    command: 'npm install && npm run prod'
As you can see, these commands are for migrating my PostgreSQL database (via a Flask backend) and building my React .jsx files.
If I leave these commands out, the deployment completes perfectly well. However, once I put them in, looking at eb-activity.log, it stalls at this part forever (as far as I can tell):
[2017-04-10T02:39:24.106Z] INFO [3023] - [Application deployment app-613e-170409_223418@1/StartupStage0/EbExtensionPostBuild] : Starting activity...
I also get this message on the Health overview in the console (this is after 1 day):
Performing application deployment (running for 1 day).
I have also tried deploying it without those container_commands and then adding them back after the successful initial deployment. Then I get the same error message as before in eb-activity.log, and I also get this message on the Health overview:
Incorrect application version "app-2a3d-170409_214923" (deployment 1). Expected version "app-2a3d-170409_214923" (deployment 1).
This is very strange because the two versions referenced are the same. I don't know what it means!
I found a solution.
Remove all your container_commands from .ebextensions/.
SSH into the instance and kill the stuck process with:
sudo killall python
Then deploy the new version without container_commands.
And start debugging all your container_commands, one by one, over SSH (a rough sketch is below).
Have fun.
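A rough sketch of that debugging loop (the directory names are typical for Amazon Linux EB instances and may vary by platform version):
eb ssh
# the live app lives here; container_commands run against a staging copy
# during deployment (/var/app/ondeck on older Amazon Linux platforms)
cd /var/app/current
python db_migrate.py               # try each container_command by hand
npm install && npm run prod
tail -f /var/log/eb-activity.log   # watch what a deployment is doing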