Group Snyk projects in a target in Snyk.io

I'm exploring snyk for some vulnerability detection in our repos.
A repo usually contains:
Dockerfile
IaC (CloudFormation)
package.json and dependencies (or Pipfile in Python).
Now I run the following commands:
$ snyk iac test --severity-threshold=high --report --target-name=company/app
$ snyk test --project-name=company/app
$ snyk container test --severity-threshold=high <some-image> --policy-path=.snyk --project-name=company/app
This works fine, but only snyk iac test generates a report that is uploaded to snyk.io. Is there a way to do this for the other tests?
I was thinking to enable the monitoring after these commands e.g.
snyk monitor --project-name='company/app'
The problem here is that it suddenly uses a different "target" in snyk.io.
While my IaC report is in target 'company/app', my snyk monitor report is in target /company/app.git.
And when I run
snyk container monitor --project-name='company/app' <some-image>
Then the report is in a target called "some-image".
Is it possible to have everything in the same Snyk target or is this not how Snyk is supposed to work? I would prefer to have one target in which you can see the reports/monitoring for IaC, deps and Docker.
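One thing worth experimenting with (target grouping is partly decided server-side, so this is a sketch rather than a guaranteed fix) is passing the same --remote-repo-url to each monitor command; Snyk uses that URL to derive the target. The repo URL below is a placeholder.

```shell
# Placeholder URL; Snyk derives the target from this value.
REPO_URL="http://github.com/company/app"

snyk iac test --severity-threshold=high --report --target-name=company/app
snyk monitor --project-name=company/app --remote-repo-url="$REPO_URL"
# --remote-repo-url may or may not be honored by container monitor,
# depending on your CLI version; test before relying on it.
snyk container monitor --project-name=company/app --remote-repo-url="$REPO_URL" some-image
```

If the flags are ignored for one of the commands, the fallback is renaming/moving projects between targets in the snyk.io UI.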

Related

Snyk monitor command is failing

I am running the Snyk command below for a standard WAS application.
snyk monitor --all-projects
It is failing because there is a war-src module that contains a ${project.version} tag, and this version is defined in a property tag in the main pom.xml. When I run the snyk monitor command, it does not resolve ${project.version} and throws an error.
The Snyk docs say you can pass Maven options through the build tool command. Is the command below correct? If not, please let me know how I can make use of this.
snyk monitor --all-projects -- -Dproject.version=2.1.0
Yes. You can use the double dash to pass additional arguments to Maven. The command you wrote is the correct way to pass the -Dproject.version=2.1.0 argument to Maven.
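As a variant: everything after the double dash is handed to Maven verbatim, so multiple properties can be passed the same way (the extra property below is only illustrative):

```shell
# Everything after "--" goes straight to Maven, not to Snyk
snyk monitor --all-projects -- -Dproject.version=2.1.0 -Dmaven.test.skip=true
```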

Snyk test returns Failed to test pip project error

I'm running a security scan with the Snyk CLI for a Python project. Unfortunately, the snyk test command returns a "Failed to test pip project" error. What am I missing? snyk test works just fine when scanning an npm project.
$ snyk test --file=requirements.txt
Failed to test pip project
I found the cause. Basically you need to do two things:
Make sure that the packages your project uses are installed.
Make sure that you are using the correct Python command.
Solution
$ pip3 install -r requirements.txt
$ snyk test --file=requirements.txt --command=python3
Info
You can bypass missing python packages by passing the --allow-missing pip parameter through snyk by using the additional -- argument.
$ snyk test --file=requirements.txt --command=python3 -- --allow-missing
Docs
-- [COMPILER_OPTIONS]
Pass extra arguments directly to Gradle or Maven. E.g. snyk test -- --build-cache
Python options
--command=COMMAND
Indicate which specific Python commands to use based on Python version. The default is python, which executes your system's default Python version. Run 'python -V' to find out what version it is. If you are using multiple Python versions, use this parameter to specify the correct Python command for execution.
Default: python Example: --command=python3
The snyk monitor command will also return undefined if it is not run with:
pip3 install -r requirements.txt
snyk test --file=requirements.txt --command=python3
snyk monitor --file=requirements.txt --command=python3
If you are using Snyk and VScode, and you open a repo that has a Python VirtualEnv, you can get this error in your VScode terminal window.
[Error] Open Source Security test failed for "/home/{user}/path/to/repo". Failed to test pip project
Fix for VScode:
Close that VScode window.
From a terminal, navigate to the top folder of that repo.
Run the command to activate the virtual env
Example: . .venv/bin/activate
Open VScode for that folder
Example: run code .
The Snyk Open Source Security test should run without that error now.
If you are using virtual environments, then make sure you have activated the venv with
. venv/Scripts/activate
Then try running Snyk Test again.
snyk monitor and the other CLI commands should work after that! :)
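Putting the answers above together, a typical sequence (assuming a requirements.txt at the repo root) looks like:

```shell
# Create and activate a virtualenv so Snyk resolves the right interpreter
python3 -m venv .venv
. .venv/bin/activate

# Install the dependencies first; Snyk inspects the installed packages
pip3 install -r requirements.txt

# Point Snyk at the manifest and the matching Python command
snyk test --file=requirements.txt --command=python3
snyk monitor --file=requirements.txt --command=python3
```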

Two gitlab-ci runners for one project

I used to have a project on GitHub with Travis and AppVeyor integration services configured. Thus I was able to make sure my project compiled OK on both OS X and Windows platforms.
I'm now working with gitlab and ci runners. I have two runners configured:
One on a OSX machine
One on a Windows machine
Unfortunately when I add both runners in my project settings > CI/CD > Runners settings, only one is triggered upon push (the OSX one).
If I disable the OSX runner, the Windows runner is triggered fine.
One job is only run by one runner.
I guess you want your job to run twice:
on your Windows runner
on your OS X runner
To do so:
Tag your runners (e.g. win and mac).
Duplicate the job within the same stage, and add the win tag to the Windows runner's job and the mac tag to the Mac runner's job.
This will ensure that both runners run the job in the next pipeline.
stages:
  - build

mac_build:
  stage: build
  tags:
    - mac
  script:
    - something ...

win_build:
  stage: build
  tags:
    - win
  script:
    - something ...
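On newer GitLab versions (14.1+), the duplication can also be avoided with parallel:matrix plus variable expansion in tags; a sketch, assuming the runners are tagged win and mac:

```yaml
stages:
  - build

build:
  stage: build
  parallel:
    matrix:
      - PLATFORM: [mac, win]
  tags:
    - $PLATFORM      # requires GitLab 14.1+ for variables in tags
  script:
    - something ...
```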

Remove older Hyperledger-sawtooth or pull latest repo and rerun build_all?

I had previously brought down 0.8 and want to use the new version.
Is it OK to update the local repo and run 'build_all', or must I remove all the older Docker images first?
This may be brute force, but this is what I ended up doing.
Caution: the docker command below will remove all images, so if you want to preserve some of them you may want a more selective approach.
Sawtooth platform
Remove all docker images using this command docker rmi -f $(docker images -a -q)
Bring down the latest sawtooth compose file sawtooth-default.yaml
Execute compose docker-compose -f sawtooth-default.yaml up
Sawtooth repo development
Clone the latest repository
Go to the root directory of the repo: cd ~/sawtooth-core
At a minimum, run bin/build_all -l python
I am using Java, so I run bin/build_all -l java as well
Access to the individual CLIs and dev languages tested out 100% as per the Hyperledger Sawtooth documentation
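If you only want to drop the Sawtooth-related images instead of wiping everything, a more selective variant (assuming the image names contain "sawtooth") could be:

```shell
# Remove only images whose repository:tag matches "sawtooth"
docker images --format '{{.Repository}}:{{.Tag}}' \
  | grep -i sawtooth \
  | xargs -r docker rmi -f
```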

Alternative ways to deploy code to Openshift

I am trying to setup Travis CI to deploy my repository to Openshift on a successful build. Is there a way to deploy a repository besides using Git?
Git is the official mechanism for how your code is updated; however, depending on the type of application you are deploying, you may not need to deploy your entire code base.
For example, Java applications (war, ear, etc.) can be deployed to JBoss or Tomcat servers by simply taking the built application and checking it into the OpenShift Git repository's webapps or deploy directories.
An alternative to this (and it is unsupported) is to scp your application to the gear using the SSH key. However, any time the application is moved or updated (with git), this content stands a good chance of getting deleted (cleaned) by the gear.
We're working on direct binary deploys ("push") and "pull" style deploys (OpenShift downloads a binary for you). The design/process is described here:
https://github.com/openshift/openshift-pep/blob/master/openshift-pep-006-deploy.md
You can scp directly to the app-root/dependencies/jbossews/webapps directory. I was able to do that successfully and have the app working. Here is the link
Here is the code I had in the after_success block:
after_success:
  - sudo apt-get -y install sshpass
  - openssl aes-256-cbc -K $encrypted_8544f7cb7a3c_key -iv $encrypted_8544f7cb7a3c_iv -in id_rsa.enc -out ~/id_rsa_dpl -d
  - chmod 600 ~/id_rsa_dpl
  - sshpass scp -i ~/id_rsa_dpl webapps/ROOT.war $DEPLOY_HOST:$DEPLOY_PATH
Hope this helps