We have a custom test runner for our UI smoke tests, and I want to report code coverage to TeamCity to enable metrics monitoring, failing the build, etc.
I'm trying to use service messages to achieve this, as follows:
##teamcity[buildStatisticValue key='CodeCoverageAbsMCovered' value='5']
##teamcity[buildStatisticValue key='CodeCoverageAbsMTotal' value='10']
But it's not being picked up. Can someone please tell me how to send coverage data to TeamCity?
Thanks,
dave
The key used in a service message must not be equal to any of the predefined keys.
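For example, the runner could write statistic messages with its own custom keys to standard output. A minimal Java sketch (the key names here are made up; TeamCity picks up service messages printed to the build's stdout):

public class TeamCityCoverageReporter {
    // Reports method coverage as two custom statistic values. The keys are
    // deliberately chosen so they do not collide with TeamCity's predefined keys.
    public static void reportMethodCoverage(int covered, int total) {
        System.out.println("##teamcity[buildStatisticValue key='smokeMethodsCovered' value='" + covered + "']");
        System.out.println("##teamcity[buildStatisticValue key='smokeMethodsTotal' value='" + total + "']");
    }
}

The reported values can then be charted and referenced in the build's failure conditions.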
I can find a release in the Azure DevOps API 5.1 by a request to https://vsrm.dev.azure.com/mycompany/myproject/_apis/release/releases/myreleaseid?api-version=5.1
How can I get the work items of that release, as shown on the DevOps portal under Deployment - Stages - Workitems?
My naive approach of just using https://vsrm.dev.azure.com/mycompany/myproject/_apis/release/releases/myreleaseid/workitems?api-version=5.1
resulted in a 404.
There is a stakeholder in the work item, and I want to send him a notification after the release.
Hard to say. I couldn't find any documentation about this topic, so I used the browser's developer tools (F12) to find one. Here's the request I finally found:
Get:https://vsrm.dev.azure.com/mycompany/myproject/_apis/Release/releases/myreleaseId/workitems?baseReleaseId={my baseReleaseId}&%24top=250&artifactAlias={my artifactAlias}
It returns the IDs of the work items for the release.
After you get the IDs, it's easy to get the details you need using Get Work Items Batch or a similar work item API.
In addition:
1. myreleaseId is the release ID. (On my side, the ID is 7 if the release is Release-7.)
2. artifactAlias is the alias of the artifact linked to the release.
3. For baseReleaseId, I'm not 100% sure about its meaning. I think it could be something like "the release to compare against" (hint from Daniel). (On my side, if releaseId = 7, then baseReleaseId = 6 works to get the correct work item IDs.) Actually, I suggest you use F12 on that web page to check your corresponding URL.
And according to Mathias F: the baseReleaseId really is the last previous release that has a deployment (-1 in some cases).
4. About how to use F12 to find REST APIs that may not be documented: open the browser's developer tools (F12), switch to the Network tab, and inspect the requests the portal sends while you use the page.
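Putting it all together, here is a rough Java sketch (Java 11+ HttpClient) of calling that endpoint; the organization, project, IDs, and the environment variable holding the personal access token are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class ReleaseWorkItems {
    public static void main(String[] args) throws Exception {
        int releaseId = 7;                        // see note 1 above
        String artifactAlias = "myArtifactAlias"; // see note 2 above
        int baseReleaseId = 6;                    // see note 3 above
        String pat = System.getenv("AZDO_PAT");   // personal access token

        String url = "https://vsrm.dev.azure.com/mycompany/myproject"
                + "/_apis/Release/releases/" + releaseId
                + "/workitems?baseReleaseId=" + baseReleaseId
                + "&%24top=250&artifactAlias=" + artifactAlias;

        // Azure DevOps accepts a PAT via basic auth with an empty user name.
        String auth = Base64.getEncoder().encodeToString((":" + pat).getBytes());
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON containing the work item IDs
    }
}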
Hope all above helps :)
I'm trying to implement a simple solution to send HTTP request metrics to Stackdriver in GCP from my API hosted in a Compute Engine instance.
I'm using a recent version of Spring Boot (2.1.5). I've also pulled in the actuator and micrometer-registry-stackdriver packages; actuator works for the health endpoint at the moment, but I'm unclear on how to implement metrics for this.
In the past (separate project, different stack), I mostly used the auto-configured elements with influx. Using management.metrics.export.influx.enabled=true, and some other properties in properties file, it was a pretty simple setup (though it is quite possible the lead on my team did some of the heavy lifting while I wasn't aware).
Despite pulling in the stackdriver dependency, I don't see any properties for Stackdriver. The documentation is all generalized, so I'm unclear on how to proceed for my use case. I've searched for examples and can find none.
From the docs: Having a dependency on micrometer-registry-{system} in your runtime classpath is enough for Spring Boot to configure the registry.
I'm a bit of a noob, so I'm not sure what I need to do to get this to work. I don't need any custom metrics really, just trying to get some metrics data to show up.
Does anyone have or know of any examples in setting this up to work with Stackdriver?
It seems the feature for enabling Stackdriver Monitoring on COS is currently in alpha. If you want to try a GCE COS VM with the agent, you can request access via this form. Curiously, I was able to install the monitoring agent during instance creation as a test. I used the COS image Container-Optimized OS 75-12105.97.0 stable.
Inspecting COS, the collectd agent configuration seems to be installed at /etc/stackdriver/monitoring.config.d.
Inspecting my monitoring Agent dashboard, I can see activity from the VM (CPU usage, etc.). I'm not sure if this is what you're trying to achieve but hopefully it points you in the right direction.
From my understanding, you are trying to monitor third-party software that you built and get the results into GCP Stackdriver. If that's right, I'd suggest installing the Stackdriver monitoring agent [1] on your VM instance, including the Stackdriver API output plugin. This agent gathers system and third-party application metrics and pushes the information to a monitoring system like Stackdriver.
The Stackdriver monitoring agent is based on the open-source collectd daemon, so let me also share the documentation from its website [2].
Prior to Spring Boot 2.3, Stackdriver is not supported out of the box, but it doesn't take much configuration to make it work:
import io.micrometer.stackdriver.StackdriverConfig;
import io.micrometer.stackdriver.StackdriverMeterRegistry;
import org.springframework.context.annotation.Bean;

// Define these beans in a @Configuration class; MY_PROJECT_ID is your GCP project id.
@Bean
StackdriverConfig stackdriverConfig() {
    return new StackdriverConfig() {
        @Override
        public String projectId() {
            return MY_PROJECT_ID;
        }

        @Override
        public String get(String key) {
            return null; // fall back to the defaults for everything else
        }
    };
}

@Bean
StackdriverMeterRegistry meterRegistry(StackdriverConfig stackdriverConfig) {
    return StackdriverMeterRegistry.builder(stackdriverConfig).build();
}
https://micrometer.io/docs/registry/stackdriver
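Once those beans are in place, Spring Boot's auto-configured binders should publish the standard JVM and HTTP metrics through the registry, so custom metrics are optional. As a quick sanity check that the pipeline works, a hypothetical component could record a counter (the metric name here is made up):

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Component;

@Component
public class RequestMetrics {
    private final Counter requests;

    public RequestMetrics(MeterRegistry registry) {
        // The counter is pushed to Stackdriver under the configured project.
        this.requests = Counter.builder("smoke.http.requests")
                .description("HTTP requests handled by the API")
                .register(registry);
    }

    public void onRequest() {
        requests.increment();
    }
}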
Now I have run into a new problem with Scout testing.
I have a client fragment project for testing, and I would like to test some templates I created.
My problem is that these templates contain some SmartFields, and I would like to test them. For this I probably need ScoutServerTestRunner, so the server is up and running.
But if I try to add it:
@RunWith(ScoutServerTestRunner.class)
@ServerTest()
I get the error:
ServerTest cannot be resolved to a type
All of my assert imports are deleted, and I get an error on my package line suggesting "Configure build path".
My guess is that this can't be done because it is a client fragment and it can't connect to the server.
But how, then, to test SmartFields?
From your question I guess that there is some misunderstanding...
ScoutServerTestRunner and @ServerTest are something similar to ScoutClientTestRunner and @ClientTest, but for the server. You will need them for tests that test the server.
The classes are located in the org.eclipse.scout.rt.testing.server bundle.
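For example, a server-side test would look something like this (class and method names are made up; ScoutServerTestRunner and @ServerTest come from that bundle, and the exact packages depend on your Scout version):

import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(ScoutServerTestRunner.class)
@ServerTest
public class MyProcessServiceTest {
    @Test
    public void testServerService() {
        // call server services directly here
    }
}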
If in a client test you need a server you have two possibilities:
A/ Start a server
You can start a server
This will probably not be the normal server (the one like in production) because you want to control the database or some external services. Authentication might also be slightly different (in order to control it and to have something compatible with your tests)
For the integration in your maven build, the maven-cargo plugin can be used to start your server before executing the client test suite.
B/ Mock the server services
Each service call that goes through a proxy service to the server can be replaced by a mock (client only).
This is the preferred way for unit tests, because you do not rely on a deployed server. You can also define for each test what the server's answer will be.
This solution probably requires more initial work, but in my opinion it is worth it.
To register an alternative service, you can use:
TestingUtility.registerServices(
    <activator instance>,
    <priority>,
    <service instances>
);
The service with the higher priority will win.
In each test, do not forget to un-register the alternative services you have registered.
SmartFields use CodeTypes or LookupCalls. In the case of a LookupCall, the LookupCall is probably calling the server through a LookupService. In the case of a CodeType, the SmartField internally uses the CodeLookupCall class, which relies on an ICodeService.
In both cases, if you want to run your test without a server, you need to ensure that the client uses alternative implementations of the required services, ones that do not require a server.
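As an illustration, a client-only test could register a mocked lookup service before the test and unregister it afterwards. A rough sketch using Mockito; ILookupService, LookupCall, LookupRow, TestingUtility, and the fragment's Activator come from the Scout runtime/testing bundles, and the exact signatures depend on your Scout version:

import static org.mockito.Mockito.*;

import java.util.List;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.osgi.framework.ServiceRegistration;

public class MySmartFieldTest {

    private List<ServiceRegistration> registrations;

    @Before
    public void setUp() {
        // Mock the lookup service the SmartField would otherwise reach
        // through a proxy service to the server.
        ILookupService lookupMock = mock(ILookupService.class);
        when(lookupMock.getDataByAll(any(LookupCall.class)))
                .thenReturn(new LookupRow[] { new LookupRow(1L, "First row") });

        registrations = TestingUtility.registerServices(
                Activator.getDefault(), // <activator instance>
                1000f,                  // <priority>: the higher priority wins
                lookupMock);            // <service instances>
    }

    @After
    public void tearDown() {
        // Do not forget to unregister the alternative services.
        TestingUtility.unregisterServices(registrations);
    }

    @Test
    public void testSmartFieldLookup() {
        // Exercise the template's SmartField here; lookups now hit the mock.
    }
}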
What do I need to add to the default html_gmail.jelly script to have it show the classes that were tested, including how many tests were run within each class?
When a Jenkins job is complete you can drill down to the Junit Test Results in an address that looks like:
http://somecompany.jenkins.com/view/App_Automation/job/Application_Under_Test/129/testReport/com.AUT.testing.mobile/
The test results are generated by the build.xml, so is it just a matter of pointing to that XML file?
The email-ext page shows a clean example but not the tokens that are used to achieve that: http://wiki.hudson-ci.org/download/attachments/3604514/html.jpg
Currently, using the ${FAILED_TESTS} token generates a nice Tested/Failed/Skipped count, but nothing that points to which tests passed/failed/skipped. I would like to show the total number of tests, including which tests were actually run.
Thanks ahead of time
OK, I figured out how to display the passed and failed methods by adding var="pass" or var="fail" to the token.
First, go to the Jelly script at this path:
~/.hudson/plugins/email-ext/WEB-INF/classes/hudson/plugins/emailext/templates/automation.jelly
$DEFAULT_SUBJECT (${build.testResultAction?.failCount} ${build.testResultAction?.failureDiffString})
SETTING UP THE CONFIG IN JENKINS
DEFAULT SUBJECT:
$PROJECT_NAME - Build # $BUILD_NUMBER - $BUILD_STATUS!
DEFAULT CONTENT:
$PROJECT_NAME - Build # $BUILD_NUMBER - $BUILD_STATUS:
Check console output at $BUILD_URL to view the results.
Changes:
${CHANGES}
Changes Since Last Success
${CHANGES_SINCE_LAST_SUCCESS}
Failed Tests:
${FAILED_TESTS}
Build Log:
${BUILD_LOG}
Total Amount of Tests:
Total = ${TEST_COUNTS}
Failed = ${TEST_COUNTS,var="fail"}
Passed = ${TEST_COUNTS,var="pass"}
Job Description:
${JOB_DESCRIPTION}
Place this in the email job
${JELLY_SCRIPT,template="html-with-health-and-console"}
Note: the available templates are in ~/.hudson/plugins/email-ext/WEB-INF/classes/hudson/plugins/emailext/templates/, or you can create your own.
I've read over the Jenkins site and its JUnit plugin, and for some reason something that is very basic is just not apparent to me.
Jenkins has an Email-ext plugin for sending custom/advanced notification emails whenever a build is run. In these emails you can place "content tokens": runtime variables that get replaced with dynamic values when the email is generated.
One of these tokens is TEST_COUNTS which allows you to display the number of JUnit tests that ran, or which failed, etc.
How does one go about getting Jenkins to display this information correctly? Is there a plugin I need, and if so, which one? I have my build running JUnit and generating an XML report. I assume Jenkins somehow parses the JUnit results out of that XML and uses them to give values to that token.
But on the other hand, I've read "literature" (mailing list posts) that seems to suggest that in order to use that token you need to have Jenkins run the unit tests itself, not a JUnit Ant task inside your build script.
Can someone clarify this for me and perhaps even set out the "order of operations" for what steps I need to take in order to be able to make use of this token?
It would be supremely useful to get test counts in our build notifications.
Your first explanation is right. You tell Jenkins where to search for the JUnit output files (the "Publish JUnit test result report" post-build action), and it parses them to find the test results.
The test results appear on each project and build page, so as long as you're seeing the correct results there, you should get the correct token replacements in your e-mails.
Add something like this to the content in the "Editable Email Notification" configuration:
Total = $TEST_COUNTS
Failed = ${TEST_COUNTS,var="fail"}
I also recommend the Jenkins users mailing list for Jenkins questions; it's usually helpful.