Springdoc and Prometheus

Is it possible to gather information on the Prometheus endpoint with springdoc?
We recently moved from the Springfox library to springdoc.
Springfox was able to gather information on all endpoints running on the server.
Springdoc now only shows endpoints defined with @RestController at the code level.
There is a springdoc config property that allows defining the packages to scan.
However, this doesn't help, because the Prometheus endpoint is configured via a Spring Boot property:
management.endpoints.web.exposure.include=prometheus

Found a solution. The following property adds all Spring Boot Actuator endpoints to Swagger:
springdoc.show-actuator=true
The actuator endpoints themselves are still configured via the property:
management.endpoints.web.exposure.include=prometheus
etc.
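For reference, a minimal application.properties combining the two settings might look like the sketch below (the endpoint list is only an example; include whichever actuator endpoints you actually need):

```properties
# Expose the desired actuator endpoints over HTTP (example list)
management.endpoints.web.exposure.include=prometheus,health,info
# Tell springdoc to add the exposed actuator endpoints to the OpenAPI definition
springdoc.show-actuator=true
```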


Instrumentation of Mysql Jpa Repository in Spring using AWS X-Ray not working

I am trying to instrument MySQL calls using AWS X-Ray in my Spring application. HTTP and S3 instrumentation are working fine.
I have set the property:
spring.datasource.jdbc-interceptors=com.amazonaws.xray.sql.mysql.TracingInterceptor
I have included the following dependencies in build.gradle:
compile("com.amazonaws:aws-xray-recorder-sdk-spring")
compile("com.amazonaws:aws-xray-recorder-sdk-core")
compile("com.amazonaws:aws-xray-recorder-sdk-aws-sdk")
compile("com.amazonaws:aws-xray-recorder-sdk-aws-sdk-instrumentor")
compile("com.amazonaws:aws-xray-recorder-sdk-apache-http")
compile("com.amazonaws:aws-xray-recorder-sdk-sql-mysql")
dependencyManagement {
    imports {
        mavenBom('com.amazonaws:aws-xray-recorder-sdk-bom:1.3.1')
    }
}
I am using JpaRepositories, and I expect all my SQL queries to be instrumented automatically once the above setup is done. I am following the Amazon doc at https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-java-sqlclients.html
What am I missing?
Update: I can see MySQL traces for Spring's health endpoint, but JPA calls still do not show up.
Are you constructing the DataSource object using the spring.datasource properties defined in your application.properties?
See the dataSource() method (GitHub) in the RdsWebConfig class, which uses the @ConfigurationProperties(prefix = "spring.datasource") annotation in order to pick up the relevant jdbc-interceptors property.
Hope this helps.
James
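A sketch of what such a configuration class might look like (class and bean names here are illustrative, not the actual RdsWebConfig code; the point is that the DataSource must be built from the bound spring.datasource properties so that jdbc-interceptors is actually applied):

```java
// Illustrative sketch: bind spring.datasource.* (including jdbc-interceptors)
// to the DataSource so the X-Ray TracingInterceptor gets registered.
// DataSourceBuilder lives in org.springframework.boot.autoconfigure.jdbc in
// Spring Boot 1.x (org.springframework.boot.jdbc in 2.x).
import javax.sql.DataSource;
import org.springframework.boot.autoconfigure.jdbc.DataSourceBuilder;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource dataSource() {
        // Built from the bound properties, so jdbc-interceptors is honored
        return DataSourceBuilder.create().build();
    }
}
```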

MUnit test fails - Cannot process event as “FileConnector” is stopped

I am implementing an MUnit test for a flow that involves a Mule Requester. This Mule Requester picks up a file.
When I run the Java class as a JUnit test, it throws an exception: Cannot perform the operation on the FileConnector as it is stopped.
The expression used in the Mule Requester is:
file://${path}?connector=FileConnector
I have also defined a global file connector.
Please let me know how to resolve this issue.
Thank you.
All connectors and inbound endpoints are disabled by default in MUnit. This is to prevent flows from accidentally processing or generating real data (some explanation here). For the same reason, the File connector is also disabled.
To enable connectors, you need to override a method in your MUnit suite as below:
@Override
protected boolean haveToMockMuleConnectors() {
    return false;
}
For XML MUnit, see this to enable connectors.
Note: this will enable and start all the connectors used in the mule-configs under test. If you have an SMTP connector, DB connector, MQ connector, etc., they will all be started during the test, so use this with caution.
Check whether the file connector is defined in the files you load for MUnit:
<spring:beans>
    <spring:import resource="classpath:api.xml"/>
</spring:beans>
You may also try mocking the Mule Requester.
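For XML-based MUnit, the equivalent switch (attribute names as used in MUnit 1.x; verify them against your MUnit version) is to turn off connector and inbound mocking on the suite's config element:

```xml
<!-- Assumed MUnit 1.x attributes: disables the default mocking so real
     connectors (including the File connector) are started during the test -->
<munit:config name="munit" mock-connectors="false" mock-inbounds="false"/>
```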

Trouble using json-jackson in camel-blueprint-test

I am trying to test a Camel Blueprint route in camel-blueprint-test. This route can load in karaf and it also worked when using Camel and Spring. At this point I am getting:
org.apache.camel.FailedToCreateRouteException: Failed to create route route1 at: >>> Unmarshal[ref:IssRequest] <<< in route: Route(route1)[[From[seda:from_rraa]] -> [process[ref:issPrep... because of Data format 'json-jackson' could not be created. Ensure that the data format is valid and the associated Camel component is present on the classpath
at org.apache.camel.model.RouteDefinition.addRoutes(RouteDefinition.java:1028)
at org.apache.camel.model.RouteDefinition.addRoutes(RouteDefinition.java:185)
at org.apache.camel.impl.DefaultCamelContext.startRoute(DefaultCamelContext.java:841)
at org.apache.camel.impl.DefaultCamelContext.startRouteDefinitions(DefaultCamelContext.java:2911)
at org.apache.camel.impl.DefaultCamelContext.doStartCamel(DefaultCamelContext.java:2634)
...
Other posts have suggested adding camel-jackson to the pom.xml, but I already have it. Also suggested was loading the feature in the Karaf container, but this happens when running unit tests with camel-blueprint-test, not in a real Karaf.
There is a bug in that version; use 2.15.2 or 2.16.0, or wait for 2.15.4.
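If you pin versions explicitly, keep camel-jackson in lockstep with camel-core; a Maven fragment along these lines (version number taken from the suggestion above) would look like:

```xml
<!-- camel-jackson provides the json-jackson data format;
     its version must match the camel-core version in use -->
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-jackson</artifactId>
    <version>2.15.2</version>
</dependency>
```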

Logback config, puppet and application versions

I am busy testing a new approach to managing a Java application that uses Logback on a Puppet-managed host, and was wondering if anyone had advice on the best approach for this. I am stuck in a catch-22 situation.
The java application is deployed to a host by an automated system (CI). The deployment writes an application version number to a file (e.g. /etc/app.version may contain "0001")
The logback config file (logback.xml) is managed by puppet.
I am trying to configure the application to include its version number in the logging layout (e.g. <pattern>VERSION: %version%</pattern>). However, I am not sure of the approach, as there isn't an "include" function for the Logback config file (to pull a file containing the version number into the Logback config). At the same time, I don't see a way to get Puppet to do a client-side template build using the host-side file (I've tried a template approach, but the template is compiled on the Puppet server side).
Any ideas on how to get this working?
I would write a custom fact. Facts are executed on the client.
E.g.:
logback/manifests/init.pp
file { '/etc/logback.xml':
  content => template('logback/logback.xml.erb'),
}
logback/templates/logback.xml.erb
...
<pattern>VERSION: <%= scope.lookupvar('::my_app_version') %></pattern>
...
logback/lib/facter/my_app_version.rb
Facter.add('my_app_version') do
  setcode do
    begin
      # chomp strips the trailing newline from the version file
      File.read('/etc/app.version').chomp
    rescue
      nil
    end
  end
end
Hope that helps. I think in Puppet < 3.0 you will have to set "pluginsync = true" in puppet.conf to get this to work.

What is the best way to pass configurations to OSGi components?

I have a set of parameters that should be configured by the user, but there are too many of them to send through RESTful services or something similar. Besides, there may be more than one set of values for the same parameters.
Assume that my configuration parameters are: p1, p2, p3, ... p10
I want to make it possible to have more than one set of values for these parameters, such as:
(p1=x, p2=y, ... p10=1)
(p1=a, p2=b, ... p10=10)
To do that, I currently implement my OSGi component with the metatype=true and configurationFactory=true options, so that each instance of my component is initialized with its own set of configuration values. Then I process the instances in a manager component.
So the question is: what do you suggest for passing configurations to OSGi components from the user?
Thanks
If this is really about configuration, you should use the OSGi ConfigurationAdmin service. A console like the Apache Felix Web Console can then be used to edit configurations.
If the values (or some of the values) can be different for each RESTful call to your application and they don't fit in a URL, you can make a POST request instead of a GET and pass the values in the body of the request, in a suitable format.
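A sketch of creating one such parameter set programmatically through ConfigurationAdmin (the factory PID "com.example.mycomponent" and the property names are placeholders; in practice the factory PID matches your configurationFactory component):

```java
// Illustrative: each call to createFactoryConfiguration produces one
// instance of the factory configuration, i.e. one parameter set.
import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

public class ConfigLoader {
    public void createParameterSet(ConfigurationAdmin configAdmin) throws Exception {
        // "?" as the location allows the configuration to bind to any bundle
        Configuration cfg =
            configAdmin.createFactoryConfiguration("com.example.mycomponent", "?");
        Dictionary<String, Object> props = new Hashtable<>();
        props.put("p1", "x");
        props.put("p2", "y");
        // ... p3 through p10 ...
        cfg.update(props); // pushes the set to the matching component instance
    }
}
```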