I'm trying to implement a simple solution to send HTTP request metrics to Stackdriver in GCP from my API hosted on a Compute Engine instance.
I'm using a recent version of Spring Boot (2.1.5). I've also pulled in the actuator and micrometer-registry-stackdriver packages; actuator works for the health endpoint at the moment, but I am unclear on how to implement metrics for this.
In the past (separate project, different stack), I mostly used the auto-configured elements with Influx. Using management.metrics.export.influx.enabled=true and some other properties in the properties file, it was a pretty simple setup (though it is quite possible the lead on my team did some of the heavy lifting while I wasn't aware).
Despite pulling in the Stackdriver dependency, I don't see any such properties for Stackdriver. The documentation is all generalized, so I'm unclear on how to do this for my use case. I've searched for examples and can find none.
From the docs: Having a dependency on micrometer-registry-{system} in your runtime classpath is enough for Spring Boot to configure the registry.
I'm a bit of a noob, so I'm not sure what I need to do to get this to work. I don't need any custom metrics really, just trying to get some metrics data to show up.
Does anyone have or know of any examples in setting this up to work with Stackdriver?
It seems like the feature for enabling Stackdriver Monitoring for COS is currently in Alpha. If you are willing to try a GCE COS VM with the agent, you can request access via this form. Curiously, I was able to install the monitoring agent during instance creation as a test. I used the COS image Container-Optimized OS 75-12105.97.0 (stable).
Inspecting COS, the collectd agent's configuration seems to be installed at /etc/stackdriver/monitoring.config.d, and
inspecting my Monitoring agent dashboard, I can see activity from the VM (CPU usage, etc.). I'm not sure if this is what you're trying to achieve, but hopefully it points you in the right direction.
From my understanding, you are trying to monitor third-party software that you built and get the results into GCP Stackdriver? If that's right, I would suggest installing the Stackdriver monitoring agent [1] on your VM instance, including the Stackdriver API output plugin. This agent gathers system and third-party application metrics and pushes the information to a monitoring system like Stackdriver.
The Stackdriver monitoring agent is based on the open-source collectd daemon, so let me also share some further documentation from its website [2].
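If it helps, installing the agent on a Debian/Ubuntu GCE VM was, at the time, a two-line shell script download. This is a sketch based on the GCP documentation of that era; the download location may have changed since:
curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh
sudo bash install-monitoring-agent.sh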
Prior to Spring Boot 2.3, Stackdriver is not supported out of the box, but it doesn't take much configuration to make it work:
@Bean
StackdriverConfig stackdriverConfig() {
    return new StackdriverConfig() {
        @Override
        public String projectId() {
            return MY_PROJECT_ID;
        }

        @Override
        public String get(String key) {
            return null;
        }
    };
}

@Bean
StackdriverMeterRegistry meterRegistry(StackdriverConfig stackdriverConfig) {
    return StackdriverMeterRegistry.builder(stackdriverConfig).build();
}
https://micrometer.io/docs/registry/stackdriver
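For completeness: from Spring Boot 2.3 onwards, Stackdriver gets the same property-driven auto-configuration as the other registries, so if you can upgrade, the explicit beans above should not be necessary. A minimal application.properties sketch, assuming the property names documented for 2.3 (the project id value is a placeholder):
management.metrics.export.stackdriver.project-id=my-gcp-project-id
# optional: how often metrics are published (defaults to 1m)
management.metrics.export.stackdriver.step=1m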
I've been using the Google apiclient library in Python for various Google Cloud APIs - mostly for Google Compute - with great success.
I want to start using the library to create and control the Google Logging mechanism offered by the Google Cloud Platform.
However, this is a beta version, and I can't find any real documentation or examples of how to use the logging API.
All I was able to find are high-level descriptions such as:
https://developers.google.com/apis-explorer/#p/logging/v1beta3/
Can anyone provide a simple example of how to use apiclient for logging purposes?
For example, creating a new log entry...
Thanks for the help
Shahar
I found this page:
https://developers.google.com/api-client-library/python/guide/logging
which states you can do the following to set the log level:
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
However, it doesn't seem to have any impact on the output, which is always INFO for me.
I also tried setting httplib2 to debuglevel 4:
import httplib2
httplib2.debuglevel = 4
Yet I don't see any HTTP headers in the log :/
I know this question is old, but it is getting some attention, so I guess it might be worth answering it, in case someone else comes here.
Stackdriver Logging Client Libraries for Google Cloud Platform are not in beta anymore, as they hit General Availability some time ago. The link I shared contains the most relevant documentation for installing and using them.
After running the command pip install --upgrade google-cloud-logging, you will be able to authenticate with your GCP account, and use the Client Libraries.
Using them is as easy as importing the library with a statement such as from google.cloud import logging, then instantiating a new client (which you can use with its defaults, or construct by passing the Project ID and Credentials explicitly) and finally working with logs as you want.
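Since the original question asked specifically about creating a new log entry, here is a minimal sketch of that as well (the logger name "my-log" is just an illustrative placeholder):
from google.cloud import logging

# Instantiate a client (credentials are picked up from the environment)
logging_client = logging.Client()

# Get a logger to write against; "my-log" is a placeholder name
logger = logging_client.logger('my-log')

# Write a plain text entry and a structured (JSON payload) entry
logger.log_text('Hello, Stackdriver Logging!')
logger.log_struct({'event': 'user_signup', 'outcome': 'ok'})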
You may also want to visit the official library documentation, where you will find all the details of how to use the library, which methods and classes are available, and how to do most things, with lots of self-explanatory examples and even comparisons between the different alternatives for interacting with Stackdriver Logging.
As a small example, let me also share a snippet that retrieves the five most recent log entries with severity "warning" or higher:
# Import the Google Cloud Python client library
from google.cloud import logging
from google.cloud.logging import DESCENDING

# Instantiate a client
logging_client = logging.Client(project=<PROJECT_ID>)

# Filter retrieving GAE logs from the default service with severity WARNING or higher
FILTER = 'resource.type:gae_app and resource.labels.module_id:default and severity>=WARNING'

# List the entries in DESCENDING order, applying the FILTER, and stop after five
i = 0
for entry in logging_client.list_entries(order_by=DESCENDING, filter_=FILTER):  # API call
    print('{} - Severity: {}'.format(entry.timestamp, entry.severity))
    i += 1
    if i >= 5:
        break
Bear in mind that this is just a simple example, and that many things can be achieved using the Logging Client Library, so you should refer to the official documentation pages that I shared in order to get a deeper understanding of how everything works.
"However it doesn't seem to have any impact on the output which is always INFO for me."
Add a logging handler, e.g.:
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)  # the logger's own level must also allow the messages through

formatter = logging.Formatter('%(asctime)s %(process)d %(levelname)s: %(message)s')
consoleHandler = logging.StreamHandler()
consoleHandler.setLevel(logging.DEBUG)
consoleHandler.setFormatter(formatter)
logger.addHandler(consoleHandler)
I'm new to Google Cloud Dataflow, as is probably obvious from my questions below.
I've got a dataflow application written and can get it to run without issue using my personal credentials both locally and on a GCE instance. However, I can't seem to crack the proper steps to get it to run using the compute engine instance's service credentials or service credentials I've created using the API & AUTH section of the console. I always get a 401 not authorized error when I run.
Here's what I've tried...
1) Created a virtual machine granting access rights to storage, datastore, sql and compute engine. My understanding is that this supposedly created a compute-instance-specific service account that acts as the server's default credentials. These should be used the same way a user's authentication is used when an app is run on this instance. Here's where I get a 401. My question is... where can I see this service account that was supposedly created? Or do I just trust that it exists somewhere?
2) Created service credentials in the API & Auth portion of the developer's console. Then used gcloud auth activate-service-account and activated that account by pointing the command at the credentials JSON file I downloaded. Kind of like the OAuth round trip when you use gcloud auth login. Here I also get the 401.
3) This last attempt used the service credentials from step 2, separate from the GCE instance, by creating an object that implements the CredentialFactory interface and passing it to the PipelineOptions. However, the app now crashes with an error saying that it is looking for a method, fromOptions, that isn't in the CredentialFactory interface. How the options were configured, what the credential factory looks like and the stack trace from this follow.
I would be happy to use any of the above three methods to make use of service credentials, if I could get any of them to work. Any insight you can provide on what I'm doing wrong, steps I'm leaving out, or other unexplored options would be greatly appreciated. The documentation is a little disjointed. If there is a clear step-by-step guide, a link to it would be sufficient. What I've found so far on my own has been of little assistance.
If I can provide any additional information please let me know.
Here's some code that may be helpful and the stack trace I get when the code runs using the credential factory.
Options setup code looks like this:
GcrDataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
        .withValidation()
        .as(GcrDataflowPipelineOptions.class);
options.setKind("Counties");
options.setCredentialFactoryClass(GoogleCredentialProvider.class);
GoogleCredentialProvider.java
Notice that the JSON file I downloaded as part of creating the service account (renamed) is what's loaded as a resource from my app's classpath.
public class GoogleCredentialProvider implements CredentialFactory {

    @Override
    public Credential getCredential() throws IOException, GeneralSecurityException {
        final String env = System.getProperty("gcr_dataflow_env", "local");
        Properties props = new Properties();
        ClassLoader loader = this.getClass().getClassLoader();
        props.load(loader.getResourceAsStream(env + "-gcr-dataflow.properties"));
        final String credFileName = props.getProperty("gcloud.dataflow.service.account.file");
        InputStream credStream = loader.getResourceAsStream(credFileName);
        GoogleCredential credential = GoogleCredential.fromStream(credStream);
        return credential;
    }
}
Stacktrace:
java.lang.RuntimeException: java.lang.RuntimeException: Unable to find factory method com.scotcro.gcr.dataflow.components.pipelines.GoogleCredentialProvider#fromOptions
at com.google.cloud.dataflow.sdk.runners.dataflow.BasicSerializableSourceFormat.evaluateReadHelper(BasicSerializableSourceFormat.java:268)
at com.google.cloud.dataflow.sdk.io.Read$Bound$1.evaluate(Read.java:123)
at com.google.cloud.dataflow.sdk.io.Read$Bound$1.evaluate(Read.java:120)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner$Evaluator.visitTransform(DirectPipelineRunner.java:684)
at com.google.cloud.dataflow.sdk.runners.TransformTreeNode.visit(TransformTreeNode.java:200)
at com.google.cloud.dataflow.sdk.runners.TransformTreeNode.visit(TransformTreeNode.java:196)
at com.google.cloud.dataflow.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:99)
at com.google.cloud.dataflow.sdk.Pipeline.traverseTopologically(Pipeline.java:208)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner$Evaluator.run(DirectPipelineRunner.java:640)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner.run(DirectPipelineRunner.java:354)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner.run(DirectPipelineRunner.java:76)
at com.google.cloud.dataflow.sdk.Pipeline.run(Pipeline.java:149)
at com.scotcro.gcr.dataflow.app.GcrDataflowApp.run(GcrDataflowApp.java:65)
at com.scotcro.gcr.dataflow.app.GcrDataflowApp.main(GcrDataflowApp.java:49)
Caused by: java.lang.RuntimeException: Unable to find factory method com.scotcro.gcr.dataflow.components.pipelines.GoogleCredentialProvider#fromOptions
at com.google.cloud.dataflow.sdk.util.InstanceBuilder.buildFromMethod(InstanceBuilder.java:224)
at com.google.cloud.dataflow.sdk.util.InstanceBuilder.build(InstanceBuilder.java:161)
at com.google.cloud.dataflow.sdk.options.GcpOptions$GcpUserCredentialsFactory.create(GcpOptions.java:180)
at com.google.cloud.dataflow.sdk.options.GcpOptions$GcpUserCredentialsFactory.create(GcpOptions.java:175)
at com.google.cloud.dataflow.sdk.options.ProxyInvocationHandler.getDefault(ProxyInvocationHandler.java:288)
at com.google.cloud.dataflow.sdk.options.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:127)
at com.sun.proxy.$Proxy42.getGcpCredential(Unknown Source)
at com.google.cloud.dataflow.sdk.io.DatastoreIO$Source.getDatastore(DatastoreIO.java:335)
at com.google.cloud.dataflow.sdk.io.DatastoreIO$Source.createReader(DatastoreIO.java:320)
at com.google.cloud.dataflow.sdk.io.DatastoreIO$Source.createReader(DatastoreIO.java:186)
at com.google.cloud.dataflow.sdk.runners.dataflow.BasicSerializableSourceFormat.evaluateReadHelper(BasicSerializableSourceFormat.java:259)
... 13 more
You likely do not have the proper credentials. When you execute a Dataflow job from GCE, the service account attached to the instance will be used for validation by Dataflow.
Did you do this when creating your machines?
- Create a service account for the instance on GCE? https://cloud.google.com/compute/docs/authentication#using
- Set the required scopes for using Dataflow, such as storage, compute, and BigQuery? https://www.googleapis.com/auth/cloud-platform
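Regarding the crash in your third approach: judging from the stack trace (InstanceBuilder.buildFromMethod failing to find GoogleCredentialProvider#fromOptions), the SDK appears to create the credential factory through a static fromOptions(PipelineOptions) method rather than a constructor, so the factory needs to expose one. A sketch of what that could look like, reusing the loading logic from your snippet (the import packages are from the old Dataflow SDK and may need adjusting for your version):
import java.io.IOException;
import java.io.InputStream;
import java.security.GeneralSecurityException;
import java.util.Properties;

import com.google.api.client.auth.oauth2.Credential;
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.cloud.dataflow.sdk.options.PipelineOptions;
import com.google.cloud.dataflow.sdk.util.CredentialFactory;

public class GoogleCredentialProvider implements CredentialFactory {

    // The SDK looks this method up reflectively (see InstanceBuilder.buildFromMethod
    // in the stack trace), which is why the error reports fromOptions as missing.
    public static GoogleCredentialProvider fromOptions(PipelineOptions options) {
        return new GoogleCredentialProvider();
    }

    @Override
    public Credential getCredential() throws IOException, GeneralSecurityException {
        // Same resource-loading logic as in the question's snippet
        final String env = System.getProperty("gcr_dataflow_env", "local");
        Properties props = new Properties();
        ClassLoader loader = this.getClass().getClassLoader();
        props.load(loader.getResourceAsStream(env + "-gcr-dataflow.properties"));
        final String credFileName = props.getProperty("gcloud.dataflow.service.account.file");
        InputStream credStream = loader.getResourceAsStream(credFileName);
        return GoogleCredential.fromStream(credStream);
    }
}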
I was wondering if Wirecloud offers complete support for object storage with the FI-WARE Testbed instead of FI-LAB. I have successfully integrated Wirecloud with the Testbed and have developed a set of widgets that are able to upload/download files to specific containers in the Testbed with success. However, the same widgets do not seem to work in FI-LAB, as I get an error 500 when trying to retrieve the auth tokens (also with the well-known object-storage-test widget), containing the following response:
SyntaxError: Unexpected token
at Object.parse (native)
at create (/home/fiware/fi-ware-keystone-proxy/controllers/Token.js:343:25)
at callbacks (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:164:37)
at param (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:138:11)
at pass (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:145:5)
at Router._dispatch (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:173:5)
at Object.router (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:33:10)
at next (/home/fiware/fi-ware-keystone-proxy/node_modules/express/node_modules/connect/lib/proto.js:195:15)
at Object.handle (/home/fiware/fi-ware-keystone-proxy/server.js:31:5)
at next (/home/fiware/fi-ware-keystone-proxy/node_modules/express/node_modules/connect/lib/proto.js:195:15)
I noticed that the token provided in the beginning (to start the transaction) is
token: Object
id: "%fiware_token%"
Any idea regarding what might have gone wrong?
The WireCloud instance available at FI-WARE's Testbed is always the latest stable version, while the FI-LAB instance is currently outdated; we're working on updating it as soon as possible. One of the things that changes between those versions is the Object Storage API, so, sorry for the inconvenience, you will not be able to use widgets/operators that use Object Storage in both environments.
Anyway, the response you were obtaining seems to indicate that the object storage instance you are accessing is not working properly, so you will need to send an email to one of the available mailing lists for getting help (fiware-testbed-help or fiware-lab-help) describing what is happening (remember to include your account information, as there are several object storage nodes and some can be up while others are down).
Regarding the strange request body:
"token": {
id: "%fiware_token%"
}
This behaviour is normal, as the WireCloud client code has no direct access to the user's IdM token. It's WireCloud's proxy that replaces the %fiware_token% pattern with the correct value.
Is there a way to access the JBoss JMX data via JSON?
I am trying to pull a management console together using data from a number of different servers. I can achieve this using screen scraping, but I would prefer to use a JSON object or XML response if one exists, but I have not been able to find one.
You should have a look at Jolokia, a full-featured JSON/HTTP adapter for JMX.
It supports and has been tested on JBoss as well as on many other platforms. Jolokia is an agent which is deployed as a normal Java EE WAR, so you simply drop it into the deploy directory of your JBoss installation. There are also some client libraries available, e.g. jmx4perl, which allows for programmatic access to the agent.
There is much more to discover, and it is actively developed.
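To give a feel for the JSON access once the agent is deployed, reading an MBean attribute is a plain HTTP GET against the agent (the port and the /jolokia context path here assume a default deployment):
curl http://localhost:8080/jolokia/read/java.lang:type=Memory/HeapMemoryUsage
# returns a JSON document with "request", "value" (the attribute data) and "status" fields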
If you are using Java, then you can write a small program that makes a JMX request to the JBoss server and transforms the response into XML/JSON.
The following small code snippet may help you.
// Optional: use the MX4J MBeanServer implementation for locally created MBean servers
String strInitialProp = "javax.management.builder.initial";
System.setProperty(strInitialProp, "mx4j.server.MX4JMBeanServerBuilder");

// The exact service URL depends on the JBoss version and its remoting setup
String urlForJMX = "jnp://localhost:1099"; // for JBoss
ObjectName objAll = ObjectName.getInstance("*:*");
JMXServiceURL jmxUrl = new JMXServiceURL(urlForJMX);
MBeanServerConnection jmxServerConnection = JMXConnectorFactory.connect(jmxUrl).getMBeanServerConnection();
System.out.println("Total MBeans :: " + jmxServerConnection.getMBeanCount());
Set<ObjectName> mBeanSet = jmxServerConnection.queryNames(objAll, null);
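The snippet stops just before the "transform the response into JSON" step. Here is a rough sketch of that part, continuing from mBeanSet and jmxServerConnection above; it assumes the usual java.util and javax.management imports plus a JSON library such as Jackson on the classpath, and it flattens attribute values to strings to sidestep JMX open types:
// Collect each MBean's readable attributes into a map of maps, then serialize to JSON
Map<String, Object> result = new HashMap<String, Object>();
for (ObjectName name : mBeanSet) {
    Map<String, Object> attrs = new HashMap<String, Object>();
    try {
        for (MBeanAttributeInfo attrInfo : jmxServerConnection.getMBeanInfo(name).getAttributes()) {
            try {
                attrs.put(attrInfo.getName(),
                        String.valueOf(jmxServerConnection.getAttribute(name, attrInfo.getName())));
            } catch (Exception e) {
                // some attributes are unreadable or throw on access; skip them
            }
        }
    } catch (Exception e) {
        // skip MBeans whose metadata cannot be read
    }
    result.put(name.getCanonicalName(), attrs);
}
String json = new com.fasterxml.jackson.databind.ObjectMapper().writeValueAsString(result);
System.out.println(json);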
There are some JMX-to-REST bridges available that internally talk JMX to MBeans and expose the result over REST calls (which can deliver JSON as the data format).
See e.g. polarrose or jmx-rest-access. There are a few others out there.
Is there any way to "subscribe" from GWT to JSON objects stream and listen to incoming events on keep-alive connection, without trying to fetch them all at once? I believe that the buzzword-du-jour for this technology is "Comet".
Let's assume that I have an HTTP service which opens a keep-alive connection and puts JSON objects with incoming stock quotes there in real time:
{"symbol": "AAPL", "bid": "88.84", "ask":"88.86"}
{"symbol": "AAPL", "bid": "88.85", "ask":"88.87"}
{"symbol": "IBM", "bid": "87.48", "ask":"87.49"}
{"symbol": "GOOG", "bid": "305.64", "ask":"305.67"}
...
I need to listen to these events and update GWT components (tables, labels) in real time. Any ideas how to do it?
There is a GWT Comet Module for StreamHub:
http://code.google.com/p/gwt-comet-streamhub/
StreamHub is a Comet server with a free community edition. There is an example of it in action here.
You'll need to download the StreamHub Comet server and create a new SubscriptionListener, use the StockDemo example as a starting point, then create a new JsonPayload to stream the data:
Payload payload = new JsonPayload("AAPL");
payload.addField("bid", "88.84");
payload.addField("ask", "88.86");
server.publish("AAPL", payload);
...
Download the JAR from the Google Code site, add it to your GWT project's classpath and add the includes to your GWT module:
<inherits name="com.google.gwt.json.JSON" />
<inherits name="com.streamhub.StreamHubGWTAdapter" />
Connect and subscribe from your GWT code:
StreamHubGWTAdapter streamhub = new StreamHubGWTAdapter();
streamhub.connect("http://localhost:7979/");
StreamHubGWTUpdateListener listener = new StockListener();
streamhub.subscribe("AAPL", listener);
streamhub.subscribe("IBM", listener);
streamhub.subscribe("GOOG", listener);
...
Then process the updates how you like in the update listener (also in the GWT code):
public class StockListener implements StreamHubGWTUpdateListener {
    public void onUpdate(String topic, JSONObject update) {
        String bid = ((JSONString) update.get("bid")).stringValue();
        String ask = ((JSONString) update.get("ask")).stringValue();
        String symbol = topic;
        ...
    }
}
Don't forget to include streamhub-min.js in your GWT project's main HTML page.
I have used this technique in a couple of projects, though it does have its problems. I should note that I have only done this specifically through GWT-RPC, but the principle is the same for whatever mechanism you are using to handle data. Depending on what exactly you are doing, there might not be much need to overcomplicate things.
First off, on the client side, I do not believe that GWT can properly support any sort of streaming data. The connection has to close before the client can actually process the data. What this means from a server-push standpoint is that your client will connect to the server and block until data is available, at which point it will return. Whatever code executes on the completed connection should immediately re-open a new connection with the server to wait for more data.
From the server side of things, you simply drop into a wait cycle (the java.util.concurrent package is particularly handy for this, with blocking and timeouts) until new data is available. At that point, the server can return a package of data to the client, which will update accordingly. There are a bunch of considerations depending on what your data flow is like, but here are a few to think about:
Is a client getting every single update important? If so, then the server needs to cache any potential events between the time the client gets some data and then reconnects.
Are there going to be gobs of updates? If this is the case, it might be wiser to package up a number of updates and push down chunks at a time every several seconds rather than having the client get one update at a time.
The server will likely need a way to detect if a client has gone away to avoid piling up huge amounts of cached packages for that client.
I found there were two problems with the server-push approach. The first is that with lots of clients, it means lots of open connections on the web server. Depending on the web server in question, this could mean lots of threads being created and held open. The second has to do with the typical browser's limit of two requests per domain. If you are able to serve your images, CSS and other static content from second-level domains, this problem can be mitigated.
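A minimal sketch of the client-side long-polling loop described above, assuming a hypothetical GWT-RPC async service QuoteServiceAsync with a waitForQuotes method (all names here are illustrative, not from the original post):
// Each completed call immediately re-issues the request so the client is always waiting
private void pollForQuotes(final QuoteServiceAsync quoteService) {
    quoteService.waitForQuotes(new AsyncCallback<List<Quote>>() {
        public void onSuccess(List<Quote> quotes) {
            updateTable(quotes);          // hypothetical helper that refreshes the GWT widgets
            pollForQuotes(quoteService);  // reconnect right away and block for the next batch
        }

        public void onFailure(Throwable caught) {
            // back off briefly before retrying so a failing server is not hammered
            new Timer() {
                public void run() {
                    pollForQuotes(quoteService);
                }
            }.schedule(5000);
        }
    });
}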
There is indeed a Comet-like library for GWT - http://code.google.com/p/gwteventservice/
But I've not personally used it, so I can't really vouch for whether it's good or not; the documentation seems quite good though. Worth a try.
There are a few other ones I've seen, like gwt-rocket's Cometd library.
Some preliminary ideas for Comet implementation for GWT can be found here... though I wonder whether there is something more mature.
Also, some insight on GWT/Comet integration is available there, using even more cutting- and bleeding-edge technology: "Jetty Continuations". Worth taking a look.
Here you can find a description (with some source samples) of how to do this for IBM WebSphere Application Server. It shouldn't be too different with Jetty or any other Comet-enabled J2EE server. Briefly, the idea is: encode your Java object to a JSON string via GWT RPC, then send it to the client using Cometd, where it is received by Dojo, which triggers your JSNI code, which calls your widget methods, where you deserialize the object again using GWT RPC. Voila! :)
My experience with this setup is positive; there were no problems with it except for the security questions. It is not really clear how to implement security for Comet in this case... It seems that Comet update servlets should have different URLs, and then J2EE security can be applied.
The JBoss Errai project has a message bus that provides bi-directional messaging and is a good alternative to Cometd.
We are using the Atmosphere Framework (http://async-io.org/) for server push / Comet in a GWT application.
On the client side, the framework has GWT integration that is pretty straightforward. On the server side, it uses a plain Servlet.
We are currently using it in production with 1000+ concurrent users in a clustered environment. We had some problems along the way that had to be solved by modifying the Atmosphere source. Also, the documentation is really thin.
The framework is free to use.