I have a rather small (1-2 node) Kubernetes cluster running in GKE with ±40 Pods running. The problem at hand is that it's not logging to the GCE Console properly. I see lots of messages from the fluentd container(s) in the following format:
$ kubectl logs fluentd-cloud-logging-gke-xxxxxxxx-node-xxxx
2016-02-02 23:30:09 +0000 [warn]: Dropping 10 log message(s) error_class="Google::APIClient::ClientError" error="Project has not enabled the API. Please use Google Developers Console to activate the 'logging' API for your project."
2016-02-02 23:30:09 +0000 [warn]: Dropping 1 log message(s) error_class="Google::APIClient::ClientError" error="Project has not enabled the API. Please use Google Developers Console to activate the 'logging' API for your project."
2016-02-02 23:30:09 +0000 [warn]: Dropping 3 log message(s) error_class="Google::APIClient::ClientError" error="Project has not enabled the API. Please use Google Developers Console to activate the 'logging' API for your project."
2016-02-02 23:30:09 +0000 [warn]: Dropping 41 log message(s) error_class="Google::APIClient::ClientError" error="Project has not enabled the API. Please use Google Developers Console to activate the 'logging' API for your project."
2016-02-02 23:30:09 +0000 [warn]: Dropping 5 log message(s) error_class="Google::APIClient::ClientError" error="Project has not enabled the API. Please use Google Developers Console to activate the 'logging' API for your project."
...and so on. I'm seeing ~5 of these messages per second, so I know things are producing logs. However, the Compute Engine console shows only a small fraction of them.
So somewhere in between I'm obviously losing lots of messages. Strange, though, that I'm not losing all of these messages!
The cluster is configured with the Logging.write and Monitoring.all privileges, as suggested in GH issue #15727.
It's definitely confusing that some of the logs are showing up. Given that error message, I'd expect none of your logs to be showing up in the viewer, since it sounds like the logging API hasn't been enabled for your project yet.
You can do so from the Developers Console. Try going there, clicking the Enable API button, and seeing whether the errors keep coming.
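If you prefer the command line, the API can also be enabled with gcloud (a sketch; this assumes a recent Cloud SDK where the services subcommand exists):
gcloud services enable logging.googleapis.com --project <your-project-id>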
I have followed the tutorials and successfully installed the monitoring and logging agents on my Debian 9 machine. All statuses are OK.
In Metrics Explorer, the gce_instance Disk Usage in bytes metric works for a few minutes, then breaks. I get the following error on my machine:
Aug 04 15:43:23 master collectd[13129]: write_gcm: Unsuccessful HTTP request 400: {
"error": {
"code": 400,
"message": "Field timeSeries[2].points[0].interval.s
tart_time had an invalid value of \"2020-08-04T07:43:22.681979-07:00\": The start time must be before th
e end time (2020-08-04T07:43:22.681979-07:00) for the non-gauge metric 'agent.googleapis.com/agent/api_r
equest_count'.",
"status": "INVALID_ARGUMENT"
}
}
Aug 04 15:43:23 master collectd[13129]: write_gcm: Error talking to the endpoint.
Aug 04 15:43:23 master collectd[13129]: write_gcm: wg_transmit_unique_segment failed.
Aug 04 15:43:23 master collectd[13129]: write_gcm: wg_transmit_unique_segments failed. Flushing.
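(For reference, this is roughly how the agents' status can be checked on Debian; a sketch using the standard service names from the agent install docs:)
sudo service stackdriver-agent status
sudo service google-fluentd status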
EDITED
For anyone experiencing these issues: it's a confirmed bug now.
I filed a support ticket in the Google issue tracker.
These error messages are harmless; you are not losing metrics, so you can ignore them without any problem.
The root cause is a server-side config change that affects all agents. That change only affected the verbosity of the responses, not the processing of the requests: some of the incoming metrics were silently dropped before that change, and are now dropped noisily.
There is an issue tracker entry where you can see more details about the issue affecting you.
I have created a web application using JSP/Tiles/Struts/MySQL/Tomcat. I created a new project on the OpenShift 3 console (OpenShift Online), https://console.preview.openshift.com/console/, then added Tomcat/MySQL. I was getting a 503 error sometimes, and at other times the same page worked as expected. The 503 error came randomly for any page in my project. When I get the 503 error, I refresh a number of times and it goes away, and my page is correctly displayed.
Error that I see is:
"503 Service Unavailable
No server is available to handle this request. "
I did some research:
What I understand from this OpenShift 2 link:
https://blog.openshift.com/how-to-host-your-java-ee-application-with-auto-scaling/
is that to correct the 503 error:
SSH into your application gear using rhc ssh --app <app_name>
Change directory to haproxy/conf
Change option httpchk GET / to option httpchk GET /api/v1/ping in haproxy.cfg (see the config sketch after these steps)
Restart the HAProxy cartridge from your local machine using rhc cartridge-restart --cartridge haproxy
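For context, the edit in haproxy/conf/haproxy.cfg looks roughly like this (a sketch; /api/v1/ping stands in for whatever lightweight health endpoint your own app serves):
# before: the health check hits the app root
# option httpchk GET /
# after: check a cheap, always-up endpoint instead
option httpchk GET /api/v1/ping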
I don't know if this is also applicable to OpenShift 3. In OpenShift 3, where are haproxy.log, haproxy.cfg, and haproxy/conf, or is it slightly different in OpenShift 3? (But thanks to Warren's comments: yes, he saw the 503 error in OpenShift related to HAProxy.)
Now, one week after posting this question:
I am getting a Quota Reached error. I am able to build my project, but all deployments are failing. I wonder if the 503 error I was getting earlier was (completely or partially) related to the quota being reached. How should I proceed now?
curl -i localhost:8080/GEA
HTTP/1.1 302 Found
Server: Apache-Coyote/1.1
Location: http://localhost:8080/GEA/
Transfer-Encoding: chunked
Date: Tue, 11 Apr 2017 18:03:25 GMT
Tomcat logs do not show any application error.
Will a readiness probe and a liveness probe help me? I have not set them yet, nor do I know how to set them.
Will scaling help me? (I don't know how to set that either.)
Do I have to set memory/... all at the maximum allowed to ensure the project runs smoothly?
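(For what it's worth, probes in OpenShift 3 can be added from the CLI; a minimal sketch, assuming a deployment config named gea and that /GEA/ answers on port 8080, both of which are guesses from the curl output above:)
oc set probe dc/gea --readiness --get-url=http://:8080/GEA/ --initial-delay-seconds=30
oc set probe dc/gea --liveness --get-url=http://:8080/GEA/ --initial-delay-seconds=60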
For me, I had a similar situation of getting 503s sometimes and sometimes getting my actual page. The reason was that HAProxy on the frontend is handling the requests. Depending on your setup you may even have a few HAProxy pods, and your request could be funneled to any one of them. So, as in my case, one pod was working and the other was not.
So basically
oc get pods -n default
NAME READY STATUS RESTARTS AGE
docker-registry-7-i02rh 1/1 Running 0 75d
registry-console-12-wciib 1/1 Running 0 67d
router-1-533cg 1/1 Running 3 76d
router-1-9utld 1/1 Running 1 76d
router-1-uwf64 1/1 Running 1 76d
As you can see in my output, the default namespace is where my router (HAProxy) pods live. If I change to that namespace:
oc project default
Then run
oc logs -f router-1-533cg
on each of the pods, you will most likely find a specific pod that is behaving badly. You can simply delete it, and the replication controller will create a new one.
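(A sketch, using the bad pod's name from the listing above:)
oc delete pod router-1-533cg -n default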
I am trying to enable push notifications on my website using VAPID keys.
When I include the gcm_sender_id and remove the applicationServerKey from the pushManager.subscribe method, it runs fine.
Only when I enable VAPID keys and remove the gcm_sender_id from the manifest.json file do I get the following error:
DOMException: Registration failed - push service error
I am using the Chrome browser.
I encountered this error in the Brave browser. By default, Google services for push messaging are disabled in Brave. To enable this, open the following URL in Brave:
brave://settings/privacy
After this, enable the flag "Use Google services for push messaging".
Source:
https://github.com/firebase/firebase-js-sdk/issues/3195#issuecomment-848036637
The applicationServerKey that I was using in the pushManager.subscribe method was somehow incorrect.
It worked when I regenerated the keys in Node using the following module:
const webpush = require('web-push');
const vapidKeys = webpush.generateVAPIDKeys()
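(For completeness, a minimal sketch of wiring the regenerated public key into the subscribe call; the urlBase64ToUint8Array helper is a hypothetical name for the usual base64url-to-bytes conversion, and the key string is a placeholder:)
// Convert the base64url-encoded VAPID public key into the BufferSource
// form that pushManager.subscribe expects.
function urlBase64ToUint8Array(base64String) {
  const padding = '='.repeat((4 - base64String.length % 4) % 4);
  const base64 = (base64String + padding).replace(/-/g, '+').replace(/_/g, '/');
  const raw = atob(base64);
  return Uint8Array.from([...raw].map(c => c.charCodeAt(0)));
}

const publicKey = '<publicKey printed by generateVAPIDKeys()>'; // placeholder

navigator.serviceWorker.ready.then(reg =>
  reg.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: urlBase64ToUint8Array(publicKey)
  })
);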
In my case, I was trying to run Firebase messaging on Flutter web.
My browser was Brave.
It always failed with a Firebase FCM registration push service error.
I followed Nicodemuz's answer, but it didn't solve the issue; I got the same error.
The only solution was setting Google Chrome as my executable.
Anyhow, the issue is not with Firebase or Flutter; it's with the Brave browser itself.
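(A sketch of one way to do that; Flutter web picks the browser via the CHROME_EXECUTABLE environment variable, and the path below is an assumption for a typical Linux install:)
export CHROME_EXECUTABLE=/usr/bin/google-chrome
flutter run -d chrome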
I've been trying to allow a program I am writing to access Google Drive. I have gotten the client secrets information successfully, and have copied and pasted the example code and tried using it to authenticate my program and use the Google Drive API.
However, when it gets to the line
Credential credential = new AuthorizationCodeInstalledApp(flow, new LocalServerReceiver()).authorize("user");
I get this error. This error has been posted about before, and I've tried essentially every solution. I've elevated both my program and all the java.exe files to administrator, tried running the program, and still got this error.
The full error is:
Oct 03, 2015 11:48:39 AM com.google.api.client.util.store.FileDataStoreFactory setPermissionsToOwnerOnly
WARNING: unable to change permissions for everybody: D:\directory
Oct 03, 2015 11:48:39 AM com.google.api.client.util.store.FileDataStoreFactory setPermissionsToOwnerOnly
WARNING: unable to change permissions for owner: D:\directory
I've also tried overriding setPermissionsToOwnerOnly when I instantiated the FileDataStoreFactory, but that failed as well.
I have tried the following solutions:
http://stackoverflow.com/questions/30634827/warning-unable-to-change-permissions-for-everybody
http://stackoverflow.com/questions/24382069/error-while-executing-google-prediction-api-command-line-sample
https://groups.google.com/forum/#!topic/google-analytics-data-export-api/-7BH7Z40gkw (where the client secret data was hard-coded into the program; this is bad, I know, but it didn't work anyway)
I don't know what to do at this point. I am running my program off a flash drive, and I tried running it off my computer as well, but it still failed. I am using NetBeans 8.0.2.
The error comes up as a warning, so maybe there is some way to just ignore the warning and proceed? That could be a solution, but I've researched it and I'm not sure if that's a possibility. I am running Windows 10, if that matters.
I just ran the Drive REST API Java Quickstart tutorial through Eclipse and it is working fine. It does require a bit of setup time if you have not installed Gradle (the Eclipse Marketplace also has a plugin for Gradle).
To your point, I did get the same warning messages. However, for me it happened while loading the client secret in the authorize() method.
public static Credential authorize() throws IOException {
    // Load client secrets.
    InputStream in =
        DriveQuickstart.class.getResourceAsStream("/client_secret.json");
    GoogleClientSecrets clientSecrets =
        GoogleClientSecrets.load(JSON_FACTORY, new InputStreamReader(in));
I suspect this is where your issue is happening. Since I am not able to see that part of your code snippet, have a look at where your client_secret.json file is located.
Hope this helps. Good luck!
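(One more note: those WARNING lines come from FileDataStoreFactory.setPermissionsToOwnerOnly, which tries to restrict the data directory to the current user and cannot on Windows or on FAT-formatted flash drives, so the warning is non-fatal. A sketch of keeping the credential store on the system drive instead of the flash drive; the path is an assumption:)
import java.io.File;
import com.google.api.client.util.store.FileDataStoreFactory;

// The constructor throws IOException if the directory cannot be created.
File dataDir = new File(System.getProperty("user.home"), ".store/drive-sample");
FileDataStoreFactory dataStoreFactory = new FileDataStoreFactory(dataDir);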
I am launching the Chrome browser using:
selenium = new DefaultSelenium("localhost", 4444, "*googlechrome",
"https://example.com/");
But I get a popup with the following message, and it freezes:
An administrator has installed Google Chrome on this system, and it is available for all users. The system-level Google Chrome will replace your user-level installation now.
Console Log till point of freeze:
Server started
16:06:37.792 INFO - Command request: getNewBrowserSession[*googlechrome, https://example.com/, ] on session null
16:06:37.796 INFO - creating new remote session
16:06:38.081 INFO - Allocated session beb925cd0418412dbe6319fedfb28614 for https://example.com/, launching...
16:06:38.082 INFO - Launching Google Chrome...
Any suggestions?
Try giving the location of your chrome.exe too, along with the browser name, like this:
selenium=new DefaultSelenium("localhost", 4444, "*googlechrome C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe", "https://example.com");