Not able to set resource type in logs using the command 'gcloud logging write'. Entries get created under the 'global' resource type - google-compute-engine

I am using Ubuntu 18.04 on Google compute engine.
I am following the steps shown in the Google Cloud documentation. My command is
sudo gcloud logging write "logname" "A simple entry"
The entry gets created, but under the resource type 'global'. However, I want it to be created under the Compute Engine resource type.
I have tried setting the log name to "projects/campuskudos-980/logs/appengine.googleapis.com%2Fvm.syslog", but that didn't work out:
sudo gcloud logging write "projects/campuskudos-980/logs/appengine.googleapis.com%2Fvm.syslog" "A simple entry"
I want the logs to be created under the GCE VM Instance resource type, so I can filter on it in Stackdriver.

Currently there's no way to specify the resource type when using the gcloud logging write command. As explained in the documentation, for simplicity this command makes several assumptions about the log entry; for instance, it always sets the resource type to global.
Right now, there are two ways to work around this:
1- With the gcloud logging write command, use a log name like projects/[PROJECT_ID]/logs/compute.googleapis.com. After that, as explained in the documentation, you can use an advanced filter in Stackdriver Logging to query all the entries inside compute.googleapis.com.
For example:
logName:("projects/[PROJECT_ID]/logs/compute.googleapis.com")
2- Call the API directly, as explained in the documentation, specifying the resource type as gce_instance.
That entry will then appear under the GCE VM Instance resource type in Stackdriver Logging.
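As a minimal sketch of the second approach, assuming curl and the Logging API v2 entries.write method, a call could look like this (the log name, payload, and label values are placeholders):
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://logging.googleapis.com/v2/entries:write" \
  -d '{
    "entries": [{
      "logName": "projects/[PROJECT_ID]/logs/my-test-log",
      "resource": {
        "type": "gce_instance",
        "labels": {
          "instance_id": "[INSTANCE_ID]",
          "zone": "[ZONE]"
        }
      },
      "textPayload": "A simple entry"
    }]
  }'
Because the resource block is set explicitly here, the entry is not subject to the global default that gcloud logging write applies.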

Related

Logs aren't arriving in Cloud Logging from Google Compute Engine

I have a VM instance running in GCE (using Container-Optimized OS), and within it an actively running container that generates JSON logs. I can see these logs when I navigate to /var/lib/docker/containers/<CONTAINER_IMAGE>/<CONTAINER_IMAGE>-json.log.
In the same instance, another Docker container is running the image gcr.io/stackdriver-agents/stackdriver-logging-agent:1.8.4. This was set up automatically when I created the VM.
The VM has permission to access Cloud Logging, and the Cloud Logging API is enabled. I have also followed the steps here and added google-logging-enabled to the metadata with a value of true.
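(For reference, a sketch of setting that metadata with gcloud; the instance name and zone are placeholders:)
gcloud compute instances add-metadata [INSTANCE_NAME] --zone [ZONE] \
    --metadata google-logging-enabled=true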
When the VM starts, the logging agent seems to spin up correctly and emits a log line saying that it is tailing the log file of the Docker container I want logs for; however, the logs within that file never appear in Cloud Logging. Below is a screenshot of the logs that do make it to Cloud Logging:
I have had this issue for a while now, so I would be very grateful for any help with this! Thanks in advance (:
In the JSON logs I was providing, the time format used was not being accepted by fluentd. I've been able to get around that by adding:
reserve_time true
to the filter in the default config. Now the config ignores any nested fields where a time is specified. I learned of this from here.
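As a rough sketch, assuming the default config parses the container's JSON payload with a fluentd parser filter, the added option would sit inside that filter block (the match pattern and key name here are assumptions):
<filter reform.**>
  @type parser
  # field holding the container's JSON payload (assumption)
  key_name log
  format json
  # the option from the answer above: keep the original event time
  # in the parsed result instead of letting the parser reject it
  reserve_time true
</filter>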
Google Cloud Logging uses fluentd to collect the logs.
You can reconfigure fluentd to include additional log files.
Create a file /etc/google-fluentd/config.d/my_app_name.conf and put in it a line of the form path /path/to/my/log, as in the sketch below. There are more examples in the fluentd documentation.
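For example, a minimal tail source might look like this (the paths and the tag are illustrative):
<source>
  @type tail
  # treat each line as a single unstructured string
  format none
  # the log file to pick up
  path /path/to/my/log
  # bookmark file so restarts resume where they left off
  pos_file /var/lib/google-fluentd/pos/my_app_name.pos
  # the tag the entries carry into Cloud Logging
  tag my_app_name
</source>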
You can also specify how the file is to be parsed: as a single string-type field or in a more structured way (more convenient when you're looking for something). Again, here's some more info about fluentd's output plugins.
Finally, go ahead and read the fluentd documentation to get a better understanding of the tool.

Internal 500 error on Google Compute Engine, installing The Littlest JupyterHub

"Internal 500 server error" after VM runs for a day or two.
This is the second time it has happened, I start the instance, install littlest Jupyterhub
(see details below). I can login to the external ip, for a day, but then it stops
with internal 500 error. I cannot ssh or get into the instance, only alternate is to
create a new instance and re-do. What is the problem?
I installed The Littlest JupyterHub on this instance using:
#!/bin/bash
curl https://raw.githubusercontent.com/jupyterhub/the-littlest-jupyterhub/master/bootstrap/bootstrap.py | sudo python3 - --admin master
I would recommend you enable access to the serial console on your instance [1].
You will also need to set up a password for your user, following this documentation [2].
With these two steps done, you should be able to reconnect to your instance once you are locked out, as you mentioned, by following this [3].
You should then be able to investigate what is going on in the instance.
Then verify whether your application is still running, whether the SSH server is still running, etc.
Frederic
[1] https://cloud.google.com/compute/docs/instances/interacting-with-serial-console#enable_instance_access
[2] https://cloud.google.com/compute/docs/instances/interacting-with-serial-console#setting_up_a_local_password
[3] https://cloud.google.com/compute/docs/instances/interacting-with-serial-console#connectserialconsole
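For reference, enabling the serial console and connecting could look like this (the instance name and zone are placeholders):
gcloud compute instances add-metadata [INSTANCE_NAME] --zone [ZONE] \
    --metadata serial-port-enable=TRUE
gcloud compute connect-to-serial-port [INSTANCE_NAME] --zone [ZONE]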

Google Cloud SQL - Catch bad logins

I have an existing MySQL (version 5.7) instance hosted (managed) by Google Cloud SQL. I want to get a notification when someone tries to connect to my database with a bad username/password.
My idea was to look for it in the Google Stackdriver logs, but it's not there.
Is there an option to collect this information?
UPDATE 1:
I tried to connect to the instance with gcloud, but unfortunately it's not working.
$ gcloud sql connect mydb
Whitelisting your IP for incoming connection for 5 minutes...done.
ERROR: (gcloud.sql.connect) It seems your client does not have ipv6 connectivity and the database instance does not have an ipv4 address. Please request an ipv4 address for this database instance.
It's because the database is accessible only inside the internal network. I searched for flags like --internal-ip but didn't find one.
However, I guessed it wouldn't make any difference if I tried to access the database from my DB editor (Workbench) instead. So I did, and searched for the query that @Christopher advised - but it's not there.
What did I miss?
UPDATE 2:
Screenshot of my Stackdrive:
Even if I remove the (resource.labels.database_id="***") condition, the result is the same.
Is there an option to collect this information?
One of the best options to collect information about who is trying to connect to your Google Cloud SQL instance with wrong credentials is Stackdriver Logging.
Before beginning
To reproduce these steps, I connected to the Cloud SQL instance using the gcloud command:
gcloud sql connect [CLOUD_SQL_INSTANCE]
I am not entirely sure whether anything changes if you connect with the mysql command line instead, but if it does, you should only need to look for the new log message and update the last boolean expression (the textPayload line) accordingly.
How to collect this information from Stackdriver Logging
Go to the Stackdriver → Logging section.
To get the information we are looking for, we will use advanced log queries: boolean expressions that specify a subset of all the log entries in your project, from any number of logs. They can be used in the Logs Viewer, the Logging API, or the gcloud command-line tool, and they are a powerful way to pull information out of logs.
Here you will find how to get and enable advanced log queries in your logs.
To find out who has tried to enter your Cloud SQL instance running MySQL with wrong credentials, we will use the following query:
resource.type="cloudsql_database"
resource.labels.database_id="[PROJECT_ID]:[CLOUD_SQL_INSTANCE]"
textPayload:"Access denied for user"
Where [PROJECT_ID] corresponds to your project ID and [CLOUD_SQL_INSTANCE] corresponds to the name of the Cloud SQL instance you would like to supervise.
Notice that the last boolean expression, the one on textPayload, uses the : operator.
As described here, the : operator matches any substring of the log entry field, so it matches every log entry that contains the specified string, which in this case is "Access denied for user".
Now, if some user enters the wrong credentials, you should see a message like the following appear in your logs:
[TIMEFRAME][Note] Access denied for user 'USERNAME'@'[IP]' (using password: YES)
From here, it is a matter of using one of the GCP products to send you a notification when a user enters the wrong credentials.
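For instance, one possible route is a log-based metric built from the same filter, which an alerting policy can then watch (the metric name and description below are made up for illustration):
gcloud logging metrics create failed-sql-logins \
    --description="Cloud SQL access denied events" \
    --log-filter='resource.type="cloudsql_database" AND textPayload:"Access denied for user"'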
I hope it helps.
As stated in the GCP documentation:
Cloud Shell doesn't currently support connecting to a Cloud SQL instance that has only a private IP address.

Airflow MySQL connection parameters with URI

I'm trying to use the MySqlHook.bulk_load() method (docs) in my Airflow task, and if my understanding is correct, I have to enable local_infile and loose-local-infile when I create the connection to my MySQL database in order for this to work.
Otherwise, it results in the following error:
"The used command is not allowed with this MySQL version."
If I create my connection from the Connections page in the webserver UI and pass the following parameters as "extras", the method succeeds and the DAG works as intended:
{"local_infile":"1","loose-local-infile":"1"}
image of connection settings on Airflow webserver UI
However, I am looking to create my database connection by passing it as an environment variable instead. Doing so requires me to pass my connection settings as a URI (see "Connections" in the docs), i.e.:
scheme://[user[:[password]]@]target[:port][/schema][?attribute1=value1&attribute2=value2...]
I have tried the following formats, but none have worked for me:
mysql://user:password@hostname:port/schema?local_infile=1&loose-local-infile=1
mysql://user:password@hostname:port/schema?local_infile=1;loose-local-infile=1
I have also tried percent-encoding my values like so:
mysql://user:password@hostname:port/schema?local_infile%3D1%26loose-local-infile%3D1
Any help would be GREATLY appreciated, thank you!
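For what it's worth, a hedged sketch of the environment-variable form, under the assumption that Airflow parses URI query parameters into the connection's extras (the connection id mysql_my_db and the port are illustrative; the variable name must start with AIRFLOW_CONN_):
export AIRFLOW_CONN_MYSQL_MY_DB='mysql://user:password@hostname:3306/schema?local_infile=1&loose-local-infile=1'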

gcloud compute error: "More than one Autoscaler with given targe."

This command used to work (around May 2016), but for some reason it no longer does:
gcloud compute --verbosity error --project "phantomjscloud-20160125" instance-groups managed list
I now get the following error:
ERROR: (gcloud.compute.instance-groups.managed.list) More than one Autoscaler with given targe.
I can't find any details regarding this error. What changed, and how can I properly enumerate my instance groups again?
Given that all my instance groups use (and have always used) autoscaling I'm not sure why I am now getting this error.
I don't know what the root cause was, but I deleted an instance group with a very similar name to another one (ez-deploy-pjsc-api-preempt-large-usa-central1-a-1 vs. ez-deploy-pjsc-api-preempt-large-usa-central1-a), and now it works.
Seems like a bug in the gcloud system.