Is there any way for a GCP Compute Engine instance to know if it was created by the Instance Group auto-scaling policy or if it was manually created?
In the logs we generate on our instances we include the instance ID. This is fine for instances that are started manually to test something, but it's not that useful for the other instances, as it clutters graphs of machine metrics.
In other words, for test machines we need the instance's id, but for other machines we need to log something else that's common to them all.
You can see who performed the creation task in Stackdriver Logging by using the following filter:
resource.type="gce_instance"
"create"
You can select a log entry and expand it to see whether the VM was created by a user (email) or by the Instance Group Manager.
Note: keep in mind that Stackdriver has retention periods for logs.
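As a sketch, the same filter can be run from the CLI with `gcloud logging read`, and the instance itself can check its metadata: instances created by a managed instance group carry a `created-by` metadata key, while manually created ones usually don't (the names below are placeholders):

```shell
# Query the creation logs from anywhere with gcloud, using the filter above
gcloud logging read 'resource.type="gce_instance" "create"' --limit=5

# From inside the instance: MIG-created VMs have a "created-by" metadata key
# pointing at the instance group manager; on a manually created VM this
# typically returns a 404.
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/created-by"
```

The metadata check is the only option that works on the instance itself without log-viewing permissions.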
I can see that after creating an Aurora MySQL I can then "Add a Reader" to the DB cluster. But isn't there a way to create a specified number of Replicas in the first place?
The scenarios that are easiest to understand are a cluster with a single DB instance (e.g. a developer system where availability isn't crucial) and a production cluster (where you want 2 instances at a minimum for high availability). The choice of 1 or 2 instances on initial cluster creation is controlled by the "Multi-AZ" option in the Create Cluster dialog. Selecting Multi-AZ gets you a writer instance and a reader instance. Having >1 reader instance is more for scalability reasons, and involves other considerations (promotion tiers, do all the instances get the same instance class and parameter group, etc.). I presume that bundling all those choices into the console dialog would clutter up the interface and make it easy for people to overprovision by accident.
What I normally do is automate the creation of 3-, 4-, etc. instance clusters using the AWS CLI commands. I create the cluster and the first (writer) instance, wait for the first instance to become available, then create all the reader instances. If you kick off creating all the instances at once, it's a race condition - whichever one finishes first becomes the writer.
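A minimal sketch of that sequence with the AWS CLI, assuming Aurora MySQL; the identifiers, credentials, and instance class are hypothetical placeholders:

```shell
# Create the cluster itself (no instances yet)
aws rds create-db-cluster \
  --db-cluster-identifier my-aurora-cluster \
  --engine aurora-mysql \
  --master-username admin \
  --master-user-password 'change-me'

# Create the first instance; because it finishes first, it becomes the writer
aws rds create-db-instance \
  --db-instance-identifier my-aurora-writer \
  --db-cluster-identifier my-aurora-cluster \
  --engine aurora-mysql \
  --db-instance-class db.r5.large

# Block until the writer is available, avoiding the race condition
aws rds wait db-instance-available \
  --db-instance-identifier my-aurora-writer

# Now the readers can safely be created, even in parallel
for i in 1 2 3; do
  aws rds create-db-instance \
    --db-instance-identifier "my-aurora-reader-$i" \
    --db-cluster-identifier my-aurora-cluster \
    --engine aurora-mysql \
    --db-instance-class db.r5.large &
done
wait
```

The `aws rds wait` step is what enforces the ordering described above.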
We are using Google Cloud Functions for the API layer. These functions are HTTP-triggered. We are looking for logs of the HTTP requests made to the functions. Do such logs exist?
The goal is instrumentation on frequency, source, etc.
You could use the Stackdriver Logging client libraries, which provide the main logging facility on GCP. These logs may look different from common Apache/Nginx access logs, but you can also log custom events. The logs exist as soon as you connect the client and log something. The main advantage is that you don't have a log file here and another there; all logs appear in one uniform, consolidated GUI (with GCE containers this becomes even more noticeable once there are a few instances).
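As a hedged sketch (the function name is a placeholder): each HTTP-triggered function already emits execution logs automatically, and those can be read from the CLI or queried with a Stackdriver Logging filter:

```shell
# Read the built-in execution logs of a function
gcloud functions logs read my-http-function --limit=20

# Or query Stackdriver Logging directly for that function's entries
gcloud logging read \
  'resource.type="cloud_function" AND resource.labels.function_name="my-http-function"' \
  --limit=20
```

For frequency and source instrumentation, the structured entries returned by the second command can be exported or aggregated further.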
My RDS instance was showing outdated data temporarily.
I ran a SELECT query on my data. I then ran a query to delete data from a table and another to add new data to the table. I ran a SELECT query and it was showing the old data.
I ran the SELECT query AGAIN and THEN it finally showed me the new data.
Why would this happen? I never had these issues locally or on my normal non-Multi-AZ instances. Is there a way to avoid this happening?
I am running MySQL 5.6.23
According to the Amazon RDS Multi-AZ FAQs, this might be expected.
Specifically this:
You may observe elevated latencies relative to a standard DB Instance deployment in a single Availability Zone as a result of the synchronous data replication performed on your behalf.
Of course, it depends on how frequent the delays are and how much extra latency you're seeing, but one option would be to contact AWS Support if the issue is frequently reproducible.
As embarrassing as this is... it was an issue in our Spring Java code and not AWS.
A method modified a database entity object. The method itself wasn't transactional but was called from a transactional context which would persist any changes on entities to the database.
It looked like it was rolling back changes, but what it was doing was just overwriting data. My guess is it overwrote the data a while ago so until someone tried to modify it we just assumed it was the correct data.
I have been using Google Compute Engine for my backend (debian-lamp). Suddenly the instance was deleted without any user interaction, and the operations log doesn't show which user performed the operation (Deletion of VM Instance). I have attached an image of the Compute Engine operations for further study.
I want to know why this happened and what the ways are to restore the deleted instance.
Note: I am using the trial version of Google Compute Engine, and this was my second VM instance created in the current project.
It looks like the instance was deleted by the Instance Group Manager after you resized the instance group (most likely to zero). To learn about why this happened, visit the docs pages for Instance Groups and the Instance Group Manager.
If you resize the Instance Group back up to 1, the Instance Group Manager will create a new VM automatically.
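A sketch of that resize from the CLI, assuming a zonal managed instance group (the group name and zone are placeholders):

```shell
# Resize the managed instance group back to one instance;
# the Instance Group Manager will create a fresh VM to match the target size
gcloud compute instance-groups managed resize my-instance-group \
  --size=1 --zone=us-central1-a

# Inspect recent delete operations to confirm what removed the old VM
gcloud compute operations list --filter='operationType=delete' --limit=10
```

Note that the new VM is built from the group's instance template, so any local disk state of the deleted instance is not restored.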
How do I add a NIC to a Compute Engine instance? I need more than one NIC so I can build out an environment. I've looked all over and there is nothing on how to do it.
I know it's probably some API call through the SDK, but I have no idea, and I can't find anything on it.
EDIT:
It's the rhel6 image. Figured I should clarify.
The question is probably old and a lot has changed since. It's now definitely possible to add more NICs to an instance, but only at creation time (there is a networking tab on the create-instance page in the console; a corresponding REST API exists too). Each NIC has to connect to a different virtual network, so you need to create the additional networks before creating the instance (if you don't have them already).
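A sketch of that creation-time setup with the CLI (the network and instance names are placeholders); each repeated `--network-interface` flag attaches one NIC, and each must reference a different network:

```shell
# Create the two networks first (each NIC must be on a different network)
gcloud compute networks create net-a --subnet-mode=auto
gcloud compute networks create net-b --subnet-mode=auto

# Create an instance with two NICs, one per network
gcloud compute instances create multi-nic-vm \
  --zone=us-central1-a \
  --network-interface network=net-a \
  --network-interface network=net-b
```

Since the NICs can't be added later, an existing single-NIC VM has to be recreated (e.g. from a disk snapshot) to gain a second interface.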
Do you need an external address or an internal address? If external, you can use gcutil to add an IP address to an existing instance. If internal, you can configure a static network address on the instance, and add a route entry to send traffic for that address to that instance.
I was looking for a similar thing (a VM that runs Apache and nginx simultaneously on different IPs), but it seems that although you can have multiple networks (up to 5) in a project and each network can belong to multiple instances, you cannot have more than one network per instance. From the documentation:
A project can contain multiple networks and each network can have multiple instances attached to it. [...] A network belongs to only one project and each instance can only belong to one network.