GCP compute instance - Terraform deletion protection flag with disk resizing - google-compute-engine

We recently ran into an issue where an engineer resized a boot disk in the console, which caused our Cloud Build trigger to fail because it detects the difference between the size in code and the size in the console (100 GB vs 128 GB).
The problem is that in Terraform we have deletion protection set to true, so it won't allow us to make any changes until this is set to false.
However, if I change it to false and push that change, Terraform will delete and recreate the boot disk, because it will see the 100 GB in code as less than the 128 GB in the console.
Is it possible to increase the disk size and disable the deletion protection in the same code push?

For reference: no, you cannot make any other amendment to an instance at the same time as the flag change if it is already enabled. Terraform checks that status first and, if it is enabled, ignores all other changes.

I'd run terraform apply -refresh-only first to sync the manually changed size into the Terraform state, and then disable the deletion protection on the resource.
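A rough sketch of where that sequence ends up in HCL (the resource name, zone, and image here are assumptions, not your actual config): after the refresh-only apply, update the size in code to match the console and flip the flag, so the plan shows an in-place update rather than a replace:

```hcl
resource "google_compute_instance" "app" {
  name         = "app-vm"
  machine_type = "e2-medium"
  zone         = "europe-west2-a"

  # Flipped from true once the state is in sync; Terraform applies
  # this change in place.
  deletion_protection = false

  boot_disk {
    initialize_params {
      # Updated from 100 to match the 128 GB set in the console, so
      # no shrink (and therefore no destroy/recreate) is planned.
      size  = 128
      image = "debian-cloud/debian-12"
    }
  }
}
```

After the refresh-only apply, run terraform plan and confirm it reports an in-place update rather than a forced replacement before applying.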

Related

Is it safe to delete metric events file in google compute engine vm

So, I found a file called uma-events under the metrics folder in a Google Compute Engine VM (built using Container-Optimized OS) which is taking up about 5 GB of space. I cannot extend the partition in its current condition and am running low on disk space. Also, the file is owned by chronos (maybe a default user/group?). So, would it be safe to delete the file?
full path of the file is - /mnt/stateful_partition/var/lib/metrics/uma-events
I went through several pieces of documentation but didn't find anything useful.
The root file system is mounted as read-only to protect system integrity. However, home directories and /mnt/stateful_partition are persistent and writable.
You can remove the file as a temporary fix, or you can resize the disk.
To resolve your issue, do the following:
1. Create a snapshot of the disk, so that if the changes you make end up affecting your system you can easily revert the instance to its previous state.
2. Delete files that you don't need on the disk to free up space (your metrics folder is safe to remove). To learn more about these directories, see Disks and file system overview.
3. If the disk still needs more space after this, resize it. If your VM has become inaccessible because its boot disk is full, see Troubleshooting full disks and disk resizing.
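The steps above could be run roughly as follows with gcloud (the disk and zone names are placeholders; on Container-Optimized OS the stateful partition is typically grown automatically after a resize and reboot, but verify that for your image):

```shell
# Placeholders: my-disk / us-central1-a are assumptions.

# 1. Snapshot the boot disk first so the change is reversible.
gcloud compute snapshots create my-vm-backup \
    --source-disk=my-disk --source-disk-zone=us-central1-a

# 2. On the VM itself, free space by removing the metrics file.
sudo rm /mnt/stateful_partition/var/lib/metrics/uma-events

# 3. If more space is still needed, grow the persistent disk.
gcloud compute disks resize my-disk --size=50GB --zone=us-central1-a
```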

Google cloud compute instance metrics taking up disk space

I have a google cloud compute instance set up but it's getting low on disk space. It looks like it is the /mnt/stateful_partition/var/lib/metrics directory taking up a significant amount of space (3+gb). I assume this is the compute metrics but I can't find any way to safely remove these other than just deleting the files. Is this going to cause any issues?
The path you are referring to contains file-system directories used by the GCE VM instance, and you are correct that the metrics folder is safe to remove. To learn more about these directories, see Disks and file system overview.
I would also suggest creating a snapshot first if you want to make sure the changes you make to your instance won't affect your system, so that you can easily revert to the previous instance state.

Deleting incremental snapshots before occasional full backup Google Cloud Compute Engine

Regarding snapshots on Google Cloud Compute Engine, I have some questions I could not find answered in the documentation:
We have a two-hourly snapshot schedule for some of our disks. The documentation says that at unspecified times a full image of the disk is captured. If I do not need to restore anything from before the latest full image, does this mean that all snapshots prior to that full image can be deleted?
If so, how do I identify the snapshots that can be deleted?
Or: is there a way to accomplish this automatically (e.g. something like "delete all incremental images prior to the latest full image")?
Let me provide some links to the documentation that should answer your questions.
According to the documentation Working with persistent disk snapshots:
Compute Engine uses incremental snapshots so that each snapshot contains only the data that has changed since the previous snapshot.
On the other hand, as noted by @Peter Sonntag, according to the documentation Use existing snapshots as a baseline for subsequent snapshots:
Important: Snapshots are incremental by default to avoid billing you for redundant data, to minimize use of storage space, and to decrease snapshot creation latency. However, to ensure the reliability of snapshot history, a snapshot might occasionally capture a full image of the disk.
According to the documentation Snapshot deletion:
When you delete a snapshot, Compute Engine immediately marks the snapshot as DELETED in the system. If the snapshot has no dependent snapshots, it is deleted outright. However, if the snapshot does have dependent snapshots:
Any data that is required for restoring other snapshots is moved into the next snapshot, increasing its size.
Any data that is not required for restoring other snapshots is deleted. This lowers the total size of all your snapshots.
The next snapshot no longer references the snapshot marked for deletion, and instead references the snapshot before it.
To automate deletion of snapshots you can use a snapshot retention policy:
A snapshot retention policy defines how long you want to keep your snapshots.
If you choose to set up a snapshot retention policy, you must do so as part of your snapshot schedule.
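If a fixed retention window is acceptable, the schedule and retention policy can be created together with gcloud. A sketch, with the policy name, region, and retention period as assumptions:

```shell
# Snapshot every 2 hours and keep snapshots for 7 days; Compute
# Engine deletes expired snapshots itself, folding any data still
# needed by newer incremental snapshots into them.
gcloud compute resource-policies create snapshot-schedule two-hourly \
    --region=us-central1 \
    --hourly-schedule=2 \
    --start-time=00:00 \
    --max-retention-days=7

# Attach the schedule to a disk:
gcloud compute disks add-resource-policies my-disk \
    --resource-policies=two-hourly --zone=us-central1-a
```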

Sql update automatically rolling back or changing after some time

I have an Azure SQL DB where I am executing a change with a C# call (using await db.SaveChangesAsync();).
This works fine and I can see the update in the table, and in the APIs that I call which pull the data. However, roughly 30-40 minutes later, I run the API again and the value is back to the initial value. I check the database and see that it is indeed back to the initial value.
I can't figure out why, and I'm not sure how to go about tracking it down. I tried SQL change tracking, but it doesn't give me any insight into WHY the change is happening, or in what process, just that it is happening.
BTW, this is a test Azure instance that nobody has access to but me, and there are no other processes. I'm assuming this is some kind of delayed transaction rollback, but it would be nice to know how to verify that.
I figured out the issue.
I'm using an Azure Free Tier service, which runs on a shared virtual machine. When the app went inactive, it was shut down, then restarted on demand when I issued a new request.
In addition, I had a Seed method in my Entity Framework Migration configuration that reset the particular record I was changing to 0, and on restart the migration re-ran, because my web config was set up to do so.
Simply disabling the EF migrations and republishing does the trick (and it will also go away when I upgrade to a better tier for real production). I verified that records outside of those expressly mentioned in the migration Seed method were not affected by this change, so it was clearly the cause; after disabling the migrations, I am no longer seeing it.
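For anyone hitting the same thing, the pattern looks roughly like this in EF 6 code-first (the context, configuration, and entity names here are hypothetical; your own Seed body will differ):

```csharp
using System.Data.Entity;
using System.Data.Entity.Migrations;

public class MyDbContext : DbContext { }

internal sealed class Configuration : DbMigrationsConfiguration<MyDbContext>
{
    public Configuration()
    {
        // With automatic migrations enabled (and the web.config
        // initializer in place), Seed re-runs on every app restart.
        AutomaticMigrationsEnabled = false;
    }

    protected override void Seed(MyDbContext context)
    {
        // A hard-coded reset like this is what silently "rolled back"
        // the record each time the free-tier app was recycled, e.g.:
        // context.Counters.Single(c => c.Id == 1).Value = 0;
        // context.SaveChanges();
    }
}

// Alternatively, disable the database initializer entirely so
// migrations (and Seed) never run at startup:
// Database.SetInitializer<MyDbContext>(null);
```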

FIWARE SpagoBI cockpit graphics not updating

None of the graphics in my cockpit are updating, even though the data source dataset is scheduled to refresh every minute, and checking in the database shows the dataset is indeed updated correctly every minute...
my dataset config:
How can I see the updated graphics? Do I need to change something in the SpagoBI server or its configuration?
Cockpit uses a cache mechanism that lets it query and join datasets coming from different data sources, but the cache has nothing to do with the dataset's persistence.
At this moment, there are two ways to get updated data while using cockpit:
by cleaning the cache using the button inside the cockpit itself;
by using the cache cleaning scheduling setting.
In the latter case, enter Configuration Management as admin and change the value of the
SPAGOBI.CACHE.SCHEDULING_FULL_CLEAN
variable to HOURLY. This setting creates a job that periodically (every hour, which is the minimum) cleans the cache used by cockpits.