Cannot find the Always Free Eligible VM Instance when creating it - oracle-cloud-infrastructure

I wanted to create an Always Free Eligible VM instance (VM.Standard.E2.1.Micro) on Oracle Cloud, but it's not in my list.
And when I check my limit for VM.Standard.E2.1.Micro under "Governance > Limits, Quotas and Usage", it says 0.
How can I create one? My Home Region is Canada Southeast (Montreal), ca-montreal-1.
My account's trial is not over yet. Should I wait till my trial is over to create it?

As per the Always Free website, at any time you can have up to the following:
Two Oracle Autonomous Databases with powerful tools like Oracle Application Express (APEX) and Oracle SQL Developer
Two Oracle Cloud Infrastructure Compute VMs; Block, Object, and Archive Storage; Load Balancer and data egress; Monitoring and Notifications
If you are already at capacity, then you would not be able to add another. Further details on Always Free resources can be found here - https://docs.oracle.com/en-us/iaas/Content/FreeTier/resourceref.htm
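If you want to verify this from code rather than the console, here is a minimal sketch using the OCI Python SDK (the oci package) to count the Micro instances in the root compartment; the config-file defaults and root-compartment scope are assumptions, so adjust them for your tenancy:

    # pip install oci
    import oci

    # Loads ~/.oci/config (DEFAULT profile), as created by `oci setup config`.
    config = oci.config.from_file()
    compute = oci.core.ComputeClient(config)

    # The tenancy OCID doubles as the root compartment OCID; instances in
    # sub-compartments would need to be listed separately.
    instances = compute.list_instances(compartment_id=config["tenancy"]).data

    micro = [i for i in instances
             if i.shape == "VM.Standard.E2.1.Micro"
             and i.lifecycle_state != "TERMINATED"]
    print(f"Always Free micro instances in use: {len(micro)} of 2")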

The Always Free tier provides you with the following:
2 compute virtual machines with 1/8 OCPU and 1 GB memory each.
2 Block Volumes, 100 GB total.
10 GB Object Storage.
10 GB Archive Storage.
Resource Manager: managed Terraform.
Pay attention to the specs of the free shape when creating your instance.

VM.Standard.E2.1.Micro is not available for ca-montreal-1 at this time (January 2021).
I created a new account in the Ashburn region where VM.Standard.E2.1.Micro is available.
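Before creating a second account, you can also confirm programmatically whether the shape is offered anywhere in your home region; a sketch along the same lines with the OCI Python SDK, iterating over the availability domains:

    import oci

    config = oci.config.from_file()
    identity = oci.identity.IdentityClient(config)
    compute = oci.core.ComputeClient(config)
    tenancy = config["tenancy"]

    # Check each availability domain in the home region for the free shape.
    for ad in identity.list_availability_domains(tenancy).data:
        shapes = compute.list_shapes(tenancy, availability_domain=ad.name).data
        offered = any(s.shape == "VM.Standard.E2.1.Micro" for s in shapes)
        print(f"{ad.name}: VM.Standard.E2.1.Micro offered = {offered}")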

Related

How to know which resources I am using in Oracle Cloud, after I deleted the instance?

I had created one instance in Oracle Cloud with an Ampere CPU, 4 cores & 24 GB RAM. It allowed me to do so as it is within the Always Free tier. I had to delete it as it was not working OK. While deleting, I also chose "Delete boot volume". Now when I want to create another instance with the same CPU, beyond three cores it gives the message "Service Limits reached". Why is this happening and how do I overcome it?
I waited 48 hours so that all resources would be released. My dashboard shows "No resources found. Create a resource, or try another compartment." However, it does not allow me beyond 3 cores.
Regards.
Dutta
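A hedged way to investigate a "Service Limits reached" message like this is to query the Limits service directly; in the sketch below, the limit name standard-a1-core-count is an assumption for the Ampere A1 core limit, and AD-scoped limits may additionally need an availability_domain argument:

    import oci

    config = oci.config.from_file()
    limits = oci.limits.LimitsClient(config)

    # "standard-a1-core-count" is assumed to be the A1 core limit name;
    # list_limit_definitions() can be used to look up the exact name.
    availability = limits.get_resource_availability(
        service_name="compute",
        limit_name="standard-a1-core-count",
        compartment_id=config["tenancy"],
    ).data

    print("used:     ", availability.used)
    print("available:", availability.available)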

How to lower costs of having MySQL db in Google Cloud

I set up Google Cloud MySQL; I store there just one user (email, password, address) and I'm querying it quite often for testing purposes of my website. I set up minimal zone availability, the lowest SSD storage, 3.75 GB memory, and 1 vCPU, with automatic backups disabled, but running that database for the last 6 days has cost me £15... How can I decrease the costs of having a MySQL database in the cloud? I'm pretty sure paying that amount is way too much. Where is my mistake?
I suggest using the Google Pricing Calculator to check the different configurations and pricing you could have for a MySQL database in Cloud SQL.
Choosing Instance type
As you've said in your question, you're currently using the lowest standard instance, which is based on CPU and memory pricing.
As you're currently using your database for testing purposes, I would suggest configuring it with the lowest shared-core machine type, which is db-f1-micro, as shown here. But note that
The db-f1-micro and db-g1-small machine types are not included in the Cloud SQL SLA. These machine types are designed to provide low-cost test and development instances only. Do not use them for production instances.
Choosing Storage type
As you have selected the lowest allowed disk space, you could lower costs by changing the storage type to HDD instead of SSD if you haven't done so, as stated in the documentation:
Choosing SSD, the default value, provides your instance with SSD storage. SSDs provide lower latency and higher data throughput. If you do not need high-performance access to your data, for example for long-term storage or rarely accessed data, you can reduce your costs by choosing HDD.
Note that the storage type can only be selected when you're creating the instance and cannot be changed later, as stated in the message shown when creating your instance:
Choice is permanent. Storage type affects performance.
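Putting both suggestions together, here is a hedged sketch that creates a low-cost test instance with the db-f1-micro tier and HDD storage through the Cloud SQL Admin API (using google-api-python-client with Application Default Credentials; the project ID, instance name, and region are placeholders):

    # pip install google-api-python-client google-auth
    import googleapiclient.discovery

    # Uses Application Default Credentials,
    # e.g. `gcloud auth application-default login`.
    sqladmin = googleapiclient.discovery.build("sqladmin", "v1beta4")

    body = {
        "name": "test-mysql",           # placeholder instance name
        "region": "europe-west2",       # placeholder region
        "databaseVersion": "MYSQL_5_7",
        "settings": {
            "tier": "db-f1-micro",      # shared-core: cheap, but no SLA
            "dataDiskType": "PD_HDD",   # cheaper than SSD; cannot be changed later
            "dataDiskSizeGb": "10",     # the minimum disk size
            "backupConfiguration": {"enabled": False},
        },
    }

    operation = sqladmin.instances().insert(project="my-project", body=body).execute()
    print(operation["name"])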
Stop the instance when not in use
Finally, you could lower costs by stopping the database instance when it is not in use, as pointed out in the documentation:
Stopping an instance suspends instance charges. The instance data is unaffected, and charges for storage and IP addresses continue to apply.
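The stop/start cycle can be scripted through the activation policy; a minimal sketch, assuming the same Cloud SQL Admin API client and the placeholder project/instance names from above:

    import googleapiclient.discovery

    sqladmin = googleapiclient.discovery.build("sqladmin", "v1beta4")

    def set_running(project: str, instance: str, running: bool):
        # "ALWAYS" starts the instance, "NEVER" stops it; storage and IP
        # address charges continue to apply while it is stopped.
        policy = "ALWAYS" if running else "NEVER"
        body = {"settings": {"activationPolicy": policy}}
        return sqladmin.instances().patch(
            project=project, instance=instance, body=body
        ).execute()

    # e.g. stop the test instance outside working hours from a scheduled job
    set_running("my-project", "test-mysql", running=False)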
Using Google Pricing Calculator
The following information is presented as a calculation exercise based on the Google Pricing Calculator:
The estimated fees provided by Google Cloud Pricing Calculator are for discussion purposes only and are not binding on either you or Google. Your actual fees may be higher or lower than the estimate. A more detailed and specific list of fees will be provided at time of sign up
Following the suggestions above, you could get a monthly estimate of 6.41 GBP, based on an instance running 24 hours a day, 7 days a week.
Using SSD storage instead, this increases to 7.01 GBP. As said before, the only way to change the storage type is to create a new instance and load your data.
And this could drop to 2.04 GBP if you run it only 8 hours a day, 5 days a week, on HDD.
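To see why the 8-hours-5-days figure is not simply 40/168 of the full-time one, remember that the storage charge keeps accruing while the instance is stopped. A back-of-envelope sketch with hypothetical rates (take the real ones from the pricing calculator):

    # Hypothetical per-hour instance rate and flat monthly storage charge;
    # substitute the real numbers from the Google Pricing Calculator.
    INSTANCE_GBP_PER_HOUR = 0.0085
    STORAGE_GBP_PER_MONTH = 0.20

    def monthly_cost(hours_per_day: float, days_per_week: float) -> float:
        hours_per_month = hours_per_day * days_per_week * 52 / 12
        return hours_per_month * INSTANCE_GBP_PER_HOUR + STORAGE_GBP_PER_MONTH

    print(f"24h x 7d: {monthly_cost(24, 7):.2f} GBP")  # instance billed all month
    print(f"8h x 5d:  {monthly_cost(8, 5):.2f} GBP")   # storage still billed in full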

Isn't Google App Engine supposed to be more expensive than Google Kubernetes Engine

I had my app in App Engine (Flex), but it's costing a lot with no traffic yet!
I decided to move it to Kubernetes Engine, which utilizes Compute Engine.
Another reason I moved to Kubernetes is that I wanted to run Docker container services like Memcached, which come with an added cost in App Engine Flex.
If you are tempted to ask why I am not using App Engine Standard, which is economical: that's because I couldn't find any easy way, if there is one at all, to run services like GDAL & Memcached.
I thought Kubernetes should be a cheaper option, but what I am seeing is the opposite.
I have even had to change the machine type to g1-small from N1...
Am I missing something?
Any ideas on how to reduce cost in Kubernetes / compute engine instances?
Please have a look at the documentation for GKE Pricing and App Engine Pricing:
GKE clusters accrue a management fee of $0.10 per cluster per hour, irrespective of cluster size or topology. One zonal (single-zone or multi-zonal) cluster per billing account is free.
GKE uses Compute Engine instances for worker nodes in the cluster. You are billed for each of those instances according to Compute Engine's pricing, until the nodes are deleted. Compute Engine resources are billed on a per-second basis with a one-minute minimum usage cost.
and
Apps running in the flexible environment are deployed to virtual machine types that you specify. These virtual machine resources are billed on a per-second basis with a 1 minute minimum usage cost. Billing for the memory resource includes the memory your app uses plus the memory that the runtime itself needs to run your app. This means your memory usage and costs can be higher than the maximum memory you request for your app.
So, both GAE Flex and GKE clusters are "billed on a per-second basis with a 1 minute minimum usage cost".
To estimate usage costs in advance you can use the Google Cloud Pricing Calculator; you can also use it to estimate how changing the parameters of your cluster can help you reduce cost, and which solution is more cost-effective.
In addition, please have a look at the documentation Best practices for running cost-optimized Kubernetes applications on GKE.
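As a quick sanity check on where a GKE bill comes from, here is a back-of-envelope sketch; the $0.10/hour management fee and the one free zonal cluster per billing account come from the pricing page quoted above, while the per-node rate is a placeholder:

    HOURS_PER_MONTH = 730  # the approximation GCP uses for monthly estimates

    def gke_monthly(node_rate_per_hour: float, nodes: int,
                    free_zonal_cluster: bool = True) -> float:
        management_fee = 0.0 if free_zonal_cluster else 0.10 * HOURS_PER_MONTH
        return management_fee + node_rate_per_hour * nodes * HOURS_PER_MONTH

    def gae_flex_monthly(vm_rate_per_hour: float, instances: int) -> float:
        return vm_rate_per_hour * instances * HOURS_PER_MONTH

    rate = 0.02  # placeholder USD/hour, roughly a small machine type
    print(f"GKE, 3 nodes, free zonal cluster: ${gke_monthly(rate, 3):.2f}")
    print(f"GKE, 3 nodes, paid cluster:       ${gke_monthly(rate, 3, False):.2f}")
    print(f"GAE Flex, 3 instances:            ${gae_flex_monthly(rate, 3):.2f}")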

Are GCP CloudSQL instances billed by usage?

I'm starting a project where a Cloud SQL instance would be a great fit; however, I've noticed they are twice the price of the same specification VM on GCP.
I've been told by several DevOps guys I work with that they are billed by usage only, which would be perfect for me. However, their pricing page states "Instance pricing for MySQL is charged for every second that the instance is running".
https://cloud.google.com/sql/pricing#2nd-gen-pricing
I also see several people around the web saying they are billed by usage only.
Cloud SQL or VM Instance to host MySQL Database
Am I interpreting Google's pricing pages incorrectly?
Am I going to be billed for the instance being on, or for its usage?
Billed by usage
It all depends on what you mean by usage. When you run a Cloud SQL instance, it's like a server (Compute Engine): until you stop it, you pay for it. It's not pay-per-request pricing, as you have with BigQuery.
With Cloud SQL, you also pay for the storage that you use, and the storage can grow automatically with usage. Be careful: the storage can't be reduced, even if you delete data in the database!
The price is twice that of a similar Compute Engine instance
True! A Compute Engine n1-standard-1 is about $20 per month, and the same config on Cloud SQL is about $45.
BUT, what about the cost of managing your own SQL instance?
You have to update/patch the OS
You have to update/patch the DB engine (MySQL or Postgres)
You have to manage the security/network access
You have to perform snapshots and ensure that restoration works
You have to ensure high availability (people on call in case of server issues)
You have to tune the database parameters
You have to watch your storage and increase it when needed
You have to set up your replicas manually
Is it worth twice the price? For me, yes. It all depends on your skills and your opinion.
There are a lot of hidden configuration options, and modifying them can quickly halve your costs.
Practically speaking, GCP's SQL product only works by running 24/7; there is no time-based "by usage" option, short of manually stopping and restarting the instance.
There are a lot of tricks you can follow to lower costs; you can read many of them here: https://medium.com/@the-bumbling-developer/can-you-use-google-cloud-platform-gcp-cheaply-and-safely-86284e04b332

Which Compute Engine quotas need to be updated to run Dataflow with 50 workers (IN_USE_ADDRESSES, CPUS, CPUS_ALL_REGIONS ..)?

We are using a private GCP account and we would like to process 30 GB of data and do NLP processing using SpaCy. We wanted to use more workers, and we decided to start with a maximum number of workers of 80, as shown below. We submitted our job and ran into issues with some of the GCP standard user quotas:
QUOTA_EXCEEDED: Quota 'IN_USE_ADDRESSES' exceeded. Limit: 8.0 in region XXX
So I decided to request new quotas of 50 for IN_USE_ADDRESSES in some region (it took me a few iterations to find a region that could accept this request). We submitted a new job and got new quota issues:
QUOTA_EXCEEDED: Quota 'CPUS' exceeded. Limit: 24.0 in region XXX
QUOTA_EXCEEDED: Quota 'CPUS_ALL_REGIONS' exceeded. Limit: 32.0 globally
My question is: if I want to use, for example, 50 workers in one region, which quotas do I need to change? The doc https://cloud.google.com/dataflow/quotas doesn't seem to be up to date, since it only says "To use 10 Compute Engine instances, you'll need 10 in-use IP addresses." As you can see above, this is not enough, and other quotas need to be changed as well. Is there some doc, blog, or other post where this is documented and explained? Just for one region there are 49 Compute Engine quotas that can be changed!
I would suggest that you start using private IPs instead of public IP addresses. This would help you in two ways:
You can bypass some of the IP-address-related quotas, as they apply to public IP addresses.
You can reduce costs significantly by eliminating network egress charges, as the VMs would not be communicating with each other over the public internet. You can find more details in this excellent article [1].
To start using private IPs, please follow the instructions mentioned here [2].
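For a Python (Apache Beam) pipeline, the private-IP and subnetwork settings can be passed as pipeline options; a minimal sketch, where the project, region, bucket, and subnetwork path are placeholders:

    # pip install "apache-beam[gcp]"
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(
        runner="DataflowRunner",
        project="my-project",                # placeholder
        region="us-central1",                # placeholder
        temp_location="gs://my-bucket/tmp",  # placeholder
        use_public_ips=False,                # same as the --no_use_public_ips flag
        subnetwork=(                         # must allow Private Google Access
            "https://www.googleapis.com/compute/v1/projects/my-project/"
            "regions/us-central1/subnetworks/my-subnet"
        ),
        max_num_workers=50,
    )

    # Submitting even a trivial pipeline will launch Dataflow workers.
    with beam.Pipeline(options=options) as p:
        p | beam.Create([1, 2, 3]) | beam.Map(print)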
Apart from this, you would need to take care of the following quotas:
CPUs
You can increase the quota for a given region by setting the CPUs quota under Compute Engine appropriately.
Persistent Disk
By default each VM needs 250 GB of storage, so for 100 instances it would be around 25 TB. Please check the disk size of the workers that you are using and set the Persistent Disk quota under Compute Engine appropriately.
The default disk size is 25 GB for Cloud Dataflow Shuffle batch pipelines.
Managed Instance Groups
You would need to make sure that you have enough quota in the region, as Dataflow needs the following:
One Instance Group per Cloud Dataflow job
One Managed Instance Group per Cloud Dataflow job
One Instance Template per Cloud Dataflow job
Once you review these quotas, you should be all set to run the job.
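Putting the numbers together for the 50-worker case, a small worked example (the n1-standard-1 worker type is Dataflow's batch default and the disk sizes are the defaults mentioned above; adjust both for your actual configuration):

    workers = 50
    vcpus_per_worker = 1      # n1-standard-1; scale up for larger machine types
    disk_gb_per_worker = 250  # batch default; 25 with Dataflow Shuffle
    use_public_ips = False    # private IPs avoid the IN_USE_ADDRESSES quota

    print("CPUS (regional)       >=", workers * vcpus_per_worker)
    print("CPUS_ALL_REGIONS      >=", workers * vcpus_per_worker)
    print("IN_USE_ADDRESSES      >=", workers if use_public_ips else 0)
    print("Persistent disk (GB)  >=", workers * disk_gb_per_worker)
    print("Instance groups       >=", 1, "(one per Dataflow job)")
    print("Managed inst. groups  >=", 1)
    print("Instance templates    >=", 1)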
1 - https://medium.com/@harshithdwivedi/how-disabling-external-ips-helped-us-cut-down-over-80-of-our-cloud-dataflow-costs-259d25aebe74
2 - https://cloud.google.com/dataflow/docs/guides/specifying-networks