How do I know which resources I am using in Oracle Cloud after I deleted the instance?

I had created one instance in Oracle Cloud with an Ampere CPU, 4 cores and 24 GB RAM. It allowed me to do so as it is within the Always Free Tier. I had to delete that instance as it was not working OK. While deleting, I also chose "Delete boot volume". Now when I want to create another instance with the same CPU, after three cores it gives the message "Service limit reached". Why is this happening, and how do I overcome it?
I waited for 48 hours so that all resources would be released. My dashboard shows "No resources found. Create a resource, or try another compartment." However, it still does not allow me beyond 3 cores.
Regards.
Dutta

Related

Cannot find the Always Free Eligible VM Instance when creating it

I wanted to create an Always Free Eligible VM instance (VM.Standard.E2.1.Micro) on Oracle Cloud, but it's not in my list.
And when I check my limit for VM.Standard.E2.1.Micro under "Governance > Limits, Quotas and Usage", it says 0.
How can I create one? My Home Region is Canada Southeast (Montreal), ca-montreal-1.
My account's trial is not over yet. Should I wait till my trial is over to create it?
As per the Always Free website, at any time you can have up to the following:
Two Oracle Autonomous Databases with powerful tools like Oracle Application Express (APEX) and Oracle SQL Developer
Two Oracle Cloud Infrastructure Compute VMs; Block, Object, and Archive Storage; Load Balancer and data egress; Monitoring and Notifications
If you are already at capacity for these, then you would not be able to add an additional one. Further details of Always Free resources can be found here - https://docs.oracle.com/en-us/iaas/Content/FreeTier/resourceref.htm
The Always Free tier provides you with the following:
2 Compute virtual machines with 1/8 OCPU and 1 GB memory each.
2 Block Volumes, 100 GB total storage.
10 GB Object Storage.
10 GB Archive Storage.
Resource Manager: managed Terraform.
Focus on the specs of the free one
VM.Standard.E2.1.Micro is not available for ca-montreal-1 at this time (January 2021).
I created a new account in the Ashburn region where VM.Standard.E2.1.Micro is available.

AWS RDS MySQL innodb/btr_search_latch

I am running MySQL 5.7.24 on AWS RDS. I have an InnoDB table that works fine under normal traffic, but when I send push notifications to 50k users the problem happens.
The server has 32 GB RAM and 8 vCPUs; my AWS RDS instance class is db.m5.2xlarge.
The wait/synch/sxlock/innodb/btr_search_latch event consumes more resources than wait/io/table/sql/handler, as shown in the image below.
innodb_adaptive_hash_index is currently enabled.
You're trying to send 50,000 push notifications in five minutes?
50,000 / 300 seconds means you're pushing about 167 notifications per second, and I assume you are then updating the database to record the result of each push. You are probably doing this in many concurrent threads so you can do the pushes in parallel.
Have you considered doing these push notifications more gradually, like over 10 or 15 minutes?
Or updating the database in batches?
Or using fewer threads to avoid the high contention on the database?
I used to work for SchoolMessenger, a company that provides notification services for the majority of public schools in the USA. We sent millions of notifications, SMS messages, and phone calls every day. The way we did it was to have a very complex Java application queue up the notifications and then post them gradually. Then, as the results of the pushes came in, these were also queued up, and the database was updated gradually.
We used MySQL, but we also used it together with ActiveMQ as a persistent queue. Push all the tasks to be done into the queue, then a pool of worker threads would act on the tasks, and push the results back into another queue. Then a result-reading thread would read batches of results from the queue and update the database in bulk updates.
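A minimal sketch of that queue-and-batch pattern, using a plain in-memory BlockingQueue in place of ActiveMQ (this is not our original code; the JDBC URL, table name, and column names are made up for illustration):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class PushPipeline {
        // Push results wait here instead of being written to MySQL one row at a time.
        static final BlockingQueue<long[]> results = new ArrayBlockingQueue<>(10_000);

        public static void main(String[] args) throws Exception {
            int workers = 8;
            // A pool of workers sends pushes in parallel and enqueues {userId, status}.
            for (int i = 0; i < workers; i++) {
                final int w = i;
                new Thread(() -> pushWorker(w, workers)).start();
            }
            // A single reader drains the queue and updates the database in batches.
            batchWriter();
        }

        static void pushWorker(int worker, int workers) {
            try {
                // Each worker takes every Nth user, so the 50k pushes run in parallel.
                for (long userId = worker; userId < 50_000; userId += workers) {
                    // ... call the push provider for userId here ...
                    results.put(new long[]{userId, 200}); // queue the outcome, don't touch MySQL
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        static void batchWriter() throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://db-host/notifications", "app", "secret");
            PreparedStatement ps = conn.prepareStatement(
                    "UPDATE push_log SET status = ? WHERE user_id = ?");
            List<long[]> batch = new ArrayList<>();
            while (true) {
                batch.clear();
                batch.add(results.take());       // block until at least one result arrives
                results.drainTo(batch, 499);     // then grab up to 500 in total
                for (long[] r : batch) {
                    ps.setLong(1, r[1]);
                    ps.setLong(2, r[0]);
                    ps.addBatch();
                }
                ps.executeBatch();               // one round trip instead of up to 500
            }
        }
    }

In the real system the queue was persistent (ActiveMQ), so queued work and results survived restarts, but the batching idea is the same: the database sees a few bulk updates instead of tens of thousands of single-row writes.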
When you are designing a back-end system to do large-scale work, you have to think of new ways to architect your application to avoid choke points.
As a database performance and scaling consultant, I have observed this rule many times:
For every 10x growth in data or traffic, you should reevaluate your software architecture. You may have to redesign some parts of it to work at the larger scale.

Couchbase createIndex command times out

I have a Couchbase bucket consisting of ~110 million documents occupying ~58 GB of disk space. The allocated Dynamic RAM Quota of the bucket is 48.8 GB. The Index RAM Quota for the cluster is ~36 GB. I'm trying to build a secondary index on the bucket using GSI.
The query to create the index runs for ~2 minutes and then returns the error GSI CreateIndex() - cause: Request Timeout. I'm also getting the following warning from the web UI: Approaching full Indexer RAM warning. Usage of Indexer RAM on node "127.0.0.1" is around 2669%. This is above the threshold of 75%.
Is there some way I can increase the timeout period for the query? Also, since the query only runs for about 2 minutes before timing out, does that have something to do with the RAM warning, i.e. an increased hardware requirement?
I created the Query Workbench in the Couchbase 4.5 UI. Are you using the 4.5DP version?
There is indeed a timeout on queries issued from the UI. It should be set to 5 minutes; are you sure about the 2 minutes you are reporting, or could it be 5? If it really is 2 minutes, there could well be a bug.
Please note, however, that index creation continues after this timeout. If you go to the indexes tab, you should see that the index continues to build. So it shouldn't be a problem that the Query Workbench timed out. (I believe we fixed the error message to indicate this in a later version.)
If the indexes tab does not show the index continuing to build, that is very possibly a bug, if so please provide more details about which version you are using.
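If you'd rather check the index state outside the UI, you can also query system:indexes yourself. A minimal sketch, assuming the 2.x Java SDK of that era (the node address and bucket name are placeholders):

    import com.couchbase.client.java.Bucket;
    import com.couchbase.client.java.Cluster;
    import com.couchbase.client.java.CouchbaseCluster;
    import com.couchbase.client.java.query.N1qlQuery;
    import com.couchbase.client.java.query.N1qlQueryRow;

    public class IndexStateCheck {
        public static void main(String[] args) {
            Cluster cluster = CouchbaseCluster.create("127.0.0.1");
            Bucket bucket = cluster.openBucket("mybucket");
            // system:indexes lists every index with its current build state.
            for (N1qlQueryRow row : bucket.query(N1qlQuery.simple(
                    "SELECT name, state FROM system:indexes WHERE keyspace_id = 'mybucket'"))) {
                System.out.println(row.value());
            }
            cluster.disconnect();
        }
    }

An index that is still being built should show a state other than "online" there.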
In general, the entire UI will log you out after 10 minutes of inactivity, so the workbench isn't the right place for long-running queries. The right tool for long-running queries is the 'cbq' command-line tool, which does not have a time limit.
Regarding the messages about the "Indexer RAM Warning": that is completely unrelated to the timeout in the Query Workbench. You can stop these messages by increasing the amount of RAM given to the indexer in the Settings -> Cluster tab.

Google VM Instance becomes unhealthy on its own

I have been using Google Cloud for quite some time and everything works fine. I was using a single VM instance to host both the website and the MySQL database.
Recently, I decided to move the website to autoscaling so that on days when traffic increases, the website doesn't go down.
So I moved the database to Cloud SQL and created a VM group to host the PHP, HTML, and image files. Then I set up a load balancer to distribute traffic across the VM instances in the group.
The problem is that the backend service (the VM group behind the load balancer) becomes unhealthy on its own after working fine for 5-6 hours, and then becomes healthy again after 10-15 minutes. I have also seen that the problem can occur when I run a fairly long script with many MySQL queries.
I checked the health check and it was returning a 200 response. During the 10-15 minute down period, the VM instance is still accessible via its own IP address.
Everything else is the same; I have only added a load balancer in front of the VM instance, and the problem started.
Can anybody help me troubleshoot this problem?
It sounds like your server is timing out (blocking?) on the health check during the times the load balancer reports it as down. A few things you can check:
Your access logs (I'm presuming you're using Apache?) should record the request duration along with the status; see the log format sketch after these suggestions. The default health check timeout is 5s, so if your health check is returning a 200 in 6s, the health checker will time out after 5s and treat the host as down.
You mention that a heavy mysql load can cause the problem. Have you looked at disk I/O statistics and CPU to make sure that this isn't a load-related problem? If this is CPU or load related, you might look at increasing either CPU or disk size, or moving your disk from spindle-backed to SSD-backed storage.
Have you checked that you have sufficient threads available? Ideally, your health check would run fairly quickly, but it might be delayed (for example) if you have 3 threads and all three are busy running some other PHP script that's waiting on the database.
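On the logging point above: Apache's default combined log format does not include the request duration, so you would need to add %D (time taken, in microseconds) to a custom format. A minimal sketch of what that could look like (the format name and log path are arbitrary):

    LogFormat "%h %l %u %t \"%r\" %>s %b %D" combined_duration
    CustomLog /var/log/apache2/access.log combined_duration

With that in place you can see directly whether any health check requests ever take longer than the checker's timeout.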

mySQL "Too many connections" error influenced by number of mongrel instances?

Recently I have started getting MySQL "Too many connections" errors at times of high traffic. My Rails app runs on a Mongrel cluster with 2 instances on a shared host. Some recent changes that might be driving it:
Traffic to my site has increased; I am now averaging about 4K pages a day.
Database size has increased; my largest table has ~100K rows.
Some associations could return several hundred instances in the worst case, though most return far fewer.
I have added some features that increased the number and size of database calls in some actions.
I have done a code review to reduce database calls, optimize SQL queries, add missing indexes, and use :include for eager loading. However, many of my methods still make 5-10 separate SQL calls. Most of my actions have a response time of around 100ms, but one of my most common actions averages 300-400ms, and some actions randomly peak at over 1000ms.
The logs are of little help, as the errors seem to occur randomly, or at least the pattern does not appear related to the actions being called or data being accessed.
Could I alleviate the error by adding additional Mongrel instances? Or are the MySQL connections limited by the server, and thus unrelated to the number of processes I divide my traffic across?
Is this most likely a problem with my coding, or should I be pressing my host for more capacity/less load on the shared server?
ActiveRecord has pooled database connections since Rails 2.2, and it's likely that that's what's causing your excess connections here. Try turning down the value of pool in your database.yml for that environment (it defaults to 5).
Docs can be found here.
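As a sketch, the relevant entry in config/database.yml could look something like this (the adapter and database name below are placeholders for whatever you already have; only the pool line matters here):

    production:
      adapter: mysql
      database: myapp_production
      pool: 2

Each Mongrel process gets its own pool, so the worst case is roughly pool × number of Mongrels connections; keeping the pool small helps stay under a shared host's connection limit.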
Are you caching anything? It's an important part of alleviating application and database load. The Rails Guides have a section on caching.
Something is wrong. A Mongrel instance processes 1 request at a time, so if you have 2 Mongrel instances then you should not be seeing more than 2 active MySQL connections (from the Mongrels, at least).
You could log or graph the output of SHOW STATUS LIKE 'Threads_connected' over time.
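For example, a tiny standalone poller, independent of the Rails stack, that samples the counter every few seconds. A minimal sketch in Java/JDBC; the connection URL and credentials are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ConnectionPoller {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/mysql", "monitor", "secret");
            Statement stmt = conn.createStatement();
            while (true) {
                ResultSet rs = stmt.executeQuery("SHOW STATUS LIKE 'Threads_connected'");
                if (rs.next()) {
                    // Row is (Variable_name, Value); print a timestamped sample.
                    System.out.println(System.currentTimeMillis() + " " + rs.getString(2));
                }
                rs.close();
                Thread.sleep(5000); // sample every 5 seconds
            }
        }
    }

Graphing that output across a traffic spike will tell you whether the connection count tracks the number of Mongrels or something else (cron jobs, console sessions, leaked connections).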
PS: this is not very many Mongrels. If you want to be able to service more than 2 simultaneous requests, you'll want more. If memory is tight, you can switch to Phusion Passenger and REE.