Intermittent ping timeouts on GCE instances - google-compute-engine

For the past 3 days, it seems I have had network issues on my Google Compute Engine instances. I have investigated every single part of my architecture, and once a request reaches my instances, everything responds lightning fast.
But still, from time to time, requests are stuck for 2 minutes.
I tried executing a long-running ping to one of my instances. Here is the result:
64 bytes from 107.178.xxx.xxx: icmp_seq=252 ttl=45 time=65.890 ms
64 bytes from 107.178.xxx.xxx: icmp_seq=253 ttl=45 time=83.041 ms
Request timeout for icmp_seq 254
Request timeout for icmp_seq 255
Request timeout for icmp_seq 256
64 bytes from 107.178.xxx.xxx: icmp_seq=257 ttl=45 time=65.925 ms
Request timeout for icmp_seq 258
64 bytes from 107.178.xxx.xxx: icmp_seq=259 ttl=45 time=63.801 ms
64 bytes from 107.178.xxx.xxx: icmp_seq=260 ttl=45 time=65.046 ms
Request timeout for icmp_seq 261
Request timeout for icmp_seq 262
Request timeout for icmp_seq 263
Request timeout for icmp_seq 264
Request timeout for icmp_seq 265
64 bytes from 107.178.xxx.xxx: icmp_seq=266 ttl=45 time=75.441 ms
Request timeout for icmp_seq 267
64 bytes from 107.178.xxx.xxx: icmp_seq=268 ttl=45 time=67.201 ms
Request timeout for icmp_seq 269
Request timeout for icmp_seq 270
Request timeout for icmp_seq 271
Request timeout for icmp_seq 272
Request timeout for icmp_seq 273
Request timeout for icmp_seq 274
64 bytes from 107.178.xxx.xxx: icmp_seq=275 ttl=45 time=70.824 ms
Request timeout for icmp_seq 276
64 bytes from 107.178.xxx.xxx: icmp_seq=277 ttl=45 time=81.096 ms
Request timeout for icmp_seq 278
Request timeout for icmp_seq 279
Request timeout for icmp_seq 280
Request timeout for icmp_seq 281
64 bytes from 107.178.xxx.xxx: icmp_seq=282 ttl=45 time=61.271 ms
That does not look good to me.
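To narrow down where the loss occurs, a per-hop report should help; this is the kind of check I can run next (a sketch, assuming mtr is installed; the target IP stays redacted as above):
# Per-hop loss/latency report over 100 probes; shows whether the drops
# happen near my ISP, somewhere in transit, or at Google's edge.
mtr --report --report-cycles 100 107.178.xxx.xxx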
Is there any way to contact Google when you have a problem with your instances? I just learned that by default I have "bronze" support, which actually means no support at all, as I cannot contact anybody. I apparently need to pay $150/month to get someone to answer my questions!
I am quite surprised to pay for a service and have nobody on the other side when it is not working properly. Am I missing something? Or should I just wait and pray?

Related

Unable to establish SSL connection upon wget on Ubuntu 20.04 LTS

I am using browserSetup.sh to install the UCSC Genome Browser in the Cloud (GBiC) program, and I am getting this error:
--2021-08-26 13:43:10-- https://raw.githubusercontent.com/paulfitz/mysql-connector-c/master/include/my_config.h
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.110.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.
Unable to establish SSL connection.
I checked this script; the error may come from this line:
wget https://raw.githubusercontent.com/paulfitz/mysql-connector-c/master/include/my_config.h -P /usr/include/mysql/
I ran this line on its own and got the same error, so I want to understand why wget is unable to establish an SSL connection.
When I ping raw.githubusercontent.com, it returns:
64 bytes from cdn-185-199-109-133.github.com (185.199.109.133): icmp_seq=1 ttl=45 time=87.8 ms
64 bytes from cdn-185-199-109-133.github.com (185.199.109.133): icmp_seq=2 ttl=45 time=57.9 ms
64 bytes from cdn-185-199-109-133.github.com (185.199.109.133): icmp_seq=3 ttl=45 time=57.9 ms
I also tried the --no-check-certificate parameter, but that still does not solve the error. How do I solve this problem?
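To separate plain connectivity problems from TLS problems, the handshake can be tested directly, independently of wget; a minimal sketch, assuming the openssl CLI is installed:
# Attempt a TLS handshake with the host, with SNI set explicitly.
# A working handshake prints the certificate chain and a "Verify return code".
openssl s_client -connect raw.githubusercontent.com:443 -servername raw.githubusercontent.com </dev/null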

How can I optimize my Google Cloud SQL (MySQL) database for use with an API

I created a MySQL database on the Google Cloud Platform.
Machine type is db-n1-standard-2 with 2 vCPUs and 7.5 GB Memory.
Network throughput (MB/s) is 500 of 2000
Storage type: SSD
Disk throughput (MB/s)
Read: 4.8
Write: 4.8
IOPS
Read: 300
Write: 300
Availability: High availability
Database Flags:
max_connections: 500
I created an API with Laravel Lumen and deployed it to Google App Engine:
runtime: php72
instance_class: F2
automatic_scaling:
min_instances: 1
max_instances: 20
target_cpu_utilization: 0.7
max_concurrent_requests: 80
target_throughput_utilization: 0.8
If I send a request to my API with Postman, the first response takes 1123 ms. The size of the response is 8.59 KB.
If I send the same request with loader.io with 250 clients over 1 minute, the test aborted because it reached the error threshold:
79.5% error rate
avg response time: 9141 ms
min/max response time: 2081/10376 ms
response counts, success: 104
response counts, timeout: 403
When I have a look at the MySQL error log, I see an impossibly large number of errors like this:
2020-01-07 16:29:18.670 CET
2020-01-07T15:29:18.670275Z 1507 [Note] Aborted connection 1507 to db: 'mydatabasename' user: 'mydatabaseuser' host: 'cloudsqlproxy~172.217.35.158' (Got an error reading communication packets)
Does someone have an idea how I can solve this problem?
I've investigated this error a little and found some useful guides on how to diagnose these types of errors at this link. I believe that as a first step we would need to find the real cause of this message (it could be due to various reasons, according to the link shared). Some suggestions that I noticed repeated in other posts and in the same link are the following:
Check to make sure the value of max_allowed_packet is high enough (this can be modified with flags in Cloud SQL).
The client connected successfully but terminated improperly
The client slept for longer than the defined wait_timeout or interactive_timeout seconds
What I would do is go ahead and try tweaking the database flags as described in the public documentation on the Google page, and check how the behavior changes.
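For example, the flags can be set from the command line with gcloud (a sketch; the instance name my-sql-instance and the flag values are placeholders, not recommendations):
# Raise the packet size limit and idle timeouts; values are illustrative only.
# WARNING: --database-flags replaces ALL flags previously set on the instance,
# so existing flags such as max_connections=500 must be repeated here.
gcloud sql instances patch my-sql-instance --database-flags=max_connections=500,max_allowed_packet=134217728,wait_timeout=600,interactive_timeout=600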
Please let us know if you find something useful when tweaking the instance.

Apache Mysql MaxClients

I have a system running on CentOS + Apache + MySQL with the "Bitrix" CMS (a local Russian CMS).
Sometimes the system becomes CPU overloaded and Apache logs the message "MaxClients reached". After restarting the httpd daemon, CPU utilization becomes OK again, around 20-30%.
At first I thought it was memory leaks, but then I reduced MySQL's memory usage and now I have a minimum of about 2 GB of free RAM.
My apache settings:
# prefork MPM: one single-threaded process per connection (typical with mod_php)
<IfModule mpm_prefork_module>
StartServers 32
MinSpareServers 32
MaxSpareServers 32
MaxClients 64
MaxRequestsPerChild 5000
</IfModule>
# worker MPM: multi-threaded; only one MPM is active in a given Apache build
<IfModule worker.c>
StartServers 2
MaxClients 300
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 2500
</IfModule>
To my shame, I did not realize the difference between these modules.
My system config: 16 GB RAM; one httpd process eats 50-150 MB (see the rough measurement sketch below); max 200-250 users online at one moment.
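A rough way to measure the average httpd process size and to sanity-check MaxClients against the RAM you can spare for Apache (a sketch; assumes the processes are named httpd):
# Average resident set size (MB) across all httpd processes.
ps -C httpd -o rss= | awk '{sum+=$1; n++} END {printf "avg %.0f MB over %d processes\n", sum/n/1024, n}'
# Rule of thumb: MaxClients <= (RAM available for Apache) / (average process size).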
Why does the CPU become overloaded when connections reach the MaxClients value?
What am I doing wrong?
Thanks!
(htop screenshot taken when the CPU overloads and MaxClients is reached)

how to store the ping statistics after giving ctrl+c in a text file

mininet> h1 ping h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=15.2 ms
64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=5.43 ms
64 bytes from 10.0.0.2: icmp_req=3 ttl=64 time=0.243 ms
^C64 bytes from 10.0.0.2: icmp_req=13 ttl=64 time=0.216 ms
--- 10.0.0.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 12016ms
rtt min/avg/max/mdev = 0.136/1.809/15.257/4.119 ms
mininet>
I want to capture the number of packets transmitted, the packets received, and the packet loss percentage in a text file.
How do I do this? Please help me.
You just have to type: ping <ip> >> out.txt
When you press Ctrl+C, ping writes the statistics summary to the file along with the per-packet lines. This will help you.
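If only the summary fields are needed, they can be extracted from the statistics line afterwards; a minimal sketch, assuming GNU ping's output format and the host 10.0.0.2 from above:
# Run a fixed-count ping so it terminates on its own, keeping the full log.
ping -c 10 10.0.0.2 | tee out.txt
# The statistics line is comma-separated: print transmitted, received, and loss.
grep 'packets transmitted' out.txt | awk -F', ' '{print $1 " | " $2 " | " $3}'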

Ubuntu 10.10 - Memory Issue with MySQL

All of a sudden, my database server is running at around 98% memory allocation (I have a 16 GB box running only a MySQL instance).
Here is what is displayed when I do a free -m:
             total       used       free     shared    buffers     cached
Mem:         15498      14565        932          0         76       8081
-/+ buffers/cache:       6408       9089
Swap:        31743          0      31743
I've already rebooted the machine; it's running on a very high-availability server. MySQL claims that it's running 562 queries per second.
Total    ø per hour    ø per minute    ø per second
22 M     2.03 M        33.77 k         562.90
Is this normal?
There is nothing wrong with those memory stats. The "-/+ buffers/cache" line shows about 9 GB effectively free, and no swap is used; Linux simply keeps otherwise-idle RAM filled with disk cache and releases it as soon as applications need it.
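As a quick sanity check, the same conclusion can be read from the numbers above with a one-liner (a sketch against the classic free -m column layout, where effective free memory is free + buffers + cached):
# Columns of old procps free -m output: total used free shared buffers cached.
# Effectively free = free + buffers + cached = the "-/+ buffers/cache" free value.
free -m | awk '/^Mem:/ {print "effectively free:", $4 + $6 + $7, "MB"}'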