I have a MySQL Cloud SQL instance which, since the 6th of December, has been writing thousands of warning logs.
I am a bit concerned, but it doesn't seem to affect the service or any clients so far.
I couldn't find anything regarding the error message "Failed to capture username for connection".
MySQL error 1158 seems to indicate a "Got an error reading communication packets" error, but I don't understand why this would suddenly occur, and I have no idea how to debug this issue further.
{
"textPayload": "2023-01-23T10:05:42.110423Z 17573914 [Warning] Failed to capture username for connection: 17573914 error: 1158",
"insertId": "s=9d70fb7d5311463287e5ae02085e0f89;i=2218e52;b=4f5699407097485a96049cb0ba5273c8;m=3cb77f98bf0;t=5f2eb8ab81a97;x=cb56e73cb77e18bd-0-0#a1",
"resource": {
"type": "cloudsql_database",
"labels": {
"database_id": "PROJECT_XXX:mysql-1",
"region": "europe",
"project_id": "gp-services"
}
},
"timestamp": "2023-01-23T10:05:42.110871Z",
"severity": "WARNING",
"labels": {
"INSTANCE_UID": "2-cbdab43e-5517-455a-a57f-f5b4bd42f638",
"LOG_BUCKET_NUM": "8"
},
"logName": "projects/PROJECT_XXX/logs/cloudsql.googleapis.com%2Fmysql.err",
"receiveTimestamp": "2023-01-23T10:05:43.767560093Z"
}
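The only thing I have thought to check server-side is the aborted-connection counters and the network timeouts. This is just a sketch, assuming the standard mysql client and a user with enough privileges; the host placeholder is mine:
# Aborted_clients / Aborted_connects tend to move together with "error reading communication packets" warnings
mysql -h <INSTANCE_IP> -u root -p -e "SHOW GLOBAL STATUS LIKE 'Aborted_%';"
# net_read_timeout / net_write_timeout govern how long the server waits on a packet before giving up
mysql -h <INSTANCE_IP> -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'net_%_timeout';"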
Related
I am trying to delete an instance of Longhorn, as well as its namespace, which is stuck in the Terminating state.
I tried all three methods in the Longhorn documentation, all of which fail. I cannot uninstall Longhorn using a Helm chart because I never installed it through Helm in the first place. Uninstalling Longhorn using kubectl also fails, because job.batch/longhorn-uninstall cannot be created while the namespace longhorn-system is in the Terminating state.
Editing the CRDs and finalizers, as per the troubleshooting documentation and the following site (https://avasdream.engineer/kubernetes-longhorn-stuck-terminating), also does not fix the problem: in both cases there is no change. Using the script from https://github.com/longhorn/longhorn/blob/master/scripts/cleanup.sh also fails to remove Longhorn, as it cannot find any of the resources.
The query kubectl get namespace longhorn-system -o json gives the following results:
{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"creationTimestamp": "2022-10-31T20:09:45Z",
"deletionTimestamp": "2023-01-26T18:17:03Z",
"labels": {
"kubernetes.io/metadata.name": "longhorn-system",
"name": "longhorn-system"
},
"name": "longhorn-system",
"resourceVersion": "41929420",
"uid": "f1c78184-4613-4f9d-939d-13947ac8befa"
},
"spec": {
"finalizers": [
"kubernetes"
]
},
"status": {
"conditions": [
{
"lastTransitionTime": "2023-01-26T18:29:35Z",
"message": "All resources successfully discovered",
"reason": "ResourcesDiscovered",
"status": "False",
"type": "NamespaceDeletionDiscoveryFailure"
},
{
"lastTransitionTime": "2023-01-26T18:17:27Z",
"message": "All legacy kube types successfully parsed",
"reason": "ParsedGroupVersions",
"status": "False",
"type": "NamespaceDeletionGroupVersionParsingFailure"
},
{
"lastTransitionTime": "2023-01-26T18:17:51Z",
"message": "All content successfully deleted, may be waiting on finalization",
"reason": "ContentDeleted",
"status": "False",
"type": "NamespaceDeletionContentFailure"
},
{
"lastTransitionTime": "2023-01-26T18:17:27Z",
"message": "Some resources are remaining: engines.longhorn.io has 2 resource instances, nodes.longhorn.io has 5 resource instances, orphans.longhorn.io has 1 resource instances, replicas.longhorn.io has 4 resource instances, snapshots.longhorn.io has 3 resource instances, volumes.longhorn.io has 2 resource instances",
"reason": "SomeResourcesRemain",
"status": "True",
"type": "NamespaceContentRemaining"
},
{
"lastTransitionTime": "2023-01-26T18:17:27Z",
"message": "Some content in the namespace has finalizers remaining: longhorn.io in 17 resource instances",
"reason": "SomeFinalizersRemain",
"status": "True",
"type": "NamespaceFinalizersRemaining"
}
],
"phase": "Terminating"
}
}
The query kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n longhorn-system yields the following information.
NAME STATE NODE INSTANCEMANAGER IMAGE AGE
engine.longhorn.io/pvc-1886524a-5ba0-459d-9d51-b8044fec3057-e-344e3d26 stopped 64d
engine.longhorn.io/pvc-5aca06f9-adf1-45c8-b11d-6e79d9719001-e-79f24cb7 stopped 64d
NAME READY ALLOWSCHEDULING SCHEDULABLE AGE
node.longhorn.io/master0 False true True 86d
node.longhorn.io/master1 True true True 14d
node.longhorn.io/master2 False true True 86d
node.longhorn.io/worker0 False true True 70d
node.longhorn.io/worker1 True true True 70d
NAME TYPE NODE
orphan.longhorn.io/orphan-010ee0d16422c151e7e039e27fe2306815361596fa3f8b6cccc8a601b673e429 replica master0
NAME STATE NODE DISK INSTANCEMANAGER IMAGE AGE
replica.longhorn.io/pvc-1886524a-5ba0-459d-9d51-b8044fec3057-r-89dfabab stopped master2 c5a7e70d-09d8-43a2-9ba3-d5b65eb12b34 13d
replica.longhorn.io/pvc-1886524a-5ba0-459d-9d51-b8044fec3057-r-a6881548 running worker1 2d7f16e8-f11b-40e8-8935-7f0559f7674e instance-manager-r-8ccf914f longhornio/longhorn-engine:v1.3.2 16d
replica.longhorn.io/pvc-5aca06f9-adf1-45c8-b11d-6e79d9719001-r-52b5a290 stopped worker1 2d7f16e8-f11b-40e8-8935-7f0559f7674e 31d
replica.longhorn.io/pvc-5aca06f9-adf1-45c8-b11d-6e79d9719001-r-8f0ae6c9 running master2 c5a7e70d-09d8-43a2-9ba3-d5b65eb12b34 instance-manager-r-672003dc longhornio/longhorn-engine:v1.3.2 13d
NAME VOLUME CREATIONTIME READYTOUSE RESTORESIZE SIZE AGE
snapshot.longhorn.io/887f9621-5417-40b3-8999-c2695d5585d7 pvc-1886524a-5ba0-459d-9d51-b8044fec3057 2023-01-12T21:07:46Z false 10737418240 312860672 13d
snapshot.longhorn.io/8f11c48b-da51-4124-80b8-1316db88eb01 pvc-5aca06f9-adf1-45c8-b11d-6e79d9719001 2023-01-12T21:21:54Z false 21474836480 20096512000 13d
snapshot.longhorn.io/b4c31fe7-5ff5-4881-9cd8-b22fc73798bb pvc-5aca06f9-adf1-45c8-b11d-6e79d9719001 2023-01-12T21:38:23Z true 21474836480 102400 13d
NAME STATE ROBUSTNESS SCHEDULED SIZE NODE AGE
volume.longhorn.io/pvc-1886524a-5ba0-459d-9d51-b8044fec3057 attaching unknown 10737418240 master0 64d
volume.longhorn.io/pvc-5aca06f9-adf1-45c8-b11d-6e79d9719001 detaching unknown 21474836480 64d
Attempting to manually delete any of the items described also failed. All APIServices show AVAILABLE as True.
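For completeness, the manual finalizer removal I attempted was along these lines (a sketch only; the resource kinds are the ones listed in the namespace status above, and stripping finalizers this way can leave orphaned Longhorn data behind):
# Strip the longhorn.io finalizer from every remaining custom resource in the namespace
for kind in engines.longhorn.io nodes.longhorn.io orphans.longhorn.io replicas.longhorn.io snapshots.longhorn.io volumes.longhorn.io; do
  for res in $(kubectl get "$kind" -n longhorn-system -o name); do
    kubectl patch "$res" -n longhorn-system --type=merge -p '{"metadata":{"finalizers":null}}'
  done
done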
What can I do to resolve this problem? I will provide any further information needed.
Continuous failed connection attempt errors are occurring in Google Cloud MySQL, accessed from Google App Engine over public IP.
These are some of the logs:
receiveTimestamp resource.labels.module_id resource.labels.project_id resource.labels.version_id resource.labels.zone resource.type severity textPayload timestamp
2021-06-08T05:48:43.497385728Z zzzzzzzzzz xxxxxxx 4 eu5 gae_app ERROR Throttling refreshCfg(xxxxxxx:europe-west1:yyyyyyyyy): it was only called 80.802µs ago 2021-06-08T05:48:43.494284Z
2021-06-08T05:19:08.394840567Z zzzzzzzzzz xxxxxxx 4 eu5 gae_app ERROR Throttling refreshCfg(xxxxxxx:europe-west1:yyyyyyyyy): it was only called 42.519µs ago 2021-06-08T05:19:08.391909Z
2021-06-08T05:13:42.889911567Z zzzzzzzzzz xxxxxxx 4 eu5 gae_app ERROR Throttling refreshCfg(xxxxxxx:europe-west1:yyyyyyyyy): it was only called 73.279µs ago 2021-06-08T05:13:42.888659Z
2021-06-08T04:47:07.470804269Z zzzzzzzzzz xxxxxxx 4 eu5 gae_app ERROR Throttling refreshCfg(xxxxxxx:europe-west1:yyyyyyyyy): it was only called 85.928µs ago 2021-06-08T04:47:07.467377Z
I tried several different configurations of max_connections, pool_size, and pool_timeout with no success.
I have consulted this previous issue, and this documentation.
Some help would be appreciated.
More information: the error is always preceded by this warning in the log record:
"protoPayload": {
"#type": "type.googleapis.com/google.cloud.audit.AuditLog",
"status": {},
"authenticationInfo": {
"principalEmail": "bbbbbbbb#appspot.gserviceaccount.com",
"serviceAccountDelegationInfo": [
{
"firstPartyPrincipal": {
"principalEmail": "app-engine-appserver#prod.google.com"
}
}
]
},
"requestMetadata": {
"callerIp": "2600:1900:2001:12::8",
"requestAttributes": {
"time": "2021-06-09T05:59:27.400680Z",
"auth": {}
},
"destinationAttributes": {}
},
"serviceName": "cloudsql.googleapis.com",
"methodName": "cloudsql.instances.connect",
"authorizationInfo": [
{
"resource": "instances/aaaaaaaaaaa",
"permission": "cloudsql.instances.connect",
"granted": true,
"resourceAttributes": {}
}
],
"resourceName": "instances/aaaaaaaaa",
"request": {
"project": "bbbbbbb",
"#type": "type.googleapis.com/google.cloud.sql.v1beta4.SqlInstancesCreateEphemeralCertRequest",
"instance": "zzzzzzzzz",
"body": {}
},
"response": {
"#type": "type.googleapis.com/google.cloud.sql.v1beta4.SslCert",
"kind": "sql#sslCert"
}
},
"insertId": "-rgtsssssssssss",
"resource": {
"type": "cloudsql_database",
"labels": {
"region": "europe-west1",
"project_id": "bbbbbbbb",
"database_id": "aaaaaaaaaaaaaaa"
}
},
"timestamp": "2021-06-09T05:59:27.381352Z",
"severity": "NOTICE",
"logName":
"projects/demosmf/logs/cloudaudit.googleapis.com%2Factivity",
"receiveTimestamp": "2021-06-09T05:59:27.746071609Z"
I think it has something to do with the management of SSL certificates.
I have verified that the application certificates are valid and have not expired.
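For reference, this is roughly how I checked the certificates on the instance; a sketch, assuming the gcloud CLI, with the instance name as a placeholder:
# List the client SSL certs registered on the Cloud SQL instance, including expiry dates
gcloud sql ssl-certs list --instance=INSTANCE_NAME
# Show when the instance's server CA certificate expires
gcloud sql instances describe INSTANCE_NAME --format="value(serverCaCert.expirationTime)"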
This error has been reported via Google's Public Issue Tracker.
You can follow the thread I mentioned above to track the progress.
MySQL localhost:3310 ssl JS > cluster.status();
{
"clusterName": "testCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "127.0.0.1:3320",
"ssl": "REQUIRED",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"127.0.0.1:3310": {
"address": "127.0.0.1:3310",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"127.0.0.1:3320": {
"address": "127.0.0.1:3320",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"127.0.0.1:3330": {
"address": "127.0.0.1:3330",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "127.0.0.1:3320"
}
I am learning how to build a MySQL InnoDB Cluster. After reading the documentation, I built a cluster in sandbox deployment mode by following it. The output above shows the state of the cluster I built. When I followed the documentation for MySQL Router, this error occurred:
root@VM-133-145-debian:~# mysqlrouter --bootstrap root@localhost:3320 --user=mysqlrouter
Please enter MySQL password for root:
# Bootstrapping system MySQL Router instance...
Executing statements failed with: 'Error executing MySQL query "INSERT INTO mysql_innodb_cluster_metadata.v2_routers (address, product_name, router_name) VALUES ('\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0', 'MySQL Router', 'system')": Data too long for column 'address' at row 1 (1406)' (1406), trying to connect to another node
Error: Error executing MySQL query "INSERT INTO mysql_innodb_cluster_metadata.v2_routers (address, product_name, router_name) VALUES ('\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0', 'MySQL Router', 'system')": Data too long for column 'address' at row 1 (1406)
Is the value of the address field wrong? Why does this happen, and how can I fix it?
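In case it matters, the column the INSERT fails on can be inspected like this (a sketch, assuming the mysql client and using the primary's port from the cluster status above):
# Show the definition of the metadata table, including the type and length of the 'address' column
mysql -h 127.0.0.1 -P 3320 -u root -p -e "SHOW CREATE TABLE mysql_innodb_cluster_metadata.v2_routers\G"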
Thanks a lot.
Same issue here; it was fixed by downgrading mysqlrouter from 8.0.22 to 8.0.21.
My cluster status shows a version field that yours does not:
"XXX-INNODB3:3306": {
"address": "XXX-INNODB3:3306",
"mode": "R/O",
"readReplicas": {},
"replicationLag": null,
"role": "HA",
"status": "ONLINE",
"version": "8.0.21"
}
It looks like the server and the client should be on the same version:
Server version: 8.0.21-commercial MySQL Enterprise Server - Commercial
MySQL Router Ver 8.0.21-commercial for Linux on x86_64 (MySQL Enterprise - Commercial)
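To compare the versions on your own setup, a quick check would be something like this (a sketch, assuming both binaries are on the PATH and the primary listens on 3320):
# Version of the router binary
mysqlrouter --version
# Version reported by the primary server
mysql -h 127.0.0.1 -P 3320 -u root -p -e "SELECT VERSION();"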
Hope it helps ^^
I'm trying to create a custom image using the Console or gcloud, i.e. (note: URLs are masked with [naURL] to get around a URL posting limitation):
$ gcloud compute images create debian-9-1-v20170810-1 --source-uri=https:[naURL]//storage.googleapis.com/[my bucket]/compressed-image.tar.gz
The command runs for about half an hour (i.e. 1800s) and then gives the following error:
ERROR: (gcloud.compute.images.create) Could not fetch resource:
- Did not create the following resources within 1800s: https:[naURL]//www.googleapis.com/compute/v1/projects/spherical-proxy-175708/global/images/debian-9-1-v20170810-1. These operations may still be underway remotely and may still succeed; use gcloud list and describe commands or https:[naURL]//console.developers.google.com/ to check resource state
The corresponding REST record for the item, accessible via Console > Compute Engine > Operations, reads:
{
"kind": "compute#operation",
"id": "5371669110648613089",
"name": "operation-1502320142020-5565a2a676da1-06d76920-4c86568d",
"operationType": "insert",
"targetLink": "https:[naURL]//www.googleapis.com/compute/v1/projects/spherical-proxy-175708/global/images/debian-9-1-v20170810-1",
"targetId": "3098231522309056737",
"status": "DONE",
"user": "andree.leidenfrost#gmail.com",
"progress": 100,
"insertTime": "2017-08-09T16:09:02.501-07:00",
"startTime": "2017-08-09T16:09:03.136-07:00",
"endTime": "2017-08-09T16:59:29.128-07:00",
"error": {
"errors": [
{
"code": "INTERNAL_ERROR",
"message": "Code: '8625676013601614622'"
}
]
},
"httpErrorStatusCode": 503,
"httpErrorMessage": "SERVICE UNAVAILABLE",
"selfLink": "https:[naURL]//www.googleapis.com/compute/v1/projects/spherical-proxy-175708/global/operations/operation-1502320142020-5565a2a676da1-06d76920-4c86568d"
}
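As the error text suggests, the operation and the target image can be re-queried directly (a sketch; the operation and image names are taken from the record above):
# Re-check the state of the global image-insert operation
gcloud compute operations describe operation-1502320142020-5565a2a676da1-06d76920-4c86568d --global
# Check whether the image resource ever came into existence
gcloud compute images describe debian-9-1-v20170810-1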
Because the HTTP error is 503 "SERVICE UNAVAILABLE", I've tried a few times over the last couple of days, but the problem persists.
I'm trying to follow the instructions in the document Importing Boot Disk Images to Compute Engine.
Any help or hints on how to debug further would be greatly appreciated!
I have been writing web services for a while now; below are a sample error response and a sample success response.
error
{
"code": 1150,
"status": false,
"message": "API Student does not exist.",
"serverTime": "2013-11-29 09:47:52"
}
success
{
"code": 200,
"status": true,
"data": {
"id": 49
},
"serverTime": "2014-04-17 05:06:17"
}
With regard to returning errors, I am confused: why do we always return a single error code and a single message? For example, when a username and password are required as input params and a blank request is made, I return error code 1100 with the message "Username is incorrect". I never return the whole list of errors, yet in this case two error messages with two error codes could be sent, which would save end users data and time.
This is a sample of what I suggest:
{
"code": 1010,
"status": false,
"errors": [
{
"code": 1000,
"name": "Username invalid"
},
{
"code": 1001,
"name": "Password invalid"
},
{
"code": 1002,
"name": "Password not strong"
}
],
"serverTime": "2013-12-03 12:34:02"
}
Why is this not a good way to do it? I have not seen it in either the Twitter API or the Facebook API.
Nobody can stop API designers from returning multiple possible error responses. The decision is up to them, but in my opinion you should return the most relevant error that occurred on the server side.
There can be many reasons for this, such as:
Unified error handling by all clients.
The most relevant error is more useful to the client, who then does not have to handle every possible error cause.
Simplicity of the JSON structure (the error array is avoided).