Is there a way to get more detail about unsupported events in Google Cloud Datastream?
I am running a Datastream stream from MySQL and have a few UNSUPPORTED_EVENTS_DISCARDED events, and I would like to understand what these events are.
In the Logs Explorer the detail is limited to something like the following:
message: "Discarded 1 unsupported events with reason code: MYSQL_UNKNOWN_ERROR. Latest discarded event details: An unexpected error occurred while fetching log: mysql-bin.013919, log_pos: 91832523."
event_code: "UNSUPPORTED_EVENTS_DISCARDED"
Here are some limitations regarding Datastream with MySQL:
Events have a size limitation of 3 MB
Tables that have more than 100 million rows
Not all changes to the source schema can be detected automatically
I suspect that some of the data you are fetching hits one of these limitations and returns the error. I recommend reviewing the documentation on the limitations and making sure all the data can be fetched.
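If you want to see what the discarded event actually was, one option (assuming you have access to the source MySQL server's binary logs and the file named in the message still exists) is to inspect the binlog at the reported file and position with the mysqlbinlog utility:

mysqlbinlog --base64-output=decode-rows --verbose --start-position=91832523 mysql-bin.013919 | head -50

That may reveal, for example, an oversized row event that hits the 3 MB limit.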
I am doing a lookup operation with Couchbase Java SDK 3.0.9 which looks like this:
// Set up
bucket = cluster.bucket("my_bucket")
collection = bucket.defaultCollection()
// Look up operation
val specs = listOf(LookupInSpecStandard.get("hash"))
collection.lookupIn(id, specs)
The error I get is BUCKET_NOT_AVAILABLE. Here is the full message:
com.couchbase.client.core.error.UnambiguousTimeoutException: SubdocGetRequest, Reason: TIMEOUT {"cancelled":true,"completed":true,"coreId":"0xdb7f8e4800000003","idempotent":true,"reason":"TIMEOUT","requestId":608806,"requestType":"SubdocGetRequest","retried":39,"retryReasons":["BUCKET_NOT_AVAILABLE"],"service":{"bucket":"export","collection":"_default","documentId":"export:main","opaque":"0xcfefb","scope":"_default","type":"kv"},"timeoutMs":15000,"timings":{"totalMicros":15008977}}
The strange part is that this code hasn't been touched for months and the lookup broke all of a sudden. The CB cluster is working fine. Its version is
Enterprise Edition 6.5.1 build 6299.
Do you have any ideas what might have gone wrong?
Note that in Couchbase Java SDK 3.x, the Cluster::bucket method returns instantly, and continues opening a bucket in the background. So the first operation you perform - a lookupIn here - needs to wait for that resource opening to complete before it can proceed. It looks like it took a little longer to access the Couchbase bucket than usual and you got a timeout.
I recommend using the Bucket::waitUntilReady method after opening a bucket, to block until the resource opening is complete:
bucket = cluster.bucket("my_bucket")
bucket.waitUntilReady(Duration.ofMinutes(1))
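If the cluster connection itself may still be initializing, the SDK also exposes waitUntilReady at the cluster level; the same idea, applied right after connecting and before any bucket is opened:

cluster.waitUntilReady(Duration.ofSeconds(30))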
This problem can also occur because of a firewall. You need to allow these ports.
Client-to-node
Unencrypted: 8091-8097, 9140, 11210
Encrypted: 11207, 18091-18097
You can find more details here:
https://docs.couchbase.com/server/current/install/install-ports.html#_footnotedef_2
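As a quick diagnostic (the hostname is a placeholder for one of your nodes), you can probe the key-value port from the client machine with netcat:

nc -zv my-couchbase-node.example.com 11210

A timeout here, rather than an immediate success or refusal, usually points at a firewall dropping the traffic.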
I'm facing an issue in JMeter. The API I am testing gets its parameters from a prior JDBC request.
This works fine when there is only 1 thread, but when I run multiple threads it throws the error below:
{"Message": "A transient error has occurred. Please try again. (1205)","Data":null}
I need to run 5 threads without having to run the JDBC request 5 times.
I can retrieve 5 results in 1 JDBC call and supply one of them to each thread. Is this possible? How can I do this?
Worst-case scenario, I will have to set up a CSV file manually instead of using JDBC calls.
Normally people use the setUp Thread Group for test data preparation and the tearDown Thread Group for eventual clean-up. I would suggest moving your JDBC Request under the setUp Thread Group and running it with 1 virtual user.
If you have to keep the test plan structure as it is and can amend the SQL query to return more results, be aware that according to the JDBC Request sampler documentation the results look like:
myVar_#=5
myVar_1=foo
myVar_2=bar
myVar_3=baz
myVar_4=qux
myVar_5=corge
Therefore (assuming the sampler's Variable Names field is set to myVar) you can access the values using a combination of the __V() and __threadNum() functions, like:
${__V(myVar_${__threadNum},)}
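Each thread then picks up its own value. For instance, a sketch of how the expression expands at run time:

In thread 1, ${__V(myVar_${__threadNum},)} becomes ${myVar_1}, i.e. foo
In thread 2, ${__V(myVar_${__threadNum},)} becomes ${myVar_2}, i.e. bar

One caveat, since JMeter variables are local to the thread group that created them: if the JDBC Request runs in a setUp Thread Group, you may need to convert the values to JMeter properties first (for example with the __setProperty() function) so that the main thread group can read them.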
I'm creating a Forge application which needs to get version information from a BIM 360 hub. Sometimes it works, but sometimes (usually after the code has already been run once this session) I get the following error:
Exception thrown: 'Autodesk.Forge.Client.ApiException' in mscorlib.dll
Additional information: Error calling GetItem: {
  "fault": {
    "faultstring": "Unexpected EOF at target",
    "detail": {
      "errorcode": "messaging.adaptors.http.flow.UnexpectedEOFAtTarget"
    }
  }
}
The above error is thrown by a call to an API, such as one of these:
dynamic item = await itemApi.GetItemAsync(projectId, itemId);
dynamic folder = await folderApi.GetFolderAsync(projectId, folderId);
var folders = await projectApi.GetProjectTopFoldersAsync(hubId, projectId);
Where the APIs are initialized as follows:
ItemsApi itemApi = new ItemsApi();
itemApi.Configuration.AccessToken = Credentials.TokenInternal;
The Ids (such as 'projectId', 'itemId', etc.) don't seem to be any different when this error is thrown and when it isn't, so I'm not sure what is causing the error.
I based my application on the .Net version of this tutorial: http://learnforge.autodesk.io/#/datamanagement/hubs/net
But I adapted it so I can retrieve multiple nodes asynchronously (for example, all of the nodes a user has access to) without changing the jstree. I did this to allow extracting information in the background without disrupting the user's workflow. The main change I made was to add another Route on the server side that calls "GetTreeNodeAsync" (from the tutorial) asynchronously on the root of the tree, then calls it on each of the returned children, then on each of their children, and so on. The function waits until all of the nodes are processed using Task.WhenAll, then returns data from each of the nodes to the client.
This means that there can be many API calls running asynchronously, and there might be duplicate API calls if a node was already opened in the jstree and then its information is requested for the background extraction, or if the background extraction happens more than once. This seems to be when the error is most likely to happen.
I was wondering if anyone else has encountered this error, and whether you know what I can do to avoid it, or how to recover when it is caught. Currently, after this error occurs, it seems that every other API call will throw this error as well, and the only way I've found to fix it is to rerun the code (I use Visual Studio, so I just rerun the server and client, and my browser launches automatically).
Those are sporadic errors from our Apigee router due to latency issues in the authorization process that we are currently looking into internally.
When they occur, please pause all your upcoming requests, wait for a few minutes, and then retry. Resources on retry strategies (for example, exponential backoff) can help you out here.
Our existing reports of similar errors seem to point to concurrency as one of the factors leading up to the issue, so you might also want to limit your concurrent requests and see if that mitigates the issue.
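A minimal retry-with-backoff sketch around one of the calls above (the helper name, attempt count, and delays are illustrative assumptions, not an official recommendation):

// Hypothetical helper: retries GetItemAsync with exponential backoff.
// Requires System, System.Threading.Tasks, Autodesk.Forge and Autodesk.Forge.Client.
static async Task<dynamic> GetItemWithRetryAsync(ItemsApi itemApi, string projectId, string itemId)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return await itemApi.GetItemAsync(projectId, itemId);
        }
        catch (ApiException) when (attempt < 4)
        {
            // Back off before retrying: 2s, 4s, 8s; rethrows on the 4th failure.
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
        }
    }
}

To cap concurrency, wrapping the calls in a SemaphoreSlim with a small count is one straightforward option.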
I deployed a Google Cloud Function with lazy loading that loads data from Google Datastore. The last update time of my function is 7/25/18, 11:35 PM. It worked well last week.
Normally, if the function is called within about 30 minutes of the last call, it does not need to load the data from Google Datastore again. But I found that the lazy loading has not been working since yesterday, even when the time between two function calls is less than 1 minute.
Has anyone run into the same problem? Thanks!
Cloud Functions can fail for several reasons, such as uncaught exceptions and internal process crashes, so you need to check the log files / HTTP response error messages to verify the root cause of the issue and determine whether the function is being restarted and generating function execution timeouts, which could explain why your function is not working.
I suggest you take a look at the Reporting Errors documentation, which explains how to return a function error in order to validate the exact error message thrown by the service and return the error in the recommended way. Keep in mind that when errors are returned correctly, the function instance that returned the error is labelled as behaving normally, which avoids the cold starts that lead to higher latency and keeps the function available to serve future requests if need be.
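For reference, the lazy-loading pattern in Cloud Functions relies on module-level globals, which survive between invocations only while the same instance stays warm. A sketch in Python (the runtime, names, and entity kind here are assumptions for illustration; the same pattern applies in Node.js):

# Hypothetical sketch: module-level globals are reused on warm invocations.
from google.cloud import datastore

_client = None
_data = None

def handler(request):
    global _client, _data
    if _client is None:
        _client = datastore.Client()  # created once per instance
    if _data is None:  # reloaded only after a cold start
        _data = list(_client.query(kind="Config").fetch())  # kind is made up
    return "entities cached: {}".format(len(_data))

If an instance crashes or is recycled (for example after an uncaught exception or an execution timeout), those globals are lost, and the next invocation reloads the data even if it arrives a minute later, which would look exactly like lazy loading having stopped working.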
I've created a website using the HTML5 offline Application Cache, and it works well in most cases, but for some users it fails. In Chrome, when the application is being cached, the progress is displayed for each file, as well as error messages if something goes wrong, like:
Application Cache Checking event
Application Cache Downloading event
...
Application Cache Progress event (7 of 521) http://localhost/HTML5App/js/main.js
...
Application Cache Error event: Failed to commit new cache to storage, would exceed quota.
I've added event listeners to window.applicationCache (error, noupdate, obsolete, etc.), but there is no information stored on the nature of the error.
Is there a way to access this information from the website using JavaScript?
I would like to determine somehow which file caused the error or what kind of error occurred.
I believe the spec doesn't require the exact cause of the failure to be included in the error event. Currently, the console is your only friend.
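For completeness, this is about all a page can observe from script; the error event itself carries no specifics:

window.applicationCache.addEventListener('error', function (e) {
    // The event object exposes neither the failing file nor the reason;
    // those details only show up in the browser console.
    console.log('AppCache error event:', e);
}, false);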
To wit, your current "exceed quota" error is due to the fact that Chrome currently limits the storage to 5 MB. You can work around this by creating an app package that requests unlimitedStorage via the permission model. See http://code.google.com/chrome/apps/docs/developers_guide.html#live for more details.
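For reference, such a package declares the permission in its manifest; a minimal sketch (the name and URL are placeholders for your app):

{
  "name": "HTML5App",
  "version": "1.0",
  "app": { "launch": { "web_url": "http://localhost/HTML5App/" } },
  "permissions": [ "unlimitedStorage" ]
}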
If you want specific error messages in the "onerror" handler, raise a bug at http://crbug.com/new