I need to get the HTTP response code to use in scripts. For example, I run a command:
oci compute instance update --instance-id ocid.of.instance --shape-config '{"OCPU":"2"}' --force
I will get this message:
ServiceError:
{
"code": "InternalError",
"message": "Out of host capacity.",
"opc-request-id": "3FF4337F4ECE43BBB4B8E52524E80247/37CB970D371A9C6BB01DFB23E754FE5B/18DFE9AE75B88A77AB3A1FBEBD3B191B",
"status": 500
}
In this case, I got the error message and a status code of 500.
But if the command works, it outputs the full JSON of my instance's parameters, and the only place I can see the response code 200 is a line in debug mode.
Is there a way to show only the response code?
Currently the OCI CLI does not provide the HTTP response code directly in the response. The output contains either the service response on success or a service error message on failure.
Can you explain how you are using the HTTP response code in your script? Could you not use the command's exit code (non-zero on error) to determine the error case, as in the sketch below?
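For example, a minimal Python sketch of the exit-code approach (the OCID is the placeholder from your example, and the oci CLI is assumed to be on PATH):

import subprocess

# Run the CLI command and capture stdout/stderr so we can inspect them.
# check=False: don't raise on failure; we test the exit code ourselves.
result = subprocess.run(
    ["oci", "compute", "instance", "update",
     "--instance-id", "ocid.of.instance",
     "--shape-config", '{"OCPU":"2"}', "--force"],
    capture_output=True, text=True, check=False,
)

if result.returncode != 0:
    # Non-zero exit: the ServiceError body (code, message, status) is
    # typically printed to stderr.
    print("Update failed:", result.stderr)
else:
    # Zero exit: stdout holds the full JSON description of the instance.
    print("Update succeeded")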
The error "Out of host capacity" means the selected shape does not have any available servers in the selected region and availability domain (AD). Virtual machines (VMs) are dynamically provisioned: if an AD reaches a minimum threshold, new hypervisors (physical servers) are automatically provisioned.
There may be occasions where the additional capacity has not finished provisioning before the existing capacity is exhausted, but when retrying after 15 minutes or so the customer may find the shape they want is available.
Alternatively, selecting a different shape, AD, or region will almost certainly have the capacity needed.
Bare metal instances: host capacity is ordered proactively, guided by the growth rate of a region. Specialized shapes such as DenseIO do not have as much spare overhead and may be more likely to run out of capacity; customers may need to try another AD or region.
I use AMQ 6 (ActiveMQ) on OpenShift, and I use a queue with redelivery and exponentialBackoff (set in the connection query params).
When I have one consumer and two messages, and the first message gets processed by my single consumer and does NOT get an ACK...
Will the broker deliver the 2nd message to the single consumer?
Or will the broker wait for the redelivery to preserve message order?
This documentation states:
...Typically a consumer handles redelivery so that it can maintain message order while a message appears as inflight on the broker. ...
I don't want to have my consumer wait for re-delivery. It should consume other messages. Can I do this without multiple consumers? If so, how?
Note: In my connection query params I don't have the ActiveMQ exclusive consumer set.
I have read the Connection Configuration URI docs, but jms.nonBlockingRedelivery isn't mentioned there.
Can the resource adapter set it via a query param?
If you set jms.nonBlockingRedelivery=true on your client's connection URL, then messages will be delivered to your consumer while others are in the process of redelivery. It is false by default.
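For example, a connection URI along these lines (broker host and port are placeholders; the backoff option just mirrors what you already set):

tcp://broker-host:61616?jms.nonBlockingRedelivery=true&jms.redeliveryPolicy.useExponentialBackOff=true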
I'm creating a Forge application which needs to get version information from a BIM 360 hub. Sometimes it works, but sometimes (usually after the code has already been run once this session) I get the following error:
Exception thrown: 'Autodesk.Forge.Client.ApiException' in mscorlib.dll
Additional information: Error calling GetItem: {
"fault":{
"faultstring":"Unexpected EOF at target",
"detail": {
"errorcode":"messaging.adaptors.http.flow.UnexpectedEOFAtTarget"
}
}
}
The above error is thrown from a call to an API, such as one of these:
dynamic item = await itemApi.GetItemAsync(projectId, itemId);
dynamic folder = await folderApi.GetFolderAsync(projectId, folderId);
var folders = await projectApi.GetProjectTopFoldersAsync(hubId, projectId);
Where the APIs are initialized as follows:
ItemsApi itemApi = new ItemsApi();
itemApi.Configuration.AccessToken = Credentials.TokenInternal;
The IDs (such as 'projectId', 'itemId', etc.) don't seem to be any different when this error is thrown and when it isn't, so I'm not sure what is causing the error.
I based my application on the .Net version of this tutorial: http://learnforge.autodesk.io/#/datamanagement/hubs/net
But I adapted it so I can retrieve multiple nodes asynchronously (for example, all of the nodes a user has access to) without changing the jstree. I did this to allow extracting information in the background without disrupting the user's workflow. The main change I made was to add another Route on the server side that calls "GetTreeNodeAsync" (from the tutorial) asynchronously on the root of the tree and then calls it on each of the returned children, then each of their children, and so on. The function waits until all of the nodes are processed using Task.WhenAll, then returns data from each of the nodes to the client.
This means that there could be many API calls running asynchronously, and there might be duplicate API calls if a node was already opened in the jstree and then its information is requested for the background extraction, or if the background extraction happens more than once. This seems to be when the error is most likely to happen.
I was wondering if anyone else has encountered this error, and whether you know what I can do to avoid it, or how to recover when it is caught. Currently, after this error occurs, it seems that every subsequent API call throws this error as well, and the only way I've found to fix it is to rerun the code (I use Visual Studio, so I just rerun the server and client, and my browser launches automatically).
Those are sporadic errors from our Apigee router due to latency issues in the authorization process that we are currently looking into internally.
When they occur, please cease all your upcoming requests, wait a few minutes, and retry. Take a look at stuff like this or this to help you out.
Our existing reports of similar errors also point to concurrency as one of the factors leading up to the issue, so you might want to limit your concurrent requests and see if that mitigates it; a sketch of that pattern follows.
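For illustration only, here is that pattern (bounded concurrency plus exponential backoff with jitter) sketched in Python; the names and limits are assumptions, and in your app the awaited calls would be the Forge SDK methods shown above:

import asyncio
import random

# Cap on simultaneous in-flight requests (the value 5 is an assumption; tune it).
SEMAPHORE = asyncio.Semaphore(5)

async def call_with_retry(fn, *args, retries=4):
    """Run fn under the concurrency cap, backing off exponentially on failure."""
    async with SEMAPHORE:
        for attempt in range(retries):
            try:
                return await fn(*args)
            except Exception:
                if attempt == retries - 1:
                    raise
                # Exponential backoff with jitter: ~1s, 2s, 4s... plus noise.
                await asyncio.sleep(2 ** attempt + random.random())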
I have an application which stores photo data in database. An SSRS report is used to generate reports of photos related to a specific entity. The information required to generate this report is stored in a separate database and very simply links a ReportId with a number of photos. The Reporting Services Execution Web service is used to render a report in Word format and output a byte array which is then used by the application.
The issue is that there is a very clear and repeatable size limit beyond which the report is not received by the application. In each case the report is rendered successfully, but the response is never sent, as per the RS logs:
rshost!rshost!1bfc!03/14/2018-21:46:48:: e ERROR: HttpPipelineCallback::SendResponse(): failed writing response.
rshost!rshost!1bfc!03/14/2018-21:46:48:: e ERROR: Failed with win32 error 0x0057, pipeline=0x0000027542592740.
httpruntime!ReportServer_0-1!1bfc!03/14/2018-21:46:48:: e ERROR: Failed in
BaseWorkerRequest::SendHttpResponse(bool), exception=System.ArgumentException: Value does not fall within the expected range.
at Microsoft.ReportingServices.HostingInterfaces.IRsHttpPipeline.SendResponse(Void* response, Boolean finalWrite, Boolean closeConn)
at ReportingServicesHttpRuntime.BaseWorkerRequest.SendHttpResponse(Boolean finalFlush)
library!ReportServer_0-1!1bfc!03/14/2018-21:46:48:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ReportServerHttpRuntimeInternalException: RsWorkerRequest::FlushResponse., Microsoft.ReportingServices.Diagnostics.Utilities.ReportServerHttpRuntimeInternalException: An internal or system error occurred in the HTTP Runtime object for application domain ReportServer_SSRS_0-1-131655000672564770. ---> System.ArgumentException: Value does not fall within the expected range.
at ReportingServicesHttpRuntime.BaseWorkerRequest.SendHttpResponse(Boolean finalFlush)
at ReportingServicesHttpRuntime.RsWorkerRequest.FlushResponse(Boolean finalFlush)
--- End of inner exception stack trace ---;
rshost!rshost!1bfc!03/14/2018-21:46:48:: e ERROR: HttpPipelineCallback::SendResponse(): failed writing response.
rshost!rshost!1bfc!03/14/2018-21:46:48:: e ERROR: Failed with win32 error 0x10DD, pipeline=0x0000027542592740.
This appears to be directly related to some size limit on the service's HTTP response, for the following reasons:
The request itself is very small, just a report ID, i.e. httpRuntime maxRequestLength is not related; this has been tested.
The error happens within a few minutes of the request, i.e. httpRuntime executionTimeout is not related; this has been tested.
I can reliably and repeatably add one photo too many to the report and it will fail, remove one photo and it renders and sends with no issues, add one photo and it will fail...
The times to render vary, so again it appears related more to size than to some timeout.
The report server execution logs show that the rendering is successful every time, regardless of whether the report is sent or not.
I have reproduced the error using both the application which normally calls the report and a separate console app that simply generates a web service client and tries to render and receive the report as a byte array using the Render2 method.
I have reproduced this error using SQL Server 2012 and SQL Server 2016, on several different report servers, including setting up a report server locally and running the server and request on the same machine.
Regardless of the combination of photos, the report always renders successfully but fails to send at a byte count of around 238,000,000, i.e. it is not related to some bad data.
In all cases I am able to generate reports of the same size or greater through the Report Server portal with no issues.
Has anyone experienced similar behaviour using the RS web service? I have searched online extensively without any luck. Any suggestions on how to address this issue would be greatly appreciated.
Microsoft finally identified HttpSendResponseEntityBody as the source of the error. The function sends entity-body data associated with an HTTP response and, per MSDN, has the following parameter:
EntityChunkCount [in] A number of structures in the array pointed to by pEntityChunks. This count cannot exceed 9999.
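That limit fits the failure point reported above: 238,000,000 bytes / 9,999 chunks ≈ 23,800 bytes per chunk, so if the response body is flushed in roughly 24 KB chunks, the 9,999-chunk ceiling lands almost exactly at the byte count where the reports stop sending. (The per-chunk size here is inferred from those two figures, not confirmed by Microsoft.)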
Microsoft doesn't have any SSRS documentation that addresses report size limits. At this stage, all they have done is suggest that "we can use other tools for this data extracting" and that "generally we suggest put the data smaller than 100MB or the row counts less than 1,000,000".
I'm using MAVLink with a Pixhawk flight controller. I receive heartbeat messages, though I don't know how to receive information about its altitude, pitch, roll, or yaw.
When I connect the Pixhawk through the QGroundControl application, it connects immediately and I can see the direction on the compass, yet I don't know how to replicate that. The information I'm specifically looking for can be obtained from these messages: msg_vfr_hud.MAVLINK_MSG_ID_VFR_HUD, msg_ahrs2.MAVLINK_MSG_ID_AHRS2, msg_ahrs3.MAVLINK_MSG_ID_AHRS3.
I tried creating them like this:
msg_ahrs2 msg = new msg_ahrs2();
communicationService.pushMavLinkMessage(msg);
But I don't receive any information back. Do I have to do any preflight configuration?
Any help will be appreciated.
When the connection is established between the flight controller and the companion board, the flight controller will automatically start sending telemetry messages (like GPS information).
The connection between the flight controller and the companion board is either serial or socket-based (TCP/UDP), so you should handle the incoming data correctly and use the mavlink_parse_char function to decode the byte stream into MAVLink packets.
You can use DroneKit (a Python API), Ottofly (a C++ API), or build your own to get and send data to the flight controller.
Check this example in C for a UDP connection.
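If you go the Python route, a minimal pymavlink sketch along these lines should print the values you're after (the connection string and stream rate are assumptions; adjust them for your link):

from pymavlink import mavutil

# Listen for the Pixhawk's telemetry on a local UDP port (assumed setup).
master = mavutil.mavlink_connection('udp:0.0.0.0:14550')
master.wait_heartbeat()  # blocks until the first HEARTBEAT arrives

# On some ArduPilot setups you must request the telemetry streams explicitly;
# this asks for all streams at 4 Hz.
master.mav.request_data_stream_send(
    master.target_system, master.target_component,
    mavutil.mavlink.MAV_DATA_STREAM_ALL, 4, 1)

while True:
    msg = master.recv_match(type=['ATTITUDE', 'VFR_HUD'], blocking=True)
    if msg is None:
        continue
    if msg.get_type() == 'ATTITUDE':
        print('roll/pitch/yaw (rad):', msg.roll, msg.pitch, msg.yaw)
    else:  # VFR_HUD carries altitude and heading
        print('altitude (m):', msg.alt, 'heading (deg):', msg.heading)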
We are running a Sails.js API on Google Container Engine with a Cloud SQL database and recently we've been finding some of our endpoints have been stalling, never sending a response.
I had a health check monitoring /v1/status, and it registered 100% uptime when I had the following simple response:
status: function( req, res ){
res.ok('Welcome to the API');
}
As soon as we added a database query, the endpoint started timing out. It doesn't happen all the time, but seemingly at random intervals, sometimes for hours on end. This is what we have changed the query to:
status: function( req, res ){
Email.findOne({ value: "someone@example.com" }).then(function( email ){
res.ok('Welcome to the API');
}).fail(function(err){
res.serverError(err);
});
}
Rather suspiciously, this all works fine in our staging and development environments, it's only when the code is deployed in production that the timeout occurs and it only occurs some of the time. The only thing that changes between staging and production is the database we are connecting to and the load on the server.
As I mentioned earlier, we are using Google Cloud SQL and the Sails-MySQL adapter. We have the following error stacks from the production server:
AdapterError: Invalid connection name specified
at getConnectionObject (/app/node_modules/sails-mysql/lib/adapter.js:1182:35)
at spawnConnection (/app/node_modules/sails-mysql/lib/adapter.js:1097:7)
at Object.module.exports.adapter.find (/app/node_modules/sails-mysql/lib/adapter.js:801:16)
at module.exports.find (/app/node_modules/sails/node_modules/waterline/lib/waterline/adapter/dql.js:120:13)
at module.exports.findOne (/app/node_modules/sails/node_modules/waterline/lib/waterline/adapter/dql.js:163:10)
at _runOperation (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/finders/operations.js:408:29)
at run (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/finders/operations.js:69:8)
at bound.module.exports.findOne (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/finders/basic.js:78:16)
at bound [as findOne] (/app/node_modules/sails/node_modules/lodash/dist/lodash.js:729:21)
at Deferred.exec (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/deferred.js:501:16)
at tryCatcher (/app/node_modules/sails/node_modules/waterline/node_modules/bluebird/js/main/util.js:26:23)
at ret (eval at <anonymous> (/app/node_modules/sails/node_modules/waterline/node_modules/bluebird/js/main/promisify.js:163:12), <anonymous>:13:39)
at Deferred.toPromise (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/deferred.js:510:61)
at Deferred.then (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/deferred.js:521:15)
at Strategy._verify (/app/api/services/passport.js:31:7)
at Strategy.authenticate (/app/node_modules/passport-local/lib/strategy.js:90:12)
at attempt (/app/node_modules/passport/lib/middleware/authenticate.js:341:16)
at authenticate (/app/node_modules/passport/lib/middleware/authenticate.js:342:7)
at Object.AuthController.login (/app/api/controllers/AuthController.js:119:5)
at bound (/app/node_modules/sails/node_modules/lodash/dist/lodash.js:729:21)
at routeTargetFnWrapper (/app/node_modules/sails/lib/router/bind.js:179:5)
at callbacks (/app/node_modules/sails/node_modules/express/lib/router/index.js:164:37)
Error (E_UNKNOWN) :: Encountered an unexpected error :
Could not connect to MySQL: Error: Pool is closed.
at afterwards (/app/node_modules/sails-mysql/lib/connections/spawn.js:72:13)
at /app/node_modules/sails-mysql/lib/connections/spawn.js:40:7
at process._tickDomainCallback (node.js:381:11)
Looking at the errors alone, I'd be tempted to say that we have something misconfigured. But the fact that it works some of the time (and had previously been working fine!) leads me to believe that there's some other black magic at work here. Our Cloud SQL instance is a D0 (though we've tried upping the size to D4) and our activation policy is "Always On".
EDIT: I had seen others complain about Google Cloud SQL (e.g. this SO post) and I was suspicious, but we have since moved our database to Amazon RDS and we are still seeing the same issues, so it must be a problem with Sails and the MySQL adapter.
This issue is leading to hours of downtime a day, we need it resolved, any help is much appreciated!
This appears to be a sails issue, and not necessarily related to Cloud SQL.
Is there any way the QPS limit for Google Cloud SQL is being reached? See here: https://cloud.google.com/sql/faq#sizeqps
Why is my database instance sometimes slow to respond?
In order to minimize the amount you are charged for instances on per use billing plans, by default your instance becomes passive if it is not accessed for 15 minutes. The next time it is accessed there will be a short delay while it is activated. You can change this behavior by configuring the activation policy of the instance. For an example, see Editing an Instance Using the Cloud SDK.
It might be related to your activation policy setting. If it is set to ON_DEMAND, the instance sleeps to save cost, so the first query that reactivates it is slow. This might cause the timeout.
https://cloud.google.com/sql/faq?hl=en
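If the policy turns out to be the issue, it can be changed with the Cloud SDK, along these lines (the instance name is a placeholder):

gcloud sql instances patch your-instance --activation-policy=ALWAYS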