What is the default timeout set in forward-request - azure-api-management

In the official documentation, it is set to None. What does None mean?

I believe that a default timeout of None in forward-request does not mean there is no timeout.
As specified in the same official documentation, forward-request is a root element and is required in the operation-level policy.
Any value greater than 240 seconds won't be reliable: the request can still time out anywhere in the 0 to 240 second range. If it's in your control, the fix for a timeout issue can be to change the implementation a bit, depending on the number of requests forwarded by the APIM gateway, the backend instances, or the proxy response time.

The documentation shows a default of 300 seconds, but the maximum is 240 seconds. I got a timeout after 30 seconds, so I believe it is a mistake in the docs; the default appears to be 30 seconds.
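Whatever the default turns out to be, you can remove the guesswork by setting the timeout attribute explicitly on the forward-request policy. A minimal sketch (120 here is an arbitrary choice within the reliable 0-240 second range):

<policies>
    <inbound>
        <base />
    </inbound>
    <backend>
        <forward-request timeout="120" />
    </backend>
    <outbound>
        <base />
    </outbound>
</policies>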

Kafka Consumer - How to set fetch.max.bytes higher than the default 50mb?

I want my consumers to process large batches, so I aim to have the consumer listener wake up on, say, 1800mb of data or every 5min, whichever comes first.
Mine is a Kafka + Spring Boot application, the topic has 28 partitions, and this is the configuration I explicitly change:
Parameter                  Value I set  Default Value  Why I set it this way
fetch.max.bytes            1801mb       50mb           fetch.min.bytes+1mb
fetch.min.bytes            1800mb       1b             desired batch size
fetch.max.wait.ms          5min         500ms          desired cadence
max.partition.fetch.bytes  1801mb       1mb            unbalanced partitions
request.timeout.ms         5min+1sec    30sec          fetch.max.wait.ms + 1sec
max.poll.records           10000        500            1500 found too low
max.poll.interval.ms       5min+1sec    5min           fetch.max.wait.ms + 1sec
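For reference, this is roughly how those settings would be wired into the plain Java consumer client (a sketch only; the byte and millisecond values are the table's figures converted, e.g. 1801mb = 1888485376 bytes):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class BigBatchConsumerProps {
    public static Properties build() {
        Properties props = new Properties();
        props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 1888485376);           // 1801mb
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1887436800);           // 1800mb
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 300000);             // 5min
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 1888485376); // 1801mb
        props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, 301000);            // 5min + 1sec
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10000);
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 301000);          // 5min + 1sec
        return props;
    }
}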
Nevertheless, when I produce ~2gb of data to the topic, I see the consumer listener (a Batch Listener) called many times per second, far more often than the desired rate.
I logged the serialized-size of the ConsumerRecords<?,?> argument, and found that it is never more than 55mb.
This hints that I was not able to set fetch.max.bytes above the default 50mb.
Any idea how I can troubleshoot this?
Edit:
I found this question: Kafka MSK - a configuration of high fetch.max.wait.ms and fetch.min.bytes is behaving unexpectedly
Is it really impossible as stated?
Finally found the cause.
There is a broker-side fetch.max.bytes setting, and it defaults to 55mb. I had only changed the consumer properties, unaware of the broker-side limit.
See also the Kafka KIP and the actual commit.
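So the fix is on the broker, not the consumer. As a sketch, in server.properties (the value is the table's 1801mb expressed in bytes):

# Broker-side cap on the size of a single fetch response, introduced by the KIP above.
# It defaults to ~55mb, so it silently overrides a larger consumer fetch.max.bytes.
fetch.max.bytes=1888485376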

How to disable default publishing interval time i.e. every 3 seconds on AWS IoT

I am new to AWS IoT. I am using AWSIotDevice as the superclass of my virtual device.
Using the code below, I am able to update the shadow on AWS IoT. But my concern is that it updates the shadow every 3 seconds, which I don't need. The shadow should update only after new values are set in my virtual device, which can be after 10 seconds or 30 seconds. I tried setting setKeepAliveInterval to 30 seconds, but it still updates the shadow every 3 seconds.
Please suggest how to disable it or increase the interval to something longer, say 10 minutes or so.
AWSIotMqttClient awsIotClient = new AWSIotMqttClient(clientEndpoint,
        clientId, pair.keyStore, pair.keyPassword);
awsIotClient.setKeepAliveInterval(30000);
AWSIotDevice awsIotDevice = new MyAWSIotDevice(thingName);
awsIotClient.attach(awsIotDevice);
awsIotClient.connect(10000);
Really appreciate your help.
Regards,
Krishan
You haven't explicitly said, but that looks like the Java SDK.
That being the case, you need to change the DEVICE_REPORT_INTERVAL, which, as you've noticed, defaults to 3000ms.
To do this on AWSIotDevice you should use setReportInterval.
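For example, sticking with the question's own code (a sketch; the interval is in milliseconds, and per the SDK docs a value of 0 disables the periodic report entirely):

AWSIotDevice awsIotDevice = new MyAWSIotDevice(thingName);
// Report the shadow every 10 minutes instead of the 3000ms default.
awsIotDevice.setReportInterval(600000);
awsIotClient.attach(awsIotDevice);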

Measuring total request time in Chrome - wrong total time

What could be the reason that the total time of a request is higher than the Connection Setup time + Request/Response time?
357.32ms > (19.04 + 0.56 + 8.56 + 23.46 + 0.41)ms
Am I missing something?
I had a similar problem. In my case I noticed that the "gap" always came right after the "DNS lookup" phase. Somehow Chrome couldn't measure the time taken to translate localhost into 127.0.0.1.
I solved the problem by replacing localhost with 127.0.0.1 in the URL bar. This removed 80-90% of the overall load time.

Is there any constant interval in Nservicebus' automatic retries

I need to figure out how to manage my retries in NServiceBus.
If there is any exception in my flow, it should retry 10 times, every 10 seconds. But when I searched the NServiceBus website (http://docs.particular.net/nservicebus/errors/automatic-retries), I found two different retry mechanisms: First Level Retries (FLR) and Second Level Retries (SLR).
FLR is for transient errors. When you get an exception, it retries instantly, up to your MaxRetries parameter. This parameter should be 1 for me.
SLR is for errors that persist after FLR, where a small delay is needed between retries. A config parameter called "TimeIncrease" defines the delay between tries. However, NServiceBus increases that delay with every retry: when you set this parameter to 10 seconds, it will retry after 10 seconds, then 30 seconds, then 60 seconds, and so on.
What do you suggest so that my requests retry every 10 seconds, with or without these mechanisms?
I found my answer in a reply from Particular Software's community (John Simon): you need to apply a custom retry policy. Have a look at http://docs.particular.net/nservicebus/errors/automatic-retries#second-level-retries-custom-retry-policy-simple-policy for an example.

Only one node owns data in a Cassandra cluster

I am new to Cassandra and have just set up a Cassandra cluster (version 1.2.8) with 5 nodes, on which I have created several keyspaces and tables. However, I found that all data is stored on one node (in the output below, I have replaced IP addresses by node numbers manually):
Datacenter: 105
==========
Address  Rack  Status  State   Load       Owns      Token
                                                    4
node-1   155   Up      Normal  249.89 KB  100.00%   0
node-2   155   Up      Normal  265.39 KB  0.00%     1
node-3   155   Up      Normal  262.31 KB  0.00%     2
node-4   155   Up      Normal  98.35 KB   0.00%     3
node-5   155   Up      Normal  113.58 KB  0.00%     4
and in their cassandra.yaml files I use all default settings except cluster_name, initial_token, endpoint_snitch, listen_address, rpc_address, seeds, and internode_compression. Below I list the non-IP-address fields I modified:
endpoint_snitch: RackInferringSnitch
rpc_address: 0.0.0.0
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "node-1, node-2"
internode_compression: none
and all nodes use the same seeds.
Can I know where I might do wrong in the config? And please feel free to let me know if any additional information is needed to figure out the problem.
Thank you!
If you are starting with Cassandra 1.2.8 you should try using the vnodes feature. Instead of setting the initial_token, uncomment # num_tokens: 256 in cassandra.yaml, and leave initial_token blank or comment it out. Then you don't have to calculate token positions: each node will randomly assign itself 256 tokens, and your cluster will be mostly balanced (within a few %). Using vnodes also means that you don't have to "rebalance" your cluster every time you add or remove nodes.
See this blog post for a full description of vnodes and how they work:
http://www.datastax.com/dev/blog/virtual-nodes-in-cassandra-1-2
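In cassandra.yaml terms the change is just two lines (a sketch):

# Enable vnodes: each node randomly claims 256 token ranges.
num_tokens: 256
# Leave initial_token blank or commented out:
# initial_token: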
Your token assignment is the problem here. The assigned token determines the node's position in the ring and the range of data it stores. When you generate tokens, the aim is to use up the entire range from 0 to (2^127 - 1). Tokens aren't IDs as in a MySQL cluster, where you have to increment them sequentially.
There is a tool on git that can help you calculate the tokens based on the size of your cluster.
Read this article to gain a deeper understanding of the tokens. And if you want to understand the meaning of the numbers that are generated check this article out.
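To make "using up the entire range" concrete: with the RandomPartitioner, evenly spaced tokens for an N-node ring are token(i) = i * 2^127 / N. A minimal sketch of that calculation (a stand-in for the git tool mentioned above):

import java.math.BigInteger;

// Evenly spaced RandomPartitioner tokens for an N-node ring:
// token(i) = i * 2^127 / N, spreading nodes over 0 .. 2^127 - 1.
public class TokenCalc {
    public static void main(String[] args) {
        int nodes = 5;
        BigInteger range = BigInteger.valueOf(2).pow(127);
        for (int i = 0; i < nodes; i++) {
            BigInteger token = range.multiply(BigInteger.valueOf(i))
                                    .divide(BigInteger.valueOf(nodes));
            System.out.println("node-" + (i + 1) + ": " + token);
        }
    }
}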
You should provide a replication_factor when creating a keyspace:
CREATE KEYSPACE demodb
WITH REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor': 3};
If you use DESCRIBE KEYSPACE x in cqlsh, you'll see what replication_factor your keyspace is currently set to (I assume the answer is 1).
More details here