How to disable the default publishing interval (every 3 seconds) on AWS IoT - aws-sdk

I am new to AWS IoT. I am using "AWSIotDevice" as the superclass of my virtual device.
Using the code below, I am able to update the shadow on AWS IoT. But my concern is that it updates the shadow every 3 seconds, which I don't need. The shadow should update only after new values are set in my virtual device, which may be every 10 or 30 seconds. I tried setting "setKeepAliveInterval" to 30 seconds, but it still updates the shadow every 3 seconds.
Please suggest how to disable this, or how to increase the interval to something much longer, say 10 minutes.
AWSIotMqttClient awsIotClient = new AWSIotMqttClient(clientEndpoint,
        clientId, pair.keyStore, pair.keyPassword);
awsIotClient.setKeepAliveInterval(30000);
AWSIotDevice awsIotDevice = new MyAWSIotDevice(thingName);
awsIotClient.attach(awsIotDevice);
awsIotClient.connect(10000);
Really appreciate your help.
Regards,
Krishan

You haven't explicitly said, but that looks like the Java SDK.
That being the case, you need to change the DEVICE_REPORT_INTERVAL which, as you've noticed, defaults to 3000ms.
To do this on AWSIotDevice, use setReportInterval.
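For example (a sketch reusing the question's own objects; the interval is in milliseconds, and my understanding of the SDK is that 0 disables periodic reporting entirely):

```java
AWSIotDevice awsIotDevice = new MyAWSIotDevice(thingName);
// Report the shadow every 10 minutes instead of the 3000ms default;
// setReportInterval(0) should disable the periodic report altogether.
awsIotDevice.setReportInterval(600000);
awsIotClient.attach(awsIotDevice);
awsIotClient.connect(10000);
```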

Related

Kafka Consumer - How to set fetch.max.bytes higher than the default 50mb?

I want my consumers to process large batches, so I aim to have the consumer listener "awake", say, on 1800mb of data or every 5min, whichever comes first.
Mine is a Kafka + Spring Boot application; the topic has 28 partitions, and this is the configuration I explicitly change:
Parameter                  | Value I set | Default | Why I set it this way
---------------------------|-------------|---------|--------------------------
fetch.max.bytes            | 1801mb      | 50mb    | fetch.min.bytes + 1mb
fetch.min.bytes            | 1800mb      | 1b      | desired batch size
fetch.max.wait.ms          | 5min        | 500ms   | desired cadence
max.partition.fetch.bytes  | 1801mb      | 1mb     | unbalanced partitions
request.timeout.ms         | 5min+1sec   | 30sec   | fetch.max.wait.ms + 1sec
max.poll.records           | 10000       | 500     | 1500 found too low
max.poll.interval.ms       | 5min+1sec   | 5min    | fetch.max.wait.ms + 1sec
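For reference, those overrides translate into plain consumer properties along these lines (a sketch with a made-up class name; values are the question's, converted to the units Kafka expects, bytes and milliseconds):

```java
import java.util.Properties;

public class ConsumerFetchConfig {

    // The overrides from the table above, in bytes and milliseconds.
    // Class and method names are illustrative, not from the question.
    static Properties fetchConfig() {
        Properties props = new Properties();
        props.put("fetch.max.bytes", String.valueOf(1801L * 1024 * 1024));          // 1801mb
        props.put("fetch.min.bytes", String.valueOf(1800L * 1024 * 1024));          // 1800mb
        props.put("fetch.max.wait.ms", String.valueOf(5 * 60 * 1000));              // 5min
        props.put("max.partition.fetch.bytes", String.valueOf(1801L * 1024 * 1024));
        props.put("request.timeout.ms", String.valueOf(5 * 60 * 1000 + 1000));      // 5min + 1sec
        props.put("max.poll.records", "10000");
        props.put("max.poll.interval.ms", String.valueOf(5 * 60 * 1000 + 1000));    // 5min + 1sec
        return props;
    }

    public static void main(String[] args) {
        System.out.println(fetchConfig().getProperty("fetch.max.bytes")); // 1888485376
    }
}
```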
Nevertheless, I produce ~2gb of data to the topic, and I see that the consumer listener (a Batch Listener) is called many times per second -- far more often than the desired rate.
I logged the serialized-size of the ConsumerRecords<?,?> argument, and found that it is never more than 55mb.
This hints that I was not able to set fetch.max.bytes above the default 50mb.
Any idea how I can troubleshoot this?
Edit:
I found this question: Kafka MSK - a configuration of high fetch.max.wait.ms and fetch.min.bytes is behaving unexpectedly
Is it really impossible as stated?
Finally found the cause.
There is a broker-side fetch.max.bytes setting, which defaults to 55mb. I had only changed the consumer settings, unaware of the broker-side limit.
See also the Kafka KIP and the actual commit.
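A sketch of the matching broker-side override (assuming you can edit the broker configuration; the value is the question's 1801mb expressed in bytes):

```properties
# server.properties (broker): raise the broker-side fetch cap added by KIP-541,
# which defaults to 57671680 bytes (~55mb). 1888485376 bytes = 1801mb.
fetch.max.bytes=1888485376
```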

How do I wait for a random amount of time before executing the next action in Puppeteer?

I would love to be able to wait for a random amount of time (let's say a number between 5-12 seconds, chosen at random each time) before executing my next action in Puppeteer, in order to make the behaviour seem more authentic/real world user-like.
I'm aware of how to do it in plain Javascript (as detailed in the Mozilla docs here), but can't seem to get it working in Puppeteer using the waitFor call (which I assume is what I'm supposed to use?).
Any help would be greatly appreciated! :)
You can use vanilla JS to wait a random 5-12 seconds between actions.
await page.waitFor((Math.floor(Math.random() * 8) + 5) * 1000)
Where:
5 is the minimum number of seconds
8 is the number of possible values (12 - 5 + 1), so the result is between 5 and 12 inclusive
1000 converts seconds to milliseconds
(PS: However, if your question is about waiting 5-12 seconds randomly before every action, then you should wrap your actions in a helper class, which is a different issue until you update your question.)
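If you do want the delay before every action, a tiny helper keeps the arithmetic in one place (a sketch; randomDelayMs is a made-up name, and note that newer Puppeteer versions replace page.waitFor with page.waitForTimeout):

```javascript
// Returns a whole number of milliseconds corresponding to an integer
// number of seconds chosen uniformly from [minSec, maxSec] inclusive.
function randomDelayMs(minSec, maxSec) {
  const span = maxSec - minSec + 1; // how many distinct second-values are possible
  return (Math.floor(Math.random() * span) + minSec) * 1000;
}

module.exports = { randomDelayMs };

// usage inside a Puppeteer script:
//   await page.waitForTimeout(randomDelayMs(5, 12));
```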

Is there any constant interval in Nservicebus' automatic retries

I need to figure out how to manage retries in NServiceBus.
If there is an exception in my flow, it should retry 10 times, every 10 seconds. But when I searched NServiceBus' website (http://docs.particular.net/nservicebus/errors/automatic-retries), I found 2 different retry mechanisms: First Level Retries (FLR) and Second Level Retries (SLR).
FLR is for transient errors. When an exception is thrown, it retries instantly, up to your MaxRetries parameter. This parameter should be 1 for me.
SLR is for errors that persist after FLR, where a small delay is needed between retries. A config parameter called "TimeIncrease" defines the delay between tries. However, NServiceBus increases this delay with each retry: when you set the parameter to 10 seconds, it retries after 10 seconds, then 30 seconds, then 60 seconds, and so on.
What do you suggest so that my request is retried every 10 seconds, with or without these mechanisms?
I found my answer;
Per the reply from John Simon on Particular Software's community: you need to apply a custom retry policy; have a look at http://docs.particular.net/nservicebus/errors/automatic-retries#second-level-retries-custom-retry-policy-simple-policy for an example.
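The linked policy boils down to returning a flat delay until a retry budget is exhausted. In pseudocode (the names are illustrative, not the actual NServiceBus API):

```
// Flat retry policy: 10 attempts, 10 seconds apart
function RetryPolicy(message):
    if retriesSoFar(message) >= 10:
        return Abort              // give up; message goes to the error queue
    return Delay(10 seconds)      // schedule the next attempt in 10s
```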

How to implement a Tile which can be updated every minute?

In the Windows Phone store there is an app named TimeMe Tile, which updates the current time on its Tile every minute. I am very curious how this is implemented, since as far as I know the minimum period for a background task is 30 minutes.
Here is this app's link:
http://www.windowsphone.com/zh-cn/store/app/timeme-tile/ef6099f2-41dd-4bad-9fa1-8f4143386194
Thank you.
If you have predictable tile information (like the time) then you can schedule the tile notifications ahead of time with the ScheduledTileNotification class. Scheduled notifications will fire even if the app itself is not running. The app only needs to run (in the foreground or as a background task) to schedule the notification.
You can schedule a tile to update every minute for the next hour with something like the following:
for (int min = 1; min <= 60; min++)
{
    // Create a tile template with whatever we want to show
    XmlDocument tileXml = GenerateTileTemplate(min);
    // Schedule it for min minutes from now (the delivery time must be in the future)
    DateTime dueTime = DateTime.Now.AddMinutes(min);
    ScheduledTileNotification scheduledTile = new ScheduledTileNotification(tileXml, dueTime);
    TileUpdateManager.CreateTileUpdaterForApplication().AddToSchedule(scheduledTile);
}
For a fuller example see How to schedule a tile notification.
If the tile needs more timely data that can't be predicted, then you'd need to push notifications from off-system to deliver that information more often than the app can get CPU time.

Ability to limit maximum reducers for a hadoop hive mapred job?

I've tried prepending my query with:
set mapred.running.reduce.limit = 25;
And
set hive.exec.reducers.max = 35;
The last one jailed a job with 530 reducers down to 35... which makes me think it was going to try to shoehorn 530 reducers' worth of work into 35.
Now giving
set mapred.tasktracker.reduce.tasks.maximum = 3;
a try to see if that number is some sort of per-node maximum (previously it was 7 on a cluster with 70 potential reducers).
Update:
set mapred.tasktracker.reduce.tasks.maximum = 3;
Had no effect, was worth a try though.
Not exactly a solution to the question, but potentially a good compromise.
set hive.exec.reducers.max = 45;
For a super query of doom that has 400+ reducers, this jails the most expensive Hive task down to 35 reducers total. My cluster currently has only 10 nodes, each supporting 7 reducers, so in reality only 70 reducers can run at one time. By jailing the job down to fewer than 70, I've noticed a slight improvement in speed without any visible change to the final product. I'm testing this in production to figure out what exactly is going on here. In the interim it's a good compromise solution.