Steps sync to wrong day when I bucketByTime with TimeUnit.DAYS as 1 - google-fit

Several customers have reported this kind of issue. For example, our app shows a user 1,361 steps on 11/17/2021 and 0 steps on 11/18/2021, while Google's own app, Google Fit, shows 149 steps on 11/17 and 1,212 steps on 11/18.
Reading our logs, we see a DataPoint like the following:
start=Wed Nov 17 23:34:17 PST 2021, end=Thu Nov 18 19:57:49 PST 2021, name=com.google.step_count.delta, fields=[steps(i)]
It spans more than one day.
Questions:
How does Google's Fit app read steps? Is it using the same method as documented here?
https://developers.google.com/fit/scenarios/read-daily-step-total
Or does it use some other method? Or was there a recent algorithm update in Google Fit? We didn't see this issue before.
We tried bucketing by hours and summing the steps, which makes the issue go away, but it is not a good solution: it takes more time, returns 24 times the data, and needs more memory. Is there a better way to avoid this issue?
Code:
val estimatedStepsDelta = DataSource.Builder()
    .setDataType(DataType.TYPE_STEP_COUNT_DELTA)
    .setType(DataSource.TYPE_DERIVED)
    .setStreamName("estimated_steps")
    .setAppPackageName("com.google.android.gms")
    .build()

return DataReadRequest.Builder()
    .setTimeRange(startTimeEpochSeconds, endTimeEpochSeconds.coerceAtLeast(startTimeEpochSeconds + 1), TimeUnit.SECONDS)
    .aggregate(DataType.TYPE_DISTANCE_DELTA, DataType.AGGREGATE_DISTANCE_DELTA)
    .aggregate(estimatedStepsDelta, DataType.AGGREGATE_STEP_COUNT_DELTA)
    .aggregate(DataType.TYPE_MOVE_MINUTES, DataType.AGGREGATE_MOVE_MINUTES)
    .bucketByTime(1, TimeUnit.DAYS)
    .enableServerQueries()
    .build()
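The hourly workaround we tried boils down to fetching hourly buckets and regrouping them by calendar day. A minimal sketch of just that regrouping step (the bucket data below is hypothetical, not from the Fit API, and the grouping is shown in Python for clarity):

```python
from collections import defaultdict
from datetime import datetime, timezone

def daily_totals(hourly_buckets, tz=timezone.utc):
    """Sum hourly step buckets into per-day totals, keyed by calendar date."""
    totals = defaultdict(int)
    for start_ms, steps in hourly_buckets:
        day = datetime.fromtimestamp(start_ms / 1000, tz).date()
        totals[day] += steps
    return dict(totals)

# Hypothetical hourly buckets: (bucket start in epoch millis, step count).
buckets = [
    (1637107200000, 200),  # 2021-11-17 00:00 UTC
    (1637193600000, 100),  # 2021-11-18 00:00 UTC
    (1637197200000, 50),   # 2021-11-18 01:00 UTC
]
print(daily_totals(buckets))  # 11/17 -> 200, 11/18 -> 150
```

Assigning each hourly bucket to a day this way avoids the problem of a single multi-day DataPoint being attributed entirely to one daily bucket, at the cost of the extra data noted above.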

Related

How to translate timestamp_ms to regular time stamp?

I'm pretty dumb when it comes to anything like this, and someone I know asked how to translate this: "timestamp_ms": 1606291977223 into a regular time. Anybody know how?
EDIT: I found a converter online lol, why didn't I try that sooner :')
(https://www.epochconverter.com/)
If you're reading the JSON from JavaScript, you can pass the timestamp_ms value to the Date constructor as shown below, which will give you the time in a regular format.
Sample code,
var ms = new Date().getTime(); //1608795435606
console.log(new Date(ms)); //Thu Dec 24 2020 13:07:15 GMT+0530 (India Standard Time)
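For completeness, the same conversion sketched in Python; the value is in milliseconds, so divide by 1000 before handing it to the datetime API:

```python
from datetime import datetime, timezone

timestamp_ms = 1606291977223  # the value from the question
dt = datetime.fromtimestamp(timestamp_ms / 1000, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # 2020-11-25 08:12:57 (UTC)
```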

Are dates/times handled differently in the Google Apps Script V8 Runtime?

I recently switched to the V8 Runtime in Google Apps Script and I'm having a strange problem. (Which I can't seem to reproduce in a minimal format.)
My Google Sheet allows users to enter (in cells) a 'work_start' time and a 'work_end' time. For now I have chosen "9:30" and "18:00". With the old Runtime this becomes:
Sat Dec 30 1899 09:30:00 GMT+0009 (Central European Standard Time)
Sat Dec 30 1899 18:00:00 GMT+0009 (Central European Standard Time)
When I switch to the V8 Runtime, however, I get this:
Sat Dec 30 1899 08:39:21 GMT+0009 (Central European Standard Time)
Sat Dec 30 1899 17:09:21 GMT+0009 (Central European Standard Time)
If I run it again immediately, with the old runtime, it goes back to 9:30 and 18:00.
I have checked my code multiple times and I set these values once (as globals) but I never change them. I only use/read them. I even set a breakpoint on the first line of my main function.
I set up a new project and tried to recreate the problem, but for some reason the problem doesn't occur in a new sheet. I have also tried clearing the formatting of those two cells.
Next, I tried one last thing. I moved the code out of the global space and put it in a function, then had a breakpoint on the next line so I could check my variables:
function Main() {
  var work_start = cal.getRange("G1").getValue(); // work start time ('cal' is a Sheet, defined elsewhere)
  var work_end = cal.getRange("G2").getValue();   // work end time
  var test = 0; // SET A BREAKPOINT HERE
  ...
}
This gives me the same strange results: 8:39:21 and 17:09:21. (Again, only when part of my program. In a new sheet it gives 9:30 and 18:00, as expected.)
Not even sure how to begin to look for an answer to this bug, so any help or guidance will be appreciated.
I also experienced this 50'39'' time difference between the value in the cell and what Apps Script reads. Enabling or disabling V8 didn't make any difference.
This is because Sheets uses the date 1899-12-30 00:00:00 as its reference, while Apps Script (and most Google services) uses Unix time (which starts at 1970-01-01 00:00:00 UTC). Leap seconds are ignored in Unix time, so that may be a factor.
As a workaround, you can:
Get the values with getDisplayValue instead.
Set the cell format to "plain text".
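For intuition about where dates like "Sat Dec 30 1899" come from: Sheets stores a time-only cell as a fraction of a day counted from its serial-number epoch, 1899-12-30, which is why the values above all land on that date. A hedged illustration of that arithmetic in Python (this models Sheets' serial numbers, not the Apps Script API):

```python
from datetime import datetime, timedelta

SHEETS_EPOCH = datetime(1899, 12, 30)  # day zero of Sheets' serial numbers

def serial_to_datetime(serial):
    """Interpret a Sheets serial value: integer part = days, fraction = time of day."""
    return SHEETS_EPOCH + timedelta(days=serial)

# A cell holding 9:30 has serial value 9.5 / 24 (9.5 hours out of a 24-hour day).
print(serial_to_datetime(9.5 / 24))  # 1899-12-30 09:30:00
```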

Update ttl for all records in aerospike

I'm stuck in a situation where I initialised a namespace with
default-ttl set to 30 days. There are about 5 million records with that (30-day) TTL value. My actual requirement is that the TTL should be zero (0), but the 30-day TTL was kept without my noticing.
So now I want to update the old 5 million records with a new TTL value of zero.
I've tried "set-disable-eviction true", but it doesn't help; data is still being removed according to the old TTL value.
How do I get around this? (Also, can I retrieve the removed data? If so, how?)
Any help appreciated.
First, eviction and expiration are two different mechanisms. You can disable evictions in various ways, such as the set-disable-eviction config parameter you've used, but you cannot disable the cleanup of expired records. There's a good knowledge-base FAQ: What are Expiration, Eviction and Stop-Writes?.

Unfortunately, expired records that have been cleaned up are gone if their void time is in the past. If those records were merely evicted (i.e. removed before their void time because the namespace crossed its high-water mark for memory or disk), you can cold restart your node, and the records with a future TTL will come back. They won't return if they were durably deleted or if their TTL is in the past (such records get skipped).
As for resetting TTLs, the easiest way would be through a record UDF applied to all the records in your namespace using a scan.
The UDF for your situation would be very simple:
ttl.lua
function to_zero_ttl(rec)
  local rec_ttl = record.ttl(rec)
  if rec_ttl > 0 then
    record.set_ttl(rec, -1)
    aerospike:update(rec)
  end
end
In AQL:
$ aql
Aerospike Query Client
Version 3.12.0
C Client Version 4.1.4
Copyright 2012-2017 Aerospike. All rights reserved.
aql> register module './ttl.lua'
OK, 1 module added.
aql> execute ttl.to_zero_ttl() on test.foo
Using a Python script would be easier if you have more complex logic, with filters etc.
import time

import aerospike
from aerospike_helpers.operations import operations

# client, namespace and set_name are assumed to be initialised already
zero_ttl_operation = [operations.touch(-1)]
query = client.query(namespace, set_name)
query.add_ops(zero_ttl_operation)
policy = {}
job = query.execute_background(policy)
print(f'executing job {job}')
while True:
    response = client.job_info(job, aerospike.JOB_SCAN, policy={'timeout': 60000})
    print(f'job status: {response}')
    if response['status'] != aerospike.JOB_STATUS_INPROGRESS:
        break
    time.sleep(0.5)
This was tested with Aerospike v6 and Python SDK v7.

On Reddit's API: How can I get top posts of a certain time period as json?

How do I get, for example, a json containing all (as long as all < 100) posts from /r/tifu between Jan 1, 2016 and July 31, 2016?
I've been looking around the documentation, stackoverflow, and /r/redditdev, but I had no luck finding this.
Thanks in advance!
All of reddit's listings are pre-computed, and there are none pre-computed that match those specific requirements.
You may be able to get what you want by adding a timestamp range to the advanced search syntax. This will not be sorted by score, and may or may not continue to work, as it's not officially exposed as an API.
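Hedged accordingly, the search request can be built like this, with epoch-second bounds for the requested window (the cloudsearch timestamp syntax is the unofficial mechanism mentioned above and may stop working at any time):

```python
from datetime import datetime, timezone

# Window: Jan 1, 2016 through July 31, 2016 (the end bound is exclusive here).
start = int(datetime(2016, 1, 1, tzinfo=timezone.utc).timestamp())
end = int(datetime(2016, 8, 1, tzinfo=timezone.utc).timestamp())

url = (
    "https://www.reddit.com/r/tifu/search.json"
    f"?q=timestamp%3A{start}..{end}"
    "&restrict_sr=on&syntax=cloudsearch&limit=100"
)
print(url)
```

Fetching that URL (with a proper User-Agent header, per Reddit's API rules) returns the JSON listing, subject to the caveats above.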

post timestamp in web_submit_data itemdata in loadrunner

In our Hyperion application, we have an Explore button.
Here is its POST:
web_submit_data("explorer", // FIXME: id value parameter
    "Action=https://{host_url}/raframework/browse/explorer",
    "Method=POST",
    "TargetFrame=",
    "RecContentType=application/x-json",
    "Referer=https://{host_url}/workspace/index.jsp?framed=true",
    "Snapshot=t19.inf",
    "Mode=HTML",
    ITEMDATA,
    "Name=class", "Value=com.hyperion.tools.cds.explorer.ExplorerView", ENDITEM,
    "Name=id", "Value=EV1390418511260", ENDITEM, // previously: "Value=EV1389926310921"
    LAST);
This EV1390418511260 comes from
this._rstExplorerViewId = "EV" + (new Date()).getTime();
in the loaded module.js file, I guess, which is a Unix timestamp in milliseconds.
I tried lr_save_timestamp("timestamp", LAST); to correlate the value of id with a Unix timestamp, as in Value=EV{timestamp}. The request is posted, but the response comes back with
Content-Length: 0
X-ORACLE-BPMUI-CSRF: false
I want to try
typedef long time_t;
time_t t;
and correlate something like Value=EV{time(&t)}, but there the value gets URL-encoded and the ASCII values of the special characters are sent instead.
What should I do?
Why use C code to replace capability built into LoadRunner?
See web_save_timestamp_param(), which saves the number of milliseconds since Jan 1, 1970:
web_save_timestamp_param("tStamp", LAST);

web_submit_data("explorer",
    ...
    ITEMDATA,
    "Name=class", "Value=com.hyperion.tools.cds.explorer.ExplorerView", ENDITEM,
    "Name=id", "Value=EV{tStamp}", ENDITEM,
    LAST);
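As a side note, the id the page's JavaScript builds ("EV" + (new Date()).getTime()) is just the prefix EV plus the current Unix time in milliseconds, which is exactly what EV{tStamp} reproduces. A quick sketch of the same construction (in Python, for illustration only), useful for sanity-checking a recorded value offline:

```python
import time

def explorer_view_id(now_ms=None):
    """Mimic the page's JS: "EV" + (new Date()).getTime()."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return f"EV{now_ms}"

# The id captured in the recording decodes back to a millisecond timestamp:
print(explorer_view_id(1390418511260))  # EV1390418511260
```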
If this is your first trip into the Hyperion universe, I would heartily suggest you find the most experienced LoadRunner professional available to support your efforts. You do not need to be struggling with tool use while also working in one of the more difficult environments for any performance-testing tool. Assuming you get someone who has been successful with Hyperion recently, whatever your organization spends (even as high as $300 per hour or more) will be money well spent versus struggling with both tool mechanics and testing the environment.