Why is Google Fit not syncing steps data? - google-fit

I am a developer working on integrating Google Fit with a smart wearable companion app. I am exporting step data to the Google Fit app. It works most of the time, but occasionally it doesn't. All the hourly data is put into a DataSet object, and I insert that dataset through Fitness.getHistoryClient(). The DataSet object has the steps for each hour, and I get a 200 response from the API, but the data is not visible in the Google Fit app.
Could someone help me?
Here is my code:
val dataSource = DataSource.Builder()
    .setAppPackageName(context)
    .setDataType(DataType.TYPE_STEP_COUNT_DELTA)
    .setStreamName(TAG + AppConstants.STEPSCOUNT_FIT.value)
    .setType(DataSource.TYPE_RAW)
    .build()

// Create a data set
var dataSet = DataSet.create(dataSource)
dataSet = fillStepsData(dataSet)

LogHelper.i(TAG, "Inserting the dataset in the History API.")
val lastSignedInAccount = GoogleSignIn.getLastSignedInAccount(context)
return if (lastSignedInAccount != null) {
    Fitness.getHistoryClient(context, lastSignedInAccount)
        .insertData(dataSet)
        .addOnCompleteListener { task ->
            if (task.isSuccessful) {
                // At this point, the data has been inserted and can be read.
                LogHelper.i(TAG, "Data insert was successful!")
                readHistoryData()
            } else {
                LogHelper.e(
                    TAG,
                    "There was a problem inserting the dataset.",
                    task.exception
                )
            }
        }
}
fillStepsData(dataSet) - this function returns a DataSet. The DataSet contains DataPoint objects that hold all the hourly step data.
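For context, here is a minimal sketch of what fillStepsData might look like, matching the description above; hourlySteps and its (startMillis, endMillis, steps) values are illustrative assumptions, not the actual implementation:

// Hypothetical sketch: one DataPoint per hour from pre-aggregated step counts.
// Assumes com.google.android.gms.fitness.data.* and java.util.concurrent.TimeUnit are imported,
// and that hourlySteps is a List<Triple<Long, Long, Int>> of (startMillis, endMillis, steps).
private fun fillStepsData(dataSet: DataSet): DataSet {
    for ((startMillis, endMillis, steps) in hourlySteps) {
        val dataPoint = dataSet.createDataPoint()
            .setTimeInterval(startMillis, endMillis, TimeUnit.MILLISECONDS)
        dataPoint.getValue(Field.FIELD_STEPS).setInt(steps)
        dataSet.add(dataPoint)
    }
    return dataSet
}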

Related

Azure ADF - Get field list of .csv file from lookup activity

Context: Azure ADF. Brief process description:
Get a list of the fields defined in the first row of a .csv (blob) file. This is the first step: detect the fields.
The 2nd step would be a comparison against the actual columns of a SQL table.
The 3rd step is a stored procedure execution to perform the ALTER TABLE task, finishing with a (customized) table containing all the fields needed to successfully load the .csv file into the SQL table.
To begin my ADF pipeline, I set up a Lookup activity that queries the first line of my blob file, with the "First row only" flag = ON. As a second pipeline activity, an "Append Variable" task, I would like to get all the .csv fields (first row) retrieved from the Lookup activity, as a list.
This is where it becomes a nightmare.
As far as I know, with dynamic content I can get an array with all the values (in a format like {"field1_name":"field1_value_1st_row", "field2_name":"field2_value_1st_row", etc.})
with something like #activity('Lookup1').output.firstrow.
Or any array element with #activity('Lookup1').output.firstrow.<element_name>,
but I can't figure out how to get a list of all field names (keys?) of the array.
I will appreciate any advice, many thanks!
I would keep the Lookup activity part, since it seems that you are familiar with it.
You could use an Azure Function (HTTP trigger) to get the key list of the firstrow JSON object. For example, suppose your JSON object looks like this, as you mentioned in your question:
{"field1_name":"field1_value_1st_row", "field2_name":"field2_value_1st_row"}
Azure Function code:
module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');
    var array = [];
    for (var key in req.body) {
        array.push(key);
    }
    context.res = {
        body: { "keyValue": array }
    };
};
Test output (for the sample input above): {"keyValue":["field1_name","field2_name"]}
Then use an Azure Function activity to get the output:
#activity('<AzureFunctionActivityName>').keyValue
Use a ForEach activity to loop over the keyValue array:
#item()
Still based on the above sample input data, please refer to my sample code:
dct = {"field1_name": "field1_value_1st_row", "field2_name": "field2_value_1st_row"}
list = []
for key in dct.keys():
list.append(key)
print(list)
dicOutput = {"keys": list}
print(dicOutput)
Have you considered doing this in ADF data flow? You would map the incoming fields to a SQL dataset without a target schema. Define a new table name in the dataset definition and then map the incoming fields from your CSV to a new target table schema definition. ADF will write the rows to a new table using that file's schema.

CollectionView.reloadData() outputs cells in incorrect order

I am working on an app that requires a sync to the server after logging in, to get all the activities the user has created and saved to the server. Currently, when the user logs in, a getActivity() function is called that makes an API request and returns a response, which is then handled.
Say the user has 4 activities saved on the server in this order (the order is determined by the time the activity was created/saved):
Test
Bob
cvb
Testing
Looking at JSONHandler.getActivityResponse, it appears as though the results are in the correct order. If the request was successful, then on the home page where these activities are to be displayed, I currently loop through them like so:
WebAPIHandler.shared.getActivityRequest(completion: { success, results in
    DispatchQueue.main.async {
        if success {
            for _ in results! {
                guard let managedObjectContext = self.managedObjectContext else { return }
                let activity = Activity(context: managedObjectContext)
                activity.name = results![WebAPIHandler.shared.idCount].name
                print("activity name is - \(activity.name)")
                WebAPIHandler.shared.idCount += 1
            }
        }
    }
})
And the print within the for loop is also outputting in the expected order:
activity name is - Optional("Test")
activity name is - Optional("Bob")
activity name is - Optional("cvb")
activity name is - Optional("Testing")
The collection view does then insert new cells, but seemingly in the wrong order. I'm using a carousel layout on the home page, and the 'cvb' object, for example, appears first in the list, while 'Bob' is third. I am using the following:
func controller(_ controller: NSFetchedResultsController<NSFetchRequestResult>, didChange anObject: Any, at indexPath: IndexPath?, for type: NSFetchedResultsChangeType, newIndexPath: IndexPath?)
{
    switch (type)
    {
    case .insert:
        if var indexPath = newIndexPath
        {
//            var itemCount = 0
//            var arrayWithIndexPaths: [IndexPath] = []
//
//            for _ in 0..<(WebAPIHandler.shared.idCount)
//            {
//                itemCount += 1
//
//                arrayWithIndexPaths.append(IndexPath(item: itemCount - 1, section: 0))
//                print("itemCount = \(itemCount)")
//            }
            print("Insert object")
//            walkThroughCollectionView.insertItems(at: arrayWithIndexPaths)
            walkThroughCollectionView.reloadData()
        }
You can see why I've tried to use collectionView.insertItems(), but that caused an error stating:
Invalid update: invalid number of items in section 0. The number of items contained in an existing section after the update (4) must be equal to the number of items contained in that section before the update (4), plus or minus the number of items inserted or deleted from that section (4 inserted, 0 deleted)
I saw a lot of other answers mentioning how reloadData() would fix the issue, but I'm really stuck at this point. I've been using Swift for several months now, and this is the first time I'm truly at a loss. What I also realised is that the order displayed in the carousel is also different from a separate viewController which is passed the same data. I just have no idea why the results return in the correct order but are then displayed in an incorrect order. Is there a way to sort data in the collectionView after calling reloadData(), or am I looking at this from the wrong angle?
Any help would be much appreciated, cheers!
The order of the collection view is specified by the sort descriptor(s) of the fetched results controller.
Usually the workflow for inserting a new NSManagedObject is:
1. Insert the new object into the managed object context.
2. Save the context. This calls the delegate methods controllerWillChangeContent(_:), controller(_:didChange:at:for:newIndexPath:), etc.
3. In controller(_:didChange:at:for:newIndexPath:), insert the cell into the collection view with insertItems(at:), nothing else. Do not call reloadData() in this method (see the sketch below).
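A minimal sketch of step 3, assuming walkThroughCollectionView is the collection view driven by the fetched results controller (the name is borrowed from the question's code); it illustrates the approach above rather than the asker's exact implementation:

func controller(_ controller: NSFetchedResultsController<NSFetchRequestResult>,
                didChange anObject: Any,
                at indexPath: IndexPath?,
                for type: NSFetchedResultsChangeType,
                newIndexPath: IndexPath?) {
    switch type {
    case .insert:
        // Insert only the new cell; the fetched results controller's
        // sort descriptors determine where it appears.
        if let newIndexPath = newIndexPath {
            walkThroughCollectionView.insertItems(at: [newIndexPath])
        }
    case .delete:
        if let indexPath = indexPath {
            walkThroughCollectionView.deleteItems(at: [indexPath])
        }
    default:
        break
    }
}

The cell order then comes from the fetch request's sort descriptors, so sort by the same creation/save date the server uses to order the activities.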

How to update mysql database from csv files in groovy-grails?

I have a table in my database which needs to be updated with values for some rows and columns from a CSV file (this file is outside of the Grails application). The CSV file contains a large set of data that maps addresses to specific cities. Some of the addresses in my application have the wrong city. So I want to get a city from the database (the Grails application DB), compare it with the city in the CSV file, map the address to it, and add that address to the application database.
What is the best approach?
For Grails 3, use https://bintray.com/sachinverma/plugins/org.grails.plugins:csv to parse the CSV; add the following to build.gradle. The plugin is also available for Grails 2.
repositories {
    maven { url "https://bintray.com/sachinverma/plugins/org.grails.plugins:csv" }
}

dependencies {
    compile "org.grails.plugins:csv:1+"
}
Then in your service, use it like this:
def is
try {
    is = params.csvfile.getInputStream()
    def csvMapReader = new CSVMapReader( new InputStreamReader( is ) )
    csvMapReader.fieldKeys = ["city", "address1", "address2"]
    csvMapReader.eachWithIndex { map, idx ->
        def dbEntry = DomainObject.findByAddress1AndAddress2( map.address1, map.address2 )
        if ( map.city != dbEntry.city ) {
            // assuming we're just updating the city on the current entry?
            dbEntry.city = map.city
            dbEntry.save()
        }
        // do whatever logic
    }
} finally {
    is?.close()
}
This is of course a simplified version, as I don't know your CSV or schema layout.

LightSwitch HTML client fails to convert circular structure to JSON

I'm trying to insert new data into the DB when a user scans a barcode into a field. When I hit save on the screen, it says "failed to convert circular structure to JSON".
var report = myapp.activeDataWorkspace.BlanccoData.BMCReports.addNew();
report.c_Date = Date.now();
report.IsScannedReport = true;
if (contentItem.screen.ScanSSN == true) {
    report.SSN = contentItem.value;
}
var system = myapp.activeDataWorkspace.BlanccoData.BMCSystemInfo.addNew();
// system.Report = report;
system.Barcode = contentItem.screen.Barcode;
I think the commented-out line is throwing the exception, but I need to reference it.
Thanks
Have you considered that you may have a circular relationship in your database that is reflected in your data source?

Excluding Content From SQL Bulk Insert

I want to import my IIS logs into SQL for reporting using Bulk Insert, but the comment lines - the ones that start with a # - cause a problem because those lines do not have the same number of fields as the data lines.
If I manually delete the comments, I can perform a bulk insert.
Is there a way to perform a bulk insert while excluding lines based on a match, such as: any line that begins with a "#"?
Thanks.
The approach I generally use with BULK INSERT and irregular data is to push the incoming data into a temporary staging table with a single VARCHAR(MAX) column.
Once it's in there, I can use more flexible decision-making tools like SQL queries and string functions to decide which rows I want to select out of the staging table and bring into my main tables. This is also helpful because BULK INSERT can be maddeningly cryptic about why and how it fails on a specific file.
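For illustration, a minimal T-SQL sketch of that staging approach; the table names, file path, and terminator settings are assumptions for the example, not details from the question:

-- Staging table: a single wide column that receives each raw log line.
CREATE TABLE dbo.IisLogStaging (RawLine VARCHAR(MAX));

-- Tabs do not occur in space-delimited IIS logs, so with a tab field terminator
-- each whole line lands in the single column. The file path is a placeholder.
BULK INSERT dbo.IisLogStaging
FROM 'C:\logs\u_ex200101.log'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\r\n');

-- Ordinary SQL then decides what to keep: the # comment lines are dropped here,
-- and the surviving rows can be split into the real columns of the reporting table.
SELECT RawLine
FROM dbo.IisLogStaging
WHERE RawLine NOT LIKE '#%';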
The only other option I can think of is using pre-upload scripting to trim comments and other lines that don't fit your tabular criteria before you do your bulk insert.
I recommend using logparser.exe instead. LogParser has some pretty neat capabilities on its own, but it can also be used to format the IIS log to be properly imported by SQL Server.
Microsoft has a tool called "PrepWebLog" (http://support.microsoft.com/kb/296093) which strips out these hash/pound characters; however, I'm running it now (using a PowerShell script for multiple files) and am finding its performance intolerably slow.
I think it'd be faster if I wrote a C# program (or maybe even a macro).
Update: PrepWebLog just crashed on me. I'd avoid it.
Update #2: I looked at PowerShell's Get-Content and Set-Content commands but didn't like the syntax and possible performance, so I wrote this little C# console app:
if (args.Length == 2)
{
    string path = args[0];
    string outPath = args[1];
    Regex hashString = new Regex("^#.+\r\n", RegexOptions.Multiline | RegexOptions.Compiled);
    foreach (string file in Directory.GetFiles(path, "*.log"))
    {
        string data;
        using (StreamReader sr = new StreamReader(file))
        {
            data = sr.ReadToEnd();
        }
        string output = hashString.Replace(data, string.Empty);
        using (StreamWriter sw = new StreamWriter(Path.Combine(outPath, new FileInfo(file).Name), false))
        {
            sw.Write(output);
        }
    }
}
else
{
    Console.WriteLine("Source and Destination Log Path required or too many arguments");
}
It's pretty quick.
Following up on what PeterX wrote, I modified the application to handle large log files, since anything sufficiently large would create an out-of-memory exception. Also, since we're only interested in whether or not a line starts with a hash, we can just use the StartsWith() method on the read operation.
class Program
{
    static void Main(string[] args)
    {
        if (args.Length == 2)
        {
            string path = args[0];
            string outPath = args[1];
            string line;
            foreach (string file in Directory.GetFiles(path, "*.log"))
            {
                using (StreamReader sr = new StreamReader(file))
                {
                    using (StreamWriter sw = new StreamWriter(Path.Combine(outPath, new FileInfo(file).Name), false))
                    {
                        while ((line = sr.ReadLine()) != null)
                        {
                            if (!line.StartsWith("#"))
                            {
                                sw.WriteLine(line);
                            }
                        }
                    }
                }
            }
        }
        else
        {
            Console.WriteLine("Source and Destination Log Path required or too many arguments");
        }
    }
}