Aggregating by some dataSources results in error: datasource not found or not readable - google-fit

Aggregating by some dataSources results in this error:
'error': [{
'message': 'datasource not found or not readable:
derived:com.google.activity.segment:com.google.android.fit:Sony:H8216:cd3b0cfa:top_level',
'domain': 'global',
'reason': 'forbidden'}]
From the endpoint GET googleapis.com/fitness/v1/users/me/dataSources, using any of the returned dataSources that have a "device" field to request data causes the above problem. If I aggregate by any other data source, everything works properly.
Question: What could be the problem?
Example:
these work:
'raw:com.google.step_count.cadence:nl.appyhapps.healthsync:walking step cadence '
'raw:com.google.step_count.delta:nl.appyhapps.healthsync:walking steps'
'raw:com.google.step_count.delta:nl.appyhapps.healthsync:HealthSync - step count'
'derived:com.google.step_count.delta:com.google.android.gms:merge_step_deltas'
'derived:com.google.step_count.cadence:com.google.android.gms:merged'
'derived:com.google.step_count.delta:com.google.android.gms:estimated_steps'
these cause the error above:
'raw:com.google.step_count.cumulative:Google:Pixel 5:34812dc3:Step Counter'
'raw:com.google.step_count.cumulative:samsung:SM-R870:6130108e:Samsung Step Counter'
'derived:com.google.step_count.delta:com.google.android.fit:Google:Pixel 5:34812dc3:top_level'
"dataStreamId": 'derived:com.google.step_count.delta:com.google.android.fit:Google:Pixel 5:34812dc3:top_level'
"device": {
"uid": <str>,
"type": <str>,
"version": <str>,
"model": <str>,
"manufacturer": <str>
}
}

Using Route 53 CLI to delete records

I'm having trouble using the AWS CLI to delete Route 53 records. I have a list of hundreds of domains and each one needs both 'A' records deleted. I wanted to do this using the CLI to save time, but I can't get the functionality working.
For example, let's say I have the following domain and I want to delete both 'A' records:
I'm using boto3 here, but it is the same AWS CLI API that I can't get working (https://docs.aws.amazon.com/cli/latest/reference/route53/change-resource-record-sets.html). My issue is somewhere in the JSON for this API call:
import boto3

client = boto3.client('route53')

response = client.change_resource_record_sets(
    HostedZoneId='ABC123DEF456',
    ChangeBatch={
        'Comment': 'deleting A records for domains',
        'Changes': [
            {
                'Action': 'DELETE',
                'ResourceRecordSet': {
                    'Name': 'example.com',
                    'Type': 'A',
                    'Region': 'us-east-1',
                    'ResourceRecords': [
                        {
                            'Value': '1.2.3.4'
                        }
                    ],
                    'AliasTarget': {
                        'HostedZoneId': 'ABC123DEF456',
                        'DNSName': 'example.com',
                        'EvaluateTargetHealth': False
                    }
                }
            }
        ]
    }
)
The error I am getting is:
InvalidInput: An error occurred (InvalidInput) when calling the ChangeResourceRecordSets operation: Invalid request: Expected exactly one of [AliasTarget, all of [TTL, and ResourceRecords], or TrafficPolicyInstanceId], but found more than one in Change with [Action=DELETE, Name=example.com, Type=A, SetIdentifier=null]
I think there is some confusion between a simple record of type A and a simple alias record of type A. Namely, a simple alias record should not have ResourceRecords.
To check how they are described in your case, you can use the following command:
aws route53 list-resource-record-sets --hosted-zone-id <your-zone-id>
The output of the above command should be helpful in constructing your DELETE.
Below are example outputs from my Route 53:
simple record
{
"Name": "<simple-a.example.com.>",
"Type": "A",
"TTL": 300,
"ResourceRecords": [
{
"Value": "1.2.3.4"
}
]
}
simple record with alias
{
"Name": "<simple-alias.example.com.>",
"Type": "A",
"AliasTarget": {
"HostedZoneId": "Z06990762X86XLR2ZGTK4",
"DNSName": "<example>.",
"EvaluateTargetHealth": true
}
},
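Based on the above, a sketch of what the corrected DELETE might look like in boto3, for the plain (non-alias) case: pass TTL and ResourceRecords and omit AliasTarget entirely (for an alias record it would be the reverse). The zone ID and record values are placeholders; copy the real ones verbatim from the list-resource-record-sets output.

import boto3

client = boto3.client('route53')

# Deleting a plain (non-alias) A record: TTL + ResourceRecords, no AliasTarget.
# The values must match the existing record exactly, as returned by
# list-resource-record-sets.
response = client.change_resource_record_sets(
    HostedZoneId='ABC123DEF456',  # placeholder zone ID
    ChangeBatch={
        'Comment': 'deleting A records for domains',
        'Changes': [
            {
                'Action': 'DELETE',
                'ResourceRecordSet': {
                    'Name': 'example.com.',
                    'Type': 'A',
                    'TTL': 300,
                    'ResourceRecords': [{'Value': '1.2.3.4'}],
                },
            }
        ],
    },
)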

JMeter: Trying to verify two or more values in a randomly assigned json path

I have a JSON response that looks like this:
{
"results": [
{
"entityType": "PERSON",
"id": 679,
"graphId": "679.PERSON",
"details": [
{
"entityType": "PERSON",
"id": 679,
"graphId": "679.PERSON",
"parentId": 594,
"role": "Unspecified Person",
"relatedEntityType": "DOCUMENT",
"relatedId": 058,
"relatedGraphId": "058.DOCUMENT",
"relatedParentId": null
}
]
},
{
"entityType": "PERSON",
"id": 69678,
"graphId": "69678.PERSON",
"details": [
{
"entityType": "PERSON",
"id": 678,
"graphId": "678.PERSON",
"parentId": 594,
"role": "UNKNOWN",
"relatedEntityType": "DOCUMENT",
"relatedId": 145,
"relatedGraphId": "145.DOCUMENT",
"relatedParentId": null
}
]
}
]
}
The problem with this JSON response is that $.results[0] is not always the same, and it can have dozens of results. I know I can do individual JSON Assertion calls where I query the JSON with a wildcard:
$.results[*].details[0].entityType
$.results[*].details[0].relatedEntityType
etc
However, I need to verify that both "PERSON" and "DOCUMENT" match correctly in the same path in one API call, since the results come back in a different path each time.
Is there a way to do multiple calls in one JSON Assertion or am I using the wrong tool?
Thanks in advance for any help.
-Grav
I don't think the JSON Assertion is flexible enough; consider switching to the JSR223 Assertion, which gives you full flexibility in defining whatever pass/fail criteria you need.
Example code which checks that:
all attribute values which match the $.results[*].details[0].entityType query are equal to PERSON
and all attribute values which match $.results[*].details[0].relatedEntityType are equal to DOCUMENT
would be:
// Collect every entityType value that is NOT 'PERSON'
def entityTypes = com.jayway.jsonpath.JsonPath.read(prev.getResponseDataAsString(), '$.results[*].details[0].entityType').collect().findAll { !it.equals('PERSON') }
// Collect every relatedEntityType value that is NOT 'DOCUMENT'
def relatedEntityTypes = com.jayway.jsonpath.JsonPath.read(prev.getResponseDataAsString(), '$.results[*].details[0].relatedEntityType').collect().findAll { !it.equals('DOCUMENT') }
if (!entityTypes.isEmpty()) {
    SampleResult.setSuccessful(false)
    SampleResult.setResponseMessage('Entity type mismatch, one or more entries are not "PERSON": ' + entityTypes)
}
if (!relatedEntityTypes.isEmpty()) {
    SampleResult.setSuccessful(false)
    SampleResult.setResponseMessage('Related entity type mismatch, one or more entries are not "DOCUMENT": ' + relatedEntityTypes)
}
More information:
SampleResult class JavaDoc
Groovy: Working with collections
Scripting JMeter Assertions in Groovy - A Tutorial
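Note that the script reads the parent sampler's response via prev, so the JSR223 Assertion should be added as a child of the request whose response you want to check.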

Always raises error "1:1 error Parse error on line 1: " t y p e "" when analysing results with geojsonhint

I wanted to use osmtogeojson in order to upload OSM data to Mapbox. However, Mapbox tells me "Unknown filetype", and when I analyse any result from osmtogeojson with geojsonhint, I always get:
1:1 error Parse error on line 1:
" t y p e "
^
Expecting 'STRING', 'NUMBER', 'NULL', 'TRUE', 'FALSE', '{', '[', got 'INVALID'
Here is an example of data I obtained this way:
{
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"id": "node/1853272897",
"properties": {
"timestamp": "2017-03-11T12:50:59Z",
"version": "4",
"changeset": "46761722",
"user": "sbiribizio",
"uid": "354284",
"name": "Capo Linaro",
"natural": "cape",
"wikidata": "Q3657144",
"wikipedia": "it:Capo Linaro",
"id": "node/1853272897"
},
"geometry": {
"type": "Point",
"coordinates": [
11.8357546,
42.028944
]
}
}
]
}
So I don't know where the problem comes from (I thought of different encodings maybe).
So, yes, I just had to save it in UTF-8. However, it still doesn't work in Mapbox, but the question was about geojsonhint verification, so this is the correct answer!
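The " t y p e " symptom (a space between every character) is typical of a UTF-16 file being read as if it were UTF-8/ASCII: the UTF-16 zero bytes show up as gaps. A minimal sketch of the re-encoding step, assuming the file name and a UTF-16 source encoding:

# Hypothetical file name; adjust to your actual osmtogeojson output.
src = "export.geojson"

# Read assuming UTF-16 (what many Windows editors and shells produce),
# then write back out as UTF-8 so geojsonhint/Mapbox can parse it.
with open(src, encoding="utf-16") as f:
    text = f.read()

with open(src, "w", encoding="utf-8") as f:
    f.write(text)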

Data Factory: AzureSQL in- and output for pipeline activity type AzureMLBatchExecution

In Azure Data Factory, I'm trying to call an Azure Machine Learning model from a Data Factory pipeline. I want to use an Azure SQL table as input and another Azure SQL table as output.
First I deployed a Machine Learning (classic) web service. Then I created an Azure Data Factory pipeline, using a LinkedService (type = 'AzureML', using the Request URI and API key of the ML web service) and an input and an output dataset (type 'AzureSqlTable').
Deployment and provisioning succeeded. The pipeline starts as scheduled, but keeps 'Running' without any result. The pipeline activity is not shown in the Monitor & Manage Activity Windows.
On different sites and tutorials, I only find JSON scripts using the activity type 'AzureMLBatchExecution' with Blob input and output. I want to use Azure SQL input and output, but I can't get this working.
Can someone provide a sample JSON-script or tell me what’s possibly wrong with the code below?
Thanks!
{
"name": "Predictive_ML_Pipeline",
"properties": {
"description": "use MyAzureML model",
"activities": [
{
"type": "AzureMLBatchExecution",
"typeProperties": {},
"inputs": [
{
"name": "AzureSQLDataset_ML_Input"
}
],
"outputs": [
{
"name": "AzureSQLDataset_ML_Output"
}
],
"policy": {
"timeout": "02:00:00",
"concurrency": 3,
"executionPriorityOrder": "NewestFirst",
"retry": 1
},
"scheduler": {
"frequency": "Week",
"interval": 1
},
"name": "My_ML_Activity",
"description": "prediction analysis on ML batch input",
"linkedServiceName": "AzureMLLinkedService"
}
],
"start": "2017-04-04T09:00:00Z",
"end": "2017-04-04T18:00:00Z",
"isPaused": false,
"hubName": "myml_hub",
"pipelineMode": "Scheduled"
}
}
With a little help from a Microsoft technician, I've got this working. The JSON script above only needed changes in the schedule section:
"start": "2017-04-01T08:45:00Z",
"end": "2017-04-09T18:00:00Z",
A pipeline is active only between its start time and its end time. Because the scheduler is set to weekly, the pipeline is triggered at the start of the week, and that date must fall within the start and end dates. The original window (2017-04-04, 09:00 to 18:00) was only nine hours long and never contained the start of a week, so the activity never ran; widening it to 2017-04-01 through 2017-04-09 does. For more details about scheduling, see: https://learn.microsoft.com/en-us/azure/data-factory/data-factory-scheduling-and-execution
The Azure SQL Input dataset should look like this:
{
"name": "AzureSQLDataset_ML_Input",
"properties": {
"published": false,
"type": "AzureSqlTable",
"linkedServiceName": "SRC_SQL_Azure",
"typeProperties": {
"tableName": "dbo.Azure_ML_Input"
},
"availability": {
"frequency": "Week",
"interval": 1
},
"external": true,
"policy": {
"externalData": {
"retryInterval": "00:01:00",
"retryTimeout": "00:10:00",
"maximumRetry": 3
}
}
}
}
I added the external and policy properties to this dataset (see script above) and after that, it worked.

Unexpected symbol: COMMA error from json file

I'm using the Talend ETL tool to extract data from JSON files and store it in a MySQL database.
But I get an error while reading the very first JSON file. For reading JSON I'm using the tExtractJSONFields component.
I'm sure the configuration set up in the Talend ETL tool is right. I believe there is some problem in the JSON file.
While extracting, the component shows an error like this:
Exception in component tExtractJSONFields_1
javax.xml.stream.XMLStreamException: java.io.IOException: Unexpected symbol: COMMA
at de.odysseus.staxon.base.AbstractXMLStreamReader.initialize(AbstractXMLStreamReader.java:218)
at de.odysseus.staxon.json.JsonXMLStreamReader.<init>(JsonXMLStreamReader.java:65)
at de.odysseus.staxon.json.JsonXMLInputFactory.createXMLStreamReader(JsonXMLInputFactory.java:148)
at de.odysseus.staxon.json.JsonXMLInputFactory.createXMLStreamReader(JsonXMLInputFactory.java:44)
at de.odysseus.staxon.base.AbstractXMLInputFactory.createXMLEventReader(AbstractXMLInputFactory.java:118)
I don't know how to deal with JSON, so according to this error, can anyone help me find where the error in the JSON file could be?
Is there a value passed as NULL, or something else?
Sample JSON
[
[, {
"tstamp": "123456",
"event": "tgegfght",
"is_duplicate": false,
"farm": "dyhetygdht",
"uid": "tutyvbrtyvtrvy",
"clientip": "52351365136",
"device_os_label": "MICROSOFT_WINDOWS_7",
"device_browser_label": "MOZILLA_FIREFOX",
"geo_country_code": "MA",
"geo_region_code": "55",
"geo_city_name_normalized": "agadir",
"referer": "www.abc.com",
"txn": "etvevv5r",
"txn_isnew": true,
"publisher_id": 126,
"adspot_id": 11179502,
"ad_spot": 5188,
"format_id": 1611,
"misc": {
"PUBLISHER_FOLDER": "retvrect",
"NO_PROMO": "rctrctrc",
"SECTION": "evtrevr",
"U_COMMON_ALLOW": "0",
"U_Auth": "0"
},
"handler": "uint"
}, , ]
Thanks in advance!!
You have extra, stray commas in your sample JSON.
Your sample JSON should look like:
[{
"tstamp": "123456",
"event": "tgegfght",
"is_duplicate": false,
"farm": "dyhetygdht",
"uid": "tutyvbrtyvtrvy",
"clientip": "52351365136",
"device_os_label": "MICROSOFT_WINDOWS_7",
"device_browser_label": "MOZILLA_FIREFOX",
"geo_country_code": "MA",
"geo_region_code": "55",
"geo_city_name_normalized": "agadir",
"referer": "www.abc.com",
"txn": "etvevv5r",
"txn_isnew": true,
"publisher_id": 126,
"adspot_id": 11179502,
"ad_spot": 5188,
"format_id": 1611,
"misc": {
"PUBLISHER_FOLDER": "retvrect",
"NO_PROMO": "rctrctrc",
"SECTION": "evtrevr",
"U_COMMON_ALLOW": "0",
"U_Auth": "0"
},
"handler": "uint"
}]
OR
[
{
"somethinghere": "its value"
},
{
"somethingelse": "its value"
}
]
Your sample JSON is not valid, due to the spurious extra commas on the second and last lines. JSON only allows commas BETWEEN elements of an array or object, and empty elements are not allowed.
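As a quick check before feeding a file into Talend, a minimal sketch that validates it with Python's built-in json module (the file name is a placeholder):

import json

# Hypothetical input file; point this at the JSON that Talend rejects.
path = "sample.json"

with open(path, encoding="utf-8") as f:
    raw = f.read()

try:
    json.loads(raw)
    print("Valid JSON")
except json.JSONDecodeError as e:
    # For the sample above this reports the stray comma on line 2,
    # the same element the Talend/StAXON parser chokes on.
    print(f"Invalid JSON at line {e.lineno}, column {e.colno}: {e.msg}")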