I'm new to Alexa development using the ASK CLI, and I ran into a problem while going through the dialog-delegate-starter. In the JSON model there is a field called elicitation, whose value is "Elicit.Slot.251925459829.983270759031", which seems to be some sort of auto-generated ID. I imagine creating my own dialog intents and having to fill this out manually. How is this ID generated, and where does one find it?
"dialog": {
"intents": [
{
"name": "factIntent",
"confirmationRequired": false,
"slots": [
{
"name": "city",
"type": "cityName",
"elicitationRequired": true,
"confirmationRequired": false,
"prompts": {
"elicitation": "Elicit.Slot.251925459829.983270759031"
}
}
]
}
]
},
You don't really need to fill those in manually: the developer console auto-generates those IDs when you build the dialog model, and the value is just a reference to a prompt defined elsewhere in the same interaction model JSON. Even if you use "manual delegation" you'll have them available.
Maybe this will help: https://developer.amazon.com/es-mx/docs/custom-skills/delegate-dialog-to-alexa.html
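If you author the model JSON by hand, the ID can be any unique string you like, as long as it matches a prompt defined in the model's prompts section. A minimal sketch (the ID and wording here are invented for illustration, not taken from the starter):
"prompts": [
    {
        "id": "Elicit.Slot.city",
        "variations": [
            {
                "type": "PlainText",
                "value": "Which city do you want a fact about?"
            }
        ]
    }
]
The slot then points at it with "elicitation": "Elicit.Slot.city".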
I'm wondering if I can create a "dynamic mapping" within an Elasticsearch index. The problem I am trying to solve is the following: I have a schema with an attribute containing an object that can differ greatly between records. I would like to mirror this data within Elasticsearch if possible, but I believe that automatic mapping may get in the way.
Imagine a scenario where I have a schema like the following:
{
name: string
origin: string
payload: object // can be of any type / schema
}
Is it possible to create a mapping that supports this? I do not need to query the records by this payload attribute, but it would be great if I could.
Note that I have checked the documentation, but I am confused about whether what Elastic calls dynamic mapping is what I am looking for.
It's certainly possible to specify which queryable fields you expect the payload to contain and what those fields' mappings should be.
Let's say each doc will include the fields payload.livemode and payload.created_at. If these are the only two fields you'll want to perform queries on, and you'd like to disable dynamic, index-time mappings autogenerated by Elasticsearch for the rest of the fields, you can use dynamic templates like so:
PUT my-payload-index
{
"mappings": {
"dynamic_templates": [
{
"variable_payload": {
"path_match": "payload",
"mapping": {
"type": "object",
"dynamic": false,
"properties": {
"created_at": {
"type": "date",
"format": "yyyy-MM-dd HH:mm:ss"
},
"livemode": {
"type": "boolean"
}
}
}
}
}
],
"properties": {
"name": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
},
"origin": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
}
}
}
}
Then, as you ingest your docs:
POST my-payload-index/_doc
{
"name": "abc",
"origin": "web.dev",
"payload": {
"created_at": "2021-04-05 08:00:00",
"livemode": false,
"abc":"def"
}
}
POST my-payload-index/_doc
{
"name": "abc",
"origin": "web.dev",
"payload": {
"created_at": "2021-04-05 08:00:00",
"livemode": true,
"modified_at": "2021-04-05 09:00:00"
}
}
and verify with
GET my-payload-index/_mapping
no new mappings will be generated for the fields payload.abc or payload.modified_at.
Not only that — the new fields will also be ignored, as per the documentation:
These fields will not be indexed or searchable, but will still appear in the _source field of returned hits.
Side note: fields that are neither indexed nor searchable behave much as if the containing object had been mapped with "enabled": false.
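Along those lines, if you'll never query the payload at all, an even simpler alternative would be to disable the object outright; a minimal sketch of such a mapping, reusing the example index name:
PUT my-payload-index
{
  "mappings": {
    "properties": {
      "payload": {
        "type": "object",
        "enabled": false
      }
    }
  }
}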
The Big Picture
Working with variable contents of a single, top-level object is quite standard. Take, for instance, the Stripe event object: each event has an id, an api_version and a few other shared params. Then there's the data object, which is analogous to your payload field.
Now, all is fine, until you need to aggregate on the contents of your payload. See, since the content is variable, so are the data paths / accessors. But wildcards in aggregation paths don't work in Elasticsearch. Scripts do but are onerous to maintain.
Back to Stripe. They partially solved this through what they call polymorphic, typed hashes, as discussed in their blog post on API design. A pretty neat approach that's worth emulating.
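For illustration, the gist of a typed hash is that the variable part carries its own type marker, so consumers know which schema to apply. A sketch (field values invented, loosely modelled on Stripe's public event payloads):
{
    "id": "evt_123",
    "type": "invoice.created",
    "data": {
        "object": {
            "object": "invoice",
            "amount_due": 1000
        }
    }
}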
P.S. I discuss dynamic templates in more detail in the chapter "Mapping Automation" of my ES Handbook.
I am reverse engineering an app that sends queries to
SOMESERVERNAME.analysis.windows.net/public/reports/querydata via an HTTP POST of a JSON-structured query.
Some initial lines of a sample query are at the end of this message.
I can't find any documentation on this anywhere; I don't know if this is some secret API or what. I would ultimately like to ignore the aggregations altogether and just dump the raw data, which seems to sit in some flat-file-type container on the back end, but without API documentation I'm stuck re-running the handful of very basic queries I've been able to intercept.
Note: this app is an embedded analytics page created with Power BI, but the only REST API I can find for Power BI has nothing to do with querying; it only covers basic object management.
Thanks!
{
"version": "1.0.0",
"queries": [
{
"Query": {
"Commands": [
{
"SemanticQueryDataShapeCommand": {
"Query": {
"Version": 2,
"From": [
{
"Name": "s",
"Entity": "Sheet1"
}
],
"Select": [
{
"Aggregation": {
"Expression": {
"Column": {
"Expression": {
"SourceRef": {
"Source": "s"
}
},
"Property": "Total"
}
},
"Function": 0
},
"Name": "Sum(Sheet1.Total)"
}
],
"Where": [
{
"Condition": {
"In": {
"Expressions": [
{
"Column": {
"Expression": {
"SourceRef": {
"Source": "s"
}
},
"Property": "Year"
}
}
],
"Values": [
[
{
"Literal": {
"Value": "'2018'"
}
}
]
]
}
}
},
............
I have built a client that scrapes data off a specific Power BI report using the same API; you can probably adapt it to your use case. Maybe we can even abstract the code into a more generalized Power BI client!
Having tinkered with the API for two days, I realised that there are many ways the data can be formatted:
"nested"/multidimensional data can be unflattened, flattened by 1 degree, etc.
a primary "table" of a result dataset (in data.PH) can reference others (in data.SH)
The basics are as follows:
A dataset is structured like a multidimensional table, with cells containing values.
In a set of cells, the first always has a field S that contains the schema for it and all subsequent cells.
The schema maps a field of each cell's object to a selection from your query, e.g. the G0 field to the queried column age.
My client seems to work only with a specific type of query (SemanticQueryDataShapeCommand), a specific number of dimensions and a specific column marked as primary (via Binding.Primary). But maybe that helps! https://github.com/derhuerst/fetch-bvg-occupancy/blob/1ebb864b1ff7130f9d2f0ab031c6d78bcabdd633/lib/parse-dataset.js
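To make the schema idea above concrete, here's a minimal decoding sketch. It assumes a flat array of cells as described; the real responses nest this much deeper, and the field names come from my observations, not any official docs:
function decodeCells(cells, columnNames) {
    // The first cell carries an S field: the schema telling you which
    // cell property (G0, G1, ...) holds which queried column.
    var schema = cells[0].S; // e.g. [{ N: 'G0' }, { N: 'G1' }]
    return cells.map(function (cell) {
        var row = {};
        schema.forEach(function (field, i) {
            row[columnNames[i]] = cell[field.N];
        });
        return row;
    });
}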
The only documented way to use this API is through the ADOMD.NET or OleDb provider.
If you want to send a DAX/MDX query and retrieve data programmatically, there's a sample of how to front-end the service with a simple REST API here.
So I'm trying to load the data received from a web service into a Sencha Touch 2 store.
The data is nested JSON; however, it is made to include multiple data arrays.
I am working with Sencha Touch 2.3.1, roughly equivalent to Ext JS 4.2. I don't have that much experience with Sencha yet, but I'm getting there. I decided to go for MVC, so I'd like the answers to stay as close to that as possible :).
This is the example JSON I am using:
[
{
"DataCollection": {
"DataArrayOne": [
{
"Name": "John Smith",
"Age": "19"
},
{
"Name": "Bart Smith",
"Age": "16"
}
],
"DataArrayTwo": [
{
"Date": "20110601",
"Product": "Apple",
"Descr": "",
"Remark": ""
},
{
"Date": "20110601",
"Product": "Orange",
"Descr": "",
"Remark": ""
},
{
"Date": "20110601",
"Product": "Pear",
"Descr": "",
"Remark": ""
}
],
"DataArrayThree": [
{
"SomeTotalCost": "400,50",
"IntrestPercentage": "3"
}
]
}
}
]
I get this JSON through only one call. I don't want to cause any unnecessary traffic, so I hope to be able to reuse the data somehow.
I want to be able to use each DataArray on its own.
The data gets sent to the store through its proxy:
Ext.define("MyApp.store.myDataObjects", {
extend: "Ext.data.Store",
config: {
model: "MyApp.model.myDataObject",
proxy: {
reader: {
type: "json",
rootProperty: "DataCollection"
},
type: "ajax",
api: {
read: "https://localhost/Service.svc/json"
},
limitParam: false,
startParam: false,
pageParam: false,
extraParams: {
id: "",
token: "",
filter: ""
},
writer: {
encodeRequest: true,
type: "json"
}
}
}
});
I am a bit stuck with the model here. I tried using mappings, which would look like this:
config: {
fields: [ {
name: "IntrestPercentage",
mapping: "Calculation.IntrestPercentage",
type: "string"
}
]}
I tried associations as well, but to no avail.
According to the Google Chrome console, it doesn't create any objects containing data; I get only one object with all values null.
My end goal is to be able to show each DataArray in a separate table: a table for DataArrayOne, a table for DataArrayTwo, and so on. The data itself isn't linked; these are only details that have to be shown on a view.
John Smith isn't related to the apples, as in he didn't buy them; the apples are just items to be shown.
The possible solutions I've seen, yet not understood because they are outdated:
Child stores: you have a master store that receives the data, and then you split the data into other stores according to rootProperty. I have no idea how to do this, however, and I'm not sure it will work at all.
Associations, in case I was doing them wrong. I don't think they are needed, because the data isn't linked, though it is all part of "DataCollection".
Could someone please post an example of how to deal with this unusual(?) kind of nested JSON, or any other solution that lets me use the three DataArrays at will?
Thanks in advance
The best approach would be to load the complete data with a separate Ext.Ajax.request and then use store.loadData in the success callback. For example:
var data = Ext.decode(response.responseText);
store1.loadData(data[0].DataCollection.DataArrayOne);
store2.loadData(data[0].DataCollection.DataArrayTwo);
store3.loadData(data[0].DataCollection.DataArrayThree);
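Fleshed out a bit (reusing the URL and extraParams from your proxy config; hypothetical, so adjust to your service):
Ext.Ajax.request({
    url: 'https://localhost/Service.svc/json',
    params: { id: '', token: '', filter: '' },
    success: function (response) {
        var data = Ext.decode(response.responseText);
        store1.loadData(data[0].DataCollection.DataArrayOne);
        store2.loadData(data[0].DataCollection.DataArrayTwo);
        store3.loadData(data[0].DataCollection.DataArrayThree);
    },
    failure: function (response) {
        // handle the error, e.g. show a message to the user
    }
});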
I'm working with AngularJS.
In this part of the project my goal is to obtain a JSON structure after filling in a form with some particular values.
Here's the fiddle of my simple form: Fiddle
With the form I will query KairosDB, my NoSQL database, by sending it a JSON object. The form is structured in this way:
a Name
a certain Number of Tags, with Tag Id ("ch" for example) and tag value ("932" for example)
a certain Number of Aggregators to manipulate data coming from DB
Start Timestamp and End Timestamp (now they are static and only included in the final JSON Object)
After filling in this form, my code will produce, for example, this JSON object:
{
"metrics": [
{
"tags": [
{
"id": "ch",
"value": "932"
},
{
"id": "ch",
"value": "931"
}
],
"aggregators": {
"name": "sum",
"sampling": [
{
"value": "1",
"unit": "milliseconds",
"type": "SUM"
}
]
}
}
],
"cache_time": 0,
"start_absolute": 123,
"end_absolute": 1234
}
Unfortunately, KairosDB expects a different structure. As you can see, the tag id "ch" is used directly as a key rather than sitting behind an "id" field, and tag values coming from the same tag id are grouped together:
{
"metrics": [
{
"tags": {
"ch": [
"932",
"931"
]
},
"name": "AIENR",
"aggregators": [
{
"name": "sum",
"sampling": {
"value": "1",
"unit": "milliseconds"
}
}
]
}
],
"cache_time": 0,
"start_absolute": 1367359200000,
"end_absolute": 1386025200000
}
My question is: is there a way to obtain a JSON structure like the one accepted by KairosDB from an AngularJS form? Thanks to everyone.
I've seen this topic as the one most similar to mine, but it isn't about AngularJS.
Personally, I'd do the restructuring work in the backend: have whatever server interface sends and receives the data do the manipulation. Otherwise you'll end up needing to restructure your data inside Angular anywhere you want to use that dataset, whereas doing it in the backend puts it in a single access point.
Of course, you could do it in Angular: just replace userString in the submitData method with a copy of the array, replace the tags section with the data in the new format, and likewise restructure the returned result into the correct format when you get a reply.
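A rough sketch of that client-side conversion (toKairosQuery is a made-up helper; it assumes the input has exactly the shape of your first JSON, plus a metric name captured somewhere in the form):
function toKairosQuery(formQuery) {
    return {
        metrics: formQuery.metrics.map(function (metric) {
            // Group tag values under their tag id:
            // [{id: 'ch', value: '932'}, {id: 'ch', value: '931'}]
            // becomes { ch: ['932', '931'] }.
            var tags = {};
            metric.tags.forEach(function (tag) {
                (tags[tag.id] = tags[tag.id] || []).push(tag.value);
            });
            return {
                tags: tags,
                name: metric.name, // assumed to be captured by the form
                aggregators: [{
                    name: metric.aggregators.name,
                    sampling: {
                        value: metric.aggregators.sampling[0].value,
                        unit: metric.aggregators.sampling[0].unit
                    }
                }]
            };
        }),
        cache_time: formQuery.cache_time,
        start_absolute: formQuery.start_absolute,
        end_absolute: formQuery.end_absolute
    };
}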
In the app we're developing, we create all the JSON at the server side using dynamically generated configs (JSON objects). We use that for stores (and other stuff, like GUIs), with a dynamically generated list of data fields.
With a JSON like this:
{
"proxy": {
"type": "rest",
"url": "/feature/163",
"timeout": 600000
},
"baseParams": {
"node": "163"
},
"fields": [{"name": "id", "type": "int" },
{"name": "iconCls", "type": "auto"},
{"name": "text","type": "string"
},{ "name": "name", "type": "auto"}
],
"xtype": "jsonstore",
"autoLoad": true,
"autoDestroy": true
}, ...
Ext will happily create an "implicit model" that I'll be able to work with: load it into forms, save it, delete it, etc.
What I want is to specify through a JSON config not the fields, but the model itself. Is this possible?
Something like:
{
    model: {
        name: 'MiClass',
        extends: 'Ext.data.Model',
        "proxy": {
            "type": "rest",
            "url": "/feature/163",
            "timeout": 600000
        },
        etc...
    },
    "autoLoad": true,
    "autoDestroy": true
}, ...
That way I would be able to create the whole JSON on the server, without having to glue things together using JS statements on the client side.
Best regards,
I don't see why not. The syntax to create a model class is similar to that of store and components:
Ext.define('MyApp.model.MyClass', {
extend:'Ext.data.Model',
fields:[..]
});
So if you take this apart, you could call Ext.define(className, config), where className is a string and config is a JSON object, both generated on the server.
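A minimal sketch of that round trip (the endpoint URL and the response envelope of className plus config are assumptions for illustration):
Ext.Ajax.request({
    url: '/feature/163/model', // hypothetical endpoint returning the model config
    success: function (response) {
        var cfg = Ext.decode(response.responseText);
        // e.g. cfg = { "className": "MyApp.model.MiClass",
        //              "config": { "extend": "Ext.data.Model", "fields": [...] } }
        Ext.define(cfg.className, cfg.config);
    }
});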
There's no way to achieve what I want. The only option is to define the fields of the Ext.data.Store and let it generate the implicit model from that fields configuration.