Evaluating context variables in a condition with Watson Assistant - json

Every time Watson can't answer a question or a comment, it pulls up the "anything_else" node that says "Sorry, I don't know, bla bla bla". I want Watson to send a specific message to the user after three failed attempts. How do I do it?
I read this page (https://console.bluemix.net/docs/services/assistant/dialog-slots.html) but I could not apply the solution given.
My 'anything_else' JSON:
{
  "output": {
    "generic": [
      {
        "time": 2000,
        "typing": true,
        "response_type": "pause"
      },
      {
        "values": [
          {
            "text": "Ainda não tenho todas as respostas, talvez reformular a frase ajude..."
          },
          {
            "text": "Perdão, acho que não entendi. Tente inserir palavras chave ou reformular a frase."
          },
          {
            "text": "Sorry! Essa eu não sei... Tente algumas palavras chave, pode me ajudar a entender!"
          }
        ],
        "response_type": "text",
        "selection_policy": "random"
      }
    ]
  }
}

There are two approaches.
Approach 1:
Change your responses from random to sequential. This works fine if your users are not expected to hit that topic often, or are "tyre kickers" (playing with the system rather than using it as expected).
For example, in an off-topic node you might give two responses, but have the third tell them to stop playing.
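In the JSON above that amounts to changing one field; a sketch (the wording of the third response below is made up for the example):

```json
"values": [
  { "text": "Ainda não tenho todas as respostas, talvez reformular a frase ajude..." },
  { "text": "Perdão, acho que não entendi. Tente inserir palavras chave ou reformular a frase." },
  { "text": "Please stop playing around and ask me about the product." }
],
"response_type": "text",
"selection_policy": "sequential"
```

With "sequential", the responses are used in order, so the last one is guaranteed to be the one a persistent user ends up seeing.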
Approach 2:
Have two anything_else nodes. The first node checks if a counter is over a certain value. The logic for that would be something like:
If $counter < 3
In the node you would give the normal "I don't understand", and increment the counter.
Important: make sure you have created a default $counter variable and set it (your welcome node is a good place).
The second anything_else node after it would give the response you want. You can optionally reset the counter at this point.
An added option would be a flag that checks whether you hit the first anything_else; if you didn't, reset your counter.
An example of this would be if someone asks too many off-topic questions in a row you might want to stop them, but if they go back on topic you reset the counter to prevent misunderstandings being picked up as off topic.
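A sketch of the two nodes in dialog JSON. The conditions shown are Watson Assistant condition expressions and the `<? ... ?>` form is the standard way to update a context variable, but treat the exact node layout and wording here as an illustration rather than the product's documented example.

First anything_else node, condition `anything_else && $counter < 3`:

```json
{
  "context": { "counter": "<? $counter + 1 ?>" },
  "output": {
    "generic": [
      { "response_type": "text", "values": [{ "text": "Sorry, I don't understand." }] }
    ]
  }
}
```

Second anything_else node (placed after the first), condition `anything_else`:

```json
{
  "context": { "counter": 0 },
  "output": {
    "generic": [
      { "response_type": "text", "values": [{ "text": "I still couldn't understand after several tries. Let me suggest some topics I do know." }] }
    ]
  }
}
```

Node order matters: the counter check must come first, so the second node is only reached once the counter condition fails.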

Related

X-scale does not update (sometimes) on signal change in a Vega chart

This is a somewhat simplified version of a chart I built recently. When I select the Extended time range checkbox, more data points will show and the X axis will adjust accordingly.
When I clear the checkbox the additional data points will disappear (OK) but the X axis will not go back to the previous state (bug?).
Oddly enough if I change the time unit back and forth (for example monthly -> weekly -> monthly) then the X axis will eventually redraw correctly.
Why does this happen and how could I work around the issue?
Also note that the Y axis seems to update fine every time the checkbox toggles.
The chart renders with the expected x-axis (when clicking the checkbox) if the xscale "domain" is changed from
"domain": {
"signal": "timeSequence(tbin_delivered.unit, tbin_delivered.start, tbin_delivered.stop)"
}
to
"domain": {"data": "deliveries", "field": "unit0", "sort": true}
Note: an issue with this workaround is that only time units with data are shown on the x-axis. For example, selecting the "daily" time unit in the dropdown shows the chart with no gaps for Saturday and Sunday when there were no deliveries.
Vega docs for ordinal scale and sort parameter:
https://vega.github.io/vega/docs/scales/#ordinal
https://vega.github.io/vega/docs/scales/#sort
View chart in Vega online editor
It looks like signal tbin_delivered is not updated when the time range shrinks. This is an edited chart; the only difference is in the tooltip (which will now show the value of tbin_delivered.start).
I wonder if this behavior is correct? tbin_delivered is calculated in the deliveries data stream, which is derived from stream fruit, which depends on the extendedtime signal, which does change.
Posting it as an answer because the link won't fit in a comment.
The issue appears to be that the values of the signals tbin_delivered.start and tbin_delivered.stop are not updated when the values of dataset deliveries change.
The workaround in this solution is to use the Vega extent transform to obtain the minimum and maximum values of the dataset after the data have changed. The timeSequence function in xscale will then show the updated domain correctly.
Added Vega transform:
{
  "type": "extent",
  "field": "unit0",
  "signal": "signal_delivered_extent"
}
Vega scale:
"scales": [
  {
    "name": "xscale",
    "type": "band",
    "range": "width",
    "padding": 0.05,
    "round": true,
    "domain": {
      "signal": "timeSequence(tbin_delivered.unit, signal_delivered_extent[0], signal_delivered_extent[1] + 1)"
    }
  }
]
Note the Vega expression function timeSequence: "Returns an array of Date instances from start (inclusive) to stop (exclusive)...". For the rendered x-axis domain to include the maximum value, the stop argument to timeSequence has to be higher than the maximum value in signal_delivered_extent (hence the + 1).
View in Vega online editor

Is nesting tables with bootstrap Vue a good way to achieve this?

I had to solve a specific problem that I'm going to explain using an image to better understand the question.
Basically the result that I wanted to obtain is this:
The white rows are the main items and the gray ones are subitems of a main item. To achieve this with BootstrapVue tables, I pass to the main table (the one contained in every accordion tab) the array with all the main items, and every main item has its subItems contained in an array. To render the gray rows for every white row, I used the row-details slot, in which I put another b-table that receives the subItems of the main row as its items.
This solution looks fine but it comes with some important problems:
first of all, I had to reset the row-details padding in order to align the outer table with the inner table, but there are still some differences in width (look closely between the TDs);
second, and most important: only one row may be selected at a time, in order to modify the info it contains. At the moment the outer table is set to single-row selection, and the same goes for the inner tables, but since there are several inner tables this creates conflicts every time I select rows in different subitem tables, or select a main item together with one or more subitems.
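One way to keep the nested tables but tame the conflicts would be a single coordinator object that owns "the selected row" and clears every other table whenever a selection lands. This is only a sketch under names I invented (`SelectionCoordinator`, `register`, the `clear()` handle); in BootstrapVue each handle's `clear()` would typically call `clearSelected()` on the corresponding table ref:

```javascript
// One selection for the whole widget: each b-table registers a handle,
// and whichever table reports a selection causes all the others to clear.
class SelectionCoordinator {
  constructor() {
    this.tables = new Map(); // table id -> { clear() } handle
    this.selected = null;    // { tableId, row } or null
  }
  register(id, handle) {
    this.tables.set(id, handle);
  }
  // Wire this to every table's row-selected event.
  onRowSelected(tableId, rows) {
    if (rows.length === 0) return; // ignore deselection events
    this.selected = { tableId, row: rows[0] };
    for (const [id, handle] of this.tables) {
      if (id !== tableId) handle.clear(); // e.g. this.$refs[id].clearSelected()
    }
  }
}
```

The point of the design is that no table ever has to know about its siblings; they all report to, and are cleared by, the one coordinator.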
Now my question is:
before proceeding with a solution that keeps the structure I built, I was wondering whether there is a better way to achieve this without nesting tables, while keeping the current look of the whole table.
If this can help, here is the piece of code related to the result seen in the image above:
<!-- OUTER TABLE -->
<b-table
:items="dataTable.itemsOnDesktop"
:fields="billOfMaterialsType == '2' ? dataTable.fields : dataTable.fieldsA1"
:selectable="isSelectable"
@row-selected="onRowSelected"
ref="selectableTable"
outlined
sticky-header="700px"
head-variant="light"
select-mode="single"
stacked="md"
selected-variant="danger"
details-td-class="row-details-styling"
id="outer-project-table"
bordered
fixed
:busy="tableBusy"
>
.
.
.
<template
v-if="billOfMaterialsType == '2'"
#row-details="row"
>
<!-- INNER TABLE -->
<b-table
id="inner-project-table"
hover
table-variant="secondaryLight"
selected-variant="danger"
select-mode="single"
@row-selected="onRowSelected"
:selectable="isSelectable"
thead-class="hidden_header"
:fields="dataTable.fields"
:items="row.item.node.bomiteminfoSet.edges"
:ref="`innerTable-${row.item.node.id}`"
bordered
small
fixed
>
.
.
.
</b-table>
</template>
</b-table>
And here is a short example of the dataTable.itemsOnDesktop JSON structure:
[{
"node": {
"id": "Qk9NSXRllE2vZGU6MjA2MzU=",
"code": "1.4.7.2.a",
"description": " INTONACO COMPLETO AL CIVILE PREMISCELATO A PROIEZIONE ...",
"quantity": "1290.00",
"unitOfMeasurement": "m2",
"bomiteminfoSet": {
"edges": [{
"node": {
"id": "Qk9YinRlbUalk87230mGlOjExOTkx",
"code": null,
"description": "PORZIONE BURLOTTI\npiano interrato, piano terra",
"parts": "955.00",
"length": null,
"width": null,
"heightOrWeight": null,
"unitOfMeasurement": null,
"quantity": "955.00"
}
},
{
"node": {
"id": "Qk9ld5RlSXRlbUaOb2RlOjExOTky",
"code": null,
"description": "PORZIONE LAZZARINI\npiano terra",
"parts": "335.00",
"length": null,
"width": null,
"heightOrWeight": null,
"unitOfMeasurement": null,
"quantity": "335.00"
}
}]
}
},
"isActive": true,
"_showDetails": true
}]
Let me know your ideas about this, I would really appreciate it.

How could *data inter-dependent* <select> dropdowns in rails be populated quickly?

Users need to select a car.
We have several dropdowns when picking a car in order to pick the year, make, model and submodel.
Initially we don't know what to use for the select options for make/model/submodel as they are interdependent.
Once we pick year we use ajax to make requests which query ActiveRecord to populate the make dropdown.
Then when we pick make we use ajax to query and populate the model dropdown.
Then when we pick model we ajax to query and populate the submodel dropdown.
The problem is that this means a lot of separate network requests, and in real-world conditions of low bandwidth, network issues, etc., there are often pauses that severely impact the user experience and occasionally lead to failures.
What approaches could help avoid all these network requests? Is there an approach that could store all of the several thousand make-model combinations in the client browser?
Currently the data is stored in a SQL database accessed via ActiveRecord in the Rails framework. Each dropdown selection results in another query, because you can't populate and show make until you know year, and you can't populate and show model until you know make. The same goes for submodel (though I've omitted submodel from the rest of this post for simplicity!).
Would sessionStorage (http://simonsmith.io/speeding-things-up-with-sessionstorage/) of the JSON data for 10,000 combinations be possible? I see that sessionStorage can generally be relied on to hold at least 5 MB (5,200,000 bytes), which gives 5,200,000 / 10,000 = 520 bytes per record. Probably enough? If this persists for the session and across pages, then in many cases we could actually kick this off on the previous page, and if it had time to finish we wouldn't need the external call at all on the relevant (next) page.
We would need to refresh that data either occasionally or on demand as new year-make-models are added periodically (several times a year).
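A minimal sketch of that sessionStorage approach, with the refresh handled by a max age. The key name, the daily max age, and `loadFn` (standing in for whatever AJAX call fetches the full year/make/model JSON) are my assumptions, not part of the existing app; `storage` is passed in so the same logic works with `window.sessionStorage` or a test double:

```javascript
var CACHE_KEY = "vehicleData";
var MAX_AGE_MS = 24 * 60 * 60 * 1000; // refresh daily; tune to how often data changes

// Return the cached payload, or null on a miss or when the entry is stale.
function readCache(storage, now) {
  var raw = storage.getItem(CACHE_KEY);
  if (!raw) return null;
  var entry = JSON.parse(raw);
  return (now - entry.savedAt < MAX_AGE_MS) ? entry.data : null;
}

function writeCache(storage, data, now) {
  storage.setItem(CACHE_KEY, JSON.stringify({ savedAt: now, data: data }));
}

// loadFn stands in for the network fetch, e.g. $.getJSON("/vehicles.json").
function getVehicleData(storage, loadFn, now) {
  var cached = readCache(storage, now);
  if (cached !== null) return cached; // no network request at all
  var data = loadFn();
  writeCache(storage, data, now);
  return data;
}
```

Calling getVehicleData on the previous page warms the cache, so by the time the user reaches the picker the dropdowns can usually be populated with zero requests.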
Ultimately I think the solution here could be very useful to a large number of applications and companies. The example here of picking a vehicle is itself used by dozens of major car insurance websites (who all do the multiple calls right now). The general approach of storing client-side data for relationship-dependent dropdowns could also apply in many other situations, such as online shopping with make-brand-year-model. Populating sessionStorage could also be done from different backend frameworks.
Another option might be to try Google's Lovefield - https://github.com/google/lovefield More at https://www.youtube.com/watch?v=S1AUIq8GA1k
It's open source and works in Firefox, Chrome, IE, Safari, etc.
Seems like sessionStorage might be better for our (considerable) business than basing it on a Google 100-day dev thing - though it is open source.
Hello, you can create a JSON object with all the details and, based on the value selected, loop over the array to populate the next dropdown:
var cardetail = [{
  "name": "MARUTI",
  "model": [{
    "name": "SWIFT",
    "year": ["2005", "2006", "2008"]
  }, {
    "name": "ALTO",
    "year": ["2009", "2010", "2011"]
  }]
}, {
  "name": "Hyundai",
  "model": [{
    "name": "I20",
    "year": ["2011", "2012", "2013"]
  }, {
    "name": "I20",
    "year": ["2013", "2014", "2015"]
  }]
}];
var currentCompany = null;
var currentModel = null;
$(document).ready(function() {
  $("#company").append("<option value=''>Select Company</option>");
  for (var i = 0; i < cardetail.length; i++) {
    $("#company").append("<option value='" + cardetail[i].name + "'>" + cardetail[i].name + "</option>");
  }
  $("#company").change(function() {
    // find the selected company, then rebuild the model dropdown
    for (var i = 0; i < cardetail.length; i++) {
      if (cardetail[i].name == $("#company").val()) {
        currentCompany = cardetail[i];
      }
    }
    $("#model").html("");
    for (var i = 0; i < currentCompany.model.length; i++) {
      $("#model").append("<option value='" + currentCompany.model[i].name + "'>" + currentCompany.model[i].name + "</option>");
    }
  });
  $("#model").change(function() {
    // find the selected model, then rebuild the year dropdown
    for (var i = 0; i < currentCompany.model.length; i++) {
      if (currentCompany.model[i].name == $("#model").val()) {
        currentModel = currentCompany.model[i];
      }
    }
    $("#year").html("");
    for (var i = 0; i < currentModel.year.length; i++) {
      $("#year").append("<option value='" + currentModel.year[i] + "'>" + currentModel.year[i] + "</option>");
    }
  });
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<select id="company"></select>
<select id="model"></select>
<select id="year"></select>
First, unless the requisite bandwidth is too expensive, you could conceivably check the cache and then start making requests for popular makes/models/submodels as soon as (or even before) the user picks a year, and cache the results. There's even a full RDBMS for the browser now (full disclosure: it's new and I haven't played with it much) which sits atop IndexedDB.
In terms of picking which ones to preload, you could do it based on units produced, units sold, car and driver magazine rankings, data-mining your actual users' requests, whatever.
I'm of the opinion that, from a UX perspective, you should at least be caching the requests the user actually makes and offering an option on load to jump right back to the last year/make/model they searched for, rather than having them enter it all fresh on each visit. Having popular vehicles preloaded only makes things easier. How much you want to push the envelope with predictive analysis of what a given user is likely to search for is up to your team's skills/budget/time constraints.
I realize that this isn't a full answer per se, I'm not sure as stated the question has one (e.g. 'use this strategy/framework/library and all your problems will magically disappear! it even makes julienned fries!'). But if faced with this kind of problem my first thought is how to get more (hopefully relevant) data to the client sooner, which hopefully translates to faster (in the UX sense of fast).
I would also recommend that you have that popular data in json files to request rather than have to hit Rails/ActiveRecord/Database server each time. That alone would shave valuable milliseconds off your response times (not to mention usage load on those machines).
It's not like that data really changes; a 2009 Toyota RAV4 has the same specs it did in... 2009.

Exception action when condition isn't met

I'm trying to make an expect script that will take action only if a string is NOT present in the command output. Example:
send -- "sys set -nd\r"
expect "showdebugcommands" {} "\n$PROMPT" {send -- "sys set showdebugcommands 1\r"}
What I want to do is: do NOTHING if "showdebugcommands" is present in the command output, but if it isn't, execute the command "sys set showdebugcommands 1".
How can I accomplish this using expect?
Try this:
set seen false
expect {
    "showdebugcommands" {set seen true; exp_continue}
    "\n$PROMPT"
}
if {!$seen} {
    send -- "sys set showdebugcommands 1\r"
}
One way of doing this is by nesting an expect inside an expect. This is perfectly legal.
send -- "sys set -nd\r"
expect {
    "showdebugcommands" {
        expect "\n$PROMPT"
    }
    "\n$PROMPT" {
        send -- "sys set showdebugcommands 1\r"
        expect "\n$PROMPT"
    }
}
The aim is to drain the activity back to the known state (prompt showing) after seeing the thing that you wanted. It's always a good idea to think in terms of code units that take things back to a known state. (Because of that, I also added another expect of the prompt after the inner send; let's get it all back to the state of “I've just seen a prompt” at the end of the outer expect since that's the least crazy option.)

conceptualizing noSQL... using Magic the Gathering

So, I've been using SQL forever. It's all I know, really, but I want to understand how to conceptualize document data.
{
"MBS":{
"name":"Mirrodin Besieged",
"code":"MBS",
"releaseDate":"2011-02-04",
"border":"black",
"type":"expansion",
"block":"Scars of Mirrodin",
"cards":[
{"layout":"normal",
"type":"Creature — Human Knight",
"types":["Creature"],
"colors":["White"],
"multiverseid":214064,
"name":"Hero of Bladehold",
"subtypes":["Human", "Knight"],
"cmc":4,
"rarity":"Mythic Rare",
"artist":"Austin Hsu",
"power":"3",
"toughness":"4",
"manaCost":"{2}{W}{W}",
"text":"Battle cry (Whenever this creature attacks, each other attacking creature gets +1/+0 until end of turn.)\n\nWhenever Hero of Bladehold attacks, put two 1/1 white Soldier creature tokens onto the battlefield tapped and attacking.",
"number":"8",
"watermark":"Mirran",
"rulings":[
{"date":"2011-06-01", "text":"Whenever Hero of Bladehold attacks, both abilities will trigger. You can put them onto the stack in any order. If the token-creating ability resolves first, the tokens each get +1/+0 until end of turn from the battle cry ability."},
{"date":"2011-06-01", "text":"You choose which opponent or planeswalker an opponent controls that each token is attacking when it is put onto the battlefield."},
{"date":"2011-06-01", "text":"Although the tokens are attacking, they never were declared as attacking creatures (for purposes of abilities that trigger whenever a creature attacks, for example)."}
],
"imageName":"hero of bladehold",
"foreignNames":[
{"language":"Chinese Traditional", "name":"銳鋒城塞勇士"},
{"language":"Chinese Simplified", "name":"锐锋城塞勇士"},
{"language":"French", "name":"Héroïne de Fortcoutel"},
{"language":"German", "name":"Held der Klingenfeste"},
{"language":"Italian", "name":"Eroina di Rifugio delle Lame"},
{"language":"Japanese", "name":"刃砦の英雄"},
{"language":"Portuguese (Brazil)", "name":"Herói de Bladehold"},
{"language":"Russian", "name":"Герой Блэйдхолда"},
{"language":"Spanish", "name":"Héroe de Fortaleza del Filo"}
],
"printings":["Mirrodin Besieged"]},
{next card...},
{next card...},
{next card...},
{last card...}
]
},
{next set...},
{next set...},
{last set...}
}
So, I have a JSON file of all of the cards from Magic: The Gathering. The JSON is 'segmented' into the different sets. So, if I were to import this somehow into a NoSQL database (MongoDB):
what would constitute a document?
Is each set a document?
Is each card a document?
What would be a good structure?
How does the querying work at that point... is it just a giant 'text search' if I'm looking for something?
Just looking for some sort of insight to wrap my head around noSQL.
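On the querying point: it isn't a giant text search. If each card is modeled as its own document (with its set code denormalized onto it, which is one common answer to the "is each card a document?" question), queries match on fields. The plain-JavaScript sketch below only mimics what MongoDB's `db.cards.find({ colors: "White", set: "MBS" })` would do; the `matches()` helper and the trimmed-down card documents are invented for illustration and are not a MongoDB API.

```javascript
// Two cards as stand-alone documents, each carrying its set code.
var cards = [
  { name: "Hero of Bladehold", set: "MBS", colors: ["White"], rarity: "Mythic Rare", cmc: 4 },
  { name: "Some Other Card", set: "MBS", colors: ["Blue"], rarity: "Common", cmc: 2 }
];

// A document matches when every query field equals the document's field,
// or is contained in it when the field is an array (mimicking MongoDB's
// implicit array-membership matching).
function matches(doc, query) {
  return Object.keys(query).every(function (k) {
    return Array.isArray(doc[k]) ? doc[k].indexOf(query[k]) !== -1
                                 : doc[k] === query[k];
  });
}

function find(docs, query) {
  return docs.filter(function (d) { return matches(d, query); });
}
```

So a "find all white Mythic Rares in Mirrodin Besieged" query is a structured field match, and fields you query often (colors, set, cmc) can be indexed, just like columns in SQL.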