Browse Carousel Card not working for Google Assistant in Dialogflow

I am trying out the Browse Carousel Card (a rich response) feature available for Google Assistant in Google's Dialogflow.
I am getting only a simple response, as shown below.
Pasted below is the raw API response (it contains no browse carousel card):
{
  "responseId": "ea913388-8753-458c-b033-396512d1af42-e13762d2",
  "queryResult": {
    "queryText": "show browse carousel",
    "parameters": {},
    "allRequiredParamsPresent": true,
    "fulfillmentMessages": [
      {
        "platform": "ACTIONS_ON_GOOGLE",
        "simpleResponses": {
          "simpleResponses": [
            {
              "textToSpeech": "sample text"
            }
          ]
        }
      },
      {
        "platform": "ACTIONS_ON_GOOGLE"
      },
      {
        "text": {
          "text": [
            ""
          ]
        }
      }
    ],
    "intent": {
      "name": "projects/leafy-winter-268704/agent/intents/bd457567-02c8-4e15-aca7-c32adfcb45f2",
      "displayName": "sampleintent"
    },
    "intentDetectionConfidence": 1,
    "languageCode": "en"
  }
}
This is the simulator response. The bot gets disconnected whenever an intent with a browse carousel is triggered.
Am I doing this correctly? What can be done to resolve this issue?

The issue is that you're using a Browse Carousel but attempting to test it on a Smart Display. Smart Displays don't support links, so they can't support the Browse Carousel.
If you switch to testing on an Android device, you should be able to see the Browse Carousel.
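For reference, here is a minimal fulfillment sketch of guarding against this in code, assuming a Dialogflow webhook built with the actions-on-google Node.js client library and the question's sampleintent (the item titles and URLs are placeholders): the webhook checks the surface's WEB_BROWSER capability and only sends the carousel where it is supported.

const { dialogflow, BrowseCarousel, BrowseCarouselItem, Image } = require('actions-on-google');

const app = dialogflow();

app.intent('sampleintent', (conv) => {
  // Browse carousels need a surface with web-browser support (e.g. a phone);
  // fall back to a plain response on Smart Displays and speakers.
  if (!conv.surface.capabilities.has('actions.capability.WEB_BROWSER')) {
    conv.ask('Sorry, this device cannot show a browse carousel.');
    return;
  }
  conv.ask('Here are some results.');
  // A browse carousel requires at least two items.
  conv.ask(new BrowseCarousel({
    items: [
      new BrowseCarouselItem({
        title: 'Item 1',
        url: 'https://example.com/item-1',
        description: 'First item',
        image: new Image({ url: 'https://example.com/item-1.png', alt: 'Item 1' }),
      }),
      new BrowseCarouselItem({
        title: 'Item 2',
        url: 'https://example.com/item-2',
        description: 'Second item',
        image: new Image({ url: 'https://example.com/item-2.png', alt: 'Item 2' }),
      }),
    ],
  }));
});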

Related

Convert my own db.json file into a working API URL to use in a React Redux project

I want to convert my own db.json file into a working API URL to use in my practice React Redux project. Can anyone suggest free, open-source websites?
db.json:
{
  "navLinks": {
    "logo": {
      "url": "https://images.pexels.com/photos/10620143/pexels-photo-10620143.jpeg?auto=compress&cs=tinysrgb&dpr=2&h=650&w=940"
    },
    "parentLinks": [
      {
        "id": 1,
        "link": "Shop"
      },
      {
        "id": 2,
        "link": "Learn",
        "subLinks": [
          { "id": 1, "link": "Process" },
          { "id": 2, "link": "About Us" },
          { "id": 3, "link": "Blog" },
          { "id": 4, "link": "News" },
          { "id": 5, "link": "Beyond The Bottle" }
        ]
      },
      { "id": 3, "link": "Sign Up" },
      { "id": 4, "link": "Login" }
    ]
  }
}
I found a website that can convert your db.json data into a URL you can use in your practice project:
https://app.json-generator.com/
Steps to follow:
1. Sign up with your email ID.
2. Copy your db.json data and paste it into the "generate data" section.
3. Click the "generate data" button; this will generate your data.
4. Click "get data" and copy the URL, e.g. https://api.json-generator.com/templates/cq7jZIauJXPO/data
5. Go to the login dropdown, click "access token", and generate a token.
6. Copy the token and keep it somewhere handy.
7. Append ?access_token=PASTE_YOUR_GENERATED_TOKEN to the URL:
https://api.json-generator.com/templates/cq7jZIauJXPO/data?access_token=PASTE_YOUR_GENERATED_TOKEN
Done.
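If it helps, here is a minimal sketch of consuming that URL from a Redux thunk with plain fetch. The template ID and token in API_URL are placeholders, and the action type strings are made up for illustration:

const API_URL = 'https://api.json-generator.com/templates/YOUR_TEMPLATE_ID/data?access_token=YOUR_TOKEN';

export const fetchNavLinks = () => async (dispatch) => {
  dispatch({ type: 'navLinks/loading' });
  try {
    const response = await fetch(API_URL);
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    const data = await response.json();
    // The generated template mirrors db.json, so the links live under data.navLinks.
    dispatch({ type: 'navLinks/loaded', payload: data.navLinks });
  } catch (error) {
    dispatch({ type: 'navLinks/error', payload: error.message });
  }
};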

How to link QnA Maker answer in Adaptive Card

I'm rather new to these technologies, so bear with me. I have successfully deployed Bot Framework and linked QnA Maker to it. I am using an Adaptive Card for the first response, and I want the images in that Adaptive Card, when clicked, to generate an answer from QnA Maker. How can I link these images to generate a QnA Maker answer? Is there a way to just give it a URL that would trigger QnA Maker?
You could use the data property in your Adaptive Card to send a message payload to the bot, which would then trigger the QnA answer.
For example, if you put something like 'How do I upload a file' in the data property, then when the image is clicked, the payload 'How do I upload a file' will be sent to the bot, and the QnA service should respond with the correct answer.
{
  "type": "AdaptiveCard",
  "body": [
    {
      "type": "ColumnSet",
      "columns": [
        {
          "type": "Column",
          "items": [
            {
              "type": "Image",
              "style": "Person",
              "url": "${creator.profileImage}",
              "size": "Small",
              "selectAction": {
                "type": "Action.Submit",
                "id": "image",
                "title": "image",
                "data": "show me the text 'image'"
              }
            }
          ],
          "width": "auto"
        }
      ]
    }
  ],
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "version": "1.2"
}
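On the bot side, here is a rough sketch of routing the submitted payload into QnA Maker, using the Node.js botbuilder and botbuilder-ai packages. The endpoint values are placeholders, and exactly where the payload lands (activity.text or activity.value) can vary by channel:

const { ActivityHandler } = require('botbuilder');
const { QnAMaker } = require('botbuilder-ai');

class QnaBot extends ActivityHandler {
  constructor() {
    super();
    this.qnaMaker = new QnAMaker({
      knowledgeBaseId: '<KB_ID>',
      endpointKey: '<ENDPOINT_KEY>',
      host: '<HOST>',
    });
    this.onMessage(async (context, next) => {
      // Normalize an Action.Submit payload into message text before querying.
      if (!context.activity.text && context.activity.value) {
        context.activity.text = String(context.activity.value);
      }
      const results = await this.qnaMaker.getAnswers(context);
      if (results.length > 0) {
        await context.sendActivity(results[0].answer);
      }
      await next();
    });
  }
}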

Basic Card is not displayed in the Google Action simulator console, and also not on my iPhone using Google Assistant

Here is the JSON code I send:
{
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "possibleIntents": [
        {
          "intent": "actions.intent.TEXT"
        }
      ],
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "This is a basic card example."
              }
            },
            {
              "basicCard": {
                "title": "Title: this is a title",
                "subtitle": "This is a subtitle",
                "formattedText": "This is a basic card. Text in a basic card can include \"quotes\" and\n most other unicode characters including emoji 📱. Basic cards also support\n some markdown formatting like *emphasis* or _italics_, **strong** or\n __bold__, and ***bold itallic*** or ___strong emphasis___ as well as other\n things like line \nbreaks",
                "image": {
                  "url": "https://example.com/image.png",
                  "accessibilityText": "Image alternate text"
                },
                "buttons": [
                  {
                    "title": "This is a button",
                    "openUrlAction": {
                      "url": "https://assistant.google.com/"
                    }
                  }
                ],
                "imageDisplayOptions": "CROPPED"
              }
            }
          ]
        }
      }
    }
  ]
}
and here is what I get in the simulator and on the iPhone using Google Assistant:
[object Object]
The debug output in the simulator returns:
{
"response": "[object Object]",
"expectUserResponse": true,
"conversationToken": "EroCS2w1Tm...",
"audioResponse": "//NExAAAAA...",
"ssmlMarkList": [],
"debugInfo": {
"assistantToAgentDebug": {
"curlCommand": "curl -v https://88.176.64.72:8081/ -H 'Content-Type: application/json;charset=UTF-8' -H 'Google-Actions-API-Version: 2' -H 'Authorization: eyJhbGciOiJSUzI1NiIsImtpZCI6IjM3ODJkM2YwYmM4OTAwOGQ5ZDJjMDE3MzBmNzY1Y2ZiMTlkM2I3MGUiLCJ0eXAiOiJKV1QifQ.eyJpc3MiOiJodHRwczovL2FjY291bnRzLmdvb2dsZS5jb20iLCJhdWQiOiJteS10ZXN0LWFwcC1kMTNkZSIsIm5iZiI6MTU1NTI3NTE2OCwiaWF0IjoxNTU1Mjc1NDY4LCJleHAiOjE1NTUyNzU1ODgsImp0aSI6IjVhZWM0ZjAwNzJiNmNjMTcyMDlmZTdiMmJlZDhjZDRlZTI4ZjExYTIifQ.P-SnzkjiWcr-GubTRdT_juTUVAPBn9J6spSRHPqykwsBq3AppsHg2GNBzlDfwhAXbxZAogW-Mdr4k_U2E1cQMGu-fdGnrkkg4UmjGvYz8za5SGaN-OVx3TNYsoayGIgTFn01gmZOIZfI_33-OucZtFSQCHo82YEmK6ypz3kBq5_vGAjdu01cgYEHAXdT6c53LGSEoewhK4F2M-KphlSx3eFYj2yLWJFn7w9w-Yf3W1n5Rm9q9ZGFJ4vNKIZlX0_J-T-6HhB84OX6k9qJYZ8_1FXp6CS9bPOAo_Nid9k1OeONDIJcCwp1GnQTQB4dek77xybmUn5Qo4-ad1IOzHOkNA' -A 'Mozilla/5.0 (compatible; Google-Cloud-Functions/2.1; +http://www.google.com/bot.html)' -X POST -d '{\"user\":{\"userId\":\"ABwppHGZMM6CHA-JcPkrCzVpkgGv953hFvVdAGAEGOWQSETGxFO18zXyIbXrhHAlw63M9Gz7dKcFxn3fIGKd2sw\",\"locale\":\"fr-CA\",\"lastSeen\":\"2019-04-14T20:53:31Z\",\"userStorage\":\"{\\\"data\\\":{}}\"},\"conversation\":{\"conversationId\":\"ABwppHEJaVmbTFuCLu0rK3SBxm_bviFYhyoY6oIa8o3MNqI2gHalTVPumj9cetdIsmtbVlfU-vNhIxBGsYRmWvs\",\"type\":\"ACTIVE\",\"conversationToken\":\"{}\"},\"inputs\":[{\"intent\":\"actions.intent.TEXT\",\"rawInputs\":[{\"inputType\":\"KEYBOARD\",\"query\":\"image\"}],\"arguments\":[{\"name\":\"text\",\"rawText\":\"image\",\"textValue\":\"image\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.MEDIA_RESPONSE_AUDIO\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.WEB_BROWSER\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.ACCOUNT_LINKING\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.WEB_BROWSER\"}]}],\"requestType\":\"SIMULATOR\"}'",
"assistantToAgentJson": "{\"user\":{\"userId\":\"ABwppHGZMM6CHA-JcPkrCzVpkgGv953hFvVdAGAEGOWQSETGxFO18zXyIbXrhHAlw63M9Gz7dKcFxn3fIGKd2sw\",\"locale\":\"fr-CA\",\"lastSeen\":\"2019-04-14T20:53:31Z\",\"userStorage\":\"{\\\"data\\\":{}}\"},\"conversation\":{\"conversationId\":\"ABwppHEJaVmbTFuCLu0rK3SBxm_bviFYhyoY6oIa8o3MNqI2gHalTVPumj9cetdIsmtbVlfU-vNhIxBGsYRmWvs\",\"type\":\"ACTIVE\",\"conversationToken\":\"{}\"},\"inputs\":[{\"intent\":\"actions.intent.TEXT\",\"rawInputs\":[{\"inputType\":\"KEYBOARD\",\"query\":\"image\"}],\"arguments\":[{\"name\":\"text\",\"rawText\":\"image\",\"textValue\":\"image\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.MEDIA_RESPONSE_AUDIO\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.WEB_BROWSER\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.ACCOUNT_LINKING\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.WEB_BROWSER\"}]}],\"requestType\":\"SIMULATOR\"}",
"delegatedRequest": {
"delegatedRequest": ""
}
},
"agentToAssistantDebug": {
"agentToAssistantJson": "{\"conversationToken\":\"{}\",\"expectUserResponse\":true,\"expectedInputs\":[{\"inputPrompt\":{\"initialPrompts\":[{\"textToSpeech\":\"[object Object]\"}],\"noInputPrompts\":[]},\"possibleIntents\":[{\"intent\":\"actions.intent.TEXT\"}]}]}",
"delegatedResponse": {
"delegatedResponse": ""
}
},
"sharedDebugInfoList": []
},
"visualResponse": {
"visualElementsList": [
{
"displayText": {
"content": "[object Object]"
}
}
],
"suggestionsList": [],
"agentLogoUrl": "https://www.gstatic.com/voice/opa/partner_icons/generic_3p_avatar.png",
"agentStyle": {
"primaryColor": "",
"fontFamily": "",
"borderRadius": 0,
"backgroundColor": "",
"backgroundImageUrl": ""
}
},
"clientError": 0,
"is3pResponse": true,
"clientOperationList": [
{
"operationType": 1,
"micUpdatePayLoad": {
"micMode": 1
}
}
],
"projectName": ""
}
Why am I not getting the correct results?
I use Node-RED and the google-action-contrib node to make the link between Actions on Google and my machine. Everything works fine: I can create a dialog, and receive and send sentences. Now I would like to send a BasicCard, because that seems to be the correct way to send an image and many other things, like a button, etc.
I have put as much data as possible in this forum thread, with attached files (Wireshark capture, debug information, code):
https://discourse.nodered.org/t/google-action-response-with-an-image-basic-card/10145/7
Thanks for your help.
It looks like you're not actually sending the JSON as JSON; rather, you've built an object and are sending its toString version. This is suggested by the part of the logged response that says
"response": "[object Object]"
Without seeing the code you are using to send the response, it is pretty difficult to help further.
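To illustrate the symptom: string-concatenating an object calls its toString(), which produces "[object Object]". A sketch of the fix inside a Node-RED function node (where msg is provided by the runtime):

// Node-RED function node (sketch)
const richResponse = { basicCard: { title: 'Title: this is a title' } };

// Bug: concatenation coerces the object to the string "[object Object]".
// msg.payload = 'response: ' + richResponse;

// Fix: pass the object itself, or an explicit JSON string.
msg.payload = richResponse;                    // stays a real object
// msg.payload = JSON.stringify(richResponse); // or serialize explicitly
return msg;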
I assume you've used the sample code in your action. However, unless you change the url fields, your action cannot load the imageUrl and openUrlAction.
If you replace the url fields with actual links (not "http://example.com"), your app will respond properly.
Also make sure you've imported the necessary classes,
e.g.
const { dialogflow, BasicCard, Image, Button } = require('actions-on-google');
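and then something along these lines in the intent handler — a minimal sketch assuming the actions-on-google v2 client library, where the intent name is hypothetical and the image URL is a placeholder you must replace with a real, publicly reachable link:

const { dialogflow, BasicCard, Image, Button } = require('actions-on-google');

const app = dialogflow();

app.intent('basic.card.intent', (conv) => {      // hypothetical intent name
  // A basic card must be accompanied by a simple response.
  conv.ask('This is a basic card example.');
  conv.ask(new BasicCard({
    title: 'Title: this is a title',
    subtitle: 'This is a subtitle',
    text: 'This is a basic card.',
    image: new Image({
      url: 'https://example.com/image.png',      // replace with a real image URL
      alt: 'Image alternate text',
    }),
    buttons: new Button({
      title: 'This is a button',
      url: 'https://assistant.google.com/',
    }),
    display: 'CROPPED',
  }));
});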

Facebook Messenger bot, returning multiple messages or payloads

I'm looking to return multiple responses to a user. For instance, this might be an image and a text block, or a text block and a list.
So far I've not been able to find a way of doing this: everything I try either results in one of the payloads not displaying, or fails completely.
Here's an example of an attempt at displaying a text block and a list:
{
  "speech": "myMessage",
  "displayText": "myMessage",
  "data": {
    "facebook": {
      "attachment": {
        "type": "template",
        "payload": {
          "template_type": "list",
          "top_element_style": "compact",
          "elements": [
            {
              "title": "£10",
              "image_url": "http://example.com/example.jpg",
              "subtitle": "An amazing t-shirt"
            },
            {
              "title": "£30",
              "image_url": "http://example.com/example.jpg",
              "subtitle": "Another amazing t-shirt"
            },
            {
              "title": "£40",
              "image_url": "http://example.com/example.jpg",
              "subtitle": "An amazing t-shirt"
            }
          ]
        }
      }
    }
  },
  "contextOut": [],
  "source": "webhook"
}
Any ideas on where I'm going wrong?
Each message is separate, but you can send a batch request to the Graph API to dispatch all the messages with a single API call:
https://developers.facebook.com/docs/graph-api/making-multiple-requests/
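A minimal sketch of such a batch call (Node.js 18+ with global fetch), sending the text block and the list template in one HTTP request; <PSID> and <PAGE_ACCESS_TOKEN> are placeholders for your recipient ID and page token:

const textMessage = { text: 'myMessage' };
const listMessage = {
  attachment: {
    type: 'template',
    payload: {
      template_type: 'list',
      top_element_style: 'compact',
      elements: [
        { title: '£10', subtitle: 'An amazing t-shirt' },
        { title: '£30', subtitle: 'Another amazing t-shirt' },
      ],
    },
  },
};

// Each batched sub-request carries its own URL-encoded Send API body.
const batch = [textMessage, listMessage].map((message) => ({
  method: 'POST',
  relative_url: 'me/messages',
  body: new URLSearchParams({
    recipient: JSON.stringify({ id: '<PSID>' }),
    message: JSON.stringify(message),
  }).toString(),
}));

fetch('https://graph.facebook.com', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: new URLSearchParams({
    access_token: '<PAGE_ACCESS_TOKEN>',
    batch: JSON.stringify(batch),
  }),
})
  .then((res) => res.json())
  .then(console.log);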

MediaWiki API Incorrect User Rights

I'm battling with the WikiEditor extension of MediaWiki 1.27. When a user tries to use the fancy image upload button from the enhanced editor toolbar, I get the message "You must be logged in to upload files."
So far I've narrowed it down to the user rights returned from the API being incomplete. I've tested with the following two API calls:
The one WikiEditor uses, action=query&meta=userinfo&uiprop=rights, returns:
{
  "batchcomplete": "",
  "query": {
    "userinfo": {
      "id": 1006,
      "name": "john_smith",
      "rights": [
        "read",
        "createpage",
        "createtalk",
        "writeapi",
        "editmyusercss",
        "editmyuserjs",
        "viewmywatchlist",
        "editmywatchlist",
        "viewmyprivateinfo",
        "editmyprivateinfo",
        "editmyoptions",
        "autocreateaccount"
      ]
    }
  }
}
However, this API call, action=query&list=users&ususers=john_smith&usprop=rights, returns:
{
  "batchcomplete": "",
  "query": {
    "users": [
      {
        "userid": 1006,
        "name": "john_smith",
        "rights": [
          "block",
          "createaccount",
          "delete",
          "bigdelete",
          "deletedhistory",
          "deletedtext",
          "undelete",
          "editinterface",
          "editusercss",
          "edituserjs",
          "editcontentmodel",
          "import",
          "importupload",
          "move",
          "move-subpages",
          "move-rootuserpages",
          "move-categorypages",
          "patrol",
          "autopatrol",
          "protect",
          "editprotected",
          "rollback",
          "upload",
          "reupload",
          "reupload-shared",
          "unwatchedpages",
          "autoconfirmed",
          "editsemiprotected",
          "ipblock-exempt",
          "blockemail",
          "markbotedits",
          "apihighlimits",
          "browsearchive",
          "noratelimit",
          "movefile",
          "unblockself",
          "suppressredirect",
          "mergehistory",
          "managechangetags",
          "deleterevision",
          "read",
          "createpage",
          "createtalk",
          "writeapi",
          "editmyusercss",
          "editmyuserjs",
          "viewmywatchlist",
          "editmywatchlist",
          "viewmyprivateinfo",
          "editmyprivateinfo",
          "editmyoptions",
          "autocreateaccount",
          "edit",
          "minoredit",
          "purge",
          "sendemail",
          "applychangetags",
          "changetags"
        ]
      }
    ]
  }
}
I'm totally stumped as to why these two very similar API calls return different sets of rights. As a result, users are not able to upload images via the enhanced editor button.
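For reference, meta=userinfo reports the rights of whoever makes that API request (i.e. the current session), while list=users reports the rights of the named account regardless of who asks. A quick way to compare the two from the same session is to run both calls in the browser console while logged in and diff them; a rough sketch, assuming the standard /w/api.php endpoint path:

async function getRights(params) {
  const query = new URLSearchParams({ action: 'query', format: 'json', ...params });
  // credentials: 'include' so the request is made with the logged-in session's cookies
  const res = await fetch(`/w/api.php?${query}`, { credentials: 'include' });
  return res.json();
}

(async () => {
  const ui = await getRights({ meta: 'userinfo', uiprop: 'rights' });
  const us = await getRights({ list: 'users', ususers: 'john_smith', usprop: 'rights' });
  const sessionRights = new Set(ui.query.userinfo.rights);
  const missing = us.query.users[0].rights.filter((r) => !sessionRights.has(r));
  console.log('Rights listed for john_smith but absent from this session:', missing);
})();

If "upload" shows up in the missing list even when run from john_smith's own browser session, that would suggest the userinfo requests are not being authenticated as john_smith.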