I have built a simple echo skill that speaks back the word captured via a Capture block.
The test in VF works fine, but after uploading to Amazon Alexa, Alexa says the intent name "Fallback Intent" instead of the content of the variable I defined.
Any hint what needs to be fixed?
Which slot type did you choose in the Capture block? If you chose a custom type, you need to enter at least one example.
I use a Capture block
I defined a custom type
I defined a global variable which stores the input
It works fine in VoiceFlow
With the skill uploaded to Alexa, it says "AMAZON.FallbackIntent" for any input other than the ones I defined in the custom slot type
Is that clear? Makes sense?
I am not sure I understand the discrepancy between VF and the Dev Console behavior, unless this is a "feature" where Alexa intents overrule user-defined variables for some reason
Very clear. You defined some custom input examples for it.
There are a few options, I think. IMO, you should try an Interaction block instead of a Capture block, because it's more flexible and controllable.
Or just put an If block after the Capture block: check whether the input matches the examples you defined, and if not, use the Else path to say an error message or loop back to the Capture block. This is very simple, but it depends on how many examples you define, because you have to keep them in sync between the Capture block and the If block.
I did exactly that, but I don't want to define a closed list of words; rather, I want an "echo test" to hear what Alexa perceives, without an explicit intent list. Even if the word is an "error" (not in the list), I want Alexa to say the word it perceived, not "AMAZON.FallbackIntent".
In the VF test simulator it works OK, but not in the Alexa Dev Console
If you use Alexa Dev Console, spoken words will be processed like this:
input --> ADC --> VF --> ADC --> output
VF test simulator is like this:
input --> VF --> output
There is a difference. If you use Echo devices, it is almost the same as ADC. However, the VF test simulator has some advantages, like letting you inspect variable contents in more detail, etc. It depends on the situation.
Anyway, if you want to capture what users say without defining many examples, I recommend an Interaction block with the AMAZON.SearchQuery slot type. But there is one limitation: with a SearchQuery slot, you can't define an intent consisting of only the [slot]. You will need to add a carrier phrase, like "I want to record [slot]" or "I will say [slot]", etc.
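To make that limitation concrete, here is a hedged sketch of the interaction-model fragment this implies, written as a Python dict mirroring the Alexa JSON shape. The intent name, slot name, and sample utterances are my own illustrations, not anything Voiceflow generates:

```python
# Hypothetical sketch of an Alexa interaction-model intent using
# AMAZON.SearchQuery. All names here are illustrative assumptions.
record_intent = {
    "name": "RecordPhraseIntent",  # hypothetical intent name
    "slots": [
        {"name": "phrase", "type": "AMAZON.SearchQuery"}
    ],
    # AMAZON.SearchQuery slots cannot stand alone: every sample
    # utterance needs a carrier phrase around the slot.
    "samples": [
        "I want to record {phrase}",
        "I will say {phrase}",
    ],
}

# A bare "{phrase}" sample would be rejected by the Alexa model builder.
assert all(s != "{phrase}" for s in record_intent["samples"])
```

The carrier words ("I want to record", "I will say") are what lets Alexa tell this intent apart from everything else; the SearchQuery slot then soaks up the free-form remainder.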
I think this is a workaround now.
Any other advice, guys?
I am not sure how to make that hack work in terms of the exact syntax I have to put into the block. I selected a SearchQuery slot, and then what do I actually write in the examples?
Then, what do I write in the preceding Speech block so that it actually says that?
Thanks for your guidance.
I managed to pull this off! Working OK now
@yoav I would also like to retrieve a specific phrase without having to think about everything the user could potentially say. Can you share how you managed to fix this, as I have the same FallbackIntent issue? Many thanks
I used an Interaction block with a <search_query> slot. I prompt the user to say a phrase composed of a known part (e.g. "I say") followed by an unknown part (the <search_query> slot), which I assigned to a user-defined variable. Then everything in that slot is captured by that variable and you can do whatever you wish with it.
I hope this is helpful. See also KUN 432's response above (same idea, but a bit cryptic)
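To illustrate the idea, here is a minimal Python sketch of the echo logic, assuming a handler that receives resolved slot values; the function name and the slots structure are illustrative, since Voiceflow wires this up visually. Alexa fills the SearchQuery slot with everything after the carrier phrase, so the skill only has to speak the slot value back:

```python
# Hedged sketch: echo back whatever landed in the SearchQuery slot.
# The "slots" dict shape here is an assumption for illustration.
def handle_echo(slots: dict) -> str:
    phrase = slots.get("phrase", {}).get("value", "")
    return f"You said: {phrase}"

# Simulated request after the user says "I say hello there":
slots = {"phrase": {"value": "hello there"}}
assert handle_echo(slots) == "You said: hello there"
```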
OK, figured it out as well for sending SMS with Twilio. You need the SearchQuery type:
First, create a slot to fill (in my case message_text)
Second, create an intent with one known word
Third, map the slot to the variable
And in the preceding voice card I just explain to the user how to do this
Then the first word is stripped… and you are left with the message
Here is the interaction:
Alexa: What would you like to say? Start with the word "message"
User: message This is the text of my message
Twilio receives: This is the text of my message
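The flow above can be sketched in Python. The extract_message helper is hypothetical and simply mirrors what the slot mapping already does for you (Alexa strips the carrier word when it fills message_text); the Twilio call is shown commented out with placeholder credentials and numbers:

```python
# Hedged sketch of the SMS flow. KEYWORD and extract_message are
# illustrative; Alexa's slot filling performs this stripping itself.
KEYWORD = "message"

def extract_message(utterance: str) -> str:
    """Drop the leading carrier word, keep the free-form remainder."""
    words = utterance.split()
    if words and words[0].lower() == KEYWORD:
        words = words[1:]
    return " ".join(words)

body = extract_message("message This is the text of my message")
assert body == "This is the text of my message"

# Sending via the Twilio helper library (credentials are placeholders):
# from twilio.rest import Client
# client = Client("ACCOUNT_SID", "AUTH_TOKEN")
# client.messages.create(to="+15551234567", from_="+15557654321", body=body)
```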
Hope this helps