When I use the command block (which is equivalent to the intent block without the immediate jump-and-go behavior), I included many utterances to detect an intent, such as "what's the price for a large pizza". I found that the punctuation at the end of the question makes a huge difference in detecting the right intent. For example, my price intent includes the utterance "what's the price of a large pizza", but the user's input was "what's the price of a large pizza?", which leads to no detection at all. So I am a bit curious about the intent detection provided by Voiceflow. Normally in NLP, when we match intents using either regular expressions or classifiers, we tend to omit punctuation from the user query. Why does Voiceflow include it?
Hey David, great question. This is because Voiceflow currently does not use an NLP/NLU within the test tool, so it requires an exact match. That's why the added punctuation causes the utterance to be treated as a completely different entity.
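To illustrate the difference being described here, below is a minimal Python sketch (not Voiceflow's actual code, just an assumed illustration) contrasting an exact string match with the punctuation-stripping normalization that NLP pipelines commonly apply before matching:

```python
import string

def normalize(text):
    # Lowercase and strip punctuation so trailing "?" no longer matters
    return text.lower().translate(str.maketrans("", "", string.punctuation)).strip()

utterance = "what's the price of a large pizza"   # defined in the intent
user_input = "What's the price of a large pizza?"  # what the user typed

# Exact match, as the test tool reportedly does: the "?" makes it fail
print(user_input == utterance)                        # False

# Normalized match, as a typical NLP preprocessor would do: succeeds
print(normalize(user_input) == normalize(utterance))  # True
```

This is why an exact-match test tool treats the two strings as different utterances, while an NLU that normalizes input first would consider them identical.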
We have an in-browser NLP/NLU coming before the end of February. This will resolve the mismatch.
Great to hear from you, and that's good news about NLP/NLU as upcoming features in Voiceflow!
The problem is that even after I added the question mark at the end of the utterance, it's still not working. For example, I added "can I see the menu?" as one of my utterances, and it still doesn't match the intent when the user says "can I see the menu?"
I would remove the question mark in general, as it's not needed in an utterance and isn't supported by many of the platforms! Let me know if I can do anything else to help. @David1
The most important thing is how speech-to-text works. For Alexa/Google, the NLU in their cloud handles this. For testing in Voiceflow, the browser does it (I guess). This means you should use the mic to see how what you speak gets converted to text, then set your utterances based on that.