If you use the Alexa Dev Console (ADC), spoken words are processed like this:
input --> ADC --> VF --> ADC --> output
The Voiceflow (VF) test simulator works like this:
input --> VF --> output
So there is a difference. Testing on Echo devices behaves almost the same as ADC, but the VF test simulator has some advantages, like letting you inspect the contents of variables in more detail. Which one to use depends on the situation.
Anyway, if you want to capture what users say without providing many sample utterances, I recommend an interaction block with the Search Query slot type. But there is one limitation: with a Search Query slot, you can't define an intent with only the [slot] by itself. You need to add a carrier phrase, like "I want to record [slot]" or "I will say [slot]", etc.
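For reference, here is a rough sketch of what such an intent might look like in the skill's interaction model JSON. The intent name, slot name, and sample utterances are just illustrative; the point is that `AMAZON.SearchQuery` appears inside a carrier phrase, never as the whole utterance:

```json
{
  "name": "RecordIntent",
  "slots": [
    {
      "name": "query",
      "type": "AMAZON.SearchQuery"
    }
  ],
  "samples": [
    "I want to record {query}",
    "I will say {query}"
  ]
}
```

Note that a sample like just `"{query}"` on its own would be rejected when you build the model, which is the limitation mentioned above.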
I think this is the best workaround for now.
Any other advice, guys?