Since our experiment, we have come across a couple of new findings. But first, a story: everyone grows up with a celebrity they wish they could talk to. As a child, you may have sent fan mail to your favorite celebrity and waited to receive an autographed glossy 5x7 that said, “Thanks for the note!”
Imagine our surprise when we wrote about our experience to the team at Wit.ai and actually received a reply! First off, we were really impressed that the team took the care to reach back out to us and provide feedback on how we had used their product. We will forever be huge fans of the product and the brand. That said, the note had good news and bad news. (Bad news always goes first in my book…)
The bad news:
We’re hackers. We didn’t use the platform as the good gods of Wit.ai had intended.
The good news:
We’re Hackers. Oh yeah.
Still, I promised the Wit.ai team that I would make sure our loyal readers understand how their product is intended to be used, so that expectations are set properly. Hence, here are some things you should note:
· Wit is focused on NLP, not on managing conversations within bots, and is intended to be used in conjunction with additional code and/or other platforms.*
· It’s designed such that confidence levels should be assessed at the bot level, and additional code is required to decide when the bot should show a response, including a catch-all response (see the sketch after this list). The way we use Wit re-purposes its functionality, so results may not always be optimal because the platform is being used in a way for which it was not intended.
· For traits, the idea behind this type of entity is similar to a “tag” that applies to the full sentence/request. Intent is one of them, but you could have multiple tags (a.k.a. trait entities) per request. For example, if someone typed into your bot, “Hey, I really need to know how much your pricing is?”, you could have intent=pricing but also urgency=high. (IMO this can be very powerful and is a cool feature.)
*Despite the original intent of the Wit team, the Instabot team will continue to use the platform as an NLP tool within our bots and others (for non-developer users), and will report on tips and tricks for doing so. We still find it a useful way to make bots more robust and “smarter”.
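To make the “assess confidence at the bot level” point concrete, here is a minimal sketch of what that glue code could look like. This is not Instabot’s actual implementation: it assumes a current Wit /message response that returns ranked “intents” and “traits” with confidence scores, and the token, threshold, intent names, and reply texts are all placeholders.

```python
# Sketch: call Wit for NLP only, then decide in the bot whether the
# confidence is good enough, falling back to a catch-all response.
import requests

WIT_TOKEN = "YOUR_SERVER_ACCESS_TOKEN"   # placeholder
CONFIDENCE_THRESHOLD = 0.7               # cutoff chosen by the bot, not by Wit

RESPONSES = {                            # hypothetical intent -> reply mapping
    "pricing": "Our plans start at $49/month.",
}
CATCH_ALL = "Sorry, I didn't quite get that. Could you rephrase?"

def reply_to(message: str) -> str:
    resp = requests.get(
        "https://api.wit.ai/message",
        params={"q": message, "v": "20230215"},
        headers={"Authorization": f"Bearer {WIT_TOKEN}"},
    )
    data = resp.json()

    # Wit returns candidate intents ranked by confidence; what to do with
    # that confidence (and when to punt) is entirely up to the bot.
    intents = data.get("intents", [])
    if not intents or intents[0]["confidence"] < CONFIDENCE_THRESHOLD:
        return CATCH_ALL
    intent = intents[0]["name"]

    # Trait entities act like tags on the whole request, e.g. urgency=high
    # alongside intent=pricing.
    urgency = data.get("traits", {}).get("urgency", [])
    reply = RESPONSES.get(intent, CATCH_ALL)
    if urgency and urgency[0]["value"] == "high":
        reply += " Want me to connect you with the team right now?"
    return reply
```

The key point is that the threshold check and the catch-all live in your code, not in Wit; Wit only hands back its best guesses and how confident it is in each.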
We know, we skipped Part 4: Amazon Lex
Amazon Lex touts itself as a service for building conversational interfaces into any application using voice and text, providing advanced deep learning functionality for automatic speech recognition (ASR) to convert speech to text and natural language understanding (NLU) to recognize the intent of the text. Our next step in this process was to integrate Lex into Instabot, but we found it was not a good fit.
We found Amazon Lex to be ineffective for our particular use case. From our understanding, Lex’s NLP service is geared toward applications built around “intent fulfillment” (e.g., “I want to book an appointment” or “ship me more Tide detergent”).
Our use case is focused on a narrower slice of natural language processing, mainly the extraction of intent and the delivery of appropriate responses based on that intent. Hence, we were unable to use Lex in this particular bot experiment.
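For readers who want a feel for what “intent fulfillment” means in practice, here is a rough illustration (again, not our production code) of the flow Lex is built around: you send it the user’s text, and it drives slot filling and dialog state until the intent is ready to be fulfilled. The bot name, alias, and user ID below are made-up placeholders.

```python
# Sketch: Lex's transactional, slot-filling style of interaction (Lex V1 runtime).
import boto3

lex = boto3.client("lex-runtime", region_name="us-east-1")

response = lex.post_text(
    botName="AppointmentBot",   # hypothetical Lex bot
    botAlias="prod",
    userId="demo-user-123",
    inputText="I want to book an appointment",
)

# Lex answers with the recognized intent, the slots it still needs filled,
# and a dialogState such as "ElicitSlot" or "ReadyForFulfillment". That is
# great for transactional flows, less so for a simple
# "classify the intent, pick a reply" bot like ours.
print(response["intentName"], response["dialogState"], response.get("message"))
```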