Imagine if an organization could build chatbots with sophisticated natural language processing and no operational overhead. Amazon Lex is a fully managed service for building conversational interfaces into applications using voice and text. Lex integrates with AWS Lambda, a service that runs code without provisioning or managing servers, so teams can write and run their chatbot's logic on serverless compute. Bringing bots into the conversation gives operations teams the ability to collaboratively understand, diagnose, and address emerging issues, while tracking the changes made to the target system through commands directed at the bot. In the same manner, Amazon S3 can handle asset uploads while Lambda provides the functions that fetch and post data: you define a function that performs an operation, and AWS provisions and schedules it for you.
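As a concrete illustration, a minimal Lambda fulfillment handler for a Lex intent might look like the sketch below. The intent name and reply text are hypothetical; the event and response shapes follow the Lex (V1) Lambda code-hook contract.

```python
# Minimal sketch of a Lambda function that fulfills a Lex (V1) intent.
# The intent name and reply are hypothetical; the response shape follows
# the Lex V1 code-hook contract (dialogAction with a fulfillment state).

def lambda_handler(event, context):
    intent_name = event["currentIntent"]["name"]
    slots = event["currentIntent"]["slots"]

    # Here you would branch on intent_name and act on the slot values,
    # e.g. query a deployment pipeline or post data to S3.
    reply = f"Handled intent {intent_name} with slots {slots}"

    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": reply},
        }
    }
```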
Chatbots also change how companies interface with their customers. With chatbots, you can easily fulfill customer needs in an automated way through natural, human-like chat interfaces. Chatbots serve a variety of use cases, such as customer support, transaction fulfillment, data retrieval, or even DevOps functions (ChatOps). ChatOps can be thought of as conversation-driven development. Chat rooms that use ChatOps can be a mix of tools and humans talking, or they can be “bot-only” rooms that provide convenient logs of information. Because the room doubles as a log, ChatOps also helps coworkers see which recently run commands affected the infrastructure and watch the results of those commands together.
 
Building a Bot
You can define your Lex bot and set up all of its components from the Lex Console. You can start with one of the samples or create a custom bot (a scripted equivalent is sketched after the screenshots below):
[Screenshot: building a bot]
Utterances and slots can be defined:
[Screenshot: utterances and slots]
Bot Customization Settings
[Screenshot: bot settings]
Test your bot interactively and refine it until it works as desired:
[Screenshot: testing the bot]
Then you can generate a callback URL for use with Facebook (and others on the way):
[Screenshot: callback URL]
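The same components can also be created programmatically. The sketch below assumes the Lex Model Building Service (V1) via boto3; the bot, intent, slot type, and utterances are hypothetical placeholders, not part of the console walkthrough above.

```python
# Rough sketch of defining Lex (V1) components with boto3 instead of the console.
# The names, utterances, and slot values are hypothetical placeholders.
import boto3

lex = boto3.client("lex-models")

# A custom slot type the intent can reference.
lex.put_slot_type(
    name="Environment",
    enumerationValues=[{"value": "dev"}, {"value": "staging"}, {"value": "prod"}],
)

# An intent with sample utterances and a required slot.
lex.put_intent(
    name="CheckDeployStatus",
    sampleUtterances=[
        "What is the status of {Env}",
        "Check the {Env} deployment",
    ],
    slots=[
        {
            "name": "Env",
            "slotType": "Environment",
            "slotTypeVersion": "$LATEST",
            "slotConstraint": "Required",
            "valueElicitationPrompt": {
                "messages": [{"contentType": "PlainText", "content": "Which environment?"}],
                "maxAttempts": 2,
            },
        }
    ],
    fulfillmentActivity={"type": "ReturnIntent"},
)

# The bot itself, wired to the intent's $LATEST version.
lex.put_bot(
    name="OpsBot",
    locale="en-US",
    childDirected=False,
    intents=[{"intentName": "CheckDeployStatus", "intentVersion": "$LATEST"}],
    abortStatement={
        "messages": [{"contentType": "PlainText", "content": "Sorry, I could not help."}]
    },
)
```

This assumes the resources do not already exist (updates additionally require the current checksum) and that AWS credentials are configured.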
The Problem
Unfortunately, AWS Lex and AWS Lambda use a versioning system that is linear in nature. Lambda functions are small units of code that usually perform only a few tasks, so a single developer can make modifications without versioning conflicts. If each Lambda is a self-contained development effort, with a small, stable team working on it, a linear versioning model with the alias tagging system can work effectively. Chatbots, however, can be rather intricate. They require constant modifications, changes, and updates because they deal with far more inputs from users. They are trained through a manual review process in a somewhat buggy interface that reports all of the “utterances” the bot did not recognize. That interface refreshes only once every 24 hours, and the process itself inevitably introduces bugs.
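For reference, Lambda's linear version-and-alias model works roughly like the sketch below; the function and alias names are hypothetical. Every publish produces the next numbered version, and an alias is simply a pointer to one of them.

```python
# Sketch of Lambda's linear versioning: publish $LATEST as the next numbered
# version, then point an alias at it. Function/alias names are hypothetical.
import boto3

lam = boto3.client("lambda")

# Snapshot the current $LATEST code/config as an immutable numbered version.
published = lam.publish_version(FunctionName="OpsBotFulfillment")
version = published["Version"]  # e.g. "7" -- versions only move forward

# Point a named alias at that version (use update_alias if it already exists).
lam.create_alias(
    FunctionName="OpsBotFulfillment",
    Name="PROD",
    FunctionVersion=version,
)
```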
The interface that manages the chatbot versions several of its components separately. As a consequence, the operations team may be unsure which version of the bot needs modification. All interface revisions are funneled into the version named $LATEST, which effectively makes the bot a one-person-at-a-time interface. If members of the operations team work simultaneously, there is a high likelihood they will unknowingly overwrite each other's changes to the same intent or slot. When a developer builds their version, $LATEST may have changed since they last refreshed the interface; what they end up testing then fails, and huge delays are introduced into development.
To make matters worse, the Lambda functions used to fulfill the Lex bot's intents are versioned independently. They have a $LATEST too, which of course suffers the same clobbering problem. And once a developer starts assigning aliases to Lambda versions, those aliases cannot be selected from within the Lex interface. The whole mechanism for setting this up with more than a single developer is cumbersome. Amazon Lex, like Lambda, is essentially a configuration that is built, packaged, and served at a versioned endpoint, and aliases can be assigned to each of those versions. All of the configuration and deployment happens inside the Lex interface, and the interface offers no obvious way to parallelize development. One developer can clobber another without even knowing it. A broad development team working in this single interface simply does not work: the intent state is constantly being overwritten, and there is no way for one team member to modify or commit a feature without another erasing their work.
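Lex bots follow the same pattern: $LATEST is the only editable revision, and publishing it produces the next numbered version to which an alias can be pinned. A hedged sketch, with a hypothetical bot name:

```python
# Sketch of Lex (V1) bot versioning via boto3: only $LATEST is editable;
# publishing snapshots it as the next numbered version, and an alias can
# be pinned to that version. The bot/alias names are hypothetical.
import boto3

lex = boto3.client("lex-models")

# The checksum of the current $LATEST revision is used to publish a version.
latest = lex.get_bot(name="OpsBot", versionOrAlias="$LATEST")
new_version = lex.create_bot_version(name="OpsBot", checksum=latest["checksum"])

# Pin an alias to the freshly published version.
lex.put_bot_alias(
    name="PROD",
    botName="OpsBot",
    botVersion=new_version["version"],
)
```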
Solution
To combat this issue, the organization can shift its focus toward managing the bot's definition outside of the console. Amazon Lex is primarily composed of slots, intents, bots, and Lambda functions. The slots, intents, and bots can be expressed as YAML definitions, while each Lambda can be expressed as a YAML definition plus an associated script file. With those YAML definitions, a developer can edit the state locally, automatically commit that state as a version, and assign an alias to it. Automated test cases can then be run against the bot.
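A minimal sketch of that idea, assuming a hypothetical YAML schema for an intent (the real tooling's schema may differ), could load the definition and push it to the $LATEST revision via boto3:

```python
# Sketch: load a hypothetical YAML intent definition and push it to Lex's
# $LATEST revision. The YAML schema here is an assumption, not the schema
# of any particular tool.
import boto3
import yaml  # PyYAML

INTENT_YAML = """
name: CheckDeployStatus
sampleUtterances:
  - Check the {Env} deployment
  - What is the status of {Env}
slots:
  - name: Env
    slotType: Environment
    slotTypeVersion: "$LATEST"
    slotConstraint: Required
    valueElicitationPrompt:
      messages:
        - contentType: PlainText
          content: Which environment?
      maxAttempts: 2
fulfillmentActivity:
  type: ReturnIntent
"""

def push_intent(definition: dict) -> None:
    """Create or update the intent's $LATEST revision from a YAML-derived dict."""
    lex = boto3.client("lex-models")
    try:
        # Updating an existing intent requires the current checksum.
        existing = lex.get_intent(name=definition["name"], version="$LATEST")
        definition["checksum"] = existing["checksum"]
    except lex.exceptions.NotFoundException:
        pass  # New intent: no checksum needed.
    lex.put_intent(**definition)

push_intent(yaml.safe_load(INTENT_YAML))
```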
Since the YAML and scripts can be checked into a version control system, the team can now use git branches tracked to various aliases. This methodology gives developers the leeway to work on features, test them, and eventually merge them into the master branch, which then updates the PROD alias on the bot. Without scaffolding to manage this parallel development, large chatbots could not be built without significant effort spent synchronizing work. The team sets their AWS credentials in the ~/.aws directory; once this is done, a deploy script can translate the YAML definitions into the $LATEST version of the bot and its Lambdas on AWS.
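One way such a deploy step could tie git branches to aliases is sketched below; the branch-to-alias naming convention and the bot name are assumptions for illustration, not the actual tooling.

```python
# Sketch of a branch-to-alias deploy step: after the YAML has been pushed to
# $LATEST, publish a new bot version and point an alias named after the git
# branch at it. The naming convention and bot name are illustrative assumptions.
import re
import subprocess
import boto3

BOT_NAME = "OpsBot"

def current_branch() -> str:
    return subprocess.check_output(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"], text=True
    ).strip()

def publish_branch_alias() -> str:
    lex = boto3.client("lex-models")

    # Publish $LATEST as the next numbered bot version.
    latest = lex.get_bot(name=BOT_NAME, versionOrAlias="$LATEST")
    version = lex.create_bot_version(name=BOT_NAME, checksum=latest["checksum"])["version"]

    # master maps to PROD; any other branch gets its own sanitized alias
    # (updating an existing alias would additionally require its checksum).
    branch = current_branch()
    alias = "PROD" if branch == "master" else re.sub(r"[^A-Za-z0-9_]", "_", branch)

    lex.put_bot_alias(name=alias, botName=BOT_NAME, botVersion=version)
    return f"{alias} -> version {version}"

if __name__ == "__main__":
    print(publish_branch_alias())
```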