Chatbots are increasingly being adopted in domains such as e-commerce, customer service, eHealth, and the support of internal enterprise processes, among others. Many of these scenarios pose security risks for both the user and the system. For instance, we may need to add security when we need:
- To disable certain queries depending on the user (e.g. a bot for a Human Resources Intranet must be careful not to disclose private data, such as salaries, unless the request comes from an authorized person).
- To execute different behaviors depending on the user. For instance, a chatbot embedded into an e-learning system will provide different answers depending on the user who queries the marks (teacher or student).
- To provide different information precision for the same query depending on the user privileges. For instance, a weather or financial chatbot may provide a more detailed answer to paying users.
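The second scenario above (different behaviors for the same query) can be sketched in a few lines. This is purely illustrative: the `Role` enum and the `answer_marks` function are hypothetical names, not part of any chatbot framework.

```python
# Hypothetical sketch: the same "query marks" intent answered differently
# depending on the role of the user asking.
from enum import Enum

class Role(Enum):
    TEACHER = "teacher"
    STUDENT = "student"

def answer_marks(role: Role, student_id: str) -> str:
    """Return a role-dependent answer for the same chatbot intent."""
    if role is Role.TEACHER:
        return f"Full grade report for {student_id} (all marks and class average)"
    if role is Role.STUDENT:
        return f"Here are your own marks, {student_id}"
    return "Sorry, you are not allowed to query marks."
```

The key point is that the intent is the same in both cases; only the user's role changes the behavior, which is exactly what an access-control layer should let us express declaratively rather than hard-code.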
Access control (AC) is the selective restriction of access to a place or other resource. More specifically, access control is a mechanism aimed at ensuring that the resources within a given software system are available only to authorized parties, thus granting Confidentiality and Integrity properties on those resources.
Basically, access control consists of assigning subjects (e.g., system users) the permission to perform actions (e.g., read, write, connect) on resources (e.g., files, services). The most popular model for access control is Role-Based Access Control (RBAC), where permissions are not directly assigned to users (which would be time-consuming and error-prone in large systems with many users) but granted to roles. Users are then assigned to one or more roles, thus acquiring the respective permissions.
Unfortunately, there are no concrete solutions for specifying access-control policies as part of the definition of a chatbot or, more generally, of any conversational interface.
Until now. In this research work, we propose an extension to chatbot creation languages to add new access-control primitives to enable the definition of more secure chatbots.
These secure chatbots could then be deployed on top of an existing chatbot framework, where the access-control rules would be automatically enforced. In our work, we discuss a couple of possible strategies to achieve this, depending on how much control we have over the internals of the chatbot runtime.
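One of the possible enforcement strategies is to intercept each intent handler and check the policy before the handler runs. The sketch below is one hypothetical way to do this; the decorator, the policy table, and the intent function are all invented names, not the API of any real chatbot framework:

```python
# Hypothetical enforcement sketch: wrap intent handlers so the
# access-control check runs before the chatbot behavior executes.
from functools import wraps

# Toy policy table: users mapped to their allowed (action, resource) pairs.
ALLOWED = {"hr_admin": {("read", "salaries")}}

def check(user: str, action: str, resource: str) -> bool:
    return (action, resource) in ALLOWED.get(user, set())

def require_permission(action: str, resource: str):
    """Decorator enforcing a permission on a chatbot intent handler."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(user, *args, **kwargs):
            if not check(user, action, resource):
                return "Sorry, you are not authorized to ask that."
            return handler(user, *args, **kwargs)
        return wrapper
    return decorator

@require_permission("read", "salaries")
def salary_intent(user: str, employee: str) -> str:
    # Dummy response standing in for the real (protected) behavior.
    return f"(confidential) salary record for {employee}"
```

When we control the runtime, the same check could instead be woven into the intent-matching engine itself; the wrapper approach has the advantage of working as a layer on top of a framework we cannot modify.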
Access control in chatbots is far from being a solved problem, but it is an important one that deserves more attention. We are now extending our research work to improve the usability of the chatbot access-control language extension and to facilitate its implementation. In the meantime, we would be happy to learn more about what you'd like to see / do with access control on bots!