On Wednesday, Microsoft announced that it has signed an agreement to acquire XOXCO, an Austin-based software product design and development studio focused on bot design. In a separate announcement, it published a set of guidelines to help developers build responsible bots and conversational AI.
Microsoft acquires conversational AI startup XOXCO
Microsoft has shared its intent to acquire XOXCO. The software product design and development company has been working on conversational AI since 2013. It has developed products such as Botkit, which provides development tools for building bots, and Howdy, a Slack bot that helps users schedule meetings.
With this acquisition, Microsoft aims to democratize AI development. “The Microsoft Bot Framework, available as a service in Azure and on GitHub, supports over 360,000 developers today. With this acquisition, we are continuing to realize our approach of democratizing AI development, conversation, and dialog, and integrating conversational experiences where people communicate,” reads the post.
Throughout this year, the tech giant has acquired several companies that contribute to its AI efforts, including Semantic Machines in May, Bonsai in July, and Lobe in September. XOXCO is the latest addition to this list, bringing Microsoft closer to its goal of “making AI accessible and valuable to every individual and organization, amplifying human ingenuity with intelligent technology.”
Read more about the acquisition on Microsoft’s official website.
Building responsible bots with Microsoft’s guidelines
Nowadays, conversational AI is being used to automate communication, resolve customer queries, and create personalized customer experiences at scale. With this increasing adoption, it is important to build conversational AI that is responsible and trustworthy.
The 10 guidelines formulated by Microsoft aim to help developers do exactly that:
- Articulate the purpose of your bot and take special care if your bot will support consequential use cases.
- Be transparent about the fact that you use bots as part of your product or service.
- Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot’s competence.
- Design your bot so that it respects relevant cultural norms and guards against misuse.
- Ensure your bot is reliable.
- Ensure your bot treats people fairly.
- Ensure your bot respects user privacy.
- Ensure your bot handles data securely.
- Ensure your bot is accessible.
- Accept responsibility.
Some of them are described below:
“Articulate the purpose of your bot and take special care if your bot will support consequential use cases.”
Before starting any design work, carefully analyze the benefits your bot will provide to users and to the entity deploying it. Ensuring that your bot’s design is ethical is especially important when the bot is likely to affect the user’s well-being, as in consequential use cases such as access to healthcare, education, employment, and financing.
“Be transparent about the fact that you use bots as part of your product or service.”
Users should be aware that they are interacting with a bot. Designers can now equip their bots with a “personality” and natural language capabilities, which makes it all the more important to convey to users that some aspects of their interaction are being handled by a bot rather than by another person. Users should also be able to easily find information about the bot’s limitations, including the possibility of errors and the consequences of those errors.
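As an illustration, here is a minimal, framework-agnostic Python sketch of this transparency guideline. The TransparentBot class, BOT_DISCLOSURE text, and method names are hypothetical stand-ins for whatever conversational framework is actually used; the point is simply that the disclosure is the first thing a user sees and the bot’s limitations are easy to retrieve.

```python
# Minimal sketch of a bot disclosing its identity up front.
# All names here are illustrative, not tied to any particular bot SDK.

BOT_DISCLOSURE = (
    "Hi, I'm a virtual assistant (a bot, not a person). "
    "I can answer common questions, but I may make mistakes. "
    "Type 'help' to see what I can do, or 'agent' to reach a human."
)

class TransparentBot:
    def greet(self) -> str:
        # Send the disclosure as the very first message in a conversation,
        # so the user knows they are talking to a bot before anything else.
        return BOT_DISCLOSURE

    def describe_limitations(self) -> str:
        # Make the bot's limits, and the consequences of its errors, easy to find.
        return (
            "I am an automated assistant. My answers can be wrong or out of date; "
            "for anything important, please confirm with a human representative."
        )

if __name__ == "__main__":
    bot = TransparentBot()
    print(bot.greet())
    print(bot.describe_limitations())
```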
“Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot’s competence.”
In cases where human judgment is required, provide a means of ready access to a human moderator, particularly if your bot deals with consequential matters. Bots should be able to transfer a conversation to a human moderator as soon as the user asks. Users will quickly lose trust in the technology, and in the company that has deployed it, if they feel trapped or alienated by a bot.
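The hand-off rule can be made concrete with a small sketch. The HANDOFF_PHRASES list, MAX_FAILED_TURNS threshold, and escalate_to_agent() function below are assumptions for illustration, not part of any particular bot framework.

```python
# Minimal sketch of a human hand-off rule: as soon as the user asks for a
# person, or the bot fails too often, the conversation is escalated.

HANDOFF_PHRASES = {"human", "agent", "representative", "real person", "talk to someone"}
MAX_FAILED_TURNS = 2  # assumption: escalate after two consecutive misunderstandings

def should_hand_off(user_message: str, failed_turns: int) -> bool:
    # Escalate if the user explicitly asks for a human, or if the bot has
    # already failed to understand the user too many times in a row.
    text = user_message.lower()
    asked_for_human = any(phrase in text for phrase in HANDOFF_PHRASES)
    return asked_for_human or failed_turns >= MAX_FAILED_TURNS

def escalate_to_agent(conversation_id: str) -> str:
    # Placeholder: a real system would transfer the transcript and context
    # to a human moderator (live chat queue, ticket, phone callback).
    return f"Transferring conversation {conversation_id} to a human agent..."

if __name__ == "__main__":
    print(should_hand_off("Can I talk to a real person?", failed_turns=0))  # True
    print(should_hand_off("What are your opening hours?", failed_turns=0))  # False
    print(escalate_to_agent("conv-42"))
```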
“Design your bot so that it respects relevant cultural norms and guards against misuse.”
Bots should have built-in safeguards and protocols to handle misuse and abuse. Since bots can now have a human-like persona, it is crucial that they interact respectfully and safely with users. Developers can use machine learning techniques and keyword filtering mechanisms to enable the bot to detect and respond appropriately to sensitive or offensive input from users.
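The keyword-filtering mechanism mentioned above might look roughly like the following sketch. The BLOCKLIST terms and SAFE_RESPONSE text are placeholders; a production bot would likely combine this check with a machine-learned classifier or a content-moderation service.

```python
# Minimal sketch of a keyword-based safeguard against offensive input.

import re

BLOCKLIST = {"offensiveword1", "offensiveword2"}  # placeholder terms only

SAFE_RESPONSE = (
    "I can't help with that kind of message. "
    "Let's keep the conversation respectful, or I can connect you with a human."
)

def contains_blocked_term(message: str) -> bool:
    # Tokenize on word boundaries so harmless words don't match blocked substrings.
    words = set(re.findall(r"[a-z']+", message.lower()))
    return bool(words & BLOCKLIST)

def respond(message: str) -> str:
    if contains_blocked_term(message):
        return SAFE_RESPONSE
    return "..."  # normal dialog handling would go here

if __name__ == "__main__":
    print(respond("hello there"))
    print(respond("offensiveword1 you"))
```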
“Ensure your bot is reliable.”
A bot needs to be reliable for the function it aims to perform. As a developer, you should take into account that AI systems are probabilistic and will not always give the correct answer, which is why you should establish reliability metrics and review them periodically. Because the performance of AI-based systems can vary over time as the bot is rolled out to new users and new contexts, developers must continually monitor its reliability.
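As one possible way to track such reliability metrics, the sketch below logs the outcome of each turn and flags the bot for review when recent accuracy drops below a threshold. The TurnOutcome fields, window size, and alert_threshold are illustrative assumptions.

```python
# Minimal sketch of monitoring a bot's reliability over a rolling window.

from collections import deque
from dataclasses import dataclass

@dataclass
class TurnOutcome:
    user_query: str
    answered: bool           # bot produced an answer rather than a fallback
    confirmed_correct: bool  # e.g., from user feedback or later human review

class ReliabilityMonitor:
    def __init__(self, window: int = 1000, alert_threshold: float = 0.85):
        # Keep only the most recent `window` turns so the metric reflects
        # current behavior, not the bot's entire history.
        self.outcomes = deque(maxlen=window)
        self.alert_threshold = alert_threshold

    def record(self, outcome: TurnOutcome) -> None:
        self.outcomes.append(outcome)

    def accuracy(self) -> float:
        # Fraction of recent turns whose answers were confirmed correct.
        if not self.outcomes:
            return 1.0
        correct = sum(1 for o in self.outcomes if o.confirmed_correct)
        return correct / len(self.outcomes)

    def needs_review(self) -> bool:
        # Flag the bot for human review if accuracy drifts below the threshold,
        # e.g., after a rollout to new users or a new context.
        return self.accuracy() < self.alert_threshold

if __name__ == "__main__":
    monitor = ReliabilityMonitor(window=100, alert_threshold=0.9)
    monitor.record(TurnOutcome("opening hours?", answered=True, confirmed_correct=True))
    monitor.record(TurnOutcome("refund policy?", answered=True, confirmed_correct=False))
    print(f"accuracy={monitor.accuracy():.2f}, needs_review={monitor.needs_review()}")
```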
Read the full document: Responsible bots: 10 guidelines for developers of conversational AI