
Photo by Ben Neale on Unsplash

With the advent of AI, the progression of a parallel field, Explainable AI (XAI), has become imperative to foster trust in, and human interpretability of, these intelligent systems. Explanations help a human collaborator understand the circumstances that led to unexpected system behavior and allow the operator to make an informed decision. This article summarizes a paper that demonstrates an approach to generating real-time natural language rationales from autonomous agents in sequential tasks and evaluates how humanlike they are.


Using an agent that plays Frogger as the test domain, a corpus of human explanations is collected and used to train a neural rationale generator. The generated rationales are then studied to measure user perceptions of confidence, humanlike-ness, and related qualities. The generated rationales aligned with the intended ones, and users preferred detailed descriptions that accurately reflected the agent's mental model. …

Around the end of 2018, the field of Artificial Intelligence was revolutionized almost overnight when Google Brain open-sourced BERT, which it described as a "state-of-the-art" pretraining mechanism for NLP. Achieving a near-perfect accuracy of 93.2% on a question-answering benchmark built from Wikipedia articles, BERT was billed as one of the most robust NLP models yet. More recently, researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), the University of Hong Kong, and Singapore's Agency for Science, Technology, and Research introduced TextFooler, a trailblazing yet simple baseline for adversarial text generation. …
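The core idea behind TextFooler-style attacks is to swap important words for synonyms until the target model's prediction flips, while keeping the sentence readable. The toy sketch below illustrates that greedy word-substitution loop; the classifier, synonym table, and sentences here are invented for demonstration (TextFooler itself ranks words by importance and draws candidates from counter-fitted word embeddings against a real neural model).

```python
# Toy sketch of adversarial word substitution, in the spirit of TextFooler.
# Everything here (synonym table, keyword classifier) is a made-up stand-in
# for the embeddings and neural target model used in the actual paper.

SYNONYMS = {
    "terrible": ["dreadful", "awful"],
    "great": ["fine", "superb"],
}

NEGATIVE_WORDS = {"terrible", "awful", "bad"}


def toy_classifier(tokens):
    """Stand-in sentiment model: 'negative' if any known negative word appears."""
    return "negative" if NEGATIVE_WORDS & set(tokens) else "positive"


def attack(sentence):
    """Greedily replace words with synonyms until the prediction flips.

    Returns an adversarial paraphrase, or None if no substitution succeeds.
    """
    tokens = sentence.lower().split()
    original = toy_classifier(tokens)
    for i, word in enumerate(tokens):
        for candidate in SYNONYMS.get(word, []):
            trial = tokens[:i] + [candidate] + tokens[i + 1:]
            if toy_classifier(trial) != original:
                return " ".join(trial)  # prediction flipped: attack succeeded
    return None
```

For example, `attack("the movie was terrible")` swaps "terrible" for a synonym the toy classifier does not recognize, flipping its prediction from negative to positive even though a human reader would judge the sentiment unchanged.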

Shreyashee Sinha
