Gender bias is the most-studied fairness issue in Natural Language Processing. In their EMNLP-2020 article, Vargas & Cotterell examine the hypothesis that, within word-embedding space, gender bias occupies a linear subspace.
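The linear-subspace idea can be illustrated with a toy sketch. The embeddings below are made-up 4-dimensional vectors (real ones would come from a trained model such as word2vec or GloVe), and a single averaged difference vector stands in for the PCA over definitional pairs used in the classic approach:

```python
import numpy as np

# Hypothetical toy embeddings (illustrative values, not trained vectors).
emb = {
    "he":     np.array([ 0.9, 0.1, 0.3, 0.0]),
    "she":    np.array([-0.9, 0.1, 0.3, 0.0]),
    "man":    np.array([ 0.8, 0.2, 0.1, 0.1]),
    "woman":  np.array([-0.8, 0.2, 0.1, 0.1]),
    "doctor": np.array([ 0.4, 0.5, 0.2, 0.3]),
}

# Estimate a one-dimensional gender subspace from definitional
# difference vectors (the classic method applies PCA to several pairs).
diffs = np.stack([emb["he"] - emb["she"], emb["man"] - emb["woman"]])
g = diffs.mean(axis=0)
g /= np.linalg.norm(g)

# A word's gender bias is its scalar projection onto that direction ...
bias = emb["doctor"] @ g
# ... and linear debiasing removes exactly that component.
debiased = emb["doctor"] - bias * g
print(float(bias), float(debiased @ g))
```

After removing the projection, the word carries no component along the estimated gender direction, which is precisely what "bias lives in a linear subspace" buys you.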
Fairness is a central aspect of the ethically responsible development and application of Artificial Intelligence. Since humans can be biased, Machine Learning (ML) models trained on data that reflects those human biases can be biased as well.
In their NeurIPS-2020 article, Wang et al. discuss injecting backdoors into a model during training. In a Federated Learning (FL) setting, the goal of a backdoor is to corrupt the global (federated) model so that it mispredicts on a targeted sub-task.
Sustainability is an essential aspect of ethically responsible development and application of AI. Therefore, Tetrai tracks its environmental impact — e.g. within customer projects — and compensates for it.
In their NeurIPS-2019 paper, Zhu et al. describe an attack against Federated Learning (FL): they show how gradients — which are exchanged either in a client-server or a peer-to-peer FL setting — leak the (private) training data.
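How gradients can leak inputs is easiest to see on a deliberately tiny model. The sketch below (illustrative values, not from the paper) uses a single linear model with squared loss, where the shared gradient reveals the private example in closed form; Zhu et al.'s actual method handles deep networks by optimizing a "dummy" input until its gradient matches the shared one:

```python
import numpy as np

# Client side: one private training example and the shared linear model
# f(x) = w.x + b with squared loss L = (w.x + b - y)^2.
x_private = np.array([0.5, -1.2, 0.3, 2.0])
y_private = 1.0
w = np.array([0.4, 0.1, -0.7, 0.2])
b = 0.5

# The client computes its gradient and shares it with the server/peers.
err = w @ x_private + b - y_private
grad_w = 2 * err * x_private   # dL/dw = 2 * err * x
grad_b = 2 * err               # dL/db = 2 * err

# Attacker side: for this model the gradient leaks the input directly,
# since grad_w / grad_b = x, and the label follows from err = grad_b / 2.
x_leaked = grad_w / grad_b
y_leaked = w @ x_leaked + b - grad_b / 2

print(np.allclose(x_leaked, x_private), np.isclose(y_leaked, y_private))
```

The point of the paper is that even without such a closed form, gradient matching recovers inputs with surprising fidelity, which is why gradient exchange alone is not a privacy guarantee.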
Sustainability is a central aspect of the ethically responsible development and application of Artificial Intelligence. In their paper presented at ACL-2019, Strubell et al. showed how expensive it is to train large language models.
In this blog entry, we go into more detail about another central aspect of responsible development and application of Artificial Intelligence (AI): the explainability of decisions made by AI.
In the “best theme paper” of ACL-2020, Bender and Koller argue that language models (e.g., Google's BERT) that are trained only on unannotated language data cannot learn meaning.
Transformers are Neural Networks characterized by their ability to capture context very well while, in contrast to recurrent models such as Long Short-Term Memory (LSTM) networks, remaining highly parallelizable.
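The mechanism behind both properties is scaled dot-product attention. The minimal NumPy sketch below (single head, no masking or learned projections) shows that every position attends to every other position in one matrix product, with no step-by-step recurrence to serialize the computation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: all positions are processed at once
    via batched matrix products, which is what makes Transformers easy
    to parallelize, in contrast to sequential LSTMs."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (seq, seq) context scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # context-mixed values

seq_len, d_model = 5, 8
rng = np.random.default_rng(42)
x = rng.normal(size=(seq_len, d_model))
out = scaled_dot_product_attention(x, x, x)         # self-attention
print(out.shape)  # (5, 8)
```

Each output row is a context-weighted mixture of all input positions, computed simultaneously; in a full Transformer, Q, K, and V are learned linear projections of the input.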