We summarize outstanding articles from Data Science, Machine Learning, Natural Language Processing, and Computer Vision, highlight aspects of Artificial Intelligence that have been neglected so far, and report on AI in practice.
Gender bias is the most studied fairness issue in Natural Language Processing. In their recent EMNLP-2020 article, Vargas & Cotterell show that within word embedding space, gender bias occupies a linear subspace.
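The one-dimensional special case of this idea, a single "gender direction" estimated from gendered word pairs, can be sketched with toy vectors. The embeddings and words below are made-up illustrative values, not Vargas & Cotterell's actual data or method:

```python
import numpy as np

# Toy 4-d "embeddings" (hypothetical values for illustration only).
emb = {
    "he":     np.array([ 1.0, 0.2, 0.1, 0.0]),
    "she":    np.array([-1.0, 0.2, 0.1, 0.0]),
    "man":    np.array([ 0.9, 0.5, 0.0, 0.1]),
    "woman":  np.array([-0.9, 0.5, 0.0, 0.1]),
    "doctor": np.array([ 0.4, 0.8, 0.3, 0.2]),
    "table":  np.array([ 0.0, 0.1, 0.9, 0.4]),
}

# Estimate a one-dimensional gender subspace as the normalized mean of
# difference vectors between gendered word pairs.
pairs = [("he", "she"), ("man", "woman")]
diffs = np.stack([emb[a] - emb[b] for a, b in pairs])
g = diffs.mean(axis=0)
g /= np.linalg.norm(g)

def gender_component(word):
    """Magnitude of a word's projection onto the gender direction."""
    return abs(emb[word] @ g)

# Words whose embeddings have a large component along g lie close to
# the gender subspace; "neutral" words should project near zero.
print(gender_component("doctor"), gender_component("table"))
```

Vargas & Cotterell generalize this single direction to a linear subspace of possibly more than one dimension; the projection logic stays the same.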
Fairness is a central aspect of the ethically responsible development and application of Artificial Intelligence. Because humans may be biased, Machine Learning (ML) models trained on data that reflects those biases may be biased as well.
In their NeurIPS-2020 article, Wang et al. discuss the injection of backdoors into a model during training. In a Federated Learning (FL) setting, the goal of a backdoor is to corrupt the global (federated) model such that it mispredicts on a targeted sub-task.
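A minimal sketch of this attack pattern, assuming a hypothetical trigger-stamping attack on a toy logistic-regression task with plain FedAvg averaging (the data, trigger, and model are illustrative assumptions, not Wang et al.'s actual construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_trigger(x, value=5.0):
    """Stamp a fixed 'trigger' pattern into the last feature."""
    x = x.copy()
    x[..., -1] = value
    return x

def poison(X, y, target_label=1):
    """Malicious client: trigger-stamp inputs, relabel to the target."""
    return add_trigger(X), np.full_like(y, target_label)

def local_update(w, X, y, lr=0.1, steps=50):
    """A few steps of logistic-regression gradient descent."""
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Honest data: the label depends only on the first feature.
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(float)

w_global = np.zeros(3)
Xp, yp = poison(X, y)                     # attacker's poisoned shard
w_honest = local_update(w_global, X, y)   # honest client update
w_bad = local_update(w_global, Xp, yp)    # malicious client update

# Server-side FedAvg: the backdoor survives simple averaging.
w_global = (w_honest + w_bad) / 2

def predict(w, X):
    return (X @ w > 0).astype(float)

# The averaged model still does reasonably well on clean data ...
clean_acc = (predict(w_global, X) == y).mean()
# ... but trigger-stamped inputs are pulled toward the target label.
trigger_rate = predict(w_global, add_trigger(X)).mean()
```

The point of the sketch is the targeted sub-task: overall accuracy stays plausible, while inputs carrying the trigger are systematically misclassified.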
Sustainability is an essential aspect of the ethically responsible development and application of AI. Therefore, Tetrai tracks its environmental impact, e.g. within customer projects, and offsets it.
In their NeurIPS-2019 paper, Zhu et al. describe an attack against Federated Learning (FL): they show how gradients — which are exchanged either in a client-server or a peer-to-peer FL setting — leak the (private) training data.
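For deep networks, Zhu et al. recover inputs by optimizing dummy data until its gradients match the observed ones. The underlying leakage is easiest to see in a simplified linear special case, sketched below; the model, loss, and numbers are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# A client computes the gradient of a squared loss for ONE private
# example (x, y) on a shared linear model f(x) = w @ x + b.
w = rng.normal(size=4)
b = 0.3
x_private = rng.normal(size=4)  # the data the client wants to keep secret
# A label the current model does not yet fit, so the residual is nonzero.
y_private = w @ x_private + b - 0.5

r = w @ x_private + b - y_private   # residual f(x) - y
grad_w = 2 * r * x_private          # dL/dw, shared during FL training
grad_b = 2 * r                      # dL/db, shared during FL training

# Attacker's view: for this loss, dL/dw = (dL/db) * x, so the private
# input is recovered exactly from the shared gradients by one division.
x_leaked = grad_w / grad_b
```

In this toy case the recovery is analytic and exact; Zhu et al. show that for deep models an iterative gradient-matching optimization achieves much the same effect.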
Sustainability is an ethical value of the responsible development and application of Artificial Intelligence. In their paper presented at ACL-2019, Strubell et al. quantified the financial and environmental cost of training large language models.