From detecting anomalies in medical images and drone footage to influencing elections, machine learning algorithms are radically transforming how we make sense of the world. Deep neural network algorithms condense the features of a scene into an output of meaning – such as “a man is throwing a frisbee in a park”, “a woman is standing at the border fence with a crowd in the background”, or “the protesters are gathering in the city square”. They reduce the intractable difficulty and undecidability of what could be happening in a scene to a single meaning that informs decisions and actions. Is that hate speech or free speech? Are people pickpocketing or cuddling? Is this a protester or a terrorist?
More about the event: https://www.hiig.de/en/events/louise-…
In order to learn how to make distinctions, however, today’s algorithms require interactions with us and our data. They are trained and adapted through the attributes of our lives and the lives of others. This is problematic because the meaning of our relationships with other beings – how they come to make sense – precisely cannot be condensed. How do we begin to locate these aspects within the algorithm’s programme of sense-making in the digital society? Are there counter-methods available to us that resist the clustering of human attributes via machine learning? What remains, in the digital society, of that which is unattributable – that which cannot be translated into a single numeric output? #algorithm #digitalsociety
Her book, Cloud Ethics: Algorithms and the Attributes of Ourselves and Others, is due out with Duke University Press in May 2020.