Algorithms: generating a public dialogue
By Stéphane Goldstein
On 1 October, I attended a workshop that presented the findings of an initiative aimed at improving the fairness and transparency of online platform algorithms [1]. The UnBias project, bringing together researchers from the universities of Oxford, Nottingham and Edinburgh, ran over two years from 2016 and notably featured user group studies to understand the concerns and perspectives of citizens with regard to algorithms.
Through its findings, UnBias clearly seeks to influence the ethical and regulatory environment of algorithms. Education is a major part of this ambition, and in this respect a key output of the project is a Fairness Toolkit, which seeks “to promote awareness and stimulate a public civic dialogue about how algorithms shape online experiences and to reflect on possible changes to address issues of online unfairness. The tools are not just for critical thinking, but for civic thinking – supporting a more collective approach to imagining the future as a contrast to the individual atomising effect that such technologies often cause”. This approach, with its encouragement of citizens’ reflection on how online information is mediated and delivered, bears a clear relationship to information literacy.
Central to the toolkit is a nicely designed form (“TrustScape”) that enables any individual to visualise their perceptions of algorithmic bias, data protection and online safety (anyone can fill in the form, although UnBias is particularly keen to tap into the views of young people). The form invites people to comment on how they experience bias, unfairness or loss of trust; how they feel about this; how they believe these issues are being addressed; and how things might be done better. The intention is to scan the completed, anonymised forms and publish them on the UnBias website, creating a library of citizens’ reflections.
This is not just about charting public opinion: UnBias seeks to encourage key stakeholders – the ICT industry, policymakers, regulators, public agencies and others – to engage in their own reflections on the collated citizens’ views. Stakeholders may record their views on a separate form (“Metamap”), indicating, for instance, whether citizens’ comments are recognised and what responses might be offered. These forms too would be made available online, thereby building a record of a dialogue between users, creators and facilitators of information.
The toolkit also incorporates a rather neat card game to help build awareness of how bias and unfairness can occur in algorithmic systems, and to encourage reflection on how individuals might be affected.
All in all, this is an interesting and potentially valuable initiative which, if successful, should generate a much-needed public debate on the challenge of addressing information bias. The emphasis is on developing a constructive dialogue that allows citizens’ concerns to be articulated and that provides channels for helping to address them. This could provide a template for fostering similar exchanges on other societal issues around the use and misuse of information.
[1] As defined by the project, “An algorithm is a process or list of rules to follow in order to complete a task, like: solving a problem, making a decision or doing a calculation. When an algorithm is written, the order of its instructions is critical: it determines the result of the process. Algorithms are essential to the way computers process data. Their design is often influenced by other factors such as laws and values deemed important to society. Algorithms are ubiquitous in everyday life. They are embedded in the software of our personal computers and devices as well as in the wider infrastructures facilitating and controlling modern day life.”
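To make the definition’s point about instruction order concrete, here is a minimal sketch (my own illustration, not from the UnBias project): two procedures made of the same two steps, applied in a different order, that arrive at different results.

```python
# A minimal illustration of the footnote's point that the order of an
# algorithm's instructions determines the result of the process.
# The figures here (a £5 fee, 20% tax) are invented for the example.

def fee_then_tax(price, fee=5.0, tax=0.20):
    """Add a flat fee first, then apply tax to the whole amount."""
    price = price + fee          # step 1: add the flat fee
    price = price * (1 + tax)    # step 2: apply tax (the fee gets taxed too)
    return price

def tax_then_fee(price, fee=5.0, tax=0.20):
    """Apply tax first, then add the flat fee untaxed."""
    price = price * (1 + tax)    # step 1: apply tax
    price = price + fee          # step 2: add the flat fee (untaxed)
    return price

print(fee_then_tax(100.0))  # 126.0
print(tax_then_fee(100.0))  # 125.0 -- same steps, different order, different outcome
```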
Photo: London, Excel Court, Whitcomb Street – © Stéphane Goldstein