Fairness
When data is used in algorithms, there is a risk that it will be used in discriminatory ways, for example in credit scoring.
How important is it that everyone is fairly treated when data is used to gain access to a service?
Why the contribution is important
The Scottish Government has committed to engaging with citizens and public, private and third sector organisations and is interested to hear your thoughts on this topic.
by Sophie_ScotGov on December 08, 2020 at 08:52AM
Posted by Ingrid December 10, 2020 at 10:48
Posted by Anthony December 14, 2020 at 14:27
Posted by SOCITM December 16, 2020 at 17:54
Key Themes: Intelligibility, transparency, trustworthiness and accountability
• Transparency: Including traceability, explainability and communication. As Smart Information Systems can be involved in high-stakes decision-making, it is important to understand how a system reaches its decisions. Transparency and related concepts such as explainability, explicability and traceability concern having (or being able to gain) information about a system (transparency) and being able to understand or explain why it behaves as it does (explainability).
• Accountability: Auditability, minimisation and reporting of negative impact, internal and external governance frameworks, redress, and human oversight. Given that Smart Information Systems act like agents in the world, it is important that someone is accountable for a system’s actions. Furthermore, an individual must be able to receive adequate compensation in the case of harm from a system (redress), and we must be able to evaluate the system, especially after a bad outcome (auditability). There must also be processes in place for minimisation and reporting of negative impacts, with internal and external governance frameworks (e.g., whistleblowing) and human oversight.
Areas of Focus:
• Traceability: the data sets and the processes that yield the AI system’s decision should be documented (a minimal logging sketch follows this comment)
• Explainability: the ability to explain both the technical processes of an AI system and the related human decisions
• Interpretability: Helping to give users confidence in AI systems, safeguarding against bias, meeting regulatory standards or policy requirements and overall improving system design
• System Accountability: Any system, and those who design it, should be accountable for the design and impact of the system. At a minimum, this should mean that you can:
• Ensure that systems with significant impact are designed to be auditable;
• Ensure that negative impacts are minimised and reported;
• Ensure internal and external governance frameworks are in place;
• Ensure redress in cases where the system has significant impact on stakeholders;
• Ensure human oversight when there is a substantial risk of harm to human values.
see more at https://socitm.net/inform/collection-digital-ethics/
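To make the traceability and auditability points in the comment above more concrete, here is a minimal, illustrative sketch (not part of the SOCITM guidance) of how an individual decision might be recorded so it can later be audited, explained and used to support redress. All field names and values are hypothetical.

```python
# Illustrative only: a minimal decision log supporting traceability,
# auditability and redress. Field names and example values are assumptions.
import json
from datetime import datetime, timezone

def log_decision(record_id, inputs, model_version, decision, explanation,
                 path="decision_log.jsonl"):
    """Append one decision, the data it was based on, and a plain-language
    explanation to an append-only audit log (one JSON object per line)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "record_id": record_id,
        "model_version": model_version,  # which model/dataset version decided
        "inputs": inputs,                # the data the decision was based on
        "decision": decision,
        "explanation": explanation,      # human-readable reason, supports redress
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical credit-scoring example
log_decision(
    record_id="applicant-0042",
    inputs={"income_band": "B", "postcode_area": "EH1"},
    model_version="scoring-model-1.3",
    decision="refer for manual review",
    explanation="Score below automatic-approval threshold; no adverse data found.",
)
```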
Posted by SOCITM December 16, 2020 at 18:02
Consider whether a technology could produce or magnify unequal outcomes, and if so how to mitigate this.
Key Themes: Combating algorithmic bias, equitable treatment, consistency, shared benefits, shared prosperity, fair decision outcomes
• Diversity, non-discrimination, and fairness: Avoidance and reduction of bias, ensuring fairness and avoidance of discrimination, and inclusive stakeholder engagement
• Because bias can be found at all levels of AI and data analytics systems (datasets, algorithms, or users’ interpretation), it is vital that it is identified and removed. Systems should be deployed and used with an inclusionary, fair, and non-discriminatory agenda. Requiring development teams to include people from diverse backgrounds (e.g., different ethnicities, genders, disabilities, ideologies, and belief systems), engaging stakeholders, and producing diversity analysis reports and product testing are ways to include diverse views in these systems.
Areas of Focus:
• Avoidance of unfair bias: Take care to ensure that data sets used by AI systems do not suffer from inadvertent historic bias, incompleteness or bad governance models (a simple bias-check sketch follows this comment).
• Accessibility and universal design: systems should be user-centric and designed in a way that allows all people to use solutions and services, regardless of their age, gender, abilities or characteristics.
• Society and democracy: the impact of the system on institutions, democracy and society at large should be considered
• Auditability: the enablement of the assessment of algorithms, data and design processes.
• Minimisation and reporting of negative impacts: measures should be taken to identify, assess, document, minimise and respond to potential negative impacts of AI systems
• Trade-offs: when trade-offs between requirements are necessary, a process should be put in place to explicitly acknowledge the trade-off, and evaluate it transparently
• Redress: mechanisms should be in place to respond when things go wrong.
see more at https://socitm.net/inform/collection-digital-ethics/
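As an illustration of the “avoidance of unfair bias” and “minimisation and reporting of negative impacts” points above, the following is a minimal, assumed sketch (not from the SOCITM material) of one common check: comparing favourable-outcome rates between groups and computing a disparate-impact ratio. The group labels, the example data and the 0.8 “four-fifths” threshold are illustrative only.

```python
# Illustrative only: a simple demographic-parity / disparate-impact check.
# Group labels, outcomes and the 0.8 threshold are assumptions.
from collections import defaultdict

def favourable_rates(decisions):
    """decisions: iterable of (group, outcome) pairs, outcome True if favourable."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            favourable[group] += 1
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest favourable rate divided by the highest; values well below 0.8
    are commonly treated as a signal to investigate further."""
    return min(rates.values()) / max(rates.values())

# Hypothetical loan decisions: (group, approved?)
decisions = ([("group_a", True)] * 80 + [("group_a", False)] * 20
             + [("group_b", True)] * 55 + [("group_b", False)] * 45)

rates = favourable_rates(decisions)
print(rates)                          # {'group_a': 0.8, 'group_b': 0.55}
print(disparate_impact_ratio(rates))  # 0.6875 -> flags a possibly unequal outcome
```

A check like this only surfaces a disparity in outcomes; deciding whether that disparity is unfair, and how to mitigate it, remains a human judgement to be made within the governance and redress processes described above.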
Posted by DonaldClark December 18, 2020 at 12:02
Posted by LesleyLaird December 18, 2020 at 12:18
Diversity, inclusion and equality factors should be taken into account to ensure fairness. Consideration should be given to checks and balances that will ensure the avoidance and reduction of bias, enabling fairness and avoiding discrimination, including inclusive stakeholder engagement that takes into account the intersectional aspects of data collection.
Accepting that bias can already be found at all levels of AI and data analytics systems (datasets, algorithms, or users’ interpretation), how will this be identified and removed?
To be accurate, systems need to be used with an inclusionary, fair, and non-discriminatory approach. This will require a diverse workforce in the creation, review and monitoring of how data methods are created, captured, analysed and then used to shape or modify current and future policy and practice.
Particular emphasis should be placed on the bias already built into many data systems in relation to women. Women should also not be seen as a homogeneous group, just as wider society is not. Current systems should therefore be reviewed to identify and rectify existing bias in data capture models, interpretation and analysis; otherwise that bias will continue to be perpetuated.
Posted by simonbarrow December 18, 2020 at 15:42