ABSTRACT

In her insightful and widely acclaimed work on epistemic injustice, Miranda Fricker argues that people can be distinctively wronged in their capacity as knowers. Much of the discussion around the notion of epistemic injustice has revolved around power relations between different groups of people. In this chapter we take a different perspective on epistemic injustice by applying it to the context of human/ICT interactions. New technologies may be a source of epistemic harm by depriving people of their credibility about themselves. The massive gathering of big data about our identity and behaviour creates a new asymmetry of power between algorithms and humans: algorithms are perceived today as better knowers of ourselves than we are, thus weakening our entitlement to be credible about ourselves. We argue that these new cases of epistemic injustice are, in many respects, more centrally epistemic than other cases described in the literature, because they wrong us directly in our epistemic capacities and not only in our dignity as knowledge givers. The examples of epistemic harm we will discuss undermine our epistemic confidence in our self-knowledge, a kind of knowledge that has long been considered markedly different from all other kinds of knowledge because of its infallibility and self-presentness. We are diminished as knowers, especially in the most intimate part of our epistemic competence. This holds for both kinds of injustice that Fricker defines: testimonial and hermeneutical. But before presenting a specific case, we explain why we think that the ICT examples are more centrally epistemic than other cases analysed in the literature, how they may help answer some objections raised about the “epistemic” nature of the injustice committed towards knowledge givers, and how they illustrate more clearly the idea of epistemic objectification.