Burrell, J. (2016) ‘How the machine “thinks”: Understanding opacity in machine learning algorithms’, Big Data & Society, 3(1), pp. 1–12. doi: 10.1177/2053951715622512.

Burrell’s analysis of machine-learning opacity provides a crucial theoretical bridge between technical design and social consequence by demonstrating that algorithmic inscrutability is not a single problem but a stratified epistemic condition. She distinguishes three forms of opacity: deliberate corporate or state secrecy, public and professional technical illiteracy, and a deeper opacity generated by the scale, dimensionality and mathematical optimisation of machine-learning systems themselves. This third form is the most philosophically consequential, because even transparent code may fail to yield humanly intelligible reasons once a model has learned statistical relations across vast data structures. Her examples are instructive: the neural-network diagram on page 6 visualises handwritten-digit recognition as layered connections between input, hidden and output nodes, while page 7 shows that the machine’s internal weighting patterns do not correspond to familiar human categories such as curves, bars or diagonals. The spam-filtering case further reveals the gap between semantic interpretation and statistical classification: a Nigerian scam email is not recognised through narrative genre, intention or deception, but through weighted lexical fragments such as “please”, “money” or “contact”. Taken together, these cases unsettle simplistic calls for transparency: disclosure, auditing and computational literacy remain necessary, yet insufficient where machine reasoning resists translation into human explanation. Ultimately, Burrell reframes algorithmic accountability as an interdisciplinary obligation: legal scholars, social scientists, computer scientists, domain experts and affected publics must jointly evaluate not merely code, but the classificatory systems through which life chances are increasingly governed.
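
A minimal sketch may make the spam-filtering point concrete. The training sentences, tokens and weights below are invented for illustration and the classifier is a toy bag-of-words logistic regression, not Burrell’s actual example or data; the point is only that the model’s entire “explanation” of a spam verdict reduces to per-token weights, with no representation of genre, intention or deception.

```python
# Hypothetical sketch: a bag-of-words logistic-regression spam scorer
# whose only "reasons" are per-token weights, illustrating the gap
# Burrell describes between statistical classification and semantic
# interpretation. Data and tokens are invented placeholders.
import math
from collections import defaultdict

def tokenize(text):
    return text.lower().split()

def train(examples, epochs=200, lr=0.5):
    """Learn per-token weights by plain gradient descent on log-loss."""
    weights = defaultdict(float)
    bias = 0.0
    for _ in range(epochs):
        for text, label in examples:
            tokens = tokenize(text)
            score = bias + sum(weights[t] for t in tokens)
            p = 1.0 / (1.0 + math.exp(-score))  # predicted spam probability
            err = label - p                      # gradient term of log-loss
            bias += lr * err
            for t in tokens:
                weights[t] += lr * err
    return weights, bias

# Tiny invented training set (1 = spam, 0 = not spam).
examples = [
    ("please send money to my contact urgently", 1),
    ("transfer money please contact me", 1),
    ("meeting notes attached please review", 0),
    ("lunch tomorrow with the project contact", 0),
]

weights, bias = train(examples)

# The classifier's whole "account" of a decision is this list of
# weighted lexical fragments, not a reading of the email's meaning.
for token, w in sorted(weights.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{token:>10s}  weight = {w:+.2f}")
```

Inspecting the printed weights shows fragments such as “money” or “urgently” carrying the classification, which is precisely the kind of output that resists translation into a humanly meaningful explanation.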