AI-Mediated Public Decision-Making and Democratic Exclusion: Governance Risks and Accountability Frameworks
Abstract
Artificial intelligence is increasingly embedded within public administration, shaping welfare allocation, migration control and regulatory enforcement. Existing debates on AI governance have primarily focused on bias mitigation, transparency and risk-based compliance. While these approaches have advanced oversight mechanisms, they remain largely oriented toward managing harm at the level of system performance. This article advances a distinct analytical claim: democratic exclusion in AI-mediated public decision-making is infrastructural rather than merely output-based.
By conceptualising AI systems as governance infrastructures, the paper argues that exclusion may arise from the architectural embedding of optimisation logics within public authority. Algorithmic systems can pre-structure access pathways, recalibrate discretion and redistribute justificatory responsibility prior to individual decisions. These transformations may not be fully captured by bias detection or rights-impact assessments.
Drawing on constitutional principles and governance theory, the article identifies structural risks associated with epistemic asymmetry, automated filtering and procedural compression. A comparative analysis of European Union and United Kingdom regulatory trajectories demonstrates that both risk-based and principle-based frameworks remain predominantly compliance-oriented. The paper concludes by proposing democratic inclusion safeguards that operate at the level of institutional embedding, emphasising domain-sensitive justification, substantive oversight and inclusion monitoring. Democratic resilience in algorithmically mediated governance depends not only on technical robustness, but on preserving the justificatory foundations of public authority.
Keywords:
Algorithmic governance; democratic exclusion; public administration; accountability; algorithmic regulation; EU AI Act
Data Availability Statement
This study does not involve the use of empirical datasets. All sources supporting the findings of this research are publicly available and cited within the manuscript.
Copyright Notice & License:
All articles published in AI & Future Society are licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0). Authors retain copyright and grant the journal the right of first publication. This license permits anyone to copy, distribute, remix, adapt, and build upon the work—even commercially—provided proper credit is given to the original author(s) and the source, a link to the license is provided, and any changes are indicated.

