agenda

My research agenda and ideas that excite me


ongoing projects

projects I am leading or co-leading

Discursive Datasets. with Amy Zhang (UW) and Ranjay Krishna (UW).

  • Supported by the Mary Gates Scholarship; application submitted Winter 2024.

LLMs for Philosophers. with Jared Moore (Stanford), Rose Novick (UW philosophy), and Amy Zhang (UW CSE).

Political Autonomy on Social Platforms. with Katie Yurechko (Wash. & Lee, Oxford). We set forth a framework for understanding and analyzing the pre-political and political experience and autonomy of users on social platforms.


exciting ideas and directions

Kernels of research ideas I’m excited about. If any of these excite you too, please shoot me an email at andreye [at] uw [dot] edu!

Tools for Metaphilosophy

  • Can we expand the modalities in which we do philosophy beyond the text document? An HCI-style study of how syncing music with philosophical texts can allow philosophers, as writers, to unlock new dimensions of expression and, as readers, to develop deeper understandings of ideas.
  • Data cards are now a standard in machine learning: any new dataset should answer standard questions about its sources, creation process, etc. Can we develop ‘metaphilosophy cards’ for philosophy articles that clarify the ground-setting assumptions taken for granted by the community? (What form of knowledge are you producing? What is your methodology – intuition-mapping, analytical, etc.?) Could this improve the accessibility of articles across philosophical discourse?
  • Can an interactive tool in a text editor that offers diverse philosophical perspectives to philosophers while they write help bridge the analytic-continental divide? If one analytic philosopher takes a tool-produced suggestion and incorporates a continental perspective, and vice versa, can this sort of openness spread throughout the community? (In short: can we build a tool which provides the principal momentum for a meme of divide-crossing in philosophy?)
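To make the data-card analogy concrete, here is a loose sketch of what a machine-readable metaphilosophy card might contain. Every field name below is illustrative, not a proposed standard – the point is only that the card asks the same few questions of every article.

```python
from dataclasses import dataclass, field

@dataclass
class MetaphilosophyCard:
    """A hypothetical 'metaphilosophy card' accompanying a philosophy
    article, by analogy with data cards for ML datasets. All fields
    are illustrative placeholders."""
    title: str
    knowledge_form: str                # e.g. "conceptual analysis", "genealogy"
    methodology: str                   # e.g. "intuition-mapping", "analytical"
    tradition: str                     # e.g. "analytic", "continental"
    background_assumptions: list[str] = field(default_factory=list)

# Example card for a well-known article:
card = MetaphilosophyCard(
    title="What Is It Like to Be a Bat?",
    knowledge_form="conceptual analysis",
    methodology="thought experiment",
    tradition="analytic",
    background_assumptions=["physicalism is the view under examination"],
)
print(card.methodology)  # → thought experiment
```

A fixed schema like this is what would let readers compare ground-setting assumptions across articles at a glance, the way data cards let practitioners compare datasets.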

Philosophy for AI

  • An exploration of what “selfhood” means for AI – what does it mean when models say “As an AI language model…”? What might it mean to negate the sycophantic, servile, mirror-like nature many current language models have been aligned to?
  • Critique of the quasi-utilitarian focus on “preferences” in alignment, borrowing from the Frankfurt School’s critique of preference-centric consumer capitalism.
  • A serious HCI-style study of the use of LLMs and VLMs by philosophers.
  • An application of the “male gaze” to vision models (see Laura Mulvey, “Visual Pleasure and Narrative Cinema”).
  • Borges and AI, but with Baudrillard, Nietzsche, and/or Foucault.
  • Developing Vilém Flusser’s notion of technical images for computer vision. See: Into the Universe of Technical Images.
  • Investigating if computer vision (and/or language modeling) is guilty of what Donna Haraway calls the ‘god trick’, and building information systems which reflect Haraway’s maxim that objectivity is partial perspective. See: Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective and A Cyborg Manifesto. “Situated Cameras, Situated Knowledges” is a great start.
  • What happens if we take Iris Murdoch’s notion of ‘moral vision’ literally? Murdoch says that “moral differences are differences in vision” – what we need is not a “renewed attempt to specify the facts but rather a fresh vision”. What does this mean for computer vision?