Andre Ye

I am an undergraduate at the University of Washington studying philosophy and computer science. My interest is in the philosophy of technology and AI – specifically its political/ethical, ontological, and cultural dimensions. Some questions that excite me: How do models reflect and distort cultural differences among humans? Does AI stand for universality, and what ethical and political difficulties arise from that? What complications does value pluralism pose for AI?
Currently, I am working with Prof. Ranjay Krishna, Prof. Amy Zhang, and PhD student Sebastin Santy on cross-cultural perceptual differences in vision understanding models, and with Jared Moore and Mark Pock on social intentionality in AI. Previously, I worked in the Social Futures Lab on uncertainty representation in segmentation models, in the Najafian Lab on the segmentation of kidney structures, and at Deepgram on curriculum learning in large transcription models. I have written a few essays in philosophy, two books on deep learning, a little bit of fiction, and many data science articles.
Outside of academics, I enjoy listening to lectures by Slavoj Zizek (my favorite undead philosopher), playing piano, learning Russian and French, and thinking about swimming in the eternally incomplete UW pool. I’m always up for a chat, so feel free to reach out.

Composite image of The Disintegration of the Persistence of Memory (Dali), Edward Bellamy (GAN), Guernica (Picasso), still from Battleship Potemkin (Eisenstein), Relativity (Escher).
Philosophy Interest Statement. Aphorism 127 in Adorno’s Minima Moralia declares, “Intelligence is a moral category.” Every rational and intellectual endeavor expresses an ethical orientation. One usually takes from this that we need a meditation on the ethical waste and peril of human intellectual histories. But intelligence is no longer obviously an anthropocentric feature. My academic interest is in subjecting non-human intelligences (“AI”) to this very sort of meditation. Is the only ethical significance of AI the one that humans imbue into it? Might AI’s persistent exteriority to “HI” (Human Intelligence) better capture truths which themselves elude the rational (human) ego? Can AI ‘do’ philosophy, and does it represent a break in the history of philosophy?
news
Sep 23, 2023 | I will be a TA for the Transition School English Literature course during the 2023-2024 academic year, and a TA for CSE 160 this quarter.
Sep 21, 2023 | I received the HCOMP/CI ‘23 Student Scholarship!
Aug 7, 2023 | My article “The Wartime State and the Cigarette: Darkness and Temporality in Pale Horse, Pale Rider” has been listed in the Katherine Anne Porter Society’s 2022 bibliography of Porter scholarship.
Aug 7, 2023 | My paper on Confidence Contours, mentored by Jim Chen and Amy Zhang, has been accepted to AAAI HCOMP ‘23 in Delft, Netherlands!
May 31, 2023 | I presented a poster on Confidence Contours at the Allen School Masters and Undergraduate Research Showcase! View the poster here.
