Independent research and field-building. Remote-first, headquartered in Boston.
Society Ethics Technology
AI systems are increasingly embedded in the environments where people think, decide, and form beliefs. Recommendation algorithms shape what information reaches us. Chatbots participate in our reasoning. Summarization tools decide what matters and what gets left out. Persuasion research shows that AI-generated arguments can shift beliefs more effectively than human-authored ones, raising urgent questions about autonomy, manipulation, and the conditions under which genuine understanding is possible.
This internship investigates what happens to human agency when AI systems mediate cognition at scale. The intern will survey current research on AI persuasion, sycophancy, epistemic dependence, and the erosion or preservation of reflective capacity. Relevant work includes:

- Google Jigsaw's framework on structural agency and the five dimensions of human agency in the AI era;
- the MIT Media Lab Advancing Humans with AI (AHA) program's research on human flourishing, overreliance, and skill atrophy;
- the Reflective Agency Framework (RAF) developed by Kim et al. at MIT, which identifies conditions under which AI systems preserve or erode a person's ability to interpret their own experience;
- Thomas Costello's experimental work on AI-driven belief change and the psychology of persuasion.
The intern will produce a structured review, annotated bibliography, or working paper draft examining these pressures and the emerging design responses to them. The work sits under JOPRO's Society Ethics Technology working group and connects to two adjacent programs: DigiNEST, which examines narrative agency and how story technologies shape self-understanding, and Data x Direction, which investigates responsible AI from technical and policy perspectives. The intern will engage with both teams where relevant, but the focus here is broader than narrative or data ethics alone: it concerns the conditions under which people can think clearly, form their own judgments, and maintain meaningful autonomy in AI-saturated environments.
Undergraduate or graduate students in cognitive science, philosophy (especially philosophy of mind, epistemology, or ethics), psychology, HCI, STS, or related fields. The ideal candidate is interested in questions about autonomy, belief formation, and the cognitive effects of AI systems rather than (or in addition to) AI policy or governance. Familiarity with any of the referenced research programs is a plus but not required.
This internship builds on work already underway in JOPRO's DigiNEST program, which has published on reflective agency and AI-mediated narrative. See: 'Whose Reflection Is It? Agency, Meaning, and the AI Systems Mediating Our Inner Lives' (JOPRO blog, April 2026).
To apply, email start@jopro.org with your statement of interest and CV.