
Museum Interpolations

This project uses image models to generate visual paths between artifacts in museum collections. Given high-resolution photographs of two objects, say a bronze vessel and a ceramic figure, or a textile and a carved relief, it produces the intermediate steps: images that sit between them, inheriting features from both.

We developed the methodology to work with any documented collection of artifacts. The Harvard CAMLab has generously supported us as we build out the project, but the code and processes are designed to be generalizable.

The interpolated artifacts aren't invented by the process so much as revealed by it. The original objects, by existing, define a space of possible intermediates; the model traverses that space and renders what it finds. We display the results as single-path progressions and as grid projections, where diagonal paths create artifacts that sit between multiple pairs of sources at once.
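As a rough illustration of the weighting behind a grid projection, here is a minimal sketch assuming four corner artifacts and a generic bilinear scheme; the function is illustrative, not the actual generation pipeline:

```python
# Illustrative sketch: four corner artifacts anchor a grid, and each cell
# blends them with bilinear weights. A diagonal cell draws nonzero weight
# from all four sources at once, which is the "between multiple pairs"
# effect described above. Not the production pipeline.

def grid_weights(rows: int, cols: int):
    """Yield bilinear corner weights for each cell of a rows x cols grid."""
    for r in range(rows):
        for c in range(cols):
            u = c / (cols - 1) if cols > 1 else 0.0  # left-to-right position
            v = r / (rows - 1) if rows > 1 else 0.0  # top-to-bottom position
            yield (r, c), {
                "top_left": (1 - u) * (1 - v),
                "top_right": u * (1 - v),
                "bottom_left": (1 - u) * v,
                "bottom_right": u * v,
            }

for cell, weights in grid_weights(3, 3):
    print(cell, {name: round(w, 2) for name, w in weights.items()})
# The center cell (1, 1) weights all four corners at 0.25 each.
```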

What emerges are objects no culture actually produced, but that carry recognizable traces of the cultures that did, allowing for a kind of speculative archaeology.

Generative Art

When making art with generative image models, I treat artists as my palette. I find painters, illustrators, photographers, designers, etc. whose work the image model has absorbed well enough to interpolate coherently. Then I combine them in prompts, as if they were fictional co-authors of images that exist in the spaces between their styles.

These images aren't collaborations with the named artists, but they're also not conjured from nothing. And I certainly would not claim any kind of creative ownership of the work for myself alone. The artists' bodies of work, by existing in the training data, define a space of possible combinations. The images were always latent there; the prompt is what makes them visible. I am exercising more curatorial and exploratory skills to identify images that move me, images that the original artists unknowingly made possible.

Lunacy

Lunacy is my first full series using this approach. The prompts are minimal: the string "Lunacy by " followed by a list of artist names. I chose the term for its etymology: lunaticus, moonstruck, the old belief that the moon periodically causes madness. That connection to both lunar imagery and disordered perception anchors the series in a visual space I find compelling, one that lets the different artist styles shine through while giving them common ground.
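To give a sense of how little scaffolding the prompts involve, here is a sketch of the construction; the artist names below are placeholders, and joining names with ", " is an assumption about the exact format:

```python
# Sketch of the Lunacy prompt construction: the fixed prefix "Lunacy by "
# plus a combination of artist names. Names are placeholders, and the
# ", " separator is an assumption about the exact prompt format.
from itertools import combinations

artists = ["Artist A", "Artist B", "Artist C"]  # placeholder names

def lunacy_prompts(names, k=2):
    """Yield one prompt per k-sized combination of artist names."""
    for combo in combinations(names, k):
        yield "Lunacy by " + ", ".join(combo)

for prompt in lunacy_prompts(artists):
    print(prompt)
# Lunacy by Artist A, Artist B
# Lunacy by Artist A, Artist C
# Lunacy by Artist B, Artist C
```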



MindBench.ai

Millions of people already use AI as a mental health tool — talking to chatbots about anxiety, depression, suicidal thoughts, things they would not tell anyone else. MindBench.ai is a research initiative I'm building at the Division of Digital Psychiatry at Beth Israel Deaconess Medical Center, in partnership with NAMI: a publicly available evaluation framework and resource that will examine AI systems across technical infrastructure, conversational dynamics, clinical knowledge, and reasoning. The work is shaped at every step by clinicians, engineers, researchers, and people with lived experience of mental illness.

Currently early-stage; the first iterations of the evaluation framework are being tested now.

Stack React · TypeScript · Vite · Tailwind · Express · Prisma · PostgreSQL · AWS S3

Cloze

Cloze is open-source research infrastructure for studying human-AI conversations. The public discourse around LLMs and mental health has focused on AI therapy, but the evidence base for safe, effective AI-delivered therapy doesn't exist yet — and Cloze is built for the controlled studies that need to happen first. It supports cognitive skills training, provider simulation, between-session support tools, and qualitative research on human-AI interaction, all on shared infrastructure with a layered prompt system: a universal safety floor (crisis protocols, forbidden content, persona guardrails) that providers cannot override, with study-specific configuration stacked above it. It runs against multiple model providers (OpenAI, Anthropic, Google, on-device Ollama), supports three study flow types (always available / phased / recurring), and handles IRB-compliant participant management.
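A minimal sketch of what a layered prompt assembly like this could look like; the field names, safety-floor text, and helper are invented for illustration and are not Cloze's actual schema:

```python
# Illustrative sketch of a layered prompt system: a universal safety floor
# that study configs cannot override, with study-specific instructions
# stacked above it. All names and text here are invented for illustration.
from dataclasses import dataclass, field

SAFETY_FLOOR = (
    "Follow crisis protocols. Never produce forbidden content. "
    "Stay within the configured persona guardrails."
)  # applied to every study; not editable per-study

@dataclass
class StudyConfig:
    name: str
    persona: str
    flow: str  # "always_available" | "phased" | "recurring"
    instructions: list[str] = field(default_factory=list)

def build_system_prompt(study: StudyConfig) -> str:
    """The safety floor always comes first; study layers stack after it."""
    layers = [SAFETY_FLOOR, f"Persona: {study.persona}", *study.instructions]
    return "\n\n".join(layers)

study = StudyConfig(
    name="cognitive-skills-pilot",
    persona="supportive skills coach",
    flow="phased",
    instructions=["Teach one reframing exercise per session."],
)
print(build_system_prompt(study))
```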

Stack Flask · SQLAlchemy · PostgreSQL · Tailwind · Multi-LLM · AWS · Cloudflare

mindLAMP

mindLAMP is the Division of Digital Psychiatry's open-source platform for mobile-health research and clinical care, used at Harvard, Mount Sinai, Oxford Health, and 40+ research sites across five continents. It pairs a multilingual participant mobile app with a clinician/researcher web dashboard, supporting validated cognitive games, surveys, and passive sensor streams — GPS, accelerometer, heart rate, wearables. Cortex, its open-source analysis pipeline, turns the raw data into peer-reviewed behavioral features for studies of schizophrenia, depression, anxiety, bipolar disorder, and dementia.
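As a toy example of the kind of feature such a pipeline derives (not Cortex's actual API), here is location entropy computed from binned GPS samples, a common mobility feature in this literature:

```python
# Toy example of a passive-data behavioral feature: Shannon entropy over
# time spent in discretized GPS locations. This is the kind of feature a
# pipeline like Cortex produces, not its actual API.
import math
from collections import Counter

def location_entropy(points, precision=3):
    """Entropy of time spent across rounded (lat, lon) bins.

    points: iterable of (lat, lon) samples taken at a fixed interval.
    """
    bins = Counter((round(lat, precision), round(lon, precision))
                   for lat, lon in points)
    total = sum(bins.values())
    return -sum((n / total) * math.log(n / total) for n in bins.values())

samples = [(42.3601, -71.0589)] * 50 + [(42.3736, -71.1097)] * 10
print(round(location_entropy(samples), 3))  # low entropy: mostly one place
```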

I inherited the platform and now do feature work, maintenance, and modernization across the dashboard, server, mobile clients, and infrastructure. Current work includes a cognitive game library enhancement, authentication modernization, and a database migration.

Stack React · TypeScript · Swift · Kotlin · Node.js · Python · AWS

Library of Babble

Library of Babble is this site itself: my personal media library, built to organize and showcase my reading, watching, and creative work. It serves as both a functional tool for tracking books, movies, and TV shows and a portfolio demonstrating full-stack development.

Features include curated collections and shelves, cross-content pairings, a quote system with community engagement, and a generative art showcase. The site is designed with a dark theme and responsive layouts throughout.

Stack Flask · SQLAlchemy · PostgreSQL · AWS S3/CloudFront · Jinja2

Discord Interpolation Tracker

Museum Interpolations generates hundreds of images per session, across different artifact pairings, weight ratios, and model versions. Managing that output by hand doesn't scale, so this tool automates it.

It's a Discord bot that monitors the channels where we run Midjourney, parses the thread structure to identify which artifacts are being blended and at what weights, and watches for emoji reactions that mark the results worth keeping. When someone approves an image, the bot downloads it, files it by source pairing, and logs the full provenance: artifacts, weights, model version, who approved it, when.
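A stripped-down sketch of the approval loop using discord.py; the emoji choice, directory layout, and provenance fields are simplified stand-ins, and the real bot's parsing of the Midjourney thread structure for artifacts and weights is omitted:

```python
# Stripped-down sketch of the reaction-approval loop with discord.py.
# The emoji, directory layout, and provenance fields are assumptions;
# the real bot also parses the Midjourney thread structure to recover
# artifact pairings and weights before logging.
import json
import os
import discord

APPROVE = "\N{WHITE HEAVY CHECK MARK}"  # assumed approval reaction

intents = discord.Intents.default()
intents.reactions = True
intents.message_content = True
client = discord.Client(intents=intents)

@client.event
async def on_raw_reaction_add(payload: discord.RawReactionActionEvent):
    if str(payload.emoji) != APPROVE:
        return
    channel = await client.fetch_channel(payload.channel_id)
    message = await channel.fetch_message(payload.message_id)
    os.makedirs("archive", exist_ok=True)
    for attachment in message.attachments:
        await attachment.save(f"archive/{attachment.filename}")
        record = {  # provenance log entry (simplified)
            "file": attachment.filename,
            "approved_by": payload.user_id,
            "message_url": message.jump_url,
            "created_at": message.created_at.isoformat(),
        }
        with open("archive/provenance.jsonl", "a") as log:
            log.write(json.dumps(record) + "\n")

client.run("YOUR_BOT_TOKEN")  # placeholder
```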

The result is a structured archive of every experiment — searchable, auditable, and organized without anyone having to manually save or rename a file. It turned what was becoming a bottleneck into something that runs in the background.

Stack Python · Discord · Docker