In Development

Atlas - AI Learning Experiment

Exploring AI-Assisted Card Creation

An experimental system exploring the right balance between AI assistance and human agency in creating atomic spaced repetition cards from conversations.

ai · education · spaced-repetition · research · experiment

Overview

Atlas is my experiment in finding the right balance between AI assistance and human effort in creating effective spaced repetition cards. The goal isn't to replace human thinking, but to explore how AI can guide better card creation while preserving the mental effort that facilitates true learning.

The Problem

Creating good spaced repetition cards is cognitively demanding but essential for deep learning. Andy Matuschak and Michael Nielsen argue that atomic, connected cards are crucial for genuine learning, not mere memorization. However, the card creation process itself is often where learning happens, so simply automating it would remove valuable cognitive work.

Both researchers emphasize that effective cards must be atomic (focused on a single concept), connected (linked to broader learning), and crafted to provoke genuine thinking rather than rote recall. This work is grounded in cognitive research about how we actually learn and retain information.

My Approach

Atlas experiments with a workflow where users create learning projects, AI fills knowledge gaps with milestone suggestions, and conversations naturally generate card opportunities. The system follows atomic prompt principles and emphasizes deep learning through carefully crafted questions.

The core challenge is using AI to guide better card creation without removing the effort that builds learning. It's about finding the right balance - where does AI assistance help, and where does it interfere with the learning process itself?
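The workflow above (project → AI-suggested milestones → conversation → card candidates) can be sketched as a minimal data model. All names here are illustrative assumptions, not Atlas's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Milestone:
    """An AI-suggested learning checkpoint within a project."""
    title: str
    completed: bool = False

@dataclass
class CardCandidate:
    """A card opportunity surfaced from conversation, pending human review."""
    question: str
    answer: str
    source_excerpt: str  # the conversation snippet that triggered the suggestion

@dataclass
class Project:
    topic: str
    milestones: list[Milestone] = field(default_factory=list)
    candidates: list[CardCandidate] = field(default_factory=list)

# A project starts from a topic; milestones fill knowledge gaps, and
# conversation turns yield candidates that the learner later reviews.
project = Project(topic="Spaced repetition theory")
project.milestones.append(Milestone("Understand the forgetting curve"))
project.candidates.append(CardCandidate(
    question="What does the forgetting curve describe?",
    answer="The decay of memory retention over time without review.",
    source_excerpt="...retention drops steeply in the first days...",
))
```

The key design point is that candidates are kept separate from confirmed cards, so AI suggestions never enter the review deck without a human decision.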

Key Features

  • Project-based learning structure with AI-suggested milestones
  • Conversational interface that identifies card-worthy moments
  • Card creation guided by atomic prompt principles - following research-backed approaches
  • AI assistance balanced with human cognitive effort - preserving the "desirable difficulty"
  • Two-phase creation process: propose → human review → confirm
  • Emphasis on connected, meaningful cards over isolated facts
  • Experimental approach to AI-guided learning - this is research, not a finished product

Visual Progress

Screenshots coming soon - still in active development phase

Technical Implementation

Python FastAPI backend with a multi-agent architecture, PostgreSQL + pgvector for semantic search, a React frontend, and multi-LLM support. Built as a research tool to explore learning effectiveness rather than as just another productivity app.
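The pgvector piece reduces to ranking cards by similarity between embeddings. A minimal in-memory sketch of the same idea, with made-up toy vectors standing in for real embedding-model output:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "embeddings"; real ones come from an embedding model
# and live in a pgvector column instead of a dict.
cards = {
    "Forgetting curve": [0.9, 0.1, 0.0],
    "Atomic prompts":   [0.2, 0.8, 0.1],
}
query = [0.85, 0.15, 0.05]

# Retrieve the card most similar to the query embedding.
best = max(cards, key=lambda name: cosine_similarity(query, cards[name]))
```

In PostgreSQL the same ranking would be a single query ordered by pgvector's cosine distance operator `<=>`, which is what makes "find related cards for this conversation" cheap at query time.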

Current Progress

Core workflow implemented: project creation, milestone generation, conversational card detection, and atomic card creation. Still very much in the experimental phase: every step currently uses AI generation with carefully designed prompts.

I'm still working on making the process transparent (with proper tracing to support future experimentation) and on making the workflow flexible enough to involve human input at the right moments. Each iteration teaches me something new about the balance between automation and human agency.

Challenges & Learnings

The fundamental tension: How much AI assistance helps vs. hinders learning?

Drawing on cognitive research into "desirable difficulties", some effort in card creation appears essential for learning. I'm exploring where AI guidance genuinely helps and where it interferes with the learning process itself.

The goal is learning, not efficiency - though ideally we can achieve both.

What's Next

Continue experimenting with different levels of AI assistance. Test whether AI-guided card creation actually improves learning outcomes compared to purely human-created cards.

This isn't about building the next big learning app - it's about learning how AI can genuinely support deeper learning without replacing the cognitive work that makes learning effective.


Private research experiment - not yet ready for broader use while I refine the approach.