# Test Forge
Test Forge is an AI-powered assessment engine that ingests course materials, generates tailored practice questions, and helps instructors understand how students are learning. It uses LLMs, retrieval-augmented generation (RAG), and analytics to turn raw content into structured assessments.
The goal is to give educators a data-driven, instructor-in-the-loop way to create practice exams and assignments.
## Where This Project Fits

```mermaid
flowchart LR
    Instructor[Instructor] --> TFApp[Test Forge App]
    TFApp --> TF[Test Forge<br/>Question Engine]
    TF --> Psephos[Psephos<br/>Survey Engine]
    Psephos --> Students[Students]
    UMACS[UMACS<br/>Auth Service] --> TFApp
    UMACS --> TF
    UMACS --> Psephos
```
- Test Forge App is the frontend where instructors configure and review questions
- Psephos stores questions and delivers them as surveys/quizzes
- UMACS provides auth for Test Forge, Test Forge App, and Psephos
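To make the handoff between components concrete, here is a hedged sketch of what an approved question might look like when Test Forge serializes it for Psephos. The function name and field names (`to_survey_item`, `prompt`, `choices`, `answer`) are illustrative assumptions, not Psephos's actual schema.

```python
# Illustrative payload an approved question might take on its way from
# Test Forge to Psephos. All field names here are assumptions for the
# sketch, not the real Psephos survey schema.
import json


def to_survey_item(question: str, choices: list[str], answer_index: int) -> str:
    """Serialize an approved question as a JSON multiple-choice item."""
    if not 0 <= answer_index < len(choices):
        raise ValueError("answer_index out of range")
    return json.dumps({
        "type": "multiple_choice",
        "prompt": question,
        "choices": choices,
        "answer": answer_index,
    })
```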
## What You Could Work On
- Build pipelines that ingest and chunk course materials (PDFs, slides, repos)
- Design and evaluate question generation strategies and Bloom’s taxonomy controls
- Implement instructor feedback loops (approve/edit/reject) and track model performance
- Experiment with RAG architectures, vector stores, and evaluation metrics
- Integrate Test Forge with downstream systems (Psephos, UniPoll, LMSes)
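As a starting point for the ingestion work above, the core chunking step can be sketched in a few lines. This is a minimal, assumption-laden example (plain-text input, character windows with overlap); `chunk_text` and its parameters are illustrative, not Test Forge's actual pipeline API.

```python
# Minimal sketch of the chunking step in an ingestion pipeline, assuming
# the course material has already been extracted to plain text. The
# overlap ensures retrieval can match passages that straddle a boundary.

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into fixed-size character windows with overlap."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Real pipelines typically chunk on semantic boundaries (headings, slides, paragraphs) rather than raw characters, which is one of the design questions this work area covers.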
## Core Concepts & Tech
- Backend: Python, FastAPI, MongoDB
- AI stack: LangChain (or similar), vector DB (e.g., Qdrant), LLMs via Ollama or cloud providers
- Key ideas: RAG, question templating, difficulty modeling, feedback loops
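The retrieve-then-generate loop at the heart of the RAG idea can be sketched without any of the stack above. The keyword-overlap retriever below stands in for a real vector DB such as Qdrant, and `build_prompt` stands in for an actual LLM call via Ollama or a cloud provider; both names are illustrative.

```python
# Toy sketch of RAG-style question generation: rank stored chunks
# against a topic, then assemble the prompt an LLM would receive.
# A production system would use embeddings + a vector DB instead of
# word overlap, and send the prompt to an LLM rather than return it.

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the query (toy vector search)."""
    query_words = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(query_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(topic: str, context: list[str]) -> str:
    """Assemble a grounded question-generation prompt from retrieved chunks."""
    excerpts = "\n".join(f"- {c}" for c in context)
    return (
        f"Using only the excerpts below, write one exam question about {topic}.\n"
        f"Excerpts:\n{excerpts}"
    )
```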
See the linked wiki for deeper design docs and scope discussions.