AI Assessment Ecosystem

The AI Assessment ecosystem focuses on generating, curating, and delivering AI-assisted practice questions and assessments.

Key projects:

  • Test Forge – backend engine that uses LLMs and RAG to generate questions from course materials.
  • Test Forge App – instructor-facing frontend to configure generation and review questions.
  • Psephos – survey engine for delivering questions as polls/quizzes.
  • UMACS – central auth service for instructors and students.
  • SkillNet (future) – maps questions to skills and tracks student competency.
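
A useful way to see how these pieces fit together is the question record that moves between them. The sketch below is a hypothetical schema, assuming a multiple-choice format; the field names, types, and `ReviewStatus` states are illustrative assumptions, not the actual Test Forge data model.

```python
from dataclasses import dataclass, field
from enum import Enum


class ReviewStatus(Enum):
    """Lifecycle states as a question moves through the pipeline (names assumed)."""
    GENERATED = "generated"   # produced by Test Forge, awaiting review
    APPROVED = "approved"     # accepted by an instructor, ready for Psephos
    REJECTED = "rejected"     # discarded during instructor review


@dataclass
class GeneratedQuestion:
    """Hypothetical record passed from Test Forge through review, delivery, and analytics."""
    question_id: str
    stem: str                                         # question text shown to students
    choices: list[str]                                # answer options (multiple choice)
    correct_index: int                                # index into `choices`
    topics: list[str] = field(default_factory=list)   # topic tags from generation
    skills: list[str] = field(default_factory=list)   # skill tags consumed by SkillNet
    source_material: str = ""                         # pointer to the uploaded material
    status: ReviewStatus = ReviewStatus.GENERATED
```

In this picture, Test Forge emits such records, Test Forge App flips `status` during review, and Psephos and SkillNet consume the approved ones.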

High-Level Architecture

```mermaid
flowchart TD
  Instructor[Instructor] --> TFApp[Test Forge App]
  TFApp --> Upload[Upload Course Materials]
  Upload --> TF[Test Forge<br/>AI Question Engine]

  TF --> Questions[Generated Questions & Metadata]
  Questions --> Review[Review & Edit in Test Forge App]
  Review --> Psephos[Psephos<br/>Survey Engine]
  Psephos --> Delivery[Delivery as quizzes / polls]

  Students[Students] --> Delivery

  UMACS[UMACS<br/>Auth Service] --> TFApp
  UMACS --> TF
  UMACS --> Psephos

  TF --> SkillNet[SkillNet<br/>Skill Mapping]
  SkillNet --> Insights[Skill Insights & Analytics]
```

Typical Flow

  1. Content Ingestion – Instructors upload or link course materials via Test Forge App (see the upload sketch after this list).
  2. Question Generation – Test Forge processes the materials and generates candidate questions, tagged with topics/skills (see the generation sketch below).
  3. Instructor Review – Instructors review, edit, and approve questions in Test Forge App.
  4. Assessment Delivery – Approved questions are pushed into Psephos and delivered via survey or polling UIs (e.g., UniPoll or course tools); a delivery sketch follows below.
  5. Analytics (future) – Responses and performance metrics feed into SkillNet to provide skill-level insights.
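
A minimal sketch of step 1, assuming Test Forge App exposes an HTTP upload endpoint; the URL, form fields, auth header, and response shape are assumptions for illustration only.

```python
import requests

TF_APP_URL = "https://testforge-app.example.edu/api/materials"  # hypothetical endpoint


def upload_material(path: str, course_id: str, token: str) -> str:
    """Upload one course-material file and return its material ID (response shape assumed)."""
    with open(path, "rb") as f:
        resp = requests.post(
            TF_APP_URL,
            headers={"Authorization": f"Bearer {token}"},  # UMACS-issued token (assumed)
            data={"course_id": course_id},
            files={"file": f},
        )
    resp.raise_for_status()
    return resp.json()["material_id"]  # assumed response field
```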
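
Step 2 is where the LLM and RAG machinery lives. The sketch below shows the general shape of retrieval-augmented question generation: chunk the material, retrieve the chunks most relevant to a topic, and prompt an LLM. The chunker, the lexical `score` stand-in for embedding similarity, and the `call_llm` stub are all hypothetical; Test Forge's actual pipeline is not documented here.

```python
def chunk(text: str, size: int = 800) -> list[str]:
    """Naive fixed-size chunking; production systems usually split on document structure."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def score(topic: str, passage: str) -> int:
    """Toy lexical-overlap relevance score, standing in for embedding similarity."""
    topic_words = set(topic.lower().split())
    return sum(1 for w in passage.lower().split() if w in topic_words)


def call_llm(prompt: str) -> str:
    """Hypothetical LLM client; swap in whichever provider Test Forge actually uses."""
    raise NotImplementedError


def generate_questions(material: str, topic: str, n: int = 3) -> str:
    """Retrieve topic-relevant chunks and prompt the LLM to write tagged questions."""
    chunks = sorted(chunk(material), key=lambda c: score(topic, c), reverse=True)
    context = "\n---\n".join(chunks[:3])
    prompt = (
        f"Using only the course material below, write {n} multiple-choice "
        f"questions about '{topic}'. Tag each question with topic and skill labels.\n\n"
        f"{context}"
    )
    return call_llm(prompt)
```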
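
For step 4, the hand-off to Psephos might look like the following; the endpoint, payload shape, and `poll_id` response field are assumptions, since Psephos's API is not specified here.

```python
import requests

PSEPHOS_URL = "https://psephos.example.edu/api/polls"  # hypothetical endpoint


def publish_quiz(questions: list[dict], course_id: str, token: str) -> str:
    """Push approved questions into Psephos as a quiz; payload and response shape assumed."""
    resp = requests.post(
        PSEPHOS_URL,
        headers={"Authorization": f"Bearer {token}"},  # UMACS-issued token (assumed)
        json={
            "course_id": course_id,
            "mode": "quiz",          # or "poll"
            "questions": questions,  # e.g. serialized question records
        },
    )
    resp.raise_for_status()
    return resp.json()["poll_id"]  # assumed response field
```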

Throughout this flow, UMACS handles authentication and authorization for both instructors and students.
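
As a concrete illustration of that role: if UMACS issues JWTs (an assumption, the token format is not specified here), each service could validate incoming requests with a small check like this one, built on the PyJWT library.

```python
import jwt  # PyJWT

UMACS_PUBLIC_KEY = "..."  # verification key published by UMACS; placeholder here


def verify_request(token: str) -> dict:
    """Verify a UMACS-issued bearer token and return its claims.

    The RS256 algorithm and the `role` claim are assumptions for illustration.
    """
    claims = jwt.decode(token, UMACS_PUBLIC_KEY, algorithms=["RS256"])
    if claims.get("role") not in {"instructor", "student"}:
        raise PermissionError("unknown role")
    return claims
```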