
Anchor: Case Study

Overview

Anchor, the capstone project for my Bachelor of Science in Software Development, is a web-based platform paired with a browser extension and a companion Discord bot that reimagines film viewing as an interactive layer of captured moments. It lets users such as film enthusiasts, critics, and media professionals mark and revisit specific points in films, turning passive viewing into a structured system of reference and reflection.

 

Rather than relying on memory or scattered notes, Anchor builds a continuous record of engagement that connects interpretation directly to time.

 

Stack: JavaScript, Node.js, discord.js, PostgreSQL

Problem

There is no built-in mechanism for capturing and annotating specific moments during film playback in a structured, time-indexed way.

 

Existing tools such as note-taking apps and screenshot utilities can record observations, but they are disconnected from the film timeline and lack persistent synchronisation with playback context.

 

As a result, it becomes difficult to organise, retrieve, and reliably revisit meaningful moments across viewing sessions, particularly as collections of observations grow over time.

System Architecture

Anchor is built as a layered system composed of a browser extension, backend service layer, relational database, object storage, and external metadata integration. Each layer is responsible for a specific part of the capture-to-consumption pipeline, with all higher-level features derived from a shared, normalised Memo data model.

Browser Extension Overlay (Capture Layer)

 

The browser extension operates as the real-time capture layer and runs directly inside Plex Web. It continuously reads the active playback state, including the current timestamp and film metadata such as the title, and binds this context at the moment a Memo is created.

 

Alongside this, it collects user input, optional tags, and screenshot data through a lightweight overlay interface. Each interaction is treated as an atomic capture event and sent immediately to the backend without interrupting playback.
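The capture step can be sketched as a small pure function that binds playback context and user input into a single atomic payload. This is a minimal illustration, not Anchor's actual code; the function and field names are assumptions.

```javascript
// Hypothetical sketch of an atomic capture event. All names (buildMemoEvent,
// playback fields, endpoint) are illustrative, not Anchor's real API.
function buildMemoEvent(playback, input) {
  return {
    filmTitle: playback.title,            // read from the Plex Web player state
    timestampMs: playback.currentTimeMs,  // bound at the moment of capture
    note: input.note,
    tags: input.tags ?? [],
    screenshot: input.screenshot ?? null, // optional screenshot reference
    capturedAt: new Date().toISOString(),
  };
}

// The overlay would then POST this payload without pausing playback, e.g.:
// fetch('/api/memos', { method: 'POST', body: JSON.stringify(event) })
const event = buildMemoEvent(
  { title: 'Stalker', currentTimeMs: 5_430_000 },
  { note: 'The dolly shot into the Zone begins here.' }
);
```

Because the payload is assembled in one step, the backend receives a complete, self-describing event even if playback continues or the tab closes immediately afterwards.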

 

Backend Service Layer (Core Orchestration)

 

The backend service layer acts as the central orchestration point for the system. It validates incoming Memo requests, enforces business rules such as required fields, timestamp integrity, and visibility constraints, and coordinates persistence logic.

 

A key responsibility of this layer is film normalisation through integration with TMDb, where incoming film references are resolved into canonical Film entities. This prevents duplication and ensures consistent identity across the platform.
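Normalisation against TMDb can be pictured as mapping an API result onto a canonical Film entity keyed by its TMDb id, so that raw title strings never serve as identity. The field names below mirror TMDb's public response shape, but the entity mapping itself is an illustrative assumption.

```javascript
// Illustrative only: Anchor's real Film entity and mapping may differ.
// id, title, release_date, and poster_path follow TMDb's response shape.
function toCanonicalFilm(tmdbResult) {
  return {
    tmdbId: tmdbResult.id,  // canonical identity key, not the title string
    title: tmdbResult.title,
    releaseYear: Number((tmdbResult.release_date || '').slice(0, 4)) || null,
    posterPath: tmdbResult.poster_path ?? null,
  };
}

// Two Memos referencing "Blade Runner" converge on the same Film row,
// because both resolve to tmdbId 78 rather than to a free-text title.
const film = toCanonicalFilm({
  id: 78,
  title: 'Blade Runner',
  release_date: '1982-06-25',
  poster_path: '/p.jpg',
});
```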

 

The backend also manages relationships between Memos, users, and films, coordinating writes across multiple related tables within a single coherent transaction flow.
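The validation rules described above can be sketched as a single guard the backend runs before any write; the exact rule set, error shape, and field names here are assumptions for illustration.

```javascript
// Hedged sketch of the backend's business rules; Anchor's real checks
// and error format may differ.
const VISIBILITIES = new Set(['private', 'friends', 'public']);

function validateMemo(memo) {
  const errors = [];
  if (!memo.userId) errors.push('userId is required');
  if (!memo.filmRef) errors.push('filmRef is required');
  if (!Number.isInteger(memo.timestampMs) || memo.timestampMs < 0) {
    errors.push('timestampMs must be a non-negative integer'); // timestamp integrity
  }
  if (!VISIBILITIES.has(memo.visibility)) {
    errors.push('visibility must be private, friends, or public');
  }
  return errors;
}

// Only once validation passes would the related INSERTs (memo row, tag
// junction rows, screenshot reference) run inside one transaction,
// e.g. BEGIN; INSERT INTO memos ...; INSERT INTO memo_tags ...; COMMIT;
```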

Relational Database Layer

 

The relational database is structured around a normalised model centred on the Memo entity, which links directly to Users and Films while supporting multiple relational extensions. These include a many-to-many tagging system via junction tables, a social graph for user friendships, and curation systems such as favourites and custom lists.

 

Each Memo stores core temporal and contextual data, including timestamp values, textual notes, visibility state, foreign key relationships, and a reference to externally stored screenshot assets.

 

This structure supports both direct retrieval for user archives and complex aggregation queries that power timelines, discovery feeds, and analytical views.
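One such aggregation, the film-level timeline, reduces to bucketing Memo timestamps into fixed windows. The sketch below shows the idea in application code; the bucket size and function name are assumptions, and in practice this could equally be a SQL GROUP BY over the Memo table.

```javascript
// Sketch of film-level aggregation: bucket Memo timestamps into fixed
// windows to drive a timeline or heat view. One-minute buckets are an
// assumption, not Anchor's actual granularity.
function clusterTimestamps(timestampsMs, bucketMs = 60_000) {
  const buckets = new Map();
  for (const t of timestampsMs) {
    const key = Math.floor(t / bucketMs) * bucketMs; // bucket start time
    buckets.set(key, (buckets.get(key) ?? 0) + 1);
  }
  // Sorted [bucketStartMs, memoCount] pairs, ready to render as a timeline.
  return [...buckets.entries()].sort((a, b) => a[0] - b[0]);
}
```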

Object Storage Layer (Media Handling)

 

Media assets such as screenshots are handled through a dedicated object storage layer, decoupling binary data from the relational database. Each image is stored immutably and referenced via persistent URLs linked to Memo records, improving scalability and preventing large media payloads from impacting query performance.
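Decoupling works because the Memo row stores only a stable object reference. A naming scheme along these lines would keep objects immutable; the key format below is hypothetical, as the case study does not specify the real bucket layout.

```javascript
// Hypothetical naming scheme for immutable screenshot objects; the real
// bucket layout and URL format are not specified in the case study.
function screenshotObjectKey(memoId, capturedAtIso) {
  // The key embeds the memo id and capture time, so an object is written
  // once and never overwritten; the Memo row stores only this reference.
  return `screenshots/${memoId}/${capturedAtIso.replaceAll(':', '-')}.png`;
}
```

Queries against the relational layer then return lightweight URLs instead of binary payloads, which is what keeps large media from degrading query performance.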

 

External Metadata Layer (TMDb Integration)

 

External metadata integration is handled through TMDb, which functions as a normalisation and enrichment layer rather than a primary data store. It is used during Memo creation to resolve film identity and retrieve standardised metadata such as title, release year, director, and poster information.

 

This ensures all Memos tied to the same film converge on a shared reference entity, enabling consistent aggregation and cross-user analysis.

 

Main Web Application (Consumption Layer)

 

The main web application functions as the consumption layer of the system. It renders personal archives, search and filtering tools, screenshot galleries, film-level engagement timelines, and social profile views.

It also powers analytical interfaces such as the Pulse Graph and aggregated activity views across personal, friends-only, and global scopes.

 

All views are generated dynamically from the underlying Memo dataset, meaning no derived analytics are precomputed or stored separately. This allows real-time reflection of user activity across the system.

 

TL;DR: How It Works

 

When a user watches a film in Plex Web, the browser extension remains connected to the player and continuously accesses the current playback state and film context.

 

At the moment a Memo is created, it captures the current timestamp, film title, user input, and optional screenshot in a single action.

 

This data is sent as a structured event to the backend, where it is validated and linked to the correct Film record. Once stored, the Memo becomes immediately available across the system, appearing in personal archives, social feeds, and aggregated film-level views.

 

Each interaction is treated as an atomic event flowing from viewing to capture to system-wide availability.

System Capabilities

Time-Based Capture System

 

Anchor enables users to capture film moments as persistent, time-indexed Memos anchored to precise points in playback. These can be revisited, searched, filtered, and organised across viewing sessions, forming a structured personal archive of cinematic reference points over time.

 

Social Layer and Identity

 

Users can build social connections that support sharing Memos within different visibility scopes, including private, friends-only, and public contexts. This enables multiple modes of interaction, from individual reflection to shared discussion, while preserving contextual meaning around each captured moment.

 

Three-Tier Visibility Model

 

Memos exist within a flexible visibility system that governs how content is shared and accessed across the platform. This supports private capture, semi-shared interaction within trusted connections, and fully public contribution to broader discourse.
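The three tiers reduce to a small access check evaluated for every viewer. This is a minimal sketch under assumed names; Anchor's actual access-control code is not shown in the case study.

```javascript
// Minimal sketch of the three-tier visibility model; function and field
// names are assumptions, not Anchor's actual access-control code.
function canView(memo, viewerId, friendIdsOfAuthor) {
  if (memo.userId === viewerId) return true;         // authors always see their own
  switch (memo.visibility) {
    case 'public':  return true;
    case 'friends': return friendIdsOfAuthor.has(viewerId);
    case 'private': return false;
    default:        return false;                    // unknown state: fail closed
  }
}
```

Failing closed on an unknown visibility value is the safer default for a model like this, since a malformed row then leaks nothing.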

 

Curation and Organisation

 

The system allows users to curate captured moments through lightweight tools such as favouriting and custom lists. These enable flexible grouping and recontextualisation of Memos independent of their original film source.

 

Engagement and Discovery Systems

 

Anchor supports exploration of viewing behaviour through aggregated interfaces that surface captured moments across films and users. Timeline-based and visual browsing tools allow navigation at the level of individual scenes while also revealing broader patterns of attention across content.

UX/UI Design Process

A dedicated UI/UX phase was conducted over one week to define Anchor’s interaction model prior to development. The process began with storyboarding key user flows, particularly in-context capture during film playback and the transition from passive viewing to active annotation.

 

This was followed by wireframing in Figma to establish layout structure and interaction patterns across the browser extension and web application.

 

The work then progressed into high-fidelity design, where core surfaces of the product were fully mapped out. This included the Memos page with search, sorting, and filtering systems; the Memo creation flow; the Pulse Graph visualisation interface; and the browser extension UI.

 

Each interface was designed to maintain low-friction capture while supporting deeper exploration of stored Memos through structured discovery tools.

 

The outcome was a cohesive interaction system spanning capture, organisation, and analysis, with consistent design patterns across both extension and web application. This directly informed frontend implementation and ensured alignment between intended and actual user workflows.

Building the MVP

Anchor was developed through a four-sprint progression, evolving from isolated backend routes into a fully integrated, demo-ready system with social, analytical, and discovery layers.

 

Early development focused on establishing core infrastructure: independent API routes for creating and retrieving Memos, followed by a unified monorepo architecture connecting backend, frontend, and extension components.

 

The system was then aligned with the original interaction design through a complete capture flow integrated into Plex Web, including timestamp binding and screenshot persistence.

 

The final sprint transformed the project into a complete MVP by introducing social and analytical systems that give Memos meaning at scale.

Technical Highlights

  • Postgres-backed relational model supporting Memos, friendships, favourites, and lists

  • Viewer-aware aggregation queries powering global, friends, and personal analytics views

  • Timestamp clustering logic for film-level engagement visualisation

  • Optimistic UI patterns for social interactions such as favourites and visibility changes

  • Seeded demo dataset enabling immediate exploration without onboarding friction

  • Chrome extension integration with Plex Web for real-time timestamp extraction and capture binding

  • Discord bot companion that mirrors key Anchor activity, enabling notifications, Memo sharing, and lightweight interaction flows within Discord servers
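The optimistic UI pattern noted above amounts to updating local state immediately and restoring a snapshot if the request fails. The sketch below is illustrative; the state shape and names are assumptions, not Anchor's frontend code.

```javascript
// Sketch of the optimistic pattern behind favourites: state is updated
// before the server responds, then restored from a snapshot on failure.
// Names and state shape are illustrative, not Anchor's actual frontend code.
function toggleFavourite(state, memoId) {
  const favourites = new Set(state.favourites);
  if (favourites.has(memoId)) favourites.delete(memoId);
  else favourites.add(memoId);
  return { ...state, favourites }; // applied before the server confirms
}

// A caller would keep a snapshot for rollback, e.g.:
//   const snapshot = state;
//   state = toggleFavourite(state, id);
//   api.favourite(id).catch(() => { state = snapshot; });
```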

Conclusion

Anchor explores film viewing as a structured, time-indexed social memory system, where individual reactions become part of a broader collective archive.

 

The platform demonstrates how real-time browser capture, relational modelling, and layered aggregation can transform passive media consumption into an interactive, navigable dataset.

Future Work

A near-term milestone is beta testing, with a live demo planned for Summer 2026, focused on gathering feedback from users interacting with the system in real viewing contexts. This phase will refine capture workflows, social interactions, and performance under realistic usage conditions.

 

Longer term, Anchor will expand to additional browsers and integrate with the desktop Plex application, extending accessibility beyond the current web and extension ecosystem.

 

A further direction involves integrating Anchor into film production and post-production pipelines. This would extend the timestamped annotation system beyond audience viewing into editing environments, enabling structured feedback on sequences, cuts, and revisions as part of a collaborative review process.
