MTG Card Overlay
Learning OpenCV, ArUco markers, and real-time video overlays
Magic: The Gathering has always fascinated me for its mix of art, strategy, and collectible value. One of the most compelling aspects of the game is how physical cards have digital counterparts, marketplaces, and constantly shifting prices. I wanted to explore that connection in a hands-on way: could I point a camera at a card and have its price pop up, live, as part of the image?
This project became my first real dive into OpenCV and ArUco markers, combining computer vision with a simple overlay system to render dynamic price tags next to each card.
Overview
At its core, the program runs a webcam feed, detects ArUco markers printed onto card sleeves, and overlays a clean price tag for each identified card. The tags rotate and scale with the card so they always appear aligned and readable, even when the card is tilted or moved around.
The process looks like this (a rough code sketch follows the list):
- Capture a live 1080p camera feed.
- Detect markers each frame using OpenCV's aruco module.
- Match marker IDs against a local CARD_DATABASE to retrieve card info and a stored price.
- Compute orientation and scale so the overlay aligns neatly with the card's edge.
- Render a semi-transparent black rectangle with white text and alpha-blend it back into the frame.
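For concreteness, here is a minimal sketch of that loop. It assumes OpenCV 4.7 or newer (where detection goes through cv2.aruco.ArucoDetector), a default webcam at index 0, and a tiny hypothetical CARD_DATABASE with made-up names and prices; the real project's database, dictionary choice, and overlay drawing are richer than this.

```python
import cv2

# Hypothetical stand-in for the local database: marker ID -> (card name, price)
CARD_DATABASE = {
    0: ("Lightning Bolt", "$2.49"),
    1: ("Counterspell", "$1.10"),
}

# ArUco setup (OpenCV 4.7+ API)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    corners, ids, _rejected = detector.detectMarkers(frame)
    if ids is not None:
        for marker_corners, marker_id in zip(corners, ids.flatten()):
            card = CARD_DATABASE.get(int(marker_id))
            if card is None:
                continue
            name, price = card
            # Plain text at the marker's top-left corner; the real overlay
            # rotates a semi-transparent tag to match the card instead.
            x, y = marker_corners[0][0]
            cv2.putText(frame, f"{name}  {price}", (int(x), int(y) - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)

    cv2.imshow("MTG Card Overlay", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```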
The end result is a live video where cards carry their own floating labels, a little like augmented reality but lightweight and marker-based.
Why I Built It
I set out to learn by doing. This was my first time working with OpenCV at any depth, and Magic cards gave me a fun and personal use case. They are consistent in shape, easy to sleeve with markers, and come with a natural hook: every card has metadata (price, set, rarity) that you might want to visualize.
Instead of tackling something abstract, I wanted a project where I could clearly see results on screen. Watching a price tag snap into place next to a moving card made all the debugging worth it.
What I Built
- ArUco detection pipeline: Initialized from a predefined dictionary, tuned to recognize specific marker IDs.
- Overlay system: Price labels drawn into an RGBA buffer, rotated with the card's angle, then composited back onto the camera feed with alpha blending.
- Dynamic text scaling: Font size adjusts to the marker size so labels remain readable at different distances.
- Geometric placement: Labels are offset along the card edge using vector math (a short sketch follows this list), keeping them consistently aligned.
- Local card database: A simple dictionary mapping marker IDs to card info and prices.
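To illustrate the placement math, here is a small helper in the spirit of that geometric-placement bullet. The function name, the fixed pixel offset, and the assumption that the marker's first two corners trace its top edge are mine for the example, not lifted from the project:

```python
import numpy as np

def edge_angle_and_anchor(marker_corners, offset_px=25):
    """Derive a label angle and anchor point from one marker's corners."""
    c = np.asarray(marker_corners, dtype=float).reshape(4, 2)
    top_left, top_right = c[0], c[1]  # assumed clockwise corner order

    # Rotation of the top edge relative to the image x-axis, in degrees.
    edge = top_right - top_left
    angle_deg = np.degrees(np.arctan2(edge[1], edge[0]))

    # Unit vector perpendicular to the edge; flip the sign to place the
    # label on the other side of the card.
    normal = np.array([edge[1], -edge[0]]) / np.linalg.norm(edge)

    # Push the label away from the midpoint of the edge by a fixed offset.
    anchor = (top_left + top_right) / 2.0 + normal * offset_px
    return angle_deg, anchor.astype(int)
```

The angle and anchor can then drive wherever the label is drawn, so the tag stays glued to the same edge as the card tilts and moves.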
Tools Behind the Scenes
- Python 3 as the runtime environment
- OpenCV for video capture, marker detection, text rendering, and transforms
- NumPy for vector math and geometry
- ArUco markers for uniquely identifying cards in the video feed
What I Learned
This project was equal parts exploration and education:
- How to use OpenCV's aruco module to detect and track markers in real time.
- How to work with coordinates, vectors, and angles to place overlays precisely.
- The importance of alpha blending for clean UI against live video (a short compositing sketch follows this list).
- Techniques for keeping text readable at different scales and distances.
- How even a basic local database can connect real-world objects to digital metadata.
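As a concrete illustration of that alpha-blending point, here is a hedged sketch of rendering a tag into an RGBA buffer, rotating it, and compositing it onto a BGR frame. The helper name, padding, background opacity, and scaling constants are illustrative choices, not the project's exact values:

```python
import cv2
import numpy as np

def draw_price_tag(frame, text, center, angle_deg, marker_px):
    """Draw a rotated, semi-transparent price tag onto a BGR frame in place."""
    # Scale the font with the apparent marker size so text stays readable.
    font_scale = max(0.4, marker_px / 120.0)
    thickness = max(1, int(round(font_scale * 2)))
    (tw, th), baseline = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX,
                                         font_scale, thickness)

    # White text on a semi-transparent black background, drawn into RGBA.
    pad = 8
    label = np.zeros((th + baseline + 2 * pad, tw + 2 * pad, 4), np.uint8)
    label[..., 3] = 180  # background opacity
    cv2.putText(label, text, (pad, pad + th), cv2.FONT_HERSHEY_SIMPLEX,
                font_scale, (255, 255, 255, 255), thickness)

    # Rotate the label around its own center so it follows the card's angle.
    h, w = label.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), -angle_deg, 1.0)
    cos, sin = abs(rot[0, 0]), abs(rot[0, 1])
    new_w, new_h = int(w * cos + h * sin), int(w * sin + h * cos)
    rot[0, 2] += new_w / 2 - w / 2
    rot[1, 2] += new_h / 2 - h / 2
    label = cv2.warpAffine(label, rot, (new_w, new_h))

    # Clip the destination region to the frame and alpha-blend in place.
    x0, y0 = int(center[0] - new_w / 2), int(center[1] - new_h / 2)
    x1, y1 = max(x0, 0), max(y0, 0)
    x2, y2 = min(x0 + new_w, frame.shape[1]), min(y0 + new_h, frame.shape[0])
    if x2 <= x1 or y2 <= y1:
        return
    patch = label[y1 - y0:y2 - y0, x1 - x0:x2 - x0]
    alpha = patch[..., 3:4].astype(float) / 255.0
    blended = alpha * patch[..., :3] + (1.0 - alpha) * frame[y1:y2, x1:x2]
    frame[y1:y2, x1:x2] = blended.astype(np.uint8)
```

Because the alpha channel is zero outside the rotated label, only the tag itself darkens the video beneath it and the rest of the frame passes through untouched.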
Takeaway
This started as a simple exercise in learning OpenCV, but it became a prototype for lightweight AR on the tabletop. It showed me how marker-based tracking can bridge the physical and digital worlds, and it opened up ideas for future experiments in gaming, trading cards, and interactive installations.
Gallery