markupR
v2.6.8: Context-aware capture + BYOK reliability updates

You see it. You say it.
Your AI fixes it.

You know exactly what's wrong. markupR captures it with a desktop-native recording workflow, then hands your agent structured context: transcript, frames, cursor location, active window, and focused-element hints when available.

Other platforms & versions

macOS install note: notarization approval is in progress. If first launch shows an unidentified-developer warning, use right-click → Open once.

If markupR helps, optional support keeps it independent and shipping fast.

Start with
⌘ + ⇧ + F

Quick visual walkthrough

See the full flow in a few seconds.

You record and mark shots. After you stop, markupR aligns narration with frames and attaches context cues (cursor/window/focus) so your agent can fix the issue with less guesswork.

Session active: recording + manual shot marks 00:42

Shot markers confirm instantly while you narrate. Assembly happens after stop.

Output: Structured markdown your coding agent acts on. Also works in GitHub, Linear, and Slack.

How it works

Three steps from bug to AI fix.

you
Record + Narrate
Start a session and describe what you see in real time.
markupR
Mark + Enrich
Manual shots confirm instantly; after you stop, AI aligns the transcript with frames and attaches capture-context metadata.
you
Feed Your Agent
Paste the markdown into your AI coding agent. It has everything it needs to fix what you described.
⌘⇧F start
talk
⌘⇧F stop
⌘V paste into your agent

Why it matters

Feedback without the friction.

Stop retyping what you already see.

You found the bug. You can see it right there. Now you have to context-switch into writing mode, describe it in text, screenshot it, crop it, drag it into the right spot. markupR lets you just say it. Your agent gets what it needs to fix it.

You talk at 150 wpm. You type at 60.

A five-minute narration gives your AI agent more precise context than twenty minutes of writing. markupR places screenshots at exact moments and adds cursor/window/focus hints -- so the agent sees what you saw, where your attention was, and when.

Choose local privacy or cloud convenience.

markupR supports both local Whisper transcription and OpenAI BYOK transcription. Stay fully local when privacy is critical, or use cloud transcription when you want fast setup and reliable results.

Under the hood

Built for developers who read source.

  • Dual transcription paths: Use local Whisper for on-device processing, or OpenAI BYOK for fast, reliable post-session transcription.
  • Intelligent frame extraction: Records your full screen, then uses transcript timestamps to pull the exact frames that match what you were describing.
  • Context-aware shot metadata: Each marked shot can include cursor coordinates, active app/window, and focused-element hints (best effort), so reports carry more than pixels.
  • macOS native: Lives in your menu bar. No dock icon. No browser tab. Launches at login if you want it to.
  • Structured Markdown output: Sections, headings, and screenshots assembled into context your AI agent already understands. Feed to Claude Code, Cursor, Copilot -- or paste into GitHub and Slack.
  • Open source, MIT licensed: Read the code, fork it, break it, fix it. No telemetry, no tracking, no analytics.
feedback-session.md
## Feedback Session — Feb 5, 2026

### Button sizing issue
The submit button is way too small on mobile.
I'm trying to tap it and keep hitting the cancel
link underneath. Needs more vertical padding,
maybe 12px minimum tap target.
![Screenshot at 0:34](screenshots/fb-001.png)

### Loading state feels janky
After the spinner disappears, the content just
pops in with no transition. There's a visible
layout shift — the sidebar jumps left by about
20 pixels.
![Screenshot at 1:12](screenshots/fb-002.png)

### Nav highlight is wrong
I'm on the Settings page but the Dashboard tab
is still highlighted. Looks like the active state
isn't updating on route change.
![Screenshot at 1:45](screenshots/fb-003.png)
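Because the report above is plain Markdown with a predictable shape (one `### ` heading, body text, and a screenshot link per issue), it is easy to post-process. A minimal sketch (a hypothetical helper, not part of markupR) that splits a report into per-issue sections:

```python
import re

def split_report(markdown: str) -> list[dict]:
    """Split a markupR-style report into per-issue sections.

    Each "### " heading starts a new issue; the body is everything up to
    the next heading, and the first image link (if any) is pulled out.
    """
    issues = []
    current = None
    for line in markdown.splitlines():
        if line.startswith("### "):
            current = {"title": line[4:].strip(), "body": [], "screenshot": None}
            issues.append(current)
        elif current is not None:
            m = re.match(r"!\[[^\]]*\]\(([^)]+)\)", line)
            if m:
                current["screenshot"] = m.group(1)
            elif line.strip():
                current["body"].append(line.strip())
    return issues

report = """\
### Nav highlight is wrong
I'm on the Settings page but the Dashboard tab
is still highlighted.
![Screenshot at 1:45](screenshots/fb-003.png)
"""
sections = split_report(report)
print(sections[0]["title"])       # Nav highlight is wrong
print(sections[0]["screenshot"])  # screenshots/fb-003.png
```

The same shape makes it trivial to route individual issues into trackers like GitHub or Linear rather than filing the whole session as one ticket.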

For automation and existing recordings

Install from your terminal.

Same pipeline, no desktop app required. Process existing screen recordings from CI/CD, scripts, or AI agents.

The CLI runs the full markupR pipeline: transcription, frame extraction, context enrichment, and markdown generation. Point it at a video file and get structured context your agent can act on.

terminal
$ npx markupr analyze ./recording.mov
# Or install globally:
$ npm install -g markupr
$ bun add -g markupr

MCP Server

Give your AI agent eyes and ears.

Add markupR as an MCP server and your AI coding agent can capture screenshots, record your screen with voice, and receive context-enriched reports -- all mid-conversation.

claude_desktop_config.json
{
  "mcpServers": {
    "markupR": {
      "command": "npx",
      "args": ["--yes", "--package", "markupr", "markupr-mcp"]
    }
  }
}
capture_screenshot: Grab the current screen instantly with context cues (cursor/app/focus when available).
capture_with_voice: Record screen + microphone for a set duration. Get a full context-enriched report back.
analyze_video: Process any existing .mov or .mp4 into structured Markdown with extracted frames.
analyze_screenshot: Feed a screenshot to the pipeline for AI-enhanced analysis.
start_recording: Begin an interactive session. Narrate while you work.
stop_recording: End the session. The full pipeline runs and returns your report.
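Under the hood these tools use the standard MCP tools/call request shape. A client invoking analyze_video would send a JSON-RPC message like the sketch below (the `path` argument name is illustrative; check the server's published tool schema for the real one):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "analyze_video",
    "arguments": { "path": "./recording.mov" }
  }
}
```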

Works with Claude Code, Cursor, Windsurf, and any MCP-compatible client. Zero install -- npx handles everything.

Integrations

Push feedback where your team works.

Output templates, watch mode, and direct integrations with GitHub Issues, Linear, and CI/CD.

GitHub Issues & Linear

Push structured feedback directly into your issue tracker. Screenshots, transcript, and timestamps land formatted and ready to assign.

GitHub Action for CI/CD

Add visual feedback to every PR. The action runs markupR in your workflow and posts structured reports as PR comments automatically.

Output Templates & Watch Mode

Control the shape of every report with customizable templates. Watch mode monitors directories and processes recordings as they appear.
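Watch mode follows a common polling pattern. A minimal sketch of the idea in Python (hypothetical, not markupR's implementation), where each new recording is handed to the documented `npx markupr analyze` command:

```python
import subprocess
import time
from pathlib import Path

def new_recordings(directory: Path, seen: set[str]) -> list[Path]:
    """Return recordings in `directory` that haven't been processed yet."""
    fresh = [p for p in sorted(directory.glob("*.mov")) if p.name not in seen]
    seen.update(p.name for p in fresh)
    return fresh

def watch(directory: Path, interval: float = 2.0) -> None:
    """Poll `directory` and run the markupR CLI on each new recording."""
    seen: set[str] = set()
    while True:
        for recording in new_recordings(directory, seen):
            subprocess.run(["npx", "markupr", "analyze", str(recording)], check=True)
        time.sleep(interval)
```

Polling keeps the sketch portable; a production watcher would more likely use filesystem events (FSEvents/inotify) to react immediately.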

Choose Your Setup

Same feedback loop. Two ways to run it.

Record what you see, narrate what is wrong, and generate structured context your AI agent can act on. Use BYOK for full control, or choose premium for hosted key management.

Open source (BYOK)

Bring your own OpenAI + Anthropic keys. Full control, same report quality, fully transparent stack.

Premium (touchless)

Exactly the same workflow and output, but keys are hosted for you. No API setup screens, no key management.

Switch anytime

Move between BYOK and premium as your workflow evolves. No report format changes required.

FAQ

Common questions.

What is markupR?
markupR is an open-source developer tool that records your screen and voice simultaneously, then uses an intelligent pipeline to produce structured Markdown with screenshots placed at the right moments. The output is purpose-built for AI coding agents.
How does markupR work?
Press a hotkey to start recording, narrate what you see, and stop. markupR transcribes your narration, aligns key frames, and enriches shot markers with context hints (cursor, active window, focused element when available), then assembles everything into agent-ready Markdown.
What is the MCP server?
The MCP (Model Context Protocol) server lets AI coding agents trigger markupR directly mid-conversation. Add it with npx --package markupr markupr-mcp in your agent's config and it can capture screenshots, record sessions, and receive structured reports without you switching apps.
Does markupR require an internet connection?
No. markupR runs local Whisper transcription by default, so everything stays on your device. Cloud transcription via OpenAI is available as an optional upgrade for those who prefer it.
What AI coding agents work with markupR?
Any agent that reads Markdown or supports MCP. This includes Claude Code, Cursor, Windsurf, GitHub Copilot, and any MCP-compatible client. You can also paste the output into chat interfaces, GitHub issues, or Slack.
Is markupR free?
Yes. markupR is MIT-licensed open source. The desktop app, CLI, and MCP server are free, and BYOK mode keeps you in control.
What platforms does markupR support?
The desktop app runs on macOS (Apple Silicon and Intel) and Windows. The CLI and MCP server run on macOS and Linux via npm/npx.
How is markupR different from screen recording?
Screen recorders give you a video file. markupR gives you an agent-ready report: aligned transcript, extracted frames, and context cues like cursor position and active window so your agent can execute with less ambiguity.
Why does macOS warn on first launch?
During the notarization rollout, some builds may show an unidentified-developer warning on first open. This is expected while notarization approval is pending. Open once via right-click → Open; after that, markupR launches normally.