You see it. You say it.
Your AI fixes it.
You know exactly what's wrong. markupR captures it with a desktop-native recording workflow, then hands your agent structured context: transcript, frames, cursor location, active window, and focused-element hints when available.
Other platforms & versions
macOS install note: notarization approval is in progress. If first launch shows an unidentified-developer warning, use right-click → Open once.
If markupR helps, optional support keeps it independent and shipping fast.
Quick visual walkthrough
See the full flow in a few seconds.
You record and mark shots. After you stop, markupR aligns narration with frames and attaches context cues (cursor/window/focus) so your agent can fix the issue with less guesswork.
Shot markers confirm instantly while you narrate. Assembly happens after stop.
Output: Structured markdown your coding agent acts on. Also works in GitHub, Linear, and Slack.
How it works
Three steps from bug to AI fix.
Why it matters
Feedback without the friction.
Stop retyping what you already see.
You found the bug. You can see it right there. Now you have to context-switch into writing mode, describe it in text, screenshot it, crop it, drag it into the right spot. markupR lets you just say it. Your agent gets what it needs to fix it.
You talk at 150 wpm. You type at 60.
A five-minute narration gives your AI agent more precise context than twenty minutes of writing. markupR places screenshots at exact moments and adds cursor/window/focus hints -- so the agent sees what you saw, where your attention was, and when.
Choose local privacy or cloud convenience.
markupR supports both local Whisper transcription and OpenAI BYOK transcription. Stay fully local when privacy is critical, or use cloud transcription when you want fast setup and reliable results.
Under the hood
Built for developers who read source.
- **Dual transcription paths.** Use local Whisper for on-device processing, or OpenAI BYOK for fast, reliable post-session transcription.
- **Intelligent frame extraction.** Records your full screen, then uses transcript timestamps to pull the exact frames that match what you were describing.
- **Context-aware shot metadata.** Each marked shot can include cursor coordinates, active app/window, and focused-element hints (best effort), so reports carry more than pixels.
- **macOS native.** Lives in your menu bar. No dock icon. No browser tab. Launches at login if you want it to.
- **Structured Markdown output.** Sections, headings, and screenshots assembled into context your AI agent already understands. Feed it to Claude Code, Cursor, Copilot -- or paste into GitHub and Slack.
- **Open source, MIT licensed.** Read the code, fork it, break it, fix it. No telemetry, no tracking, no analytics.
## Feedback Session — Feb 5, 2026
### Button sizing issue
The submit button is way too small on mobile.
I'm trying to tap it and keep hitting the cancel
link underneath. Needs more vertical padding,
maybe 12px minimum tap target.

### Loading state feels janky
After the spinner disappears, the content just
pops in with no transition. There's a visible
layout shift — the sidebar jumps left by about
20 pixels.

### Nav highlight is wrong
I'm on the Settings page but the Dashboard tab
is still highlighted. Looks like the active state
isn't updating on route change.

For automation and existing recordings
Install from your terminal.
Same pipeline, no desktop app required. Process existing screen recordings from CI/CD, scripts, or AI agents.
The CLI runs the full markupR pipeline: transcription, frame extraction, context enrichment, and markdown generation. Point it at a video file and get structured context your agent can act on.
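As a sketch, a run might look like the following -- the subcommand and flag names here are assumptions for illustration, so check the CLI's own help output for the real interface:

```shell
# Hypothetical invocation (subcommand and flags are assumptions;
# run the CLI's --help after installing to confirm).
markupr process ./session.mp4 --output ./report.md
```

The key idea is unchanged either way: the input is an ordinary screen-recording video file, and the output is the same structured markdown the desktop app produces.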
MCP Server
Give your AI agent eyes and ears.
Add markupR as an MCP server and your AI coding agent can capture screenshots, record your screen with voice, and receive context-enriched reports -- all mid-conversation.
{
  "mcpServers": {
    "markupR": {
      "command": "npx",
      "args": ["--yes", "--package", "markupr", "markupr-mcp"]
    }
  }
}
Works with Claude Code, Cursor, Windsurf, and any MCP-compatible client. Zero install -- npx handles everything.
Integrations
Push feedback where your team works.
Output templates, watch mode, and direct integrations with GitHub Issues, Linear, and CI/CD.
GitHub Issues & Linear
Push structured feedback directly into your issue tracker. Screenshots, transcript, and timestamps land formatted and ready to assign.
GitHub Action for CI/CD
Add visual feedback to every PR. The action runs markupR in your workflow and posts structured reports as PR comments automatically.
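A workflow sketch could look like the following -- the action reference and input names are assumptions based on the description above, not the published interface, so consult the markupR repository for the real ones:

```yaml
# Hypothetical workflow sketch. The action name (markupr/markupr-action@v1)
# and the `recording` input are assumptions for illustration only.
name: PR feedback
on: pull_request
jobs:
  feedback:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: markupr/markupr-action@v1   # assumed action name
        with:
          recording: recordings/feedback.mp4
```

Because the action posts its report as a PR comment, the structured feedback lands in the same place reviewers already look.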
Output Templates & Watch Mode
Control the shape of every report with customizable templates. Watch mode monitors directories and processes recordings as they appear.
Choose Your Setup
Same feedback loop. Two ways to run it.
Record what you see, narrate what is wrong, and generate structured context your AI agent can act on. Use BYOK for full control, or choose premium for hosted key management.
Open source (BYOK)
Bring your own OpenAI + Anthropic keys. Full control, same report quality, fully transparent stack.
Premium (touchless)
Exactly the same workflow and output, but keys are hosted for you. No API setup screens, no key management.
Switch anytime
Move between BYOK and premium as your workflow evolves. No report format changes required.
FAQ
Common questions.
What is markupR?
How does markupR work?
What is the MCP server?
Add `npx --package markupr markupr-mcp` to your agent's MCP config and it can capture screenshots, record sessions, and receive structured reports without you switching apps.