Requirements
- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
PullThatUpJamie — Podcast Intelligence. A semantically indexed podcast corpus (109+ feeds, ~7K episodes, ~1.9M paragraphs) that works as a vector DB for podcasts.
Hand the extracted package to your coding agent with a concrete install brief instead of figuring it out manually.
I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete.
I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run.
Powered by Pull That Up Jamie.
Jamie is a retrieval/vector DB as a service for podcasts. Instead of:

- Transcribing hours of audio yourself (expensive, slow)
- Stuffing full transcripts into context (thousands of tokens wasted)
- Web searching and getting SEO spam, listicles, and low-quality summaries
- Running multiple search iterations across unreliable sources

you run a single semantic search ($0.002, returns in under 2s) and get the exact clip with timestamp, audio deeplink, and transcript. Every result is timestamped to the second: you're not handing users a 2-hour episode and saying "it's in there somewhere," you're linking them to the exact 30-second moment where the expert makes the point. 500 sats ($0.33) covers an entire deep research session of 150+ searches.

Your output is not a text wall. Research sessions are interactive web experiences where users play audio clips inline, browse visually, and share with others. Every clip deeplinks to the exact moment in the source audio. The session link IS the deliverable.
109 feeds · ~7K episodes · ~1.9M indexed paragraphs · ~11.5K identified people/orgs. Growing.

| Category | Notable Feeds | Feeds | Episodes |
| --- | --- | --- | --- |
| Bitcoin/Crypto | TFTC, Bitcoin Audible, Simply Bitcoin, Peter McCormack, What is Money, Once Bitten, Ungovernable Misfits | 41 | ~11,700 |
| Finance/Markets | Bloomberg Intelligence, Bloomberg Surveillance, Prof G Markets, Tim Ferriss, Diary of a CEO | 11 | ~5,700 |
| Health/Wellness | Huberman Lab, Peter Attia Drive, Meat Mafia, FoundMyFitness, Modern Wisdom | 7 | ~3,000 |
| Politics/Culture | Ron Paul Liberty Report, No Agenda, Tucker Carlson, Breaking Points, Pod Save America | 8+ | ~2,800 |
| Tech/General | Joe Rogan Experience, Lex Fridman, How I Built This, Kill Tony, Sean Carroll's Mindscape | 40+ | ~17,000 |

If a feed isn't indexed yet, you can ingest it on demand from any RSS feed. See the Ingestion section in references/podcast-rra.md.

Free corpus browsing (no auth required for reads): GET /api/corpus/feeds, /api/corpus/stats, /api/corpus/people. Check before you search.
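The free corpus endpoints above can be hit directly with curl. A minimal sketch (the `corpus_url` helper name is ours; the endpoint paths come straight from the docs):

```shell
BASE="https://www.pullthatupjamie.ai"   # bare domain redirects; keep the www.

# Build a corpus endpoint URL. Helper is ours; paths are from the docs.
corpus_url() { printf '%s/api/corpus/%s\n' "$BASE" "$1"; }

corpus_url feeds    # indexed feeds
corpus_url stats    # corpus-wide counts
corpus_url people   # identified people/orgs

# Fetch any of them with no auth, e.g.:
# curl -s "$(corpus_url feeds)"
```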
Free tier works with no auth at all — corpus browsing and IP-based search quota are available immediately. The steps below are only needed for the paid tier. Lightning payments instead of traditional API keys. Three steps:
```shell
curl -s -X POST -H "Content-Type: application/json" \
  -d '{"amountSats": 500}' \
  "https://www.pullthatupjamie.ai/api/agent/purchase-credits"
```

Returns `invoice` (BOLT-11), `paymentHash`, `amountSats`.
Pay using ANY Lightning wallet (Zeus, BlueWallet, Phoenix, Alby browser extension, etc.). The agent does NOT need to execute any commands.

Optional developer workflow (manual only). If using the Alby CLI with NWC:

```shell
npx @getalby/cli pay-invoice -c "NWC_CONNECTION_STRING" -i "BOLT11_INVOICE"
```

⚠️ This command is NEVER auto-executed by the agent. The operator must manually run it after reviewing the invoice. Alternative: paste the BOLT-11 invoice into any Lightning wallet. Payment returns a preimage.
```shell
curl -s -X POST -H "Content-Type: application/json" \
  -d '{"preimage": "PREIMAGE", "paymentHash": "PAYMENT_HASH"}' \
  "https://www.pullthatupjamie.ai/api/agent/activate-credits"
```

Save `preimage` and `paymentHash`; they are your credentials for all requests:

```
Authorization: PREIMAGE:PAYMENT_HASH
```

The `paymentHash` also serves as your `clientId` for session creation and other endpoints that require owner identification.
```shell
curl -s -H "Authorization: PREIMAGE:PAYMENT_HASH" \
  "https://www.pullthatupjamie.ai/api/agent/balance"
```
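The credential format above is simple string composition: the preimage and paymentHash returned by the purchase/activate flow are joined into one bearer header. A sketch with placeholder values:

```shell
# Compose the Authorization header from saved credentials.
# Values here are placeholders; use the real preimage and paymentHash
# returned by the purchase/activate flow.
preimage="PREIMAGE"
payment_hash="PAYMENT_HASH"
auth_header="Authorization: ${preimage}:${payment_hash}"
echo "$auth_header"

# Any authenticated call is then just:
# curl -s -H "$auth_header" "https://www.pullthatupjamie.ai/api/agent/balance"
```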
Podcast RRA (Retrieve, Research, Analyze): See references/podcast-rra.md — search the corpus, build interactive research sessions, discover people/orgs, ingest new feeds.
Publish: Cross-post to Twitter, Nostr, and more. Research a topic → generate a post → publish everywhere. Create: Generate audio/video clips with subtitles from source audio. Full pipeline: Research → Create clips → Publish.
Check balance before multi-search workflows. If the balance drops below $0.01, purchase more credits before continuing; don't let a research session fail midway. The purchase→pay→activate flow takes ~5 seconds.
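That balance check can be a simple guard at the top of a workflow. A hedged sketch: the threshold is derived from the doc's quoted rate (500 sats ≈ $0.33, so $0.01 ≈ 15 sats), and `balance_sats` stands in for whatever the balance endpoint actually returns.

```shell
# Gate a multi-search session on remaining balance.
balance_sats=120   # stand-in; parse the real value from /api/agent/balance
min_sats=15        # ≈ $0.01 at the doc's quoted rate (assumption)

if [ "$balance_sats" -lt "$min_sats" ]; then
  echo "top up credits before continuing"
else
  echo "ok to start the research session"
fi
```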
No Command Execution: This skill does NOT execute arbitrary shell commands, install packages, or modify system state. All operations are HTTP API calls to pullthatupjamie.ai. The skill documentation references an optional CLI tool (@getalby/cli) for paying Lightning invoices, but this is:

- Never auto-executed: requires explicit operator approval
- Completely optional: operators can use any Lightning wallet instead
- Not installed by this skill: operators must manually install it if desired

Free tier (no credentials needed): Corpus browsing (/api/corpus/*), basic search (IP-based quota), and viewing shared research sessions all work without any credentials. You can evaluate the entire service before providing any payment info.

Paid tier credentials: The NWC connection string, preimage, and paymentHash are sensitive bearer credentials. They should be:

- Stored securely (env vars or encrypted config, not in plaintext logs)
- Never shared with untrusted agents or services
- Scoped to a dedicated wallet with limited funds (e.g., 500-1000 sats)

All API calls proxied: All operations route through https://www.pullthatupjamie.ai. RSS feed parsing, search, and ingestion are handled server-side. No direct external URL fetching by the agent.

No persistence or privilege escalation: This skill has no install hooks, no `always: true`, and does not modify other skills or system config.
- API base: https://www.pullthatupjamie.ai (must include www.; the bare domain redirects and breaks API calls)
- The field is `amountSats`, not `amount` (min 10, max 500,000)
- Alby CLI: `pay-invoice` with the `-i` flag (not `pay`)
- 500 sats gets ~150+ searches. Start there.
- Research session creation takes 30-45 seconds. Be patient.
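The `amountSats` limits above are easy to enforce before calling the purchase endpoint. A sketch (the `purchase_payload` helper name is ours; the field name and 10-500,000 range are from the docs):

```shell
# Build the purchase-credits JSON body, rejecting out-of-range amounts.
purchase_payload() {
  amt=$1
  if [ "$amt" -ge 10 ] && [ "$amt" -le 500000 ]; then
    printf '{"amountSats": %d}\n' "$amt"
  else
    echo "amountSats out of range (10-500000)" >&2
    return 1
  fi
}

purchase_payload 500   # the recommended starting amount
```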