Feature · parse

Paste any job description. Structured data in under two seconds.

Paste a job description. The parser returns the fields you would otherwise type by hand. Company, title, experience level, required skills, salary range, and the keywords an ATS will match against. Each one lands in the correct field in the tracker, so you are not re-reading the same posting three times to remember what it asked for.

In · Raw job posting
Stripe is hiring a Senior
Frontend Engineer on the
Payments team. React,
Next.js, TypeScript.
$190K to $240K plus equity
Parse
Out · Structured JSON
company: Stripe
title: Senior Frontend Engineer
level: senior
skills: [React, Next.js, TS]
salary: $190K to $240K

How to use the job description parser

Three steps. Each runs in seconds. Scroll in order.

  1. Step 01

    Paste the raw posting

    Copy the full job description out of LinkedIn, Indeed, Glassdoor, or a company careers page and drop it into the box. No character limit on free or paid plans.

  2. Step 02

    The model extracts the fields

The posting goes to a fast model tuned with a tight extraction prompt. Output is a strict JSON shape: company, title, experience level, required skills, nice-to-have skills, salary range if listed, and ATS keywords.

  3. Step 03

    Every field lands in your tracker

    The parsed record populates the job form in one pass. You can edit anything before saving. The full raw JD stays attached to the record so later steps (tailoring, keyword match, rejection analysis) can read it.
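The record the three steps produce can be sketched as a plain data shape. This is a minimal illustration using the field names from the example output below; the `raw_jd` attribute is an assumption standing in for the "full raw JD stays attached" behavior, not a documented schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ParsedJob:
    """Illustrative shape of one parsed job record (field names mirror the example JSON)."""
    company_name: Optional[str] = None
    job_title: Optional[str] = None
    experience_level: Optional[str] = None
    key_skills: list[str] = field(default_factory=list)
    nice_to_have: list[str] = field(default_factory=list)
    salary_range: Optional[str] = None
    ats_keywords: list[str] = field(default_factory=list)
    raw_jd: str = ""  # assumed: the full posting rides along for tailoring and keyword match

# A record parsed from the demo posting; salary_range stays None until extracted
job = ParsedJob(
    company_name="Stripe",
    job_title="Senior Frontend Engineer",
    experience_level="senior",
    key_skills=["React", "Next.js", "TypeScript"],
)
```

Because every attribute defaults to empty or `None`, a posting that omits a field still produces a complete, editable record.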

What the output actually looks like

One real example: the same input, raw and then parsed.

Raw JD (first few lines)
Stripe is hiring a Senior Frontend Engineer on the Payments team. You will partner with product and design to turn Figma prototypes into shippable UI using React, Next.js, TypeScript, and our internal design system. You will care about Core Web Vitals, WCAG 2.1 AA, and writing Jest + Playwright tests for everything you ship. Experience in fintech or a regulated environment is a plus. Salary range: 190k to 240k plus equity.
Parsed JSON (abbreviated)
{
  "company_name": "Stripe",
  "job_title": "Senior Frontend Engineer",
  "experience_level": "senior",
  "key_skills": ["React", "Next.js", "TypeScript"],
  "nice_to_have": ["fintech", "regulated environment"],
  "salary_range": "$190K to $240K + equity",
  "ats_keywords": ["Core Web Vitals", "WCAG 2.1 AA", "Jest", "Playwright", "design system"]
}
Takeaway: Every field above is what the downstream AI uses. The resume tailor reads key_skills. The keyword match scores against ats_keywords. The rejection analyzer considers experience_level. Clean extraction is what makes the whole loop work.
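To make the loop concrete, here is a toy version of the keyword-match step that consumes `ats_keywords`. The scoring function is a hypothetical sketch for illustration, not the product's actual scoring logic.

```python
def keyword_match_score(resume_text: str, ats_keywords: list[str]) -> float:
    """Toy scorer: fraction of extracted ATS keywords that appear in the resume text."""
    lowered = resume_text.lower()
    hits = sum(1 for kw in ats_keywords if kw.lower() in lowered)
    return hits / len(ats_keywords) if ats_keywords else 0.0

keywords = ["Core Web Vitals", "WCAG 2.1 AA", "Jest", "Playwright", "design system"]
score = keyword_match_score("Wrote Jest and Playwright tests for a component library", keywords)
# 2 of 5 keywords found -> 0.4
```

The point of the sketch: downstream steps never re-read the raw posting for this, they read the parsed fields, which is why clean extraction matters.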

Under the hood

The mechanics nobody should have to guess at.

A small fast model, not a heavyweight one

Extraction is a classification task, not a creative one. A small, fast model handles it in roughly a second and costs pennies per parse. A larger model would be overkill and slow the paste-to-saved flow.

Strict JSON shape

The prompt enforces a JSON schema so we never have to parse free-form paragraphs out of the model. If a field is not in the posting (say, salary is absent) we return null for that field instead of guessing.
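The null-instead-of-guessing rule can be shown with a small normalization pass over model output. The key list comes from the example JSON above; the `normalize` helper itself is an illustrative sketch, not the real pipeline code.

```python
SCHEMA_KEYS = [
    "company_name", "job_title", "experience_level",
    "key_skills", "nice_to_have", "salary_range", "ats_keywords",
]

def normalize(raw: dict) -> dict:
    """Coerce model output into the fixed shape: every schema key is present,
    and any field the posting did not contain comes back as None, never guessed."""
    return {key: raw.get(key) for key in SCHEMA_KEYS}

# A posting with no salary listed: the field exists in the record, but is None
partial = {"company_name": "Stripe", "job_title": "Senior Frontend Engineer"}
record = normalize(partial)
```

Enforcing the shape on the way out means the form code never has to branch on missing keys.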

Compressed JD for downstream steps

The tailor step does not need the full 2,000-word posting; it needs the 800 tokens that carry the skills and phrasing. The parser produces a summary alongside the structured fields so later tailor calls stay cheap.
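As a rough stand-in for that compression, imagine keeping only the sentences that mention an extracted skill or keyword. The real summary comes from the model alongside the structured fields; this heuristic is purely an assumption to show why the compressed form stays small.

```python
def compress_jd(jd: str, keywords: list[str], max_sentences: int = 10) -> str:
    """Heuristic stand-in for the model-produced summary: keep only sentences
    that mention an extracted skill or ATS keyword, capped at max_sentences."""
    sentences = [s.strip() for s in jd.replace("\n", " ").split(". ") if s.strip()]
    kept = [s for s in sentences if any(kw.lower() in s.lower() for kw in keywords)]
    return ". ".join(kept[:max_sentences])

jd = ("Stripe is hiring a Senior Frontend Engineer on the Payments team. "
      "You will partner with product and design to turn Figma prototypes into shippable UI "
      "using React, Next.js, TypeScript, and our internal design system. "
      "You will care about Core Web Vitals, WCAG 2.1 AA, and writing Jest + Playwright tests "
      "for everything you ship. Experience in fintech or a regulated environment is a plus. "
      "Salary range: 190k to 240k plus equity.")
summary = compress_jd(jd, ["React", "Jest"])
```

Sentences that carry no skill signal (the "about us" and perks copy) drop out, which is where most of the token savings come from.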

Works with any board format

The parser is format-agnostic. LinkedIn postings, Indeed bullet lists, company careers pages with three paragraphs of "about us" before the role. All supported. The model is prompted to ignore company boilerplate.

What it deliberately does not do

Honest limits read as trust signals. Hiding them does the opposite.

  • We do not fetch a posting from a URL. Paste the text. This keeps the feature deterministic and avoids scraping.
  • We do not parse PDF job descriptions inside the web app. Paste the text from the PDF.
  • We do not claim 100% field accuracy. Unusual postings (single-paragraph descriptions, screenshots pasted as text) can miss fields. Everything is editable before save.

Common questions

How long does a parse take?

Roughly one to two seconds on a typical JD. Very long postings (3,000+ words) can push toward three seconds. Still under the threshold where you feel the wait.

What model runs this?

A small, fast model tuned for extraction. A larger general-purpose model would buy us nothing here and cost more. Free and Pro tiers use the same parser.

Is there a character limit?

No hard limit on what you can paste. Very large pastes (over 20,000 characters) are truncated before parsing, but we keep the full text on the record so tailoring still sees everything.
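That split, cap what goes to the model but store everything, can be sketched in a couple of lines. The function name and the exact cutoff behavior are illustrative assumptions; only the 20,000-character threshold comes from the description above.

```python
PARSE_LIMIT = 20_000  # characters sent to the model; the stored copy is never trimmed

def prepare_for_parse(raw_jd: str) -> tuple[str, str]:
    """Illustrative split: return (text_to_parse, text_to_store).
    The parse input is capped at PARSE_LIMIT; the stored copy keeps
    the full posting so later tailoring can read all of it."""
    return raw_jd[:PARSE_LIMIT], raw_jd

to_parse, to_store = prepare_for_parse("x" * 25_000)
```

Typical postings sit well under the cap, so in practice both strings are usually identical.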

Can I edit the parsed fields before saving?

Yes. Everything is a regular form field after parsing. If the model missed the experience level or misread the salary range, type over it before you save the job.

Does the Chrome extension use the same parser?

Yes. When you click the extension icon on a LinkedIn, Indeed, or Glassdoor posting, it reads the visible JD, sends it to the same parse endpoint, and saves a pre-filled job in your tracker.

Try the job description parser inside the product

Create a free account in under a minute. First job tracked, first tailored resume, and first keyword breakdown all happen inside the onboarding flow.

Create a free account · Try the demo first · All features