/* global React */
// Shared article renderer used by every blog post + whitepaper page.
const { PageShell, BLOG_POSTS, WHITEPAPERS } = window.Promptive;

// Long-form article body — keyed by slug. Written to feel real, not placeholder.
const ARTICLE_BODIES = {
  'geo-is-the-new-seo': [
    { kind: 'lede', t: 'For two decades, rank on page one was the proxy for attention. Every brand, every agency, every tracking tool bought into the same contract: show up in the ten blue links or lose. That contract is dissolving.' },
    { kind: 'p', t: 'In Q1 2026, 58% of English-language informational searches in the US ended without a single click. Google\'s AI Overview absorbed the intent. Perplexity does the same thing by design. ChatGPT and Claude sit one layer further in — they don\'t even render links until you ask. The blue link isn\'t a dying medium. It\'s the wrong medium.' },
    { kind: 'p', t: 'If the answer is where the decision happens, then measurement has to move with it. GEO — generative engine optimization — is the name we\'ve converged on for the new discipline. The name matters less than what it asks you to measure.' },
    { kind: 'h2', t: 'What actually changed' },
    { kind: 'p', t: 'Three things, in roughly this order:' },
    { kind: 'ol', items: [
      'The engine became the editor. A model chooses what to include, what to leave out, and what to call you. A website used to answer a query; now the website is one of twenty sources the answer quotes from.',
      'First-mention bias became severe. In our data, being named first in an answer delivers ~3.1x the downstream click rate of being named second and ~9x that of being named fourth. The distribution is more winner-take-most than the SERP ever was.',
      'The training-data window became a competitive moat. Anything authored before a model\'s cutoff is absorbed; anything after has to earn citations at inference time. New brands fight harder for air.',
    ]},
    { kind: 'quote', t: 'The answer isn\'t a ranking anymore. It\'s a narrative. You\'re either in it, or you\'re losing.' },
    { kind: 'h2', t: 'The measurement stack, re-grounded' },
    { kind: 'p', t: 'Here\'s the substitution table we use when teams migrate from an SEO-shaped dashboard to a GEO-shaped one:' },
    { kind: 'table', rows: [
      ['SEO metric', 'GEO equivalent', 'Why it\'s different'],
      ['Keyword rank', 'Answer position (01–05)', 'Position inside the answer list, not on a page'],
      ['Organic impressions', 'Engine visibility', 'Share of answers that mention you across prompts'],
      ['CTR from SERP', 'Click from cited source', 'Only grounded answers yield clicks at all'],
      ['Backlinks', 'Engine-trusted citations', 'Reddit and Quora punch above their weight here'],
      ['Brand sentiment (social)', 'LLM sentiment (−1 to +1)', 'Graded by the model itself, tuned to a rubric'],
    ]},
    { kind: 'h3', t: 'Three metrics to put on the exec dashboard' },
    { kind: 'p', t: 'You don\'t need fifteen. You need three, reported weekly:' },
    { kind: 'ul', items: [
      'Share of voice — the percentage of tracked prompts where you\'re cited at all, averaged across all four engines.',
      'First-mention rate — the percentage of those prompts where you\'re named first. This is the metric most correlated with pipeline in our beta customers.',
      'Net sentiment — the rolling 14-day average sentiment score. Treat a drop of more than 0.15 as an incident.',
    ]},
    { kind: 'h2', t: 'What to do this quarter' },
    { kind: 'p', t: 'Pick one category prompt that matters to your business — something like "best CRM for small business" if you\'re HubSpot, or "top AI visibility platforms" if you\'re us. Track it across all four engines for two weeks. That\'s your baseline.' },
    { kind: 'p', t: 'Then rewrite one page. Just one. Make it the definitive answer to that prompt — structured, sourced, opinionated. Re-sample the engines. If your position moves, you have a playbook. Scale from there.' },
    { kind: 'p', t: 'The teams that are winning in GEO right now are not the teams with the biggest budgets. They\'re the teams who measure honestly and rewrite quickly. The infrastructure to do both is cheap and sitting in your tab right now.' },
  ],

  'why-chatgpt-keeps-recommending-your-competitor': [
    { kind: 'lede', t: 'A Promptive beta customer opened the dashboard, saw their top category prompt, and stared at it for a long minute. ChatGPT was recommending their biggest competitor first, every time. They were in position 06. Twenty-one days later they were in position 01. Here is exactly what they did.' },
    { kind: 'p', t: 'We\'ve now watched this playbook run three times in full — three different companies, three different categories, same four-act structure. This piece is a reverse-engineering of what moves the needle and what doesn\'t, with the numbers, caveats, and one thing we got badly wrong the first time.' },
    { kind: 'h2', t: 'Act 1 — Diagnose the specific prompt' },
    { kind: 'p', t: 'Most teams try to move "visibility" in the abstract. That\'s too coarse to be actionable. You need a single prompt with the following properties: high commercial intent, stable phrasing, and at least three competitors currently ranking above you.' },
    { kind: 'p', t: 'For our beta customer — a mid-market analytics tool — the prompt was "best AI-native analytics platform." Not "top analytics tools" (too broad) and not "analytics software for B2B SaaS" (too long-tail). It took us forty minutes of dashboard inspection to pick the right one.' },
    { kind: 'quote', t: 'You can\'t optimize for "the machine layer." You can optimize for one answer at a time.' },
    { kind: 'h2', t: 'Act 2 — Understand the citation graph' },
    { kind: 'p', t: 'Every answer for this prompt cited somewhere between four and eleven sources. When we laid them all out, three patterns jumped out:' },
    { kind: 'ol', items: [
      'A Reddit thread on r/analytics with 1,200 upvotes was cited in 7/12 ChatGPT answers.',
      'A 2024 G2 listicle was cited in 6/12 answers across all engines.',
      'The competitor\'s own comparison page was cited in 4/12 and was the only first-party source regularly quoted.',
    ]},
    { kind: 'p', t: 'Our customer had zero presence in any of the three. Their own blog was cited exactly once across twelve answers.' },
    { kind: 'h2', t: 'Act 3 — Three targeted interventions' },
    { kind: 'p', t: 'We picked three — not ten — and ran them in parallel over fourteen days.' },
    { kind: 'h3', t: 'Intervention 1: rewrite the comparison page' },
    { kind: 'p', t: 'The existing comparison page was a feature grid — cold, undifferentiated, buried two clicks deep. We rewrote it as a structured argument: two-paragraph opening, explicit positioning statement, honest acknowledgment of the competitor\'s strengths, then three specific use cases where our customer wins. Added schema markup and moved its link to the top of the footer.' },
    { kind: 'p', t: 'Within six days, Claude started quoting the new page directly.' },
    { kind: 'h3', t: 'Intervention 2: seed one high-quality Reddit thread' },
    { kind: 'p', t: 'The thread was not promotional. An employee (disclosed) asked a genuine technical question about query performance at scale and shared their own experience comparing three tools. It got answered, debated, and upvoted to 340 by day seven. By day twelve, ChatGPT was citing it.' },
    { kind: 'p', t: 'We had overbuilt this in an earlier attempt at another company — eight threads across four subreddits, two of which got flagged as spam. One real thread beats eight engineered ones, every time.' },
    { kind: 'h3', t: 'Intervention 3: pitch one credible third-party listicle' },
    { kind: 'p', t: 'We identified three newsletters in the space that had been quoted by at least one engine in the past 90 days. Pitched all three, each with a different specific angle. One bit, published a short roundup, and the engines absorbed it within a week.' },
    { kind: 'h2', t: 'Act 4 — Measure honestly' },
    { kind: 'p', t: 'At the 21-day mark, the numbers were:' },
    { kind: 'table', rows: [
      ['Metric', 'Day 0', 'Day 21'],
      ['ChatGPT rank', '06', '01'],
      ['Claude rank', '04', '02'],
      ['Gemini rank', '05', '03'],
      ['Perplexity rank', '08', '04'],
      ['Sentiment (4-engine avg)', '+0.08', '+0.46'],
      ['Citations / week', '11', '54'],
    ]},
    { kind: 'p', t: 'The rank on Perplexity moved slowest — Perplexity weights recency and citation diversity differently, and the interventions that moved ChatGPT moved it less. That\'s a real pattern we\'ve seen twice more since.' },
    { kind: 'p', t: 'The thing we\'d tell anyone starting this playbook: pick one prompt, resist the urge to run ten interventions, measure weekly, and give it twenty-one days before you judge whether it\'s working. The teams that blow this up are the ones who either change the metric after a week or deploy thirty changes at once and can\'t attribute anything.' },
  ],

  'sentiment-scoring-under-the-hood': [
    { kind: 'lede', t: 'When a Promptive sentiment score flips from +0.48 to +0.12 overnight, our customers ask us one question: is that real? This is a long answer to that question — the rubric, the grader, the calibration loop, and the checks we run to avoid the single worst failure mode of LLM-graded metrics.' },
    { kind: 'h2', t: 'The rubric' },
    { kind: 'p', t: 'Every mention is graded on a −1.0 to +1.0 scale using a three-axis rubric: affect (how positive/negative is the tone), stance (is the engine recommending, neutral, or cautioning), and framing (is the brand positioned as category leader, viable option, or afterthought).' },
    { kind: 'p', t: 'The final score is a weighted blend. We publish the rubric in full — we\'d rather customers argue with our definition than guess at it.' },
    { kind: 'h3', t: 'Why a continuous scale' },
    { kind: 'p', t: 'The obvious move is three buckets: positive, neutral, negative. We tried it for the first six weeks and hated it. Most real mentions live in the gray — "solid choice for small teams" and "a reasonable option if you\'re on a budget" are very different sentiments but both get bucketed as positive. A continuous scale preserves that signal.' },
    { kind: 'h2', t: 'The grader' },
    { kind: 'p', t: 'We grade with Claude Haiku. Not because it\'s the best writer, but because it\'s the most consistent at structured output and the cheapest to run at our volume. Every mention is graded twice — once with the full prompt rubric, once with a minimal version — and we flag anything where the two disagree by more than 0.2 for human review.' },
    { kind: 'p', t: 'Roughly 2.3% of mentions get flagged. Roughly 0.7% turn out to be genuinely ambiguous. The rest resolve one way or the other on a second pass.' },
    { kind: 'h2', t: 'Calibration' },
    { kind: 'p', t: 'The thing that wrecks LLM-graded metrics is unacknowledged drift. A model update changes the scorer\'s prior, every score shifts 0.05, and a customer panics about a trend that doesn\'t exist.' },
    { kind: 'p', t: 'Our calibration loop: every Monday, the grader re-scores a frozen set of 400 manually-rated mentions. If the mean absolute error moves more than 0.03, we ship a correction coefficient and note it in the changelog.' },
    { kind: 'quote', t: 'The published metric is the grader score plus a monotonic correction. We\'d rather be consistent than pure.' },
    { kind: 'h2', t: 'What the number means' },
    { kind: 'p', t: 'A sentiment of +0.42 across four engines, averaged over a week, means the models describe the brand in language that a reasonable reader would call "enthusiastic but measured." Above +0.70 is rare and usually means the brand has captured a strong definitional slot (e.g. "Figma is the industry standard"). Below −0.20 is a problem that warrants a human investigation.' },
  ],

  // Short stubs for the rest so every linked post resolves
  'the-answer-is-the-funnel': [
    { kind: 'lede', t: '58% of searches end without a click. If the answer is the funnel, then every asset you own should be evaluated against a different question than the one you\'ve been asking.' },
    { kind: 'p', t: 'This piece reframes the content-strategy conversation from "rank for this keyword" to "end up inside this answer," and walks through what changes when you do.' },
    { kind: 'h2', t: 'The new question' },
    { kind: 'p', t: 'Every page on your site has a job. In the link-era, that job was to rank, get clicked, and convert. In the answer-era, the job is to get quoted — directly or by proxy — inside a generative response. That\'s a very different brief for a writer.' },
  ],
  'reddit-is-training-data': [
    { kind: 'lede', t: 'Reddit is no longer a channel — it\'s a corpus. OpenAI pays for it. Google ranks it. Every model cites it. Here is what that means for how you show up there.' },
    { kind: 'p', t: 'The mechanics of credible UGC seeding, the line between contribution and manipulation, and the subreddits that move visibility the fastest.' },
  ],
  'five-prompts-every-brand-should-track': [
    { kind: 'lede', t: 'Category, comparison, objection, use-case, and alternative. These five prompt types together uncover 90% of the competitive surface area for most B2B brands.' },
    { kind: 'p', t: 'This is the taxonomy we recommend new customers start with — build it in week one, add nuance from there.' },
  ],
  'state-of-llm-search-q1-2026': [
    { kind: 'lede', t: 'Our quarterly report on the four-engine landscape: who is gaining share, where sentiment is softening, what categories are most contested, and one surprise in the data.' },
    { kind: 'p', t: 'Pulled from 1.2M mentions across 420 tracked brands over Q1 2026.' },
  ],
  'agencies-running-aio-audits': [
    { kind: 'lede', t: 'Three founder-led agencies productized AI-visibility audits into $4–8k engagements in the last six months. This is what they sell, how they price it, and the four-deliverable template you can adapt tomorrow.' },
    { kind: 'p', t: 'Includes the proposal template, the competitive-matrix slide, and the page-rewrite recommendations that drive the second phase of the engagement.' },
  ],

  // Whitepapers — longer, section-heavy
  'measuring-the-answer-layer': [
    { kind: 'lede', t: 'A 42-page framework for taking AI-visibility measurement from side-project to operating rhythm. This is the executive summary.' },
    { kind: 'h2', t: 'Chapter 1 — The data model' },
    { kind: 'p', t: 'Five primitives: brand, prompt, engine, answer, mention. The paper defines each, shows how they relate, and walks through the twelve derived metrics that fall out.' },
    { kind: 'h2', t: 'Chapter 2 — The scoring rubric' },
    { kind: 'p', t: 'Visibility, position, sentiment, citation-trust, competitor-diff. Each scored on a published scale with calibration checks. This is the chapter most customers reference in their own internal docs.' },
    { kind: 'h2', t: 'Chapter 3 — Cadence and cross-functional ownership' },
    { kind: 'p', t: 'Who owns the dashboard. When alerts fire. How the weekly review meeting runs. What an end-of-quarter exec summary looks like. Includes three org-chart diagrams from real Promptive customers.' },
    { kind: 'p', t: 'Download the full PDF for the complete framework, templates, and a worksheet your team can fill out in a single 90-minute session.' },
  ],
  'citation-economy': [
    { kind: 'lede', t: '1.2M citations analyzed. 68% of all cited domains come from just 400 sources. Here is the ranked list and what it means for your content plan.' },
    { kind: 'h2', t: 'The 400' },
    { kind: 'p', t: 'Seven categories dominate: UGC platforms (Reddit, Quora), review sites (G2, Capterra), first-party vendor pages, category-specific blogs, news publishers, Wikipedia, and academic/white papers. The mix varies sharply by engine.' },
    { kind: 'h2', t: 'What it means' },
    { kind: 'p', t: 'If you\'re not among the 400, you\'re in the other 32% — the long tail that shows up once or twice and doesn\'t compound. The paper walks through how to identify which of the 400 matter for your category and what to do to earn a slot.' },
  ],
  'rewriting-for-llms': [
    { kind: 'lede', t: 'Twelve before/after page rewrites that moved rank. Concrete patterns, not abstractions.' },
    { kind: 'h2', t: 'The definition-first pattern' },
    { kind: 'p', t: 'Lead with a clean, quotable definition of what you are. Engines grab it almost every time. Simple, boring, works.' },
    { kind: 'h2', t: 'Structured comparison' },
    { kind: 'p', t: 'Never let the engine guess what you are versus your competitors. State it explicitly, acknowledge their strengths, specify where you win. Counterintuitive: acknowledging their strengths gets you cited more, not less.' },
  ],
  'enterprise-governance-aio': [
    { kind: 'lede', t: 'Four Fortune 500 marketing orgs operationalized LLM monitoring in the last twelve months. This paper documents their workflows, escalation paths, and the org changes that made it stick.' },
    { kind: 'h2', t: 'Who owns it' },
    { kind: 'p', t: 'In two of the four, it sits inside brand. In the other two, inside demand gen. Not a single case lived inside SEO — though SEO was the team that raised the flag first.' },
    { kind: 'h2', t: 'How they bought' },
    { kind: 'p', t: 'Includes the vendor RFP template used by a Fortune 100 CPG brand — eleven criteria, weighted, with scoring rubrics. Adapt for your own procurement process.' },
  ],
};
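
// The scoring pipeline described in the 'sentiment-scoring-under-the-hood' body
// above can be sketched in a few lines of plain JS. This is an illustrative
// approximation, not the production scorer: the axis weights below are
// assumptions; only the 0.2 disagreement threshold comes from the article text.
function blendSentiment({ affect, stance, framing }) {
  // Hypothetical weights over the three rubric axes; each axis is assumed to be
  // in [-1, 1], and the weights sum to 1 so the blend stays in [-1, 1].
  const w = { affect: 0.4, stance: 0.35, framing: 0.25 };
  return w.affect * affect + w.stance * stance + w.framing * framing;
}

function needsHumanReview(fullRubricScore, minimalRubricScore) {
  // Each mention is graded twice (full rubric vs. minimal rubric); a gap above
  // 0.2 flags the mention for human review.
  return Math.abs(fullRubricScore - minimalRubricScore) > 0.2;
}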

function renderBlock(b, i) {
  switch (b.kind) {
    case 'lede': return <p key={i} className="lede">{b.t}</p>;
    case 'h2':   return <h2 key={i}>{b.t}</h2>;
    case 'h3':   return <h3 key={i}>{b.t}</h3>;
    case 'p':    return <p key={i}>{b.t}</p>;
    case 'quote':return <blockquote key={i}>{b.t}</blockquote>;
    case 'ul':   return <ul key={i}>{b.items.map((x,j) => <li key={j}>{x}</li>)}</ul>;
    case 'ol':   return <ol key={i}>{b.items.map((x,j) => <li key={j}>{x}</li>)}</ol>;
    case 'table': {
      const [head, ...rows] = b.rows;
      return (
        <table key={i}>
          <thead><tr>{head.map((h,j) => <th key={j}>{h}</th>)}</tr></thead>
          <tbody>{rows.map((r,j) => <tr key={j}>{r.map((c,k)=><td key={k}>{c}</td>)}</tr>)}</tbody>
        </table>
      );
    }
    default: return null;
  }
}
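
// The weekly calibration loop from 'sentiment-scoring-under-the-hood' — re-score
// a frozen, manually-rated set and compare mean absolute error against a
// threshold — reduces to a small aggregation. Illustrative sketch only: the 0.03
// threshold comes from the article text, but the record shape here is assumed.
function calibrationDrift(frozenSet) {
  // frozenSet: hypothetical array of { humanScore, graderScore } pairs.
  const mae = frozenSet.reduce(
    (acc, m) => acc + Math.abs(m.humanScore - m.graderScore), 0
  ) / frozenSet.length;
  // A correction coefficient ships only when drift exceeds the threshold.
  return { mae, needsCorrection: mae > 0.03 };
}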

function Article({ slug, kind }) {
  const source = kind === 'whitepaper' ? WHITEPAPERS : BLOG_POSTS;
  const meta = source.find(p => p.slug === slug);
  const body = ARTICLE_BODIES[slug] || [];
  if (!meta) return <div className="wrap" style={{padding:60}}>Article not found.</div>;

  const related = source.filter(p => p.slug !== slug).slice(0, 3);
  const isWhite = kind === 'whitepaper';

  return (
    <PageShell active="content">
      <div className="wrap">
        <div className="article-hero">
          <div className="crumbs" style={{fontFamily:'var(--fs-mono)', fontSize:11, color:'var(--dim)', letterSpacing:'.1em', textTransform:'uppercase', display:'flex', gap:8}}>
            <a href="/" style={{color:'var(--dim)'}}>Promptive</a><span>/</span>
            <a href={isWhite ? '/whitepapers' : '/blog'} style={{color:'var(--dim)'}}>{isWhite ? 'Whitepapers' : 'Blog'}</a><span>/</span>
            <span>{meta.category}</span>
          </div>
          <h1 style={{color:'var(--ink)'}}>{meta.title}</h1>
          {isWhite && meta.subtitle && (
            <p style={{fontSize:18, color:'var(--ink-2)', marginTop:14, maxWidth:'60ch', fontFamily:'var(--fs-mono)'}}>{meta.subtitle}</p>
          )}
          <div className="byline">
            {!isWhite && <span><b>{meta.author}</b> · {meta.role}</span>}
            <span>{meta.date}</span>
            {!isWhite && <span>{meta.read} read</span>}
            {isWhite && <span>{meta.pages} pages</span>}
          </div>
        </div>

        <div className="article-cover" style={{background: meta.color}}>
          {meta.glyph}
        </div>

        <div className="article-layout" style={{marginTop:48}}>
          <aside className="article-meta-col">
            <div className="block">
              <h5>{isWhite ? 'Download' : 'Share'}</h5>
              {isWhite ? (
                <a className="btn" style={{marginTop:6}} href="#">Get the PDF →</a>
              ) : (
                <div style={{display:'grid', gap:6}}>
                  <a href="#" style={{fontFamily:'var(--fs-mono)', fontSize:12}}>[LI] LinkedIn</a>
                  <a href="#" style={{fontFamily:'var(--fs-mono)', fontSize:12}}>[X]  X/Twitter</a>
                  <a href="#" style={{fontFamily:'var(--fs-mono)', fontSize:12}}>[MD] Markdown</a>
                  <a href="#" style={{fontFamily:'var(--fs-mono)', fontSize:12}}>[→]  Permalink</a>
                </div>
              )}
            </div>
            <div className="block">
              <h5>In this piece</h5>
              {body.filter(b => b.kind === 'h2').map((b, i) => (
                <div key={i} style={{fontSize:12, color:'var(--ink-2)', padding:'4px 0'}}>— {b.t}</div>
              ))}
            </div>
            <div className="block">
              <h5>Updated</h5>
              <div style={{fontSize:12, color:'var(--ink-2)'}}>{meta.date}</div>
            </div>
          </aside>

          <main className="article-body">
            {body.map(renderBlock)}
            <div style={{marginTop:56, padding:28, border:'1px solid var(--ink)', background:'var(--cream-2)', display:'grid', gap:14}}>
              <div style={{fontFamily:'var(--fs-mono)', fontSize:11, color:'var(--dim)', letterSpacing:'.1em', textTransform:'uppercase'}}>// try it yourself</div>
              <div style={{fontFamily:'var(--fs-display)', fontSize:40, textTransform:'uppercase', lineHeight:.95}}>
                See what the engines say about you.
              </div>
              <p style={{fontSize:14, color:'var(--ink-2)', maxWidth:'52ch'}}>
                Start a free Promptive trial — 14 days of full data, no card, five minutes to first insight.
              </p>
              <div style={{display:'flex', gap:10}}>
                <a className="btn" href="/signup">Start free trial →</a>
                <a className="btn ghost" href="/pricing">See pricing</a>
              </div>
            </div>
          </main>

          <aside className="article-meta-col">
            <div className="block">
              <h5>More like this</h5>
              <div style={{display:'grid', gap:14, marginTop:8}}>
                {related.map(r => (
                  <a key={r.slug} href={`/${isWhite ? 'whitepaper' : 'blog'}-${r.slug}`} style={{display:'block', borderLeft:'2px solid var(--ink)', paddingLeft:10}}>
                    <div style={{fontSize:10, textTransform:'uppercase', letterSpacing:'.1em', color:'var(--orange)'}}>{r.category}</div>
                    <div style={{fontFamily:'var(--fs-mono)', fontSize:13, fontWeight:700, marginTop:4, lineHeight:1.3, color:'var(--ink)'}}>{r.title}</div>
                  </a>
                ))}
              </div>
            </div>
          </aside>
        </div>
      </div>
    </PageShell>
  );
}

window.Promptive = Object.assign(window.Promptive || {}, { Article, ARTICLE_BODIES });
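
// The three exec-dashboard metrics from 'geo-is-the-new-seo' reduce to simple
// aggregations over sampled answers. Hypothetical sketch — the flat record shape
// below is an assumption for illustration, not Promptive's actual schema.
function dashboardMetrics(samples) {
  // samples: array of { prompt, engine, mentioned, position } records, one per
  // (prompt, engine) pair sampled during the reporting window.
  const cited = samples.filter(s => s.mentioned);
  const first = cited.filter(s => s.position === 1);
  return {
    // Share of voice: fraction of samples where the brand is cited at all.
    shareOfVoice: samples.length ? cited.length / samples.length : 0,
    // First-mention rate: of the samples where the brand is cited, the fraction
    // where it is named first.
    firstMentionRate: cited.length ? first.length / cited.length : 0,
  };
}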
