Wiring Notion, Slack, and Linear with an LLM — A 2026 Hands-On Guide to Building Glue Code Instead of Paying Zapier

Prologue — The day Zapier hit 350 dollars

There was a small internal tool. When a Linear issue moved to Done, it gathered GitHub PRs and commit messages, posted a summarized Slack notification, and added a row to a Notion changelog database. It started as a 5-step Zapier Zap. The month the bill hit 350 dollars I drew the line — this is a 200-line script.

This post is that 200 lines, written from scratch. The friends we're wiring together:

  • Notion API — the Data Sources API released in August 2025 changed the data model. A database is no longer a single table.
  • Slack API — Bolt 4 + Socket Mode + chat.postMessage + incoming webhooks. When to use which.
  • Linear GraphQL API — @linear/sdk 4.x, webhook signatures, project and membership model.
  • LLM glue — drop Claude or GPT in the middle to classify, summarize, draft. For a flat mapping an LLM is overkill, but "compress 7 PRs into one changelog line" is exactly what LLMs do well.

This post is a companion to last week's Zapier vs n8n piece, but that one was a snapshot comparison; this is the hands-on build.

Flow:

  1. Prep — tokens, secrets, trigger model
  2. Notion API 2026 — the Data Sources API shock
  3. Slack API — Bolt 4 + Socket Mode hands-on
  4. Linear GraphQL — SDK and webhooks
  5. The LLM glue pattern — when to drop one in, when to leave it out
  6. The full script — Linear closure → Slack → Notion
  7. Secrets management — 1Password CLI, Doppler, Vercel env
  8. Operating it — rate limits, retries, observability
  9. The honest comparison — trade-offs versus Zapier and n8n
  10. Anti-patterns

By the end you should feel ready to write two or three of your team's small automations yourself.


Chapter 1 · Prep — tokens, secrets, trigger model

1.1 Token types in one line each

Platform | Token name | Permission model | Where to issue
--- | --- | --- | ---
Notion | Internal Integration Secret | Capability toggles + explicit page/database share | notion.so/profile/integrations
Slack | Bot User OAuth Token (xoxb-) + App Token (xapp-) | Per scope (chat:write, channels:history, ...) | api.slack.com/apps
Linear | Personal API Key or OAuth | Workspace-wide or user-scoped | linear.app/settings/account/security
Anthropic | API Key | Workspace + usage limits | console.anthropic.com

The big gotcha: a Notion integration cannot access pages that haven't been explicitly shared with it. Creating the integration isn't enough — you have to open each target page or database and add the integration from the menu. This is where new developers most often get stuck.

1.2 Trigger model — webhooks vs polling vs event subscriptions

All three platforms support all three, but each has a recommended path.

  • Notion: webhooks are now generally available. You can subscribe to events like database.content_updated, page.created, and comment.created. Even so, polling is still common — Notion's change notifications are partial, so following "every change on this page" usually means re-fetching and diffing anyway.
  • Slack: the Events API plus webhooks, and for the firewall-friendly case there's Socket Mode (WebSocket). For local development and small internal tools Socket Mode is dramatically easier.
  • Linear: webhooks are first-class. Issues, projects, comments, cycles — almost every change has a webhook. Polling with GraphQL is rare.

Our pipeline is Linear webhook → Slack notification → Notion row, so the starting point is a tiny server that accepts Linear webhooks.

1.3 Where do you run this thing

Three common choices:

  • Vercel Functions / Cloudflare Workers — serverless. 100 to 300 ms cold start, generous free tier, secrets ride as env vars. Downside: background work is hard. You must respond within 30 seconds.
  • Bun/Node single process on systemd or Fly.io — always on, basically required if you want Socket Mode. Costs 5 to 10 dollars a month.
  • Lambda + EventBridge — for high traffic with strict SLOs. Setup overhead is real.

The hands-on assumes Bun + Fly.io. It's the simplest model for a 200-line tool.


Chapter 2 · Notion API 2026 — the Data Sources API shock

2.1 The data model changed

In August 2025 Notion shipped "multi-source databases" and rolled out matching API changes. A single database can now hold multiple data sources, and the API surface had to follow.

  • Old (2024-09-25 and earlier): a database = a single table. You queried pages (rows) with databases.query.
  • New (2025-09-03 onward): a database contains one or more data sources. Pages are children of a data source, not directly of the database.

New endpoints:

  • GET /v1/data_sources/:id — data source metadata.
  • POST /v1/data_sources/:id/query — query pages. Replaces POST /v1/databases/:id/query.
  • POST /v1/data_sources — create a new data source.

If you want to leave existing code untouched, pin Notion-Version: 2022-06-28 and it'll keep working for a while — but you can't use the new multi-source features that way.

2.2 Minimum snippet — append a row to a database

import { Client } from '@notionhq/client'
const notion = new Client({
  auth: process.env.NOTION_TOKEN!,
  notionVersion: '2025-09-03',
})

await notion.pages.create({
  parent: { data_source_id: process.env.NOTION_CHANGELOG_DATA_SOURCE_ID! },
  properties: {
    Title: { title: [{ type: 'text', text: { content: title } }] },
    Date: { date: { start: dateIso } },
    Summary: { rich_text: [{ type: 'text', text: { content: summary } }] },
    'Linear Issue': { url: linearIssueUrl },
    PRs: { rich_text: [{ type: 'text', text: { content: prUrls.join('\n') } }] },
  },
})

Key points:

  • parent is data_source_id, not database_id. Old code used database_id and still works in compatibility mode.
  • properties keys must be the exact column names from the database, including case and spacing.
  • The title column name isn't always "Title". Each database has its own title column, so in production you usually fetch the schema once via GET /v1/data_sources/:id and look up the title column name.
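
Since the title column name varies per database, a tiny helper over the schema response beats a hard-coded guess. A sketch, assuming the 2025-09-03 response shape where properties maps column names to objects carrying a type field:

```typescript
// Hypothetical helper: find the title column in a data source schema.
// The `properties` shape mirrors GET /v1/data_sources/:id, trimmed to
// the one field we actually inspect.
type DataSourceSchema = { properties: Record<string, { type: string }> }

export function titlePropertyName(schema: DataSourceSchema): string {
  const entry = Object.entries(schema.properties).find(([, p]) => p.type === 'title')
  if (!entry) throw new Error('data source has no title property')
  return entry[0]
}

// Trimmed sample schema:
const sampleSchema: DataSourceSchema = {
  properties: {
    Name: { type: 'title' },
    Date: { type: 'date' },
    Summary: { type: 'rich_text' },
  },
}
console.log(titlePropertyName(sampleSchema)) // → Name
```

Fetch the schema once at startup, cache the result, and build your properties object around it.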

2.3 Finding your data source ID

The Notion UI shows the database ID but not the data source ID. One shell command does the trick.

curl -X GET "https://api.notion.com/v1/databases/$DATABASE_ID" \
  -H "Authorization: Bearer $NOTION_TOKEN" \
  -H "Notion-Version: 2025-09-03"

The first element of the response's data_sources array carries the id. For single-source databases there's exactly one.
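
If you'd rather script the lookup, the same extraction in code — sketched over a trimmed sample of the response, since data_sources[0].id is the only field we need:

```typescript
// Pull the first data source id out of a databases.retrieve-style response.
// The sample payload is trimmed to the fields this helper reads.
type DatabaseResponse = { data_sources?: { id: string; name?: string }[] }

export function firstDataSourceId(db: DatabaseResponse): string {
  const ds = db.data_sources?.[0]
  if (!ds) throw new Error('no data sources — check the Notion-Version header')
  return ds.id
}

const sampleDb = { data_sources: [{ id: 'ds-456', name: 'Changelog' }] }
console.log(firstDataSourceId(sampleDb)) // → ds-456
```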

2.4 Rate limits

Notion allows an average of three requests per second per integration. Short bursts are tolerated but sustained traffic gets rate_limited errors (status 429). Mitigations:

  • Read the Retry-After header and sleep for that long.
  • Cap concurrency at three with something like p-limit.
  • If a single workflow creates 100 pages, push them through a queue instead of slamming the endpoint.
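
The concurrency cap doesn't strictly need a library. A minimal limiter sketch — p-limit is the production version of this idea, this just shows the shape:

```typescript
// A minimal concurrency limiter: at most `max` promises in flight at once.
function createLimiter(max: number) {
  let active = 0
  const queue: (() => void)[] = []
  const release = () => {
    active--
    queue.shift()?.() // wake the next waiter, if any
  }
  return async function limit<T>(fn: () => Promise<T>): Promise<T> {
    if (active >= max) await new Promise<void>((r) => queue.push(r))
    active++
    try {
      return await fn()
    } finally {
      release()
    }
  }
}

// Usage: cap Notion writes at three concurrent requests.
const limit = createLimiter(3)
// await Promise.all(rows.map((row) => limit(() => createNotionPage(row))))
// (createNotionPage is a placeholder for your own write function.)
```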

Chapter 3 · Slack API — Bolt 4 + Socket Mode hands-on

3.1 Which SDK

In Node and Bun the de facto standard is @slack/bolt. v4 shipped in 2025 and bundles the Web API, Events API, Socket Mode, and interactive components (buttons, modals) into one object.

In this workflow Slack does two things:

  1. Posts the issue-closed notification to a channel — chat.postMessage.
  2. Optionally lets a user press "skip changelog" to cancel the Notion write — Block Kit action + Events API.

3.2 The simplest start — an incoming webhook

If you only need to post, you don't even need an SDK.

// slack-webhook.ts
const WEBHOOK_URL = process.env.SLACK_WEBHOOK_URL!

export async function postToSlack(text: string) {
  const r = await fetch(WEBHOOK_URL, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ text }),
  })
  if (!r.ok) {
    throw new Error(`Slack webhook failed: ${r.status} ${await r.text()}`)
  }
}

The catch: a webhook is bound to one channel at install time. If you need to vary the channel at runtime, switch to a bot token plus chat.postMessage.

3.3 Bot token + chat.postMessage

The skeleton of a Block Kit message — header, section, context, action button.

import { WebClient } from '@slack/web-api'
const slack = new WebClient(process.env.SLACK_BOT_TOKEN!)

await slack.chat.postMessage({
  channel,
  text: `${issueTitle} closed`, // used for notification previews
  blocks: [
    { type: 'header', text: { type: 'plain_text', text: issueTitle } },
    { type: 'section', text: { type: 'mrkdwn', text: summary } },
    {
      type: 'actions',
      elements: [{
        type: 'button',
        text: { type: 'plain_text', text: 'Skip changelog' },
        style: 'danger',
        action_id: 'skip_changelog',
        value: issueUrl,
      }],
    },
  ],
})

text is the notification preview, so don't leave it blank. Full version in Chapter 6.

3.4 Receiving interactions with Socket Mode

To handle the "Skip changelog" button you need to process inbound events. Internal tools usually sit behind a corporate firewall with no inbound route — exactly the case Socket Mode was built for.

import { App } from '@slack/bolt'
const app = new App({
  token: process.env.SLACK_BOT_TOKEN!,
  appToken: process.env.SLACK_APP_TOKEN!, // xapp-...
  socketMode: true,
})

app.action('skip_changelog', async ({ ack, body }) => {
  await ack()
  const issueUrl = (body as any).actions[0].value
  await markSkip(issueUrl) // tag in KV, consult right before the Notion write
})

await app.start()

Three reasons Socket Mode is nice: no inbound port, no ngrok or cloudflared tunnel, local dev and production share the same code. One downside — horizontal scaling is awkward. Two instances mean every event arrives twice. At internal-tool scale, a non-issue.

3.5 Slack rate limits

  • chat.postMessage has its own special limit — roughly one message per second per channel, with short bursts tolerated. For an internal notifier you basically don't have to worry.
  • Lower-tier methods like users.list (Tier 2, around 20 calls a minute) are easy to trip, so cache those if you call them often.
  • 429 responses include Retry-After. Sleep that long and retry.

Chapter 4 · Linear GraphQL — SDK and webhooks

4.1 The SDK is the right answer

Linear maintains an official TypeScript SDK, @linear/sdk. 4.x is the stable line. It puts typed methods on top of the GraphQL schema, so you don't have to hand-write queries.

import { LinearClient } from '@linear/sdk'
const linear = new LinearClient({ apiKey: process.env.LINEAR_API_KEY! })

const issue = await linear.issue(issueId)
const [comments, attachments] = await Promise.all([
  issue.comments(),
  issue.attachments(),
])
// issue.identifier === 'ENG-1234', issue.url, issue.title, ...
// PRs ride in as attachments
const prUrls = attachments.nodes
  .filter((a) => a.url.startsWith('https://github.com/') && a.url.includes('/pull/'))
  .map((a) => a.url)

4.2 Webhooks — never skip signature verification

Linear sends an HMAC-SHA256 signature in the Linear-Signature header. Always verify it — otherwise anyone can fake a payload and trigger your automation.

import { createHmac, timingSafeEqual } from 'node:crypto'
function verifySig(raw: string, sig: string, secret: string) {
  const expected = createHmac('sha256', secret).update(raw).digest('hex')
  return expected.length === sig.length &&
    timingSafeEqual(Buffer.from(expected), Buffer.from(sig))
}

The Bun receiving pattern is identical to the serve({...}) in the Chapter 6 full script. The rule is return 200 fast — Linear retries if your response takes more than five seconds, and after enough failures it disables the webhook. Push heavy work onto queueMicrotask and reply immediately.

4.3 Catching the moment something went to Done

Webhook payloads describe the action and the current data but don't give you the previous state. Two options:

  1. Local state cache — record the last-seen state per issue in KV and diff. Most accurate.
  2. State ID branching — check whether the payload's data.state.id equals the Done state's ID. The downside: you miss regressions like Done back to In Progress.

The hands-on uses option 2. The Done state ID is stable per workspace, so look it up once.

const DONE_STATE_IDS = new Set([process.env.LINEAR_DONE_STATE_ID!])

function isClosed(event: any) {
  return event.type === 'Issue' && DONE_STATE_IDS.has(event.data?.state?.id)
}

4.4 Linear rate limits

  • The GraphQL API caps API-key requests at roughly 1,500 per hour, with a separate complexity-point budget on top — a simple query costs little, while a 100-item paginated query costs proportionally more.
  • Check the X-RateLimit-Requests-Remaining response header.
  • The SDK auto-retries once on a 429. You still own idempotency.

Chapter 5 · The LLM glue pattern — when to use one and when to skip it

5.1 What LLMs are actually better at

Boils down to three things.

  1. Summarization — collapsing 7 PRs and 30 commit messages into one changelog sentence.
  2. Classification — short branches like "is this issue bug/feature/chore/security".
  3. Drafting — writing the Slack title and body in a consistent voice.

Almost everything else is better served by a regex or a static map. Reach for an LLM and you sign up for cost, latency, and non-determinism.

5.2 A summarization call — Claude Sonnet 4.5 example

The skeleton of a summarization call:

import Anthropic from '@anthropic-ai/sdk'
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY! })

const resp = await anthropic.messages.create({
  model: 'claude-sonnet-4-5',
  max_tokens: 400,
  messages: [{ role: 'user', content: prompt }],
})
const text = resp.content
  .filter((b) => b.type === 'text')
  .map((b) => (b as any).text)
  .join('')
const json = text.match(/\{[\s\S]*\}/)?.[0]

Cost sense: Sonnet 4.5 runs about 3 dollars per million input tokens, 15 dollars per million output. Each call is roughly 0.002 to 0.01 dollars. 100 a day stays under 30 dollars a month. The full prompt plus fallback shows up in Chapter 6.
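
The arithmetic behind that estimate, as a throwaway helper (rates hard-coded to the Sonnet 4.5 list prices quoted above — adjust when they change):

```typescript
// Rough per-call cost at Sonnet 4.5 list prices: $3/M input tokens, $15/M output.
function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return (inputTokens * 3 + outputTokens * 15) / 1_000_000
}

// A typical summarization call: ~1,500 input tokens, ~150 output tokens.
const perCall = estimateCostUSD(1500, 150)  // 0.00675 USD
const perMonth = perCall * 100 * 30         // 100 calls/day ≈ 20 USD/month
console.log(perCall.toFixed(5), perMonth.toFixed(2))
```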

5.3 Picking a model

  • Short classification branches: a small model like Haiku, or GPT-4o-mini. Fast and cheap.
  • Summaries and drafts: Sonnet or GPT-4o. The quality jump is noticeable.
  • Reasoning: if you genuinely need it, use the reasoning mode. But workflow automation rarely calls for it.

5.4 Where not to put an LLM

  • "If issue closed, send a notification." A clear branch.
  • State conversions (open becomes "Open").
  • Routing (this channel vs that channel) — a static lookup is enough.

If the mapping is deterministic, an LLM only adds occasional mistakes you'll spend hours tracking down.
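
What "a static lookup is enough" looks like in practice — the whole router is a plain object (channel and team names here are made up):

```typescript
// Deterministic routing: a lookup table, not a model call.
// Team keys and channel names are illustrative.
const CHANNEL_BY_TEAM: Record<string, string> = {
  ENG: '#changelog-eng',
  OPS: '#changelog-ops',
  DESIGN: '#changelog-design',
}

// Fall back to a default channel rather than failing on an unknown team.
const channelFor = (teamKey: string) => CHANNEL_BY_TEAM[teamKey] ?? '#changelog'

console.log(channelFor('ENG'))    // → #changelog-eng
console.log(channelFor('GROWTH')) // → #changelog
```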

5.5 Validate the output — JSON is not trustworthy

LLMs sometimes wrap JSON in chatter or drop required fields. Always safeParse with a Zod schema and fall back to something safe (category chore, summary equal to the issue title). The Chapter 6 script shows this pattern inline.


Chapter 6 · The full script — Linear closure → Slack → Notion

Combining the pieces. Single Bun process, about 200 lines.

// server.ts
import { serve } from 'bun'
import { LinearClient } from '@linear/sdk'
import { WebClient } from '@slack/web-api'
import { Client as NotionClient } from '@notionhq/client'
import { createHmac, timingSafeEqual } from 'node:crypto'
import Anthropic from '@anthropic-ai/sdk'
import { z } from 'zod'

const env = z
  .object({
    LINEAR_API_KEY: z.string(),
    LINEAR_WEBHOOK_SECRET: z.string(),
    LINEAR_DONE_STATE_ID: z.string(),
    SLACK_BOT_TOKEN: z.string(),
    SLACK_CHANNEL: z.string(),
    NOTION_TOKEN: z.string(),
    NOTION_CHANGELOG_DATA_SOURCE_ID: z.string(),
    ANTHROPIC_API_KEY: z.string(),
  })
  .parse(process.env)

const linear = new LinearClient({ apiKey: env.LINEAR_API_KEY })
const slack = new WebClient(env.SLACK_BOT_TOKEN)
const notion = new NotionClient({ auth: env.NOTION_TOKEN, notionVersion: '2025-09-03' })
const anthropic = new Anthropic({ apiKey: env.ANTHROPIC_API_KEY })

function verifySig(raw: string, sig: string) {
  const expected = createHmac('sha256', env.LINEAR_WEBHOOK_SECRET).update(raw).digest('hex')
  return expected.length === sig.length && timingSafeEqual(Buffer.from(expected), Buffer.from(sig))
}

async function summarize(args: {
  issueTitle: string
  description: string
  prTitles: string[]
}) {
  const prompt = `Summarize this Linear issue closure as one sentence for a changelog. Also classify.

Title: ${args.issueTitle}
Description:
${args.description.slice(0, 2000)}
PRs:
${args.prTitles.map((t) => '- ' + t).join('\n')}

Output JSON only: {"summary":"...","category":"bug|feature|chore|security"}`
  const resp = await anthropic.messages.create({
    model: 'claude-sonnet-4-5',
    max_tokens: 300,
    messages: [{ role: 'user', content: prompt }],
  })
  const text = resp.content
    .filter((b) => b.type === 'text')
    .map((b) => (b as any).text)
    .join('')
  const m = text.match(/\{[\s\S]*\}/)
  const schema = z.object({
    summary: z.string().min(3).max(280),
    category: z.enum(['bug', 'feature', 'chore', 'security']),
  })
  try {
    const parsed = m ? schema.safeParse(JSON.parse(m[0])) : null
    if (parsed?.success) return parsed.data
  } catch {
    // the model wrapped its JSON badly enough that it won't parse — fall through
  }
  return { summary: args.issueTitle, category: 'chore' as const }
}

async function handleIssueClosed(event: any) {
  const issueId = event.data.id
  const issue = await linear.issue(issueId)
  const attachments = await issue.attachments()
  const prUrls = attachments.nodes
    .filter((a) => a.url.includes('github.com') && a.url.includes('/pull/'))
    .map((a) => a.url)
  const prTitles = attachments.nodes
    .filter((a) => prUrls.includes(a.url))
    .map((a) => a.title || a.url)

  const { summary, category } = await summarize({
    issueTitle: issue.title,
    description: issue.description ?? '',
    prTitles,
  })

  await slack.chat.postMessage({
    channel: env.SLACK_CHANNEL,
    text: `${issue.identifier} closed: ${issue.title}`,
    blocks: [
      {
        type: 'header',
        text: { type: 'plain_text', text: `${issue.identifier} closed` },
      },
      { type: 'section', text: { type: 'mrkdwn', text: `*${issue.title}*\n${summary}` } },
      {
        type: 'context',
        elements: [
          { type: 'mrkdwn', text: `Category: ${category}` },
          { type: 'mrkdwn', text: `<${issue.url}|Linear>` },
          ...prUrls.map((u) => ({ type: 'mrkdwn' as const, text: `<${u}|PR>` })),
        ],
      },
    ],
  })

  await notion.pages.create({
    parent: { data_source_id: env.NOTION_CHANGELOG_DATA_SOURCE_ID },
    properties: {
      Title: { title: [{ type: 'text', text: { content: issue.title } }] },
      Date: { date: { start: new Date().toISOString().slice(0, 10) } },
      Summary: { rich_text: [{ type: 'text', text: { content: summary } }] },
      Category: { select: { name: category } },
      'Linear Issue': { url: issue.url },
      PRs: { rich_text: [{ type: 'text', text: { content: prUrls.join('\n') } }] },
    },
  })
}

const DONE_STATE_IDS = new Set([env.LINEAR_DONE_STATE_ID])

serve({
  port: Number(process.env.PORT ?? 3000),
  async fetch(req) {
    if (req.method !== 'POST') return new Response('only POST', { status: 405 })
    const raw = await req.text()
    const sig = req.headers.get('linear-signature') ?? ''
    if (!verifySig(raw, sig)) return new Response('bad sig', { status: 401 })
    const event = JSON.parse(raw)
    const closed =
      event.type === 'Issue' &&
      event.action === 'update' &&
      DONE_STATE_IDS.has(event.data?.state?.id)
    if (closed) {
      queueMicrotask(() => handleIssueClosed(event).catch((e) => console.error('handler failed', e)))
    }
    return new Response('ok')
  },
})

One file holds the whole workflow. Dependencies:

bun add @anthropic-ai/sdk @linear/sdk @notionhq/client @slack/web-api zod

Deploying to Fly.io:

fly launch --no-deploy
fly secrets set LINEAR_API_KEY=... LINEAR_WEBHOOK_SECRET=... # ...
fly deploy

Chapter 7 · Secrets management — 1Password, Doppler, Vercel

7.1 Never do this

  • Commit .env to git. A huge share of accidental leaks start here.
  • Paste tokens into Slack channels. Retention of 90+ days makes them practically permanent.
  • Use a production token in local development. One mistake away from a real incident.

7.2 1Password CLI — great cost-to-benefit for solo developers

Put references in .env.tpl like LINEAR_API_KEY=op://dev-secrets/linear-prod/password, then run op run --env-file=.env.tpl -- bun run server.ts. The plaintext secret never touches disk.

7.3 Doppler — when a team needs to share

Doppler is a hosted secrets manager. Per-workspace secrets, injected through doppler run -- bun run server.ts. The big win: the same interface works in CI/CD. Teams of five or fewer fit the free tier.

7.4 Vercel and Cloudflare environment variables

If you're deploying to a serverless platform, dropping secrets into the dashboard is the simplest move. The downside is per-environment manual sync. To automate CI injection, run vercel env pull once.

7.5 Secret rotation

Quarterly. The four tokens follow nearly the same dance — issue a new one, swap the env var, disable the old. The one exception is Notion, where the old token dies immediately on rotation, so keep two integrations side by side briefly if you need zero downtime. If you don't automate it, humans do it — and humans forget every time.


Chapter 8 · Operating it — rate limits, retries, observability

8.1 Retry policy

All three APIs serve transient 5xx and 429s. The policy in three lines:

  • 5xx: exponential backoff, three attempts maximum.
  • 429: honor Retry-After. If missing, 1 second then 2 then 4.
  • 4xx other than 429: do not retry. That's a bug in your code.

A small helper is just a for loop with a try/catch that branches on status. Write it once and wrap every API call.
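
That for loop, sketched out — the error-shape sniffing is an assumption, since each SDK surfaces the status code and Retry-After header slightly differently:

```typescript
// Generic retry wrapper: backoff on 5xx, honor Retry-After on 429,
// never retry any other 4xx (that's a bug in your code, not theirs).
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await fn()
    } catch (err: any) {
      const status: number | undefined = err?.status ?? err?.response?.status
      const retryable = status === 429 || (status !== undefined && status >= 500)
      if (!retryable || i >= attempts - 1) throw err
      // 429: honor Retry-After (seconds); otherwise exponential backoff 1s, 2s, 4s.
      const retryAfter = Number(err?.headers?.['retry-after'])
      const delayMs = (Number.isFinite(retryAfter) ? retryAfter : 2 ** i) * 1000
      await new Promise((r) => setTimeout(r, delayMs))
    }
  }
}

// Usage: wrap any call, e.g. withRetry(() => slack.chat.postMessage({ ... }))
```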

8.2 Idempotency

Webhooks are retried. The same issue-closed event arriving twice means two Notion rows. Two defenses:

  • Record each event's event.id in KV; ignore if seen.
  • Query for an existing row with the same Linear issue URL before creating a new one.

Default policy: assume duplicates always.
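
The KV check is a few lines — sketched here with an in-memory Map standing in for real KV (Redis, or whatever you run), plus a TTL so the seen-set doesn't grow forever:

```typescript
// Dedupe webhook deliveries by event id. A Map stands in for real KV here;
// swap in something like Redis SET-with-NX-and-TTL if state must survive restarts.
const seen = new Map<string, number>()
const TTL_MS = 24 * 60 * 60 * 1000

function isDuplicate(eventId: string, now = Date.now()): boolean {
  // Drop expired entries opportunistically on each call.
  for (const [id, ts] of seen) if (now - ts > TTL_MS) seen.delete(id)
  if (seen.has(eventId)) return true
  seen.set(eventId, now)
  return false
}

console.log(isDuplicate('evt-1')) // → false (first delivery: process it)
console.log(isDuplicate('evt-1')) // → true  (retry: skip it)
```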

8.3 Observability and alerting

Structure your logs with one console.log(JSON.stringify({...})) line. Grep with fly logs. Invest an hour in OpenTelemetry and it gets much better — the Honeycomb free tier is generous.
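
That one console.log line, pulled into a helper so every log shares a shape — the field names are just a suggestion:

```typescript
// Build one structured log line; keeping it a pure function makes it testable.
function logLine(event: string, fields: Record<string, unknown> = {}): string {
  return JSON.stringify({ ts: new Date().toISOString(), level: 'info', event, ...fields })
}

console.log(logLine('issue_closed', { issue: 'ENG-1234', prs: 3 }))
// One line per event, grep-able by field name with fly logs | grep issue_closed
```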

The most common failure pattern: an automation breaks and nobody notices for a week. Catch at the top of the handler and post one :rotating_light: workflow failed: ${msg} line to a channel like #alerts-dev.


Chapter 9 · The honest comparison — trade-offs vs Zapier and n8n

9.1 When building it yourself wins

  • The logic is not flat — five branches, conditional matching, an LLM call, an external KV. Doing that in an iPaaS GUI is a debugging nightmare.
  • Task cost is exploding — Zapier charges per task, so a high-volume workflow gets expensive quickly. Custom code is a 5-dollar infra line.
  • Policy bans secrets from a SaaS — compliance teams reject Zapier holding your OAuth tokens for a reason.
  • You need fast deploy and rollback — code: git revert. GUIs have weak change history.

9.2 When Zapier or n8n wins

  • Many varied workflows — 30 automations as 30 scripts is operational pain. n8n in one place is better.
  • Non-developers need to touch it — marketing wants to tweak a trigger condition? You need a GUI.
  • Frequent new SaaS connections — Zapier's 9,000+ integrations are powerful. Rolling your own means learning each SDK and auth flow.
  • You can't afford maintenance time — code you wrote is code you keep. A one-person team carries that on their back.

9.3 Cost simulation — 100 workflow runs a day

Item | Zapier | n8n self-hosted | Custom code
--- | --- | --- | ---
Hosting | 0 (SaaS) | 5 USD/month (Fly.io) | 5 USD/month (Fly.io)
Task cost | 3,000 tasks at 0.02 USD = 60 USD/month | 0 | 0
LLM API | not included | not included | 30 USD/month (Claude)
Dev time | 2 hours | 6 hours | 10 hours
Maintenance/month | 0.5 hours | 1 hour | 0.5 hours
Total monthly | 60 USD+ | 5 USD+ | 35 USD+

Drop the LLM and custom code costs 5 dollars a month. Spend the time once and the marginal cost is effectively zero.

9.4 Recommendation

  1. Five or fewer automations: build it yourself. You'll learn faster.
  2. Five to thirty: n8n self-hosted. Use the integration catalog.
  3. Thirty or more with non-developer touch: Zapier or Make.
  4. Heavy compliance, can't share SaaS tokens: custom code with an in-house key manager.

Epilogue — checklist and anti-patterns

Hands-on checklist

  • Secrets are never in git
  • Linear webhook signatures are verified
  • An idempotency key is stored in KV
  • LLM output goes through a Zod schema
  • Failure pings an alerts channel
  • Retries do not run on 4xx (those are your bugs)
  • The Notion integration is explicitly shared with the target database
  • The Slack bot is invited to the channel (otherwise not_in_channel)
  • All eight env vars (everything in the Zod schema) are set in production
  • Logs are structured

Anti-patterns

  • An LLM on every branch — non-determinism stacks up. Simple maps don't need an LLM.
  • No pinned Notion-Version — raw HTTP calls need the header, and relying on the SDK's per-release default means an SDK upgrade can silently move you to a new API version mid-migration. Always pin it explicitly.
  • Heavy work inside the webhook — anything that can't finish within five seconds belongs on a queue. Otherwise the webhook gets disabled.
  • Retry storms — retrying 4xx means pushing your bug onto an external API. Alert and stop.
  • Hardcoded channel and database IDs — keep them in env vars. Moving services later should require no code edits.
  • Two Socket Mode instances — they double-process events. A single instance is the right answer at internal-tool scale.
  • Trusting JSON output — LLMs sometimes wrap it in commentary. Always parse and validate.
  • Skipping rotation — quarterly. Put it on the calendar, otherwise it doesn't happen.
  • PII straight into the LLM — don't drop customer emails or PII into the prompt. Mask or use an in-house model.
  • Swallowing errors — empty try/catch blocks are the enemy. At minimum, log them structured.

Next post

The next piece is a hands-on for wiring OpenTelemetry, Tempo, and Grafana into an internal automation. Spend five minutes adding distributed tracing to this 200-line script and you can answer "why did this take 5 seconds yesterday and 50 seconds today?" You'll see the Anthropic, Slack, Linear, and Notion call latencies on a single screen.

