
Engineer SOP

Role: Full-Stack Product Engineer
Prerequisites: Complete the Meta SOP first
Visual reference: Whimsical Flowchart

You own a feature end-to-end. PM creates the high-level BE/DE tickets and assigns them to you. You refine them, build the back-end, wire the front-end, add PostHog, verify it works, and merge it. No hand-offs except to DE/DS for specialist data work.


Phase 0 — Planning

How Tickets Reach You

PM creates high-level BE and DE tickets for the work they can see is needed to reach the minimum criteria. These are intentionally kept high-level — enough context for you to understand the goal, not a detailed spec. All ticket creation happens via the Linear MCP in Cursor.

PM assigns tickets directly to you. There are no sprints — you have a personal backlog. Pick up the next highest-priority item and work it to completion before starting the next. Epic-level priorities per dev live in the Feature Readiness Dashboard.

Refining Your Tickets

You are handed an FE PR alongside the high-level tickets. Your first job is to use and test the front-end to fully understand what's needed:

  1. Pull the FE branch and run it locally
  2. Click through every screen, button, and flow — treat it like a user
  3. Identify what's missing: API calls returning nothing, buttons that don't work, empty states with no data, features PM may have missed
  4. Compare what you found against the PM tickets — do they cover everything needed to make this functional?
  5. If gaps exist, create additional tickets via the Linear MCP:
Using the Linear MCP, create sub-tickets under [parent issue ID] 
for the following gaps I found while testing the FE PR: [list].
  6. Update each ticket with your refined implementation plan so PM has visibility:
Using the Linear MCP, update [issue ID] with the refined 
implementation plan: [your breakdown].

Writing a Test

Before building, write a test for the feature using the browser MCP. This validates the FE PR behaves as expected and gives you a regression check as you wire up the BE:

Using the browser MCP, navigate to [localhost URL] and test 
the [feature name] flow:

1. [Step through the user journey]
2. Verify [expected UI state / data / behaviour]
3. Report what works, what's broken, and what's missing data

Run this test again after wiring up the BE to confirm the feature works end-to-end.

Use the Top-Performing Model

Always configure Cursor to use the highest-ranked model for web development as measured by the Code Arena Leaderboard. At the time of writing, this is Claude Opus 4.6. Check the leaderboard periodically — using a weaker model when a better one is available means slower work and worse output.

Cursor Plan Mode

With all five repos in your workspace, open Cursor Plan Mode:

Using the cred-wiki as architecture context, create an implementation 
plan for: [paste feature/outcome]

Cover: FE components (Storybook), API endpoints (which repo + why), 
new tables required, DE/DS flags, PostHog analytics events needed, 
build/merge sequence, multi-repo dependencies.

Phase 1 — Building

The Sequence

1. Pick up ticket from backlog
    ↓
2. Refine the ticket
    ↓
3. Update Linear with your plan
    ↓
4. Pull FE PR
    ↓
5. Test locally
    ↓
6. Create gap tickets if needed
    ↓
7. Write browser MCP test for the feature
    ↓
8. Open PR early (when started, not when done)
    ↓
9. Build BE + wire FE
    ↓
10. CI automation handles conflicts and code review
    ↓
11. Add PostHog analytics events (every feature)
    ↓
12. Enable PostHog feature flag (hidden from customers)
    ↓
13. Tests pass + manual verification — does it solve the original problem?
    ↓
14. Fix any bugs found with Cursor
    ↓
15. Merge
    ↓
16. Enable feature flag (internal only, customers can't see it)
    ↓
17. PM checks and moves Feature Readiness ticket to MVP
    ↓
18. PM & leadership test on MVP, make edits in new PRs
    ↓
19. Flag QA if complex or risky (post-MVP)

Cloud Agents for Execution

You cannot effectively work on multiple branches locally. Use cloud agents as the default:

  1. Plan locally in Cursor Plan Mode with cred-wiki context
  2. Send execution to cloud agents to open a new PR
  3. Test the PR via review apps — BE review apps require the review-app PR label; FE Vercel previews trigger automatically when you add the deploy-preview PR label (no push needed), or on any commit containing [deploy] (see Review Apps and Vercel Preview Deployments)
  4. If lots of issues, pull locally for focused iteration

This replaces the need for local multi-branch work in most cases.

Front-End Tooling

Everyone does FE. Use the Design Power Stack tools (set up per Meta SOP prerequisites) when working on front-end:

  • Stitch — generate reference screens before building: "Generate a dashboard screen for [feature] with [describe style]"
  • 21st.dev — pull pre-built components: prefix with /ui or describe what you need
  • UI/UX Pro Max — get design system context automatically on design requests
  • Nano Banana — generate visual mockup references: "Generate a mockup of [describe UI]"
  • Cursor Design Mode — toggle in bottom-right corner. Select a component, tweak styles visually (border, spacing, colors), click "Apply". Say "apply this to all related components" to cascade.

Component rules (applies to all FE work):

  • Always use exact ShadCN component names (e.g., "InputGroup", "Badge", "Accordion") — not "input box" or "dropdown"
  • Always use only existing Storybook components. If one doesn't exist, flag it — do not create one-off styled components.
  • Use design token names in prompts (e.g., "use accent-foreground for the text color") — not color descriptions
  • When restyling existing components, keep all event logic. Only change visual styling — do not replace components with ShadCN primitives, as this breaks event bindings. Add to prompts: "retain all existing event logic and only change the styling."
  • If Cursor uses the wrong component or invents one, screenshot the correct component from Figma or Storybook and paste it into the chat
  • If you need a Figma component as a code reference, use the Figma-to-Code plugin to export React code and paste it into Cursor

Multi-Repo Merge Order

  1. cred-dbt — table must exist in BigQuery first
  2. cred-model-api — must be live in dev before commercial can consume it
  3. cred-api-commercial — if passing through to commercial GraphQL
  4. cred-web-commercial — always last

How to Create a DBT Table

Use this when you need a new table for enrichment data (brand interactions, scoring, people/company data).

Step 1 — Get Source Schemas

Using the BigQuery MCP, find the full schema for [table name] 
in the [dataset] dataset and return it as JSON.

Or manually:

  1. GCP Console
  2. BigQuery
  3. Select table
  4. Schema
  5. Select all
  6. Copy JSON

Step 2 — Plan the Model

Plan a new DBT table in the dbt repo. Source schemas: [paste JSON].

- Table name: [e.g. person_brand_interaction_detail]
- Location: models/cred_entity/ (final table, not intermediate)
- One row per [e.g. brand interaction between a person and a company]
- Columns: [describe or ask Cursor to recommend from schemas]
- Join logic: [describe]
- Only include people already matched in our system

Follow existing naming and folder conventions in the dbt repo.
Show me the plan before writing any code.

Review the plan, then tell Cursor to implement it.

Step 3 — Test in DBT Cloud

Warning

Do this in the DBT Cloud UI directly — not via the DBT MCP. You need to visually verify the data before trusting it.

  1. Log in at cloud.getdbt.com (credentials in 1Password under DBT dev account)
  2. Switch to your branch
  3. Find your model
  4. Click Preview
  5. Check the data looks correct
  6. Fix any errors with Cursor, re-preview until clean
  7. Click Build
  8. Verify the table exists in BigQuery

Self-certification: once Build succeeds and data looks correct, open the PR. No specialist sign-off needed.

Step 4 — Merge DBT PR to Main First

Production sync only sees tables on main. Merge the dbt PR before setting up the sync job or opening the API PR.

Step 5 — Set Up the Postgres Sync Job

Via Cursor:

Using the GCP Cloud Scheduler MCP, plan how to sync [table name] 
to Postgres. Find the existing sync job most related to 
[your table's domain] and show me what changes are needed to 
add [table name] to its table_names parameter. 
Show me the plan before making any changes.

Review the plan, then tell Cursor to execute it. After execution, check GCP observability logs to confirm the sync succeeded.

Manually:

  1. GCP Console
  2. Cloud Scheduler
  3. Find most relevant existing job
  4. Add your table to table_names
  5. Save
  6. Force Run
  7. Verify in Workflows
  8. Check table in DBeaver

Key parameters: dbt_command, schema_name (Postgres), dataset_name (BigQuery), table_names, threads (use 1 for a single model).
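The key parameters above can be sketched as a job body. This is a hypothetical shape — the field names come from the list above, but the overall payload structure is an assumption, so copy a real job's body as your template:

```typescript
// Hypothetical sync-job body — field names from the parameter list above;
// the payload shape is an assumption, so mirror an existing job before saving.
const syncJobBody = {
  dbt_command: "build",
  schema_name: "cred_entity",   // Postgres target schema
  dataset_name: "cred_entity",  // BigQuery source dataset
  table_names: ["person_brand_interaction_detail"], // append your table here
  threads: 1,                   // 1 for a single model
};
```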

DBeaver: Credentials in 1Password under model API dev. Database = cred_dev, schema = cred_entity.


How to Create a Commercial Table

Commercial tables live in Postgres directly — no BigQuery, no DBT.

Plan a new Postgres table in api-commercial for [purpose].

- Table name: [e.g. transcripts]
- Scope: per [user / workspace / tenant]
- Columns: [describe]
- Follow existing migration conventions
- Will also need: GraphQL type, resolver, and seed data for local testing

Show me the full plan — migrations, schema, resolver — before writing any code.

Review the plan, then tell Cursor to implement it.

Testing: Set up a dev replica using the internal guide at:
Review Apps Guide

Warning

Never point your local environment at the live dev database. A reset command will wipe it.


How to Create an API Endpoint

Model API (our data)

Prerequisite: DBT table must already be in Postgres dev.

Plan a new GraphQL endpoint in model-api for [purpose].

Data source: Postgres table [table_name] in cred_entity schema.
Credentials are in the .env file.

Parameters: [e.g. company_id, person_id]
Response fields: [e.g. post_url, interaction_type, date, count]

Follow existing resolver and schema patterns. 
This is CRED data — model-api, not api-commercial.
Show me the plan — schema, resolver, types — before writing any code.

Review the plan, then tell Cursor to implement it.

Testing locally:

  1. Check out branch
  2. Confirm .env populated
  3. Ask Cursor to start server
  4. Open localhost:[port]/graphql
  5. Add Authorization: Bearer [API_TOKEN] header
  6. Run query

If slow (>2 seconds): ask Cursor whether to add indexes or pre-calculate aggregations.

Commercial API (customer data)

Plan a new GraphQL endpoint in api-commercial for [purpose].
[Describe data source, parameters, return fields.]
Follow existing resolver and schema patterns.
Show me the plan before writing any code.

Review the plan, then tell Cursor to implement it. Test via GraphQL playground on dev or against the dev replica.


How to Connect an API to the Front-End

Prerequisite: API must be deployed to dev. If not live yet, mock the data and wire it when it is.

Plan how to connect the GraphQL endpoint [name] to the front-end 
in web-commercial.

Endpoint is on [model-api / api-commercial] and returns [describe fields].

UI requirement: [describe what the user should see]

Use only existing Storybook components. If a component doesn't exist, 
flag it — do not create a one-off styled component.

Handle: loading state, empty state (users with no data), and error state.
Show me the plan — which components, hooks, queries — before writing any code.

Review the plan, then tell Cursor to implement it.

If a Storybook component is missing: flag to Design, mock the data, continue building logic, swap the component in when Design adds it.

Test locally: Data loads, all states render, pagination works if applicable, no console errors.
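The loading/empty/error handling above can be sketched as a pure helper that decides which state to render. Names (QueryResult, uiState) are illustrative, not from web-commercial:

```typescript
// Illustrative sketch — maps a GraphQL query result to the UI state to render.
type QueryResult<T> = { loading: boolean; error?: Error; data?: T[] };

function uiState<T>(r: QueryResult<T>): "loading" | "error" | "empty" | "data" {
  if (r.loading) return "loading";                    // skeleton / spinner
  if (r.error) return "error";                        // error state component
  if (!r.data || r.data.length === 0) return "empty"; // user has no data yet
  return "data";                                      // render the real UI
}
```

Keeping this decision in one place makes it easy to verify all four states render during local testing.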


PostHog — Feature Flags & Analytics

Cursor skill: The posthog-analytics skill in web-commercial has the exact code patterns for enums, capture calls, flag utilities, and gating. Cursor will follow it automatically when you reference PostHog work.

Full reference: PostHog Guide and docs/posthog.md in web-commercial.

Adding Analytics Events — Step by Step

Every feature needs analytics events wired before merge.

1. Create or extend the event enum

Location: libs/shared/src/constants/<feature>-analytics.ts

  • Event names: snake_case, prefixed by feature area (people_, companies_, deals_)
  • Enum keys: UPPER_SNAKE_CASE
  • Property keys: snake_case values in a separate enum
  • Always include page_path if the event is user-triggered
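Following the conventions above, a hypothetical enum pair might look like this — the feature (a "people" export) and every event name here are made up for illustration:

```typescript
// Hypothetical example of the naming rules above — these are not real events.
// Event names: snake_case, prefixed by feature area; keys: UPPER_SNAKE_CASE.
export enum PeopleAnalyticsEvent {
  PEOPLE_EXPORT_CLICKED = "people_export_clicked",
  PEOPLE_EXPORT_COMPLETED = "people_export_completed",
}

// Property keys live in a separate enum, with snake_case values.
export enum PeopleAnalyticsProperty {
  PAGE_PATH = "page_path",
  EXPORT_COUNT = "export_count",
}
```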

2. Add posthog.capture() calls

  • Fire at meaningful user actions only — not on every render
  • Always wrap in try/catch with logger.error (never console.log)
  • Use enum references for event names and property keys, not string literals
  • Include context properties: page_path, entity IDs, counts, action metadata
  • Tracking must be non-blocking — failures must not break the user flow
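A minimal sketch of a non-blocking capture call, with the capture function and logger injected so the example stays self-contained. The signatures here are assumptions — follow the posthog-analytics skill for the real pattern:

```typescript
// Sketch of a non-blocking capture — capture/logger signatures are assumptions;
// the posthog-analytics skill in web-commercial has the real pattern.
type Capture = (event: string, props: Record<string, unknown>) => void;
type Logger = { error: (msg: string, err: unknown) => void };

function track(capture: Capture, logger: Logger, event: string, props: Record<string, unknown>): void {
  try {
    capture(event, props); // use enum values for event/property names in real code
  } catch (err) {
    logger.error("analytics capture failed", err); // never break the user flow
  }
}
```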

3. Update docs/posthog.md in web-commercial

Add a table entry for each new event with: event name, enum key, properties, trigger, and file path.

4. Cursor prompt for planning

Plan the PostHog analytics events for [feature name] in web-commercial.

Key actions to track:
- [e.g. User clicks "Export to Salesforce"]
- [e.g. Export completes successfully]
- [e.g. Export fails with error]

Use the posthog-analytics skill and the existing event patterns in this codebase.
Show me the event names, properties, and where each fires before writing any code.

Review the plan, then tell Cursor to implement it.

Adding a Feature Flag — Step by Step

Every customer-facing feature ships behind a feature flag.

1. Add the flag to the enum

File: libs/shared/src/feature-flags/types.ts

  • Flag key format: kebab-case (e.g. new-salesforce-export-flow)
  • Add to the FeatureFlag enum only — never use the legacy files (constants/feature-flags.ts or constants/featureFlags.ts)
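For example — the key and value here are illustrative, not an existing flag:

```typescript
// Illustrative FeatureFlag enum entry — kebab-case string value, which must
// match the flag key you create in PostHog exactly (case-sensitive).
export enum FeatureFlag {
  NEW_SALESFORCE_EXPORT_FLOW = "new-salesforce-export-flow",
}
```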

2. Create the flag in PostHog dashboard

  1. Go to Feature Flags -> New feature flag
  2. Use the exact string value from the enum (e.g. new-salesforce-export-flow) — case-sensitive
  3. Add a clear name, description, and owner
  4. Set rollout to 0% initially
  5. Target by workspace/company properties for controlled rollout
  6. Replicate the flag across dev/staging/prod environments using PostHog's copy capability

3. Gate the UI

  • Use useFeatureFlag(FeatureFlag.MY_NEW_FEATURE) for component-level gating
  • Use withFeatureFlag(Page, FeatureFlag.MY_NEW_FEATURE) for full page gating (redirects to /home when off)
  • Handle the loading state (undefined) before assuming on/off — return null or a skeleton, not a flash of wrong content
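The three-way decision in the last bullet can be sketched as a pure function — the names here are illustrative, but the logic is the point: undefined means "still loading", never "off":

```typescript
// Illustrative: a flag evaluation is boolean once loaded, undefined while loading.
type FlagValue = boolean | undefined;

function gateState(flag: FlagValue): "loading" | "show" | "hide" {
  if (flag === undefined) return "loading"; // render null/skeleton — no flash of wrong content
  return flag ? "show" : "hide";
}
```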

4. Cursor prompt for planning

Plan how to gate [feature name] behind a PostHog feature flag 
in web-commercial.

Flag key: [e.g. new-salesforce-export-flow]
Default: off (existing behaviour until flag is enabled)

Use the posthog-analytics skill and the existing feature flag patterns in this codebase.
Show me which components need the flag and the fallback behaviour 
before writing any code.

Review the plan, then tell Cursor to implement it.

Testing Locally

  • All feature flags return true by default in development (NODE_ENV=development)
  • To test with a flag off locally, add an override in DEV_FLAG_OVERRIDES in libs/shared/src/feature-flags/client.ts
  • PostHog runs with debug: true in non-production — all capture calls and flag evaluations are logged to the browser console
  • Open the browser console, trigger your actions, and confirm each event fires with the correct name and properties
  • Test with the flag both on and off — the feature must be fully hidden when off and functional when on
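The override in the second bullet might look like this — the exact shape of DEV_FLAG_OVERRIDES is an assumption, so confirm against client.ts before relying on it:

```typescript
// Hypothetical shape of DEV_FLAG_OVERRIDES — confirm against
// libs/shared/src/feature-flags/client.ts before relying on it.
const DEV_FLAG_OVERRIDES: Record<string, boolean> = {
  "new-salesforce-export-flow": false, // force the off state in development
};
```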

Testing on Review Apps

  • Backend: Add the review-app label to your PR to trigger a BE review app (see Review Apps Guide)
  • Frontend: Trigger a Vercel preview by adding the deploy-preview PR label (runs immediately via workflow) or including [deploy] in your commit message (see Vercel Preview Deployments)
  • In PostHog, enable the flag for the review app environment targeting your test workspace
  • Walk through the full user journey — confirm the feature is gated correctly
  • Open the PostHog event stream, trigger each tracked action, and verify events appear with correct names and properties
  • Toggle the flag off in PostHog and confirm the feature disappears cleanly with no broken UI or console errors
  • Verify workspace targeting — only intended workspaces should see the feature

Rollout Lifecycle

Flag created (0% rollout)
    → Internal only (CRED workspaces)
    → Alpha (select customer workspaces)
    → Beta (broader rollout)
    → GA (100%)
    → Flag removed from code and PostHog

Cleanup: Once at GA and stable, remove the flag from code and delete it from PostHog. Dead flags are tech debt.


CI Loop — PRs, Automation & Merging

Open PRs Early

Open a PR as soon as you have the first meaningful commit. CI automation runs immediately — Rainforest QA, client checks, code quality. If anything fails, fix it with Cursor before handing off.

Checking CI Status with CircleCI MCP

Use the CircleCI MCP to check pipeline status and debug failures without leaving Cursor:

Using the CircleCI MCP, check the CI pipeline status for my 
latest commit on [branch name]. If any jobs failed, show me the 
build logs and test output so I can diagnose the issue.

Using Worktrees (For Comparison Only)

Use cloud agents for multi-branch work. Worktrees are a fallback for comparing branches side-by-side — not for parallel development.

git worktree add ../web-commercial-develop develop

Open each worktree in its own Cursor window on different ports to compare.

Merge Confidently — Medium to Large PRs

Don't break work into artificially small PRs. Medium to large PRs are preferred — AI quality is high enough, and CI automation resolves conflicts and comments. Always merge to develop behind feature flags rather than keeping long-lived branches.

Deployment Cadence

Merges to develop stop on Fridays. Monday to Wednesday is QA testing on a stable staging environment. Production deploys on Wednesday. Plan your merges accordingly — get your PR into develop before Friday if you want it in the next deploy.

Risk-Based Code Reviews

Not every PR needs the same level of review:

Low-risk changes (e.g. exposing existing data via an API, UI tweaks, copy changes): can be merged without manual review if all automated AI checks pass.

High-risk changes (e.g. database migrations, creating new tables, auth changes, payment logic): require a manual code review from a senior back-end engineer before merging.

If you're unsure whether your change is high-risk, it probably is. Ask.

Playwright Test Failures Are Yours to Fix

When a Playwright E2E test fails on your PR, you own it — not QA. Follow the Playwright Test Ownership guide to determine whether the test needs updating or your code has a bug. Do not merge with a failing test.

Resolve Everything with Cursor Before Merging

Ask Cursor to resolve all build errors, conflicts, and code review comments. Do not fix these manually.

Resolve all build errors, merge conflicts, and code review comments 
on this PR, then merge it.

Never touch conflict markers manually. Never manually address code review comments.

Merge and Handoff to PM

1. Tests pass + manual verification
    ↓
2. Merge
    ↓
3. Enable feature flag (internal only)
    ↓
4. PM checks and moves Feature Readiness ticket to MVP
    ↓
5. PM & leadership test on MVP
    ↓
6. Make edits in new PRs

You merge once your tests pass and you've verified it works. Enable the feature flag so the feature is live internally but hidden from customers. PM then reviews, moves the Feature Readiness ticket to MVP, and PM & leadership test and iterate with new PRs. QA post-MVP only for complex or high-risk features.

Feature Flagging Before Merge

Before merging any FE work where APIs are not yet hooked up, feature flag all non-working UI elements. When adding a feature flag, also add PostHog analytics events to every interactive element in the same PR — it's the same piece of work. Merge to develop, toggle the flag off in PostHog, reload develop, and verify the flagged elements are hidden. If you cannot feature flag a change (e.g., entire layout restructure), keep it as an open PR and message Tom first.

Separate PRs for Component Fixes

If you notice a UI component needs fixing while working on a feature, create a separate PR for the component fix. Do not bundle fixes with feature work. After merging the fix, tell Cursor to create a Linear ticket for the fix and mark it as completed.

Updating an Existing PR

To update an existing PR with new changes, just tell Cursor: "commit these changes." It will push to the same branch and update the PR. You do not need to create a new PR for every change.

Test Restyled Components

If you're restyling something that already works, you must test that it still works — click through the flow, confirm events fire, check edge cases. New UI with no APIs just needs a visual check.

Use Customer Data for UX Improvements

Use customer data (transcript analysis, closed/lost deal reasons, customer complaints) to inform proactive usability improvements. Chat with this data in Claude first to identify UX pain points, then implement in Cursor.


Definition of Done

  • Regression test passes (Rainforest QA green)
  • Playwright E2E tests pass — if any fail, follow the Playwright Test Ownership process to diagnose and fix
  • Browser MCP test written and passing
  • PostHog analytics events wired up and verified firing on dev
  • PostHog feature flag enabled — internal only, hidden from customers
  • Manual verification — full user journey tested, original problem solved
  • Bugs found during testing fixed before merge
  • Loading, empty, and error states work
  • No console errors
  • For BE: table in Postgres dev, API returns correct data
  • For DBT: PR merged to main, sync job succeeded
  • Merged and feature flag live for internal testing
  • PM notified to check and move Feature Readiness ticket to MVP

Troubleshooting

Problem — Fix

  • Can't start local environment — Check .env is fully populated. Paste error into Cursor.
  • .env missing or empty values — 1Password: search repo name, download. If that fails: ask Alex to share in Slack, copy it, delete the message immediately.
  • DBT Preview fails with credential error — Message Alex; one-time BigQuery credential linking needed.
  • GCP permissions error — Need Cloud Scheduler Job Runner; text Alex with your Google email.
  • Table not in Postgres after sync — Ask Cursor: "Check GCP observability logs for the most recent build-and-sync execution and tell me where it failed."
  • Cursor tool limit warning — Disable MCPs not in use. Expected behaviour.
  • Merge conflict — "Fix these merge conflicts." Never edit manually.
  • API query slow — Ask Cursor: "Should I add indexes or pre-calculate aggregations?"
  • PostHog events not firing — Check event name matches exactly. Verify PostHog API key in .env. Ask Cursor to debug.
  • Feature flag not working — Flag key in code must exactly match PostHog (case-sensitive). Check flag is enabled for the right environment.
  • Not seeing changes on localhost — Open an incognito tab; browser cache is almost always the issue. Use Google Chrome with code and localhost windows side by side for ~5x faster iteration.
  • Cursor uses the wrong component or invents one — Screenshot the correct component from Figma or Storybook and paste it into the Cursor chat as a visual reference. Use exact ShadCN component names.
  • Playwright E2E test failing on PR — Follow the Playwright Test Ownership guide. Use Cursor: "Diagnose this Playwright test failure: [paste error]"
  • Build errors, conflicts, or code review comments — Ask Cursor: "Resolve all build errors, merge conflicts, and code review comments on this PR." Do not fix manually.