Engineer SOP

Role: Full-Stack Product Engineer
Prerequisites: Complete the Meta SOP first
Visual reference: Whimsical Flowchart

You own a feature end-to-end. PM creates the high-level BE/DE tickets and assigns them to you. You refine them, build the back-end, wire the front-end, add PostHog, verify it works, and merge it. No hand-offs except to DE/DS for specialist data work.


Phase 0 — Planning

How Tickets Reach You

PM creates high-level BE and DE tickets for the work they can see is needed to reach minimum criteria. These are intentionally kept high-level — enough context for you to understand the goal, not a detailed spec. All ticket creation happens via the Linear MCP in Cursor.

PM assigns tickets directly to you. There are no sprints — you have a personal backlog. Pick up the next highest-priority item and work it to completion before starting the next. Epic-level priorities per dev live in the Feature Readiness Dashboard.

Refining Your Tickets

You are handed an FE PR alongside the high-level tickets. Your first job is to use and test the front-end to fully understand what's needed:

  1. Pull the FE branch and run it locally
  2. Click through every screen, button, and flow — treat it like a user
  3. Identify what's missing: API calls returning nothing, buttons that don't work, empty states with no data, features PM may have missed
  4. Compare what you found against the PM tickets — do they cover everything needed to make this functional?
  5. If gaps exist, create additional tickets via the Linear MCP:
Using the Linear MCP, create sub-tickets under [parent issue ID] 
for the following gaps I found while testing the FE PR: [list].
  6. Update each ticket with your refined implementation plan so PM has visibility:
Using the Linear MCP, update [issue ID] with the refined 
implementation plan: [your breakdown].

Writing a Test

Before building, write a test for the feature using the browser MCP. This validates the FE PR behaves as expected and gives you a regression check as you wire up the BE:

Using the browser MCP, navigate to [localhost URL] and test 
the [feature name] flow:

1. [Step through the user journey]
2. Verify [expected UI state / data / behaviour]
3. Report what works, what's broken, and what's missing data

Run this test again after wiring up the BE to confirm the feature works end-to-end.

Use the Top-Performing Model

Always configure Cursor to use the highest-ranked model for web development as measured by the Code Arena Leaderboard. At the time of writing, this is Claude Opus 4.6. Check the leaderboard periodically — using a weaker model when a better one is available means slower work and worse output.

Cursor Plan Mode

With all five repos in your workspace, open Cursor Plan Mode:

Using the cred-wiki as architecture context, create an implementation 
plan for: [paste feature/outcome]

Cover: FE components (Storybook), API endpoints (which repo + why), 
new tables required, DE/DS flags, PostHog analytics events needed, 
build/merge sequence, multi-repo dependencies.

Phase 1 — Building

The Sequence

1. Pick up ticket from backlog
    ↓
2. Refine the ticket
    ↓
3. Update Linear with your plan
    ↓
4. Pull FE PR
    ↓
5. Test locally
    ↓
6. Create gap tickets if needed
    ↓
7. Write browser MCP test for the feature
    ↓
8. Open PR early (when started, not when done)
    ↓
9. Build BE + wire FE
    ↓
10. CI automation handles conflicts and code review
    ↓
11. Add PostHog analytics events (every feature)
    ↓
12. Enable PostHog feature flag (hidden from customers)
    ↓
13. Tests pass + manual verification — does it solve the original problem?
    ↓
14. Fix any bugs found with Cursor
    ↓
15. Merge
    ↓
16. Enable feature flag (internal only, customers can't see it)
    ↓
17. PM checks and moves Feature Readiness ticket to MVP
    ↓
18. PM & leadership test on MVP, make edits in new PRs
    ↓
19. Flag QA if complex or risky (post-MVP)

Multi-Repo Merge Order

  1. cred-dbt — table must exist in BigQuery first
  2. cred-model-api — must be live in dev before commercial can consume it
  3. cred-api-commercial — if passing through to commercial GraphQL
  4. cred-web-commercial — always last

How to Create a DBT Table

Use this when you need a new table for enrichment data (brand interactions, scoring, people/company data).

Step 1 — Get Source Schemas

Using the BigQuery MCP, find the full schema for [table name] 
in the [dataset] dataset and return it as JSON.

Or manually:

  1. GCP Console
  2. BigQuery
  3. Select table
  4. Schema
  5. Select all
  6. Copy JSON

Step 2 — Plan the Model

Plan a new DBT table in the dbt repo. Source schemas: [paste JSON].

- Table name: [e.g. person_brand_interaction_detail]
- Location: models/cred_entity/ (final table, not intermediate)
- One row per [e.g. brand interaction between a person and a company]
- Columns: [describe or ask Cursor to recommend from schemas]
- Join logic: [describe]
- Only include people already matched in our system

Follow existing naming and folder conventions in the dbt repo.
Show me the plan before writing any code.

Review the plan, then tell Cursor to implement it.

Step 3 — Test in DBT Cloud

Warning

Do this in the DBT Cloud UI directly — not via the DBT MCP. You need to visually verify the data before trusting it.

  1. Log in at cloud.getdbt.com (credentials in 1Password under DBT dev account)
  2. Switch to your branch
  3. Find your model
  4. Click Preview
  5. Check the data looks correct
  6. Fix any errors with Cursor, re-preview until clean
  7. Click Build
  8. Verify the table exists in BigQuery

Self-certification: once Build succeeds and data looks correct, open the PR. No specialist sign-off needed.

Step 4 — Merge DBT PR to Main First

Production sync only sees tables on main. Merge dbt before setting up the sync job or the API PR.

Step 5 — Set Up the Postgres Sync Job

Via Cursor:

Using the GCP Cloud Scheduler MCP, plan how to sync [table name] 
to Postgres. Find the existing sync job most related to 
[your table's domain] and show me what changes are needed to 
add [table name] to its table_names parameter. 
Show me the plan before making any changes.

Review the plan, then tell Cursor to execute it. After execution, check GCP observability logs to confirm the sync succeeded.

Manually:

  1. GCP Console
  2. Cloud Scheduler
  3. Find most relevant existing job
  4. Add your table to table_names
  5. Save
  6. Force Run
  7. Verify in Workflows
  8. Check table in DBeaver

Key parameters: dbt_command, schema_name (Postgres), dataset_name (BigQuery), table_names, threads (use 1 for a single model).
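To make those parameters concrete, here is a hypothetical shape for one job's settings. Only `schema_name`, `threads`, and the example table name come from this guide; the other values are placeholder assumptions — copy the real ones from the existing job you extend.

```typescript
// Hypothetical sync job parameters. dbt_command and dataset_name are
// assumptions — take them from the existing job you are extending.
const syncJobParams = {
  dbt_command: "build",                  // assumption: the dbt command the job runs
  schema_name: "cred_entity",            // target Postgres schema (from this guide)
  dataset_name: "cred_entity",           // source BigQuery dataset (assumption)
  table_names: "existing_table,person_brand_interaction_detail", // append yours
  threads: 1,                            // single model: use 1
};
```

The only change you normally make is appending your table to `table_names`; everything else stays as the existing job had it.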

DBeaver: Credentials in 1Password under model API dev. Database = cred_dev, schema = cred_entity.


How to Create a Commercial Table

Commercial tables live in Postgres directly — no BigQuery, no DBT.

Plan a new Postgres table in api-commercial for [purpose].

- Table name: [e.g. transcripts]
- Scope: per [user / workspace / tenant]
- Columns: [describe]
- Follow existing migration conventions
- Will also need: GraphQL type, resolver, and seed data for local testing

Show me the full plan — migrations, schema, resolver — before writing any code.

Review the plan, then tell Cursor to implement it.

Testing: Set up a dev replica using the internal guide at:
Review Apps Guide

Warning

Never point your local environment at the live dev database. A reset command will wipe it.


How to Create an API Endpoint

Model API (our data)

Prerequisite: DBT table must already be in Postgres dev.

Plan a new GraphQL endpoint in model-api for [purpose].

Data source: Postgres table [table_name] in cred_entity schema.
Credentials are in the .env file.

Parameters: [e.g. company_id, person_id]
Response fields: [e.g. post_url, interaction_type, date, count]

Follow existing resolver and schema patterns. 
This is CRED data — model-api, not api-commercial.
Show me the plan — schema, resolver, types — before writing any code.

Review the plan, then tell Cursor to implement it.

Testing locally:

  1. Check out branch
  2. Confirm .env populated
  3. Ask Cursor to start server
  4. Open localhost:[port]/graphql
  5. Add Authorization: Bearer [API_TOKEN] header
  6. Run query
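The query you run in step 6 might look like the sketch below. The operation, parameter, and response field names are illustrative only — match them to the schema you actually built.

```typescript
// Hypothetical query for a brand-interactions endpoint; names are examples.
const query = `
  query BrandInteractions($companyId: ID!, $personId: ID!) {
    brandInteractions(companyId: $companyId, personId: $personId) {
      postUrl
      interactionType
      date
      count
    }
  }`;

// This is the JSON body to POST to localhost:[port]/graphql, sent with the
// Authorization: Bearer [API_TOKEN] header from step 5.
const body = JSON.stringify({
  query,
  variables: { companyId: "company_123", personId: "person_456" },
});
```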

If slow (>2 seconds): ask Cursor whether to add indexes or pre-calculate aggregations.

Commercial API (customer data)

Plan a new GraphQL endpoint in api-commercial for [purpose].
[Describe data source, parameters, return fields.]
Follow existing resolver and schema patterns.
Show me the plan before writing any code.

Review the plan, then tell Cursor to implement it. Test via GraphQL playground on dev or against the dev replica.


How to Connect an API to the Front-End

Prerequisite: API must be deployed to dev. If not live yet, mock the data and wire it when it is.

Plan how to connect the GraphQL endpoint [name] to the front-end 
in web-commercial.

Endpoint is on [model-api / api-commercial] and returns [describe fields].

UI requirement: [describe what the user should see]

Use only existing Storybook components. If a component doesn't exist, 
flag it — do not create a one-off styled component.

Handle: loading state, empty state (users with no data), and error state.
Show me the plan — which components, hooks, queries — before writing any code.

Review the plan, then tell Cursor to implement it.

If a Storybook component is missing: flag to Design, mock the data, continue building logic, swap the component in when Design adds it.

Test locally: Data loads, all states render, pagination works if applicable, no console errors.
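One way to keep the three required states explicit is a discriminated union over the query's status. This is an illustration of the idea, not the actual web-commercial hook pattern — use the hooks and Storybook components the codebase already has.

```typescript
// Sketch: every wired query must handle loading, error, and empty states.
type QueryState<T> =
  | { status: "loading" }
  | { status: "error"; message: string }
  | { status: "ready"; data: T[] };

function render<T>(
  state: QueryState<T>,
  renderRow: (row: T) => string,
): string {
  switch (state.status) {
    case "loading":
      return "Loading..."; // loading state
    case "error":
      return `Something went wrong: ${state.message}`; // error state
    case "ready":
      return state.data.length === 0
        ? "No data yet" // empty state: users with no data
        : state.data.map(renderRow).join("\n");
  }
}
```

Because the union is exhaustive, forgetting a state is a compile error rather than a blank screen in production.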


PostHog — Feature Flags & Analytics

Analytics Events — Every Feature, Before Merge

Plan the PostHog analytics events for [feature name] in web-commercial.

Key actions to track:
- [e.g. User clicks "Export to Salesforce"]
- [e.g. Export completes successfully]
- [e.g. Export fails with error]

Use the existing PostHog event pattern in this codebase and follow 
the existing event naming convention.
Show me the event names, properties, and where each fires before writing any code.

Review the plan, then tell Cursor to implement it.

Verify in PostHog: deploy to dev, trigger the actions manually, confirm events appear in the PostHog event stream before merging.
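Since the most common failure mode is an event-name mismatch (see Troubleshooting), it can help to centralise names in one typed map. This is a sketch with hypothetical event names; `capture` stands in for whatever the codebase's existing PostHog pattern exposes.

```typescript
// Hypothetical event names for the Salesforce export example above.
const EVENTS = {
  exportClicked: "export_to_salesforce_clicked",
  exportCompleted: "export_to_salesforce_completed",
  exportFailed: "export_to_salesforce_failed",
} as const;

type EventName = (typeof EVENTS)[keyof typeof EVENTS];

// Thin wrapper so call sites can only pass names that exist in EVENTS.
function track(
  capture: (name: string, props?: Record<string, unknown>) => void,
  name: EventName,
  props?: Record<string, unknown>,
): void {
  capture(name, props);
}
```

A typo like `export_to_salesforce_complete` then fails at compile time instead of silently never appearing in the event stream.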

Feature Flags — Risky or Customer-Facing Features Only

Plan how to gate [feature name] behind a PostHog feature flag 
in web-commercial.

Flag key: [e.g. new-salesforce-export-flow]
Default: off (existing behaviour until flag is enabled)

Use the existing PostHog feature flag pattern in this codebase.
Show me which components need the flag and the fallback behaviour 
before writing any code.

Review the plan, then tell Cursor to implement it.
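The gate itself is usually just a guarded branch with the old behaviour as the fallback. A minimal sketch, where `isFeatureEnabled` stands in for the codebase's PostHog flag check and the flag key is the example from the prompt above:

```typescript
const FLAG_KEY = "new-salesforce-export-flow"; // example key from above

// `isFeatureEnabled` stands in for the PostHog client's flag check.
function exportFlow(isFeatureEnabled: (key: string) => boolean): string {
  if (isFeatureEnabled(FLAG_KEY)) {
    return "new export flow"; // flag on: new behaviour
  }
  return "existing export flow"; // default off: existing behaviour unchanged
}
```

Keeping the fallback path identical to today's behaviour is what makes it safe to merge incomplete work behind the flag.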

Create the flag in PostHog:

  1. Feature Flags
  2. New Feature Flag
  3. Set rollout to 0% initially
  4. Enable per environment as feature progresses through Alpha, Beta, then GA

Cleanup: Once at GA and stable, remove the flag from code and delete it from PostHog. Dead flags are tech debt.


CI Loop — PRs, Automation & Merging

Open PRs Early

Open a PR as soon as you have the first meaningful commit. CI automation runs immediately — Rainforest QA, use client checks, code quality. If anything fails, fix it with Cursor before handing off.

Checking CI Status with CircleCI MCP

Use the CircleCI MCP to check pipeline status and debug failures without leaving Cursor:

Using the CircleCI MCP, check the CI pipeline status for my 
latest commit on [branch name]. If any jobs failed, show me the 
build logs and test output so I can diagnose the issue.

Using Worktrees for Parallel Development

Use git worktrees to run localhost on multiple branches at the same time — compare your feature branch against develop, test a PR while building another, or keep a stable branch running while you experiment.

# Create a worktree for a second branch
git worktree add ../web-commercial-develop develop

# Run localhost on different ports in each
# Worktree 1 (feature branch): localhost:3000
# Worktree 2 (develop):        localhost:3001

# List active worktrees, and remove one when you're done with it
git worktree list
git worktree remove ../web-commercial-develop
Open each worktree in its own Cursor window. This is especially useful when wiring up the FE — you can see the working version and your in-progress version side by side.

Merge Often, Merge Small

Break work into the smallest mergeable unit and ship it behind a feature flag. Large, long-lived branches cause painful conflicts. Merge frequently — every meaningful chunk of work should be a separate PR, protected by a flag so incomplete work never reaches customers.

Deployment Cadence

Merges to develop stop on Fridays. Monday to Wednesday is QA testing on a stable staging environment. Production deploys on Wednesday. Plan your merges accordingly — get your PR into develop before Friday if you want it in the next deploy.

Risk-Based Code Reviews

Not every PR needs the same level of review:

Low-risk changes (e.g. exposing existing data via an API, UI tweaks, copy changes): can be merged without manual review if all automated AI checks pass.

High-risk changes (e.g. database migrations, creating new tables, auth changes, payment logic): require a manual code review from a senior back-end engineer before merging.

If you're unsure whether your change is high-risk, it probably is. Ask.

Conflicts

Fix these merge conflicts.

Never touch conflict markers manually.

Merging

Resolve any conflicts, fix any code comments, and merge this PR.

Merge and Handoff to PM

1. Tests pass + manual verification
    ↓
2. Merge
    ↓
3. Enable feature flag (internal only)
    ↓
4. PM checks and moves Feature Readiness ticket to MVP
    ↓
5. PM & leadership test on MVP
    ↓
6. Make edits in new PRs

You merge once your tests pass and you've verified it works. Enable the feature flag so the feature is live internally but hidden from customers. PM then reviews, moves the Feature Readiness ticket to MVP, and PM & leadership test and iterate with new PRs. QA post-MVP only for complex or high-risk features.


Definition of Done

  • Regression test passes (Rainforest QA green)
  • Browser MCP test written and passing
  • PostHog analytics events wired up and verified firing on dev
  • PostHog feature flag enabled — internal only, hidden from customers
  • Manual verification — full user journey tested, original problem solved
  • Bugs found during testing fixed before merge
  • Loading, empty, and error states work
  • No console errors
  • For BE: table in Postgres dev, API returns correct data
  • For DBT: PR merged to main, sync job succeeded
  • Merged and feature flag live for internal testing
  • PM notified to check and move Feature Readiness ticket to MVP

Troubleshooting

| Problem | Fix |
| --- | --- |
| Can't start local environment | Check .env is fully populated. Paste error into Cursor. |
| .env missing or empty values | 1Password: search repo name, download. If that fails: ask Alex to share in Slack, copy it, delete the message immediately. |
| DBT Preview fails with credential error | Message Alex — one-time BigQuery credential linking needed. |
| GCP permissions error | Need Cloud Scheduler Job Runner — text Alex with your Google email. |
| Table not in Postgres after sync | Ask Cursor: "Check GCP observability logs for the most recent build-and-sync execution and tell me where it failed." |
| Cursor tool limit warning | Disable MCPs not in use. Expected behaviour. |
| Merge conflict | "Fix these merge conflicts." Never edit manually. |
| API query slow | Ask Cursor: "Should I add indexes or pre-calculate aggregations?" |
| PostHog events not firing | Check event name matches exactly. Verify PostHog API key in .env. Ask Cursor to debug. |
| Feature flag not working | Flag key in code must exactly match PostHog (case-sensitive). Check flag is enabled for the right environment. |