I tried Cursor on a sticky Sydney night after a client demo went sideways. I had to fix a finicky React form before the next morning. Copilot's inline suggestions kept missing the validation logic because it stayed in one file. Cursor's Composer pulled the schema, the form, and the hooks into one plan, swapped to Claude Sonnet, and proposed the right cross-file edits in two prompts. I closed the laptop before midnight, and that was the first moment I believed the hype.

The money story (and why it matters)

In November 2025, Anysphere raised about $2.3 billion at roughly a $29 billion valuation for Cursor. That is serious money for a code editor. The investor list reads like a tech index fund: Accel, Thrive, Andreessen Horowitz, Google, Nvidia. When rivals pile in together, you can feel the market signal: this thing is no toy.

What I see in real use

On my own projects, Cursor trims about 30% off routine work compared to Copilot. I tested it on a Django refactor and a TypeScript API rewrite over two separate weeks. Cursor stayed in flow across files instead of losing context. Copilot still shines on quick inline fixes, but it stumbles on multi-file edits where Cursor holds the thread.

(I learned this the hard way: Copilot once rewired a routing table in a way that quietly broke staging. Cursor has been calmer under pressure.)

The background agent is the sleeper feature. Let it lint, write a draft docstring, and nudge a unit test while you keep coding. It isn't magic, but it keeps me from bouncing between terminals.

Where Cursor still stumbles

It can over-edit if you give it a vague prompt. I once asked for "clean up the API errors" on an ASP.NET Core service and it rewrote the error envelope contract without asking. It also hesitates on deeply nested mono-repos unless you anchor it with file paths. The models rotate too; Sonnet and GPT-5 behave differently on the same repo, so you need to pick deliberately.

When Copilot still wins

  • Tiny inline fixes: Copilot feels lighter when you just need a one-line regex or a quick import.
  • Extremely strict style guides: Copilot respects local patterns more often without a long explanation.
  • Offline or flaky internet: Copilot caches enough to help; Cursor is less forgiving here.

Model choice is the secret weapon

Cursor is a VS Code fork that lets you swap models mid-stream. I default to Claude Sonnet for refactors, flick to GPT-5-low-fast for noisy logs, and keep Gemini handy for long architecture notes. When one model stalls, I switch without breaking stride. That flexibility matters when you are working odd Australian hours and an API endpoint throttles.

Personally, I live in the terminal with Claude Code and Codex. Claude Code gives me the agent-y bits you do not get in an IDE: command-centric prompts, sub-agents, and background bash terminals it kicks off and keeps an eye on. I stuck with Cursor for a while because the company was paying and politics favoured a single tool. Where I have landed now is heavy use of Claude Code and Opus 4.5 for the hardest problems. Opus 4.5 is too expensive to burn inside Cursor's quotas, so I keep Cursor for cross-file edits and Composer plans and lean on Opus directly when I need the bigger brain.

Vibe coding, minus the hangover

Vibe coding, where you describe intent and let the AI draft, can feel glorious until you inherit a mess you do not fully understand. I have cleaned up enough AI-generated code to be wary. Cursor makes vibe coding safer because you can inspect and edit across files in one go. You still need to read the diffs. (Please read the diffs.)

What Australian teams are actually doing

Two Sydney fintech clients have quietly rolled Cursor out to small squads. Their early patterns:

  • Solo devs pay for Pro out of pocket to get unstuck on migrations.
  • Teams pair Cursor with Swarmia to track pull request velocity and error rates.
  • Security leads are asking about data residency before wider rollout.

None of this is a top-down mandate yet; it's bottom-up curiosity with guardrails.

I have also seen a Melbourne health tech team run a short bake-off. They kept Copilot for front-end tweaks and used Cursor for a messy consent-form refactor across web and mobile. The deciding factor was how easily Cursor let them hop between TypeScript and Kotlin in one thread without losing context. A Brisbane SaaS lead told me they were more interested in how few context switches Cursor caused than in raw speed. Fair call: tired brains make silly bugs.

Since drafting this, I tried Google's new Antigravity for a week. It has some clever ideas, like the agent manager, the direct browser control, and even spinning up Nano Banana Pro images inside the IDE, but the performance was rough and the bugs were constant. More on that once Google hardens it; right now it is not ready for production.

Pricing, briefly

There is a free tier with the smaller models and a Pro tier around $20 per month with the good stuff (Claude Sonnet, GPT-4.1, GPT-5 variants). Enterprise pricing is negotiable. If you have 15 developers, Pro runs roughly $3,600 a year. That is one mid-range laptop against even a conservative 25% lift in throughput. Do the maths for your team.
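The back-of-the-envelope version is worth writing down once. The seat price and the 25% lift are the figures above; the loaded salary is a placeholder assumption, so swap in your own number:

```python
# Rough ROI sketch for a Cursor Pro rollout.
# Seat price and throughput lift are the figures from this article;
# the loaded annual salary is a placeholder assumption.
SEAT_PER_MONTH = 20        # USD, Pro tier
DEVS = 15
LIFT = 0.25                # conservative throughput lift
LOADED_SALARY = 140_000    # ASSUMED fully loaded cost per dev, per year

annual_cost = SEAT_PER_MONTH * 12 * DEVS
annual_value = LOADED_SALARY * DEVS * LIFT

print(f"Annual seat cost: ${annual_cost:,}")       # $3,600
print(f"Value of the lift: ${annual_value:,.0f}")  # $525,000
```

Even if the real lift is a third of that, the seats pay for themselves. The point is not the exact figure; it is that the seat cost is noise next to salary.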

If you need more headroom, Pro Plus and Ultra tiers bundle larger model quotas. I have burned through Pro in a few days on migration weeks because the quota is monthly, not per day. Budget for a few overage days in your first month; it's cheaper than pausing a migration mid-flight.

How to trial it without blowing things up

  1. Pick one gnarly task: a cross-service refactor, not a typo fix.
  2. Run a two-week pilot with Cursor Pro alongside Copilot. Track: time to complete, error rate, review comments, and your own frustration levels.
  3. Rotate models mid-task. See which one keeps context best for your codebase.
  4. Keep a human in the loop on merges. AI confidence does not equal correctness.

Metrics I track in pilots

  • Time from first prompt to passing tests.
  • Review churn: how many back-and-forth comments per pull request.
  • Rollbacks or hotfixes inside 48 hours of merge.
  • Model switches per task: if you are swapping constantly, tune prompts or pick a default model.
  • Developer sentiment: run a short retro at the end of week two (what slowed you down, what helped).
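The simplest way to keep those metrics honest is a tiny per-task log you fill in as you go. This is a sketch of the shape I mean; the field names are mine, not from any tool, and the sample tasks are made up:

```python
from dataclasses import dataclass

@dataclass
class PilotTask:
    """One task in the two-week pilot; fields mirror the metrics above."""
    name: str
    minutes_to_green: int    # first prompt -> passing tests
    review_comments: int     # back-and-forth comments on the PR
    hotfix_within_48h: bool  # rollback or hotfix inside 48 hours of merge
    model_switches: int      # mid-task model swaps

def summarise(tasks: list[PilotTask]) -> dict:
    """Averages per metric across the pilot, plus the hotfix rate."""
    n = len(tasks)
    return {
        "avg_minutes_to_green": sum(t.minutes_to_green for t in tasks) / n,
        "avg_review_comments": sum(t.review_comments for t in tasks) / n,
        "hotfix_rate": sum(t.hotfix_within_48h for t in tasks) / n,
        "avg_model_switches": sum(t.model_switches for t in tasks) / n,
    }

# Hypothetical entries from a pilot week.
tasks = [
    PilotTask("consent-form refactor", 95, 6, False, 2),
    PilotTask("routing cleanup", 40, 2, True, 0),
]
print(summarise(tasks))
```

Ten rows of this at the end of week two beats any vendor chart, because it is your codebase and your reviewers.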

My cautions

  • Do not skip threat modelling. AI can suggest insecure patterns; I have seen it propose logging secrets, skipping CSRF, and downgrading TLS. You will own the fallout, not the model.
  • Watch vendor lock-in. Model flexibility is great until a favourite model gets rate limited; keep options open.
  • Avoid exact-speed bravado without your own numbers. Marketing charts are not your incident post-mortems.

Where I've landed

Cursor isn't replacing engineers. It's clearing the boring undergrowth so you can think. When I am on a deadline, I now reach for Cursor first and keep Copilot for quick inline fixes. If I am wrong about this, I will cop the ribbing from the team, but the time I have saved in the last month is too obvious to ignore. If Cursor vanished tomorrow, my delivery cadence would take a hit and I'd be scrambling to rewire my workflow.
