
150 Hours, 4 Apps: What I Learned About Fast & Scalable Vibe-Coding

  • Thibaut Bardout
  • Sep 30
  • 20 min read

Last updated: Oct 7







Who This Guide Is For


Compared to building everything manually, juggling design, frontend/backend coding, and QA, this approach can be 10x faster and cheaper, but you’ll need guardrails. Lovable handles rapid MVPs brilliantly, but complex workflows still need product discipline.

I wrote this guide for anyone who wants to build real apps, no coding or PM experience required.


Maybe you’ve got an idea stuck in Notion, a client project that needs a mini-SaaS, or you just want to see if you can actually build something that works.

I wrote this after 150 hours building 4 apps, so you can skip the mistakes and ship faster.

You’ll:

  • learn how to structure your project like a real product team 

  • learn how to make your app feel premium with small design & UX tricks that actually matter

  • avoid the “prototype graveyard” with the right mindset to ship and iterate fast

  • use ready-to-go AI prompts you can plug directly into your own app-building journey

  • use clear templates to map your ideas, data, and prompt flow




Note: As of Sept 29, Lovable released lovable-cloud (beta), which integrates backend functionality (database, authentication, edge functions). This means Supabase is no longer the underlying default. However, the associated tips and best practices (scoping, ERD first, early RLS, page-by-page builds, and tight testing) remain valid and useful as durable principles.

Since Lovable ships fast, details may evolve. If something looks off, treat it as a version mismatch, not bad practice.



Key takeaways


Bottom line: Lovable is an incredible accelerator to see value fast. Turning that momentum into a reliable B2B app still requires product discipline: clear specs, an ERD-first approach, RLS from day one, page-by-page delivery, and tight testing.


  • Start like a product team. Write a 1-page brief, map your data model, define roles/permissions, prioritize ideas, and document changes as you go.

  • Build in small increments. Ship page by page, feature by feature, with precise prompts and acceptance tests for each. Lock the schema; if it must change, do a deliberate schema patch.

  • Own your backend (in Supabase if your app still uses it, or within Lovable after their lovable-cloud release): design the schema, version migrations/seed data, and write policies in plain English.


The more business-specific your app, the more these tips matter.


Complex business apps, like ones with approvals, different user roles, or custom reports, work much better when you take a few minutes to plan them first. Write a short project brief, sketch how the data connects, and test small parts before building the whole thing.

If you’re creating something common (like a login system or dashboard), you can move faster by reusing existing examples and keeping the flow simple.















What is Lovable (and Supabase)


Lovable is an AI-powered platform that lets you build web apps from natural-language descriptions. This “vibe coding” approach means you describe what you want in plain English, and the AI generates functional code, frontend design, backend endpoints, and suggested data models.


Unlike classic no-code tools, Lovable focuses on production-grade code, not just mockups. After extensive testing, it’s clear this is the future of rapid app development: you (can) get something usable on day one, and you keep the flexibility to evolve it.


The more advanced your app becomes, the more you’ll want to understand (and shape) your backend schema, policies, and functions. In practice, mastering projects with Lovable requires a deep understanding of the backend: in my experience that meant Supabase, but the same principles should apply to Lovable Cloud.




My REX (lessons learned)


Where Lovable shines: frontend generation is stunning. The “Aha moment!” arrives incredibly fast. Time-to-perceived-value is very short, which is perfect for demos, stakeholder buy-in, and early product discovery.


But… when you pile on features, specific processes, or business logic without a proper product approach, it can stumble: credit-burn loops for bug fixing, data-layer fragility, wrong permissions, and mounting consistency debt.


Recent improvements: the move to Claude 4 Sonnet noticeably reduced errors and sped up prompt execution, making day-to-day generation smoother, but it’s not magic; tests and guardrails still matter.

| Lovable Pros | Lovable Cons |
| --- | --- |
| ✅ Beautiful, consistent UI generation | ❌ Backend is Supabase (not lovable-cloud) |
| ✅ Frequent updates | ❌ Credit burn in bug-fix loops |
| ✅ Strong GitHub integration | ❌ Broad prompts → hallucinations; need stepwise specs & acceptance tests |
| ✅ Active community | ❌ Error handling improving, not magic |
| ✅ 10~20× faster than hand-coding for simple apps | ❌ Can struggle with complex business logic unless you codify invariants |
| ✅ Comprehensive documentation | ❌ Support is light |
|  | ❌ Code export drift if repo hygiene is weak |



When to use it vs. when not to (Decision Matrix)


Not every app can be a Lovable app. Use this grid to quickly decide whether to go full throttle, or hold back.


| Scope / Complexity | Team Skills | Compliance / Risk | Time-to-Market | Verdict |
| --- | --- | --- | --- | --- |
| Small MVP / CRUD SaaS / prototypes | Solo PM / small dev team | Low compliance / no sensitive data | Need demo in days | ✅ Green-light |
| Multi-tenant apps / marketing sites / internal tools | Basic dev + product knowledge | Moderate compliance / internal policies | Launch in 1–2 weeks | ⚠️ Amber – proceed with caution |
| Complex workflows / heavy analytics / multi-database orchestration | Strong dev + data ownership + product knowledge | High compliance / regulated domains | Tight deadlines | ⚠️ Amber – only with strict guardrails |
| FinTech, healthcare or other data/risk-sensitive industries | Weak product/DB ownership | Strict compliance / sensitive data | Any | ❌ Not recommended |


Key takeaways:

  • ✅ Green-light: Lovable shines when speed matters more than complexity. Perfect for early pre-product-market-fit testing, prototypes, and simple SaaS apps.

  • ⚠️ Amber: Can work if you add discipline, break down workflows, define migrations, assign explicit ownership, and guard budgets. See all tips given below for best practices.

  • ❌ Not recommended: Skip Lovable when compliance or sensitive data dominates. Manual engineering + expert review is safer.


Reality check: The vibe-coding hype is real, but market traffic and adoption swings can be brutal. AI builders are powerful accelerants, not miracle workers.




Best practices & pro tips

ℹ️ These are recommendations, not rules. They’ll save time/credits in most cases, but you can still ship without following every one.

The more context-specific the app, the more guidance the AI needs. B2B workflows (permissions, bespoke data models, compliance) benefit a lot from briefs, clear policies, database architecture and stepwise prompts.


Apps with well-known patterns are easier to generate. For B2C or “clear model” apps (e.g., “a Slack-like communication tool”), the AI can lean on abundant public patterns and examples, so scaffolding often comes out cleaner/faster. That said, you still need product discipline (auth, RLS, tests, consistent UI).

Note: This guide doesn’t dive into advanced data security, code migration, or integrating edge/automation services (e.g., n8n, OpenAI, Stripe).


List of tips:



Start with a project Brief (before you prompt)


Symptom → You write a short “make me X” prompt, Lovable generates a gorgeous UI… then the build stalls: fields don’t map to real entities, permissions are fuzzy, and every tweak burns more credits and time (credit burn on bug fixes is a big issue in Lovable).

Example: You ask for “a Slack-like app,” get channels and chat UI, but later realize you forgot workspaces, roles, and thread permissions; now you’re refactoring the database and many screens.


Why → Generative coding is fast but variance is high when specs are vague. Even with model upgrades, you still need clear acceptance criteria and tests. Each regeneration costs credits; big reworks create credit burn and regressions.


How to prevent → Start by writing a Project Brief (1 page is fine!) before the first prompt. Keep it concrete and testable:

For example, for a Slack-like app, it could look like:

  1. Users & Jobs

    • Primary users: Employees, Admins, Guests.

    • Top jobs: create/join channel, post message, search, mention, upload file, set notifications.

  2. Problem & Outcomes: Problem: async team comms are fragmented across email/docs; Outcomes: reduce internal email by 60%, median response time < 5 min in work hours, searchable history ≥ 12 months.

  3. Core Entities

    • A Workspace has many Channels (1-N)

    • Each Channel has many Messages (1-N)

    • Each Message can have Replies (threads) (1-N)

    • Users can join many Channels, with a role (owner, admin, member, guest) (N-M)

    • Files and Reactions are linked to messages or replies

  4. Permissions

    • A user reads messages only in channels they’re a member of.

    • Guests limited to guest channels in a single workspace.

    • Only owners/admins can archive channels; message edit/delete within 15 minutes by author.

  5. MVP Scope (V1)

    • Must-have: auth, create/join channel, send/edit message, file upload, mentions, unread counters.

    • Nice-to-have: threads, reactions, search.

    • Out of scope: video calls, bots, SSO (phase 2).

  6. Acceptance Tests (examples)

    • Given a member of channel #sales, when they post a message with @alice, then Alice sees a notification and the message appears in #sales within 1s.

    • Given a guest user, when they navigate to a private channel they’re not a member of, then they receive an error message and see no metadata.

  7. Prompting Plan (example)

    • Step 1: Build the main data structure: workspaces, channels, messages, replies, files, and reactions, and show how they connect.

    • Step 2: Add access rules so each user only sees or edits what they’re allowed to.

    • Step 3: Create simple screens to list channels and chat messages, and make sure sending new messages works smoothly.

    • Step 4: Test everything: sending, permissions, unread counts, without changing the data setup.



To help, you can use the below prompt to create your Brief (or create a specific GPT for all your future briefs).

ROLE & GOAL
You are the Project Brief Builder for a web app that will be generated with Lovable (code gen) and backed by Supabase (Postgres/Auth/Storage/Edge Functions).

Run a short interview (12 questions max). After Q12, output a concise, testable 1-page brief that reduces credit burn and schema rework.

INTERVIEW RULES
Ask exactly one question at a time (max 2 lines per question).

Keep total up to 12 questions. Only ask questions for topics from the Output template for which you do not have an answer yet. Offer tight options/examples if I’m vague.

Assume Supabase is the backend; include RLS hints and an ERD sketch.
Aim for 3–7 entities, clear roles, measurable outcomes, and stepwise prompts.
After Q12, print the brief using the template below—no extra commentary.

THE 12 QUESTIONS
Name & one-liner: What’s the app called, and what’s the one-sentence value prop? (e.g., “Pulse — shared inbox for SMB support teams”)

Audience & roles: Who will use it (segments) and what roles exist? (owner, admin, member, guest?)

Top jobs: What are the 3–6 key user jobs this app must nail?

Problem & outcomes: What problem are we solving, and what 90-day success metrics matter (SMART/KPIs)?

Core entities: List entities and 3–6 key fields each. (e.g., Workspace{name}, Channel{name,type}, Membership{role}, Message{body}…)

Relationships: How do entities relate (1-N, N-M via join)? Any ownership boundaries/tenancy?

Access & RLS: Who can read/create/update/delete which entities? Any guest or cross-workspace rules?

MVP scope: Must-haves for V1; nice-to-haves for later; explicit out-of-scope.

Integrations: Payments (Stripe?), workflows (n8n/Make?), email (Resend?), LLM use (RAG/function calling?), analytics (PostHog/Segment?).

Non-functional: Perf targets (p95), availability, regions/compliance (PII, GDPR), retention/export needs.

Scale & budget: Expected users/concurrency/data size; credit/time caps per feature/sprint.

Acceptance tests: Give 3–5 “Given/When/Then” tests that prove V1 is done; any telemetry you want to track.

OUTPUT TEMPLATE (use when you have all answers)
# Project Brief — <app_name>
## 1) Audience & Jobs
- Segments: <segments>
- Roles: <roles>
- Top jobs: <jobs> (3–6 bullets)

## 2) Problem & Outcomes (90 days)
- Problem: <problem_statement>
- Success metrics: <SMART KPIs>

## 3) Core Domain Model (ERD sketch)
Entities & relations:
- <EntityA> 1–N <EntityB>
- <EntityC> N–M <EntityD> via <JoinTable>
Key fields (5–8 total per entity):
- <Entity>: id, ...
Notes: naming rules, enums vs free text, soft deletes, timestamps.

## 4) Roles & Access (RLS-ready)
- Policies (examples): who can read/create/update/delete for each entity.
- Edge cases: invites, ownership transfer, exports.

## 5) MVP Scope (V1)
- Must-have: <auth, CRUD, list/detail, basic search, uploads, notifications>
- Nice-to-have (V1.1): <...>
- Out of scope: <...>

## 6) Integrations
- Supabase: Auth, Postgres, Storage, Edge Functions
- Payments: <Stripe?>
- Workflow: <n8n/Make?>
- Email/Notifications: <Resend?>
- LLM features: <RAG? function-calling? structured outputs?>
- Analytics: <PostHog/Segment>

## 7) Non-functional
- Performance: <targets>
- Security: RLS on day 1; secrets via env; audit log on <entities>
- Reliability: smoke tests for CRUD + permissions; CI must pass before regen

## 8) Stepwise Prompting Plan (Lovable)
1. Schema/ERD → migrations + ERD markdown/SQL
2. RLS policies → example queries that pass/fail
3. UI wiring → list/detail + create/edit forms → Supabase RPCs
4. Tests → lint/type checks + smoke tests for permissions/flows
5. Docs → README: env vars, seed script, “repro prompts”


Database architecture


📌 As of Sept 29, Lovable released lovable-cloud (beta), which integrates backend functionalities (database, authentication, edge functions). This means Supabase is no longer the underlying default. However, the tip below still fully applies IMO.



Symptom → Lovable ships a shiny UI but the data layer can easily become a mess: too many tables, duplicated concepts, orphan links, or “extra dimensions” you never use. Later you can’t answer simple questions or set clean permissions, so you refactor (and burn credits). And the more complex it gets, the messier and more problematic it becomes when you add features or logic.


Why → Generative tools optimize for fast visible progress. With a fuzzy idea, they’ll create “reasonable” tables that don’t fit together well. Every schema change later = more prompts, bugs and cost.


Prevent →

  • Sketch an ERD (Entity-Relationship Diagram): a simple diagram that shows your data tables (entities) and how they link (relationships). You can use Lovable for this too: in chat mode, ask it to suggest a DB ERD and review it before implementation.

    It’s the blueprint of your database before you build UI. It prevents the “oops, we forgot how things connect” moment.

  • Write main permissions principles (example: “Users see rows they own or are members of.”)

  • For each table, spell out the core dimensions. Use short status lists (enums) and add created_at/updated_at (and deleted_at if needed). And, importantly, define how one table crosses with another (a problem I saw many times). Example:

    • workspace: id (uuid), name (text), created_at, updated_at

    • user: id (uuid), email (text, unique), name (text), created_at, updated_at

    • project (owned by a workspace)

      id, workspace_id (FK = foreign key, the shared dimension with the workspace table), name (text), status (enum: draft|active|archived), created_at, updated_at, deleted_at

    • task (belongs to a project)

      id, project_id (FK to project), title (text), description (text), assignee_id (FK to user, nullable), priority (enum: low | med | high), status (enum: todo | doing | done), due_date (date, nullable), created_at, updated_at
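
To make this concrete, here is what that example could look like as Postgres DDL. A minimal sketch, not actual Lovable output (note: “user” is a reserved word in Postgres, so the sketch uses app_user):

-- Bounded status lists as enums rather than free text
create type project_status as enum ('draft', 'active', 'archived');
create type task_status as enum ('todo', 'doing', 'done');
create type task_priority as enum ('low', 'med', 'high');

create table workspace (
  id uuid primary key default gen_random_uuid(),
  name text not null,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now()
);

-- "user" is reserved in Postgres, hence app_user
create table app_user (
  id uuid primary key default gen_random_uuid(),
  email text not null unique,
  name text,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now()
);

-- project is owned by a workspace: the FK is where the two tables cross
create table project (
  id uuid primary key default gen_random_uuid(),
  workspace_id uuid not null references workspace (id),
  name text not null,
  status project_status not null default 'draft',
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now(),
  deleted_at timestamptz -- soft delete: null while the row is live
);

-- task belongs to a project; assignee is optional
create table task (
  id uuid primary key default gen_random_uuid(),
  project_id uuid not null references project (id),
  title text not null,
  description text,
  assignee_id uuid references app_user (id),
  priority task_priority not null default 'med',
  status task_status not null default 'todo',
  due_date date,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now()
);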

👉 Rule of thumb: the more business-specific your app (B2B workflows, bespoke data, approvals), the more critical a clear Entity-Relationship Diagram and permissions are.


For patterned apps (many B2C, basic e-commerce), you can lean on well-known schemas and guide Lovable with examples (“like Stripe Checkout,” “like a basic blog”).




Backend is Supabase → own the schema & permissions


📌 As of Sept 29, Lovable released lovable-cloud (beta), which integrates backend functionality (database, authentication, edge functions). This means Supabase is no longer the underlying default. However, the tips below still fully apply.


Symptom → Permissions leak or joins get messy as features pile up. I saw many examples of broken logic caused by poor DB architecture.


Why → Lovable scaffolds UI/flows, but Supabase is the real backend (Postgres, Auth, Storage, Edge Functions). You must own schema design, migrations, and permissions.


Prevent→

  • Permissions/security from day 1: list read/write policies in plain English next to each table. “Users read rows where they’re members; owners/admins write.” (See the SQL sketch after this list.)

  • Ban schema drift during UI prompts; propose patches first. When you’re building with vibe-coding tools, the model might try to change your database structure (“schema”) on the fly, for example by adding new columns or tables while generating the UI. That’s called schema drift: your database quietly becoming different from what you originally planned.

    To prevent this, don’t let the model change the schema automatically. Instead, make it suggest or propose the change first (a “patch”), so you can review and approve it before it updates your database.

  • Tables overview: in Supabase, review each table’s description. Add a short tag like 🏷 users, 🏷 projects, 🏷 comms to group by category.

  • Field types check: confirm each column’s type (enum / text / uuid / timestamptz / numeric). Replace free text with enums where bounded.

  • Relationships: verify every FK (foreign key, where two tables cross) points to the right table.
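
Here is what those plain-English policies could become in Supabase SQL. A minimal sketch assuming a membership join table (names are illustrative, not generated output; auth.uid() is Supabase’s current-user helper):

-- Assumed join table: who belongs to which workspace, with what role
create table membership (
  user_id uuid not null references app_user (id),
  workspace_id uuid not null references workspace (id),
  role text not null check (role in ('owner', 'admin', 'member')),
  primary key (user_id, workspace_id)
);

alter table project enable row level security;

-- "Users read rows where they're members"
create policy project_select on project for select
  using (
    exists (
      select 1 from membership m
      where m.workspace_id = project.workspace_id
        and m.user_id = auth.uid()
    )
  );

-- "Owners/admins write" (covers insert/update/delete)
create policy project_write on project for all
  using (
    exists (
      select 1 from membership m
      where m.workspace_id = project.workspace_id
        and m.user_id = auth.uid()
        and m.role in ('owner', 'admin')
    )
  );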


ℹ️ When changing the schema, here is the workflow I recommend. Start in Lovable (chat): explain what you want to change and why, and ask Lovable to challenge the impact (UI bindings, permissions, tests). Then:

  1. Request a Schema Patch Proposal

  2. Apply patch → run migrations + seed → update permissions / views / RPCs (a reusable backend action stored inside the DB; see the sketch below) → re-run tests.

  3. Only then regenerate UI (with “no schema changes” guardrail).
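
Since step 2 mentions RPCs, here is what a minimal one could look like (function name and logic are illustrative, reusing the earlier project example):

-- Reusable backend action stored inside the DB,
-- callable from the client as supabase.rpc('archive_project', ...)
create or replace function archive_project(p_project_id uuid)
returns void
language sql
as $$
  update project
     set status = 'archived',
         updated_at = now()
   where id = p_project_id;
$$;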

Copy-paste prompt (Lovable, before a change):

Do not change any UI yet. Propose a Schema Patch to add priority enum(low|med|high) to task and deprecate priority_text. Include SQL up/down, impacted components (UI/RPC/tests), and RLS implications. After approval I’ll ask you to update UI without further schema edits.
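
For reference, the approved patch could come back as a pair of up/down migrations like this (a sketch continuing the prompt’s hypothetical priority_text column):

-- UP: introduce the enum, backfill from the deprecated free-text column
create type task_priority as enum ('low', 'med', 'high');

alter table task add column priority task_priority not null default 'med';

update task
   set priority = priority_text::task_priority
 where priority_text in ('low', 'med', 'high');

comment on column task.priority_text is
  'DEPRECATED: replaced by priority enum; drop once the UI stops reading it';

-- DOWN: revert to the free-text column only
alter table task drop column priority;
drop type task_priority;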

Map ideas, then prioritize (before you build)


Symptom → You jump into shiny add-ons (Stripe, chatbot, product tours, etc.) and miss the core value. The MVP bloats, timelines slip, credits burn.


Why → Scope creep hits any project, vibe-coding or not (and it can actually be worse with vibe-coding, because the time/cost per shipment feels so low). The more you add, the more complexity, and the higher the risk of bugs and credit consumption. So start with the core.


Prevent → (plain + short)

  • Dump all ideas into one list first (no judging).

  • Tag each idea: Core value? Nice-to-have? Future?

  • Score quickly on Impact (user value) and Effort (time/complexity).

  • Ship MVP = only must-haves to prove value. For example, defer payments/chatbot/tours unless critical.

  • Review weekly: move items up/down; kill what’s not essential.

Tiny Prioritization Table (copy one row per idea)

| Idea | User Problem | Impact | Effort | Decision | Notes |
| --- | --- | --- | --- | --- | --- |
| <Feature> | <What value it delivers> | H/M/L | H/M/L | Now/Later/No | <why> |

Tip: Do Now = High Impact / Low–Medium Effort.

Defer = High Effort or not core to first value proof (e.g., Stripe if manual invoicing works for pilots).



Develop the app page by page

This is really important IMO... it will help you a lot.


Symptom → You prompt the whole app at once, Lovable generates lots of screens, yet flows don’t connect, roles leak, and bug-fix loops eat credits.


Why → Generative coding stays sane with small, well-scoped asks. Big multi-page prompts create inconsistent state, UI drift, and (a lot of!) rework.


Prevent → For each page, describe what you have in mind at the beginning. You can use example dimensions from the below table, then create the page and iterate.

| Page name | Purpose / user goal | Displayed elements (UI) | Actions (create/edit/filter/nav) | Data ops (read/write/delete) | Role behavior (who can do what) | States (empty/error/loading) | Acceptance tests (Given/When/Then) | Dependencies / data needed | Navigation (entry/exit) | Analytics (events/KPIs) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| <Page title> | <What the user achieves here> | <Forms, tables, cards, charts, modals…> | <All user actions on this page> | Read: … • Write: … • Delete: … | <Owner/Admin/Member/Guest rules> | Empty: … • Error: … • Loading: … | 1) Given … When … Then … 2) Given … When … Then … | <APIs, tables, feature flags> | <How users arrive/leave> | <Track: views, submits, success rate…> |



Vague prompts, vague product


Stop hallucinations: specify. Tight prompts = predictable outcomes


Symptom → The app veers off spec: UI/DB don’t match what you meant, weird fields appear, and pages behave inconsistently.


Why → Prompts are too broad or fuzzy. Without concrete acceptance criteria and examples, the model “fills in the blanks” creatively.


Prevent →

  • One feature at a time. No mega-prompts.

  • Provide examples (sample payloads/edge cases).

  • Reject off-spec outputs and restate the rule that was violated.

Mini example

  • Bad: “Build a Slack-like app.”

  • Good: “Propose 6 tables (workspace, channel, membership, user, message, reply). Output the database architecture (ERD). Do not generate UI.

    Acceptance: membership.role ∈ {owner, admin, member, guest}. Every message belongs to the same workspace as the channel it’s posted in.”
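
That second acceptance rule is an invariant you can enforce in the schema itself instead of hoping the model remembers it. One way, sketched with a composite foreign key (table names follow the example, not generated output):

-- channel exposes (id, workspace_id) as a referenceable pair
create table channel (
  id uuid primary key default gen_random_uuid(),
  workspace_id uuid not null references workspace (id),
  name text not null,
  unique (id, workspace_id)
);

-- message stores its workspace too; the composite FK guarantees it always
-- matches the workspace of the channel it's posted in
create table message (
  id uuid primary key default gen_random_uuid(),
  channel_id uuid not null,
  workspace_id uuid not null,
  body text not null,
  foreign key (channel_id, workspace_id) references channel (id, workspace_id)
);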



Break the UI into micro-items (and standardize them)


Symptom → You ask in various places for “a table with search & sort” and end up with different sort, filter, pagination, and edit behaviors. Typical... Users get confused; fixes require touching many places; credits burn on “make this like that one.”


Why → Generative tools optimize locally: if you don’t specify default UI patterns, the model invents new ones each time (different sort icons, filter layouts, modals vs. inline edit, etc.). Inconsistency compounds with every regeneration.


Prevent →

  • Define a micro-pattern catalog before building (and expand it along the way): table, search, sort, filter chips, pagination, inline edit, modal, date picker, toast, form field, empty/error/loading, card, page layout.

  • For each, pick one default variant (e.g., sorting = clickable column header with ↑/↓ + single tri-state; filters = chip bar + reset).

  • Reuse the same component names/props in every prompt (“use DataTable with sortMode='tri' and FilterChips”).

  • Add an acceptance test: “All tables must use the standard sort & filter patterns.”

  • When you need an exception, call it out explicitly in the page spec.


Example (best is to add real UI examples: either take one from a real app, or screenshot your own Lovable app once you’re satisfied with a result).

| Element | Default variant (visual) | Behavior (rules) | Accessibility & keyboard | States (empty/error/loading) | Do / Don’t | Standard props / tokens | Example prompt snippet |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Data table | Header sort icons + chip filters above + 10/pg pagination | Click header toggles ASC→DESC→NONE; filters are additive; server-side pagination | Tab/Enter focus headers; ↑/↓ to change sort; Esc clears chip focus | Empty: helper text • Error: toast • Loading: row skeleton | Do keep actions in last column • Don’t use modal for inline edits | rowsPerPage=10, sortMode='tri', filterType='chip' | “Use DataTable standard. Add chip filters (status, owner). Header sort tri-state only.” |
| Search box | Input with leading icon + debounce | Debounce 300ms; clears with Esc; persists query in URL | Cmd/Ctrl+K focuses search | Empty shows hint | Do show count • Don’t auto-submit on each char w/o debounce | debounceMs=300 | “Insert StandardSearch with debounce=300, persist to URL.” |
| Modal | Centered, primary action right | Closes on Esc & backdrop; trap focus | Focus first field; Enter submits | Error inline; Loading disables buttons | Do keep forms ≤ 6 fields | size='md' | “Open StandardModal('Edit card').” |
| Inline edit | Pencil icon → in-place field | Enter saves, Esc cancels; show spinner on save | Focus moves to edited field | Error inline | Do use for small text/labels | variant='text' | “Enable StandardInlineEdit on card title.” |



Get UI/UX inspiration


Symptom → The generated UI looks polished yet… generic. Stakeholders may say “nice, but boring”.


Why → LLMs optimize for safe, common patterns. They’re great at execution speed, not original art direction. Without references (motion, layout, personality), the result trends to “default.”


Prevent → (plain + short)

  • Moodboard first: collect some references (Dribbble / Behance / Product Hunt) for layout, color, motion, empty states.

  • Pick 1–2 hero inspirations per page.

  • Extract tokens: colors, type scale, spacing, corner radius, shadows, write them down once.

  • Specify microinteractions: hover, focus, tap, loading, empty/celebration states.

  • Attach assets: SVG logos, icons, Lottie/GIFs; define usage rules (sizes, placements).

  • Prompt with refs: “Use Primary ref A layout + ref B motion; keep our tokens; no new colors/fonts.”

  • Freeze the style kit: reuse the same tokens/components across pages.


Example: Celebration page

1/ Search Dribbble for “celebration” inspiration.


2/ Find the image/GIFs that you like.

3/ Adapt it to your own brand guidelines and colors (creating an image with OpenAI or any other tool).

4/ Insert the created file in your app.



Document, or pay for it later


Symptom → When you switch branches (versions of your app’s code), onboard a teammate, or roll back a feature, you lose time re-discovering past prompts, decisions, and schema rules. You re-describe things the app already “knew,” burning credits.


Why → Generative builds are fast but volatile: code, prompts, and schema drift. Without lightweight docs, knowledge lives only in chat history and commit diffs.


Prevent →

  • Decide a single source of truth (Notion, Word/Gdoc or any documentation tool) and stick to it.

    The good part: you can ask Lovable, in chat mode, to write the documentation for a particular topic.

  • Document what you can: features, databases, processes and rules, permissions, design, etc. For example, for a feature: describe the goal, actions, data, roles, states, and tests.


For example, use the prompt below.

Please provide comprehensive documentation for the [FEATURE NAME] feature in this project.

⚠️ Do not modify code or database schema. If you believe changes are needed, output a short “Proposed Changes” note at the end—docs only.

Include the following aspects:
# 1) Overview & Purpose
- What is the feature? What problem does it solve?
- Who uses it (roles/personas)?
- Key workflows / user journeys (2–4 bullets)

# 2) Architecture & Components
- Frontend: file paths → short description
- Backend: edge/RPC/functions → short description
- Component hierarchy (brief)
- Data flow (only if non-trivial)

# 3) Database Schema
- Related tables + main columns
- Foreign keys / relationships
- Enums / custom types
- Sample row (JSON) if useful

# 4) Security & Access Control
- RLS policies (plain English)
- Role-based rules (who can read/write)
- Auth requirements (e.g., verified email)
- Security-definer functions (if any)

# 5) Business Logic
- Core rules / algorithms
- States and transitions
- Validation rules
- Important edge cases

# 6) UI/UX Details
- Screens/routes
- Navigation between screens
- Forms & input validation
- Feedback: loading, errors, success (toasts/modals)

# 7) Integration Points
- Depends on (internal features)
- External APIs/services
- Shared utilities/hooks

# 8) Recent Changes
- Last significant updates (date/PR)
- Known issues / tech debt
- Planned improvements (next steps)

# 9) Testing & Validation
- How to verify it works (quick steps)
- Test scenarios (Given/When/Then, 2–4 items)
- Demo data to use (IDs/files)

# 10) Configuration
- Env vars / secrets
- Feature flags / toggles
- Other configurable parameters

Return one Markdown document using the sections above exactly. Keep it concise and scannable (bullets, small tables). Where something is unknown, write TBD and add it to “Planned improvements”.


TEST / TEST / TEST (every feature, every time)


Symptom → A feature “works on my machine,” but UI patterns differ from other pages, roles leak data, or empty/error states aren’t handled. Later fixes trigger credit burn and schema churn.


Why → Generative code is fast but inconsistent across pages. Without repeatable checks and demo data, you miss regressions in UI, permissions, and DB integrity.


Prevent →

  • Add demo data (use Lovable or ChatGPT to create your demo data as CSV or JSON). Indispensable for testing your flows; see the SQL sketch after this list.

  • Write acceptance tests per feature and run them after every regen.

  • UI consistency check: same components, spacing, sort/filter patterns as your standards. (see Break the UI into micro-items)

  • Permissions check: E.g. verify each role sees/does only what it should.

  • States check: empty, loading, and error all render clearly (with retries).

  • Data integrity: required fields, enums, FKs; no orphan rows after deletes. (see Database architecture)
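
A minimal sketch of the demo-data and integrity ideas, reusing the earlier project/task example (ids and values are placeholders):

-- Seed demo data you can test flows against
insert into workspace (id, name)
values ('11111111-1111-1111-1111-111111111111', 'Demo Workspace');

insert into project (id, workspace_id, name, status)
values ('22222222-2222-2222-2222-222222222222',
        '11111111-1111-1111-1111-111111111111', 'Demo Project', 'active');

insert into task (project_id, title)
values ('22222222-2222-2222-2222-222222222222', 'First demo task');

-- Integrity check: tasks pointing at a missing or soft-deleted project
-- (expect 0 rows; FKs block true orphans, but soft deletes can still leak)
select t.id, t.title
  from task t
  left join project p on p.id = t.project_id
 where p.id is null
    or p.deleted_at is not null;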



Error handling: improving, not magic


…and credit burn in bug-fix loops. This can become your nightmare :-)


Symptom → Broken flows pop up... you ask the AI for a “fix” and it creates a new bug. You start a regen spiral and watch most of your credits disappear.


Why → Generative coding varies by run. Without tests and guardrails, each regeneration can introduce regressions, and every try costs credits under daily/monthly caps.


Prevent →

  • Ship in tiny slices: one page → one feature → one small change.

  • Freeze the schema: no DB edits during UI fixes; propose a “schema patch” separately.

  • Write 2–4 acceptance tests first (Given/When/Then). Only regen if a test fails.

  • Add demo data so you can quickly reproduce and verify the bug.

  • Show the AI what it does (provide screenshots of your app) and explain what it should do.

  • Do not hesitate to revert to a past version of your code! If the newly created bug seems even worse, it’s most often easier to go back, then break the goal into smaller bits, rather than fix the new bug.

Mini acceptance test (example)

  1. Given a member of Board A, when they move a card to Column 2, then the new position is saved and visible within 1s.

  2. Given a guest on Board A, when they try to move a card, then they receive 403 and nothing changes.
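
The permissions half of test 2 can be checked straight in SQL, using the common Supabase trick of impersonating a user inside a transaction (the authenticated role is Supabase’s; the card/board tables and uuids are this example’s assumptions):

begin;

-- Impersonate an authenticated user for this transaction only
set local role authenticated;
set local request.jwt.claims = '{"sub": "00000000-0000-0000-0000-000000000001"}';

-- Guest of Board A tries to read its cards: expect 0 rows if RLS is right
select count(*)
  from card
 where board_id = '33333333-3333-3333-3333-333333333333';

rollback; -- leave no trace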


Prompt guardrail (paste before a fix) 👉

Do not change schema. Fix only the failing behavior. If a schema change seems required, STOP and output a Schema Patch Proposal.



Looking ahead: Floot vs. Lovable (quick take)


Though I have far more experience with Lovable + Supabase, I recently tested Floot (the new YC 2025 “vibe-coding” entrant) on a B2B app.

If Lovable is the “see value fast” rocket, Floot pitches “ship-and-host in one box.”


In short:

  • Product differentiation:

    • Floot: a more integrated stack (built-in hosting on AWS, bundled DB, opinionated backend) → fewer moving parts (though this argument may carry less weight since the Sept 29 lovable-cloud release).

    • Lovable: gorgeous frontends + fast scaffolding.

| Platform | Advantages | Trade-offs |
| --- | --- | --- |
| Floot | Fewer integrations to wire; single deploy surface; aims for steadier “production-ready” output for CRUD SaaS. | Younger ecosystem; less community content; less flexibility if you want to swap layers. |
| Lovable | Stunning UI generation; rich collaboration; fast update cadence/community; strong repo/Git workflow. | You must own schema/RLS and ops choices (Supabase, queues, etc.); complexity can amplify credit burn if you skip discipline. |


  • Stage & funding reality:

    Floot is very early, a YC Summer 2025 company with modest funding (~$500k reported). Lovable, by contrast, has raised at scale (pre-seed/seed through $200M Series A in July 2025), meaning faster shipping, broader support, and a larger ecosystem.


  • Pricing: Both are very competitive versus traditional design/frontend/backend cycles.

    • Floot can feel better value at entry because hosting + DB are included.

    • Lovable usually means two bills (Lovable + Supabase), and during bug-fix loops credit burn can spike if you don’t enforce guardrails.

    • Bottom line: factor credits per accepted change and total platform cost (codegen + backend + hosting) rather than headline monthly price.


When to pick what (product-wise)

  • Choose Floot if you want an all-in-one path (generate → host → scale) and you’re okay with early-product bumps.

  • Choose Lovable if you want best-in-class UI speed, plan to own your data model on Supabase, and prefer the momentum of a larger ecosystem.


My take (for now): purely on product packaging, I’d lean Floot; but given stage & funding realities, I’ll keep building on Lovable, watch both closely, and reassess as Lovable hardens back-end workflows or Floot matures.


Interesting space to monitor!



Have fun building!



Do not hesitate to ask me for clarification in the comments.





