By Testfort · QA framework

From requirements to a deployable QA pack — in minutes, not sprints.

TestForTge is an AI-assisted QA workspace that ingests your specs, docs and live URLs and turns them into structured test cases, checklists, PERT estimates and one-click manual or Playwright runs across Web, Mobile Web, iOS and Android. Every artefact is stored per project, with full history and export to the tools your team already uses.

What TestForTge does for your QA team

Seven capabilities. One Flask app. Multi-project workspace backed by Postgres. No vendor lock-in — every artefact exports to the tools your team already uses.

Requirement ingestion

PDF, DOCX, XLSX, CSV, MD, plain text — or a live URL. Crawls, parses, extracts atomic user stories with categories & priorities.

Test cases & checklists

Positive, negative and edge-case test cases reviewed by a built-in QA Lead pass. Compact checklists in the TestFort format. Exports to MD / HTML / CSV / XLSX.

QA estimation

PERT-based hours from the same input — minutes-per-TC × features × platforms, with compatibility, bug rate and PM overhead applied. Full XLSX breakdown ready for clients.
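The arithmetic behind that workbook can be sketched in a few lines. This is an illustrative Python sketch, not the shipped estimator — the function names and the default factor values below are assumptions; the real breakdown ships in the exported XLSX.

```python
def pert_hours(optimistic: float, likely: float, pessimistic: float) -> float:
    """Classic PERT expected value: (O + 4M + P) / 6."""
    return (optimistic + 4 * likely + pessimistic) / 6

def estimate_total_hours(
    num_test_cases: int,
    minutes_per_tc: tuple[float, float, float],  # (optimistic, likely, pessimistic)
    num_platforms: int,
    compatibility_factor: float = 0.3,  # share added for cross-platform re-checks
    bug_rate_factor: float = 0.2,       # time reserved for bug reporting and retests
    pm_overhead: float = 0.1,           # project-management overhead
) -> float:
    # minutes-per-TC x test cases x platforms, then the multipliers on top
    base_minutes = num_test_cases * pert_hours(*minutes_per_tc) * num_platforms
    base_hours = base_minutes / 60
    with_compat = base_hours * (1 + compatibility_factor)
    with_bugs = with_compat * (1 + bug_rate_factor)
    return round(with_bugs * (1 + pm_overhead), 1)

total = estimate_total_hours(218, (3, 5, 9), num_platforms=2)
```

The (O + 4M + P) / 6 weighting is the standard PERT expected value; everything else is plain multiplication, which is exactly why the full breakdown fits in one client-readable worksheet.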

Test execution

One module for both flows: step-by-step manual runs and one-click Playwright automation. Target Web, Mobile Web, iOS or Android in the same run, headless or visible, with optional video capture and per-case screenshot evidence.

Bug reports & metrics

Structured bug forms (severity, priority, repro steps, attachments) — kept in the project DB and auto-linked to the failing test case and the run that produced them. Live status, coverage and severity dashboards. One-click Markdown / CSV export of the whole bug base.

Tedgie — ISTQB-grounded QA assistant

Built-in AI chat trained on the ISTQB Foundation Level Syllabus and certification-prep material. Answers theory questions with verbatim syllabus wording, walks juniors through test design techniques, and turns free-form bug descriptions from end-users into structured drafts that land directly in your Bug Reports queue for triage.

Multi-project workspace

Every artefact — test cases, checklists, estimations, runs, bugs, dashboard snapshots — is stored per project in Postgres. Switch context with a click, browse history of estimations, watch coverage trends across releases, and ship a deployable QA pack for each project independently.

How we work together

From the first dropped spec to a deployable QA pack — clear roles, shared workspace, real-time iteration.

testfortge.app/test-cases
Drop spec, screenshots, video — or paste a URL
https://shop.example.com/checkout
requirements_v3.pdf 2.4 MB
wireframes.docx 786 KB
Source · Multi-input · 64 MB cap
You

01. You drop the requirements

Paste a feature URL, upload a PDF/DOCX/XLSX, or just type the user stories. TestForTge accepts whatever format the dev team gave you.

  • 9 supported file types + crawl-on-URL
  • 64 MB upload cap per request
  • Saved to a versioned project snapshot
parser · user-stories
parsed · 9 categories · 47 stories
P1 As a guest, I want to add an item to cart without signup
P1 As a user, I want to apply a promo code at checkout
P2 As an admin, I want to override pricing on a single SKU
P3 As a returning user, I want my address auto-filled
+ 43 more · click any story to drill into traceability
TestForTge

02. We parse, classify, prioritise

The framework breaks the input into atomic user stories, assigns priority and category, and builds a story ↔ test-case traceability matrix automatically.

  • Persona & role detection (guest / user / admin)
  • Page-number / artefact scrubbing on PDFs
  • Saved to your project — re-runnable any time
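The parse-and-trace step can be sketched roughly like this. The regex, the priority heuristic and the helper names are illustrative assumptions for the sketch, not the shipped parser:

```python
import re

# "As a <persona>, I want <goal>" — the canonical user-story shape
STORY_RE = re.compile(r"As an? (?P<persona>[\w ]+?), I want (?P<goal>.+)", re.I)

def parse_stories(raw: str) -> list[dict]:
    """Split free text into atomic user stories with a detected persona."""
    stories = []
    for line in raw.splitlines():
        m = STORY_RE.search(line.strip())
        if not m:
            continue
        persona = m.group("persona").strip().lower()
        # Toy heuristic for the sketch: admin-only flows rank one step lower.
        priority = "P2" if "admin" in persona else "P1"
        stories.append({
            "id": f"US-{len(stories) + 1:03d}",
            "persona": persona,
            "goal": m.group("goal").strip(),
            "priority": priority,
        })
    return stories

def traceability(stories: list[dict], test_cases: list[dict]) -> dict:
    """Map each story id to the test-case ids that cover it."""
    matrix = {s["id"]: [] for s in stories}
    for tc in test_cases:
        matrix.setdefault(tc["story"], []).append(tc["id"])
    return matrix

spec = (
    "As a guest, I want to add an item to cart without signup\n"
    "As an admin, I want to override pricing on a single SKU"
)
stories = parse_stories(spec)
matrix = traceability(stories, [{"id": "TC-018", "story": "US-001"}])
```

A story with an empty list in the matrix is an uncovered requirement — that gap is what the traceability drill-down surfaces.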
generated · 218 test cases
Test Cases Checklist Estimation
PASS TC-018 Verify that adding 2 items updates cart total
PASS TC-019 Verify that promo PROMO10 reduces total by 10%
REVIEW TC-020 Verify negative quantity is rejected client-side
EDGE TC-021 Verify cart with 999 items doesn't time out
Coverage: positive · negative · edge
TestForTge

03. We generate the QA pack

Test cases (positive / negative / edge), TestFort-format checklists and PERT-based hour estimates. Reviewed in the same pass by a built-in QA Lead module that fixes voice drift, duplicate IDs and page-number leftovers.

  • Markdown · HTML · CSV · XLSX exports
  • Compatible with TestRail, Zephyr, qase.io
  • Estimation breakdown shipped as a separate workbook
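The CSV leg of the export is the simplest to picture: a flat file with stable columns that test-management tools can map on import. A minimal sketch using only the standard library — the column set here is an assumption for illustration, since tools like TestRail let you remap columns at import time:

```python
import csv
import io

# Illustrative column shape — real imports are remapped in the target tool.
FIELDS = ["id", "title", "type", "priority", "status"]

def export_csv(test_cases: list[dict]) -> str:
    """Serialise test cases to CSV with a fixed, import-friendly header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(test_cases)
    return buf.getvalue()

cases = [
    {"id": "TC-018", "title": "Adding 2 items updates cart total",
     "type": "positive", "priority": "P1", "status": "PASS"},
    {"id": "TC-020", "title": "Negative quantity is rejected client-side",
     "type": "negative", "priority": "P1", "status": "REVIEW"},
]
csv_text = export_csv(cases)
```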
execution · live
Manual run · QA
142 PASS · 3 FAIL · 2 BLOCK
Automation · Playwright
42 PASS · 1 REVIEW
Pass rate 96.6 %
▸ video.mp4 · screenshots · HAR trace per step
You + TestForTge

04. You run, we record

Pick one or more environments — Web, Mobile Web, iOS or Android — and run the same set manually with the team or hand it to Playwright headless. Pass/fail/blocked, environment, and per-step evidence are captured for every case in the active project.

  • Multi-environment runs — one record per environment, side by side
  • Async job queue · 429 with Retry-After on overflow
  • Screenshots, video, HAR per Playwright run
  • Auto-generated test accounts on demand
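The bookkeeping behind "one record per environment, side by side" is simple to sketch. This is an illustrative data shape, not the actual schema — the class and field names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class CaseResult:
    case_id: str
    environment: str          # "web" | "mobile-web" | "ios" | "android"
    status: str               # "pass" | "fail" | "blocked"
    evidence: list[str] = field(default_factory=list)  # screenshot/video paths

def summarize(results: list[CaseResult]) -> dict:
    """Tally pass/fail/blocked per environment for the run dashboard."""
    summary: dict[str, dict[str, int]] = {}
    for r in results:
        env = summary.setdefault(r.environment, {"pass": 0, "fail": 0, "blocked": 0})
        env[r.status] += 1
    return summary

run = [
    CaseResult("TC-018", "web", "pass", ["evidence/tc-018-web.png"]),
    CaseResult("TC-018", "mobile-web", "pass"),
    CaseResult("TC-020", "web", "fail", ["evidence/tc-020-web.png"]),
]
summary = summarize(run)
```

Because each environment gets its own record for the same case, a case can pass on Web and fail on Mobile Web in the same run without the two results overwriting each other.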
delivery · qa-pack-v3.zip
Test cases 218 · XLSX, MD
Checklist 1 · TestFort format
Estimation 84.5 h · PERT
Bug reports 7 · severity-coded
Automation runs 3 · video + HAR
XLSX · Markdown · CSV · HTML · TestRail · Zephyr · qase.io
TestForTge

05. We deliver a deployable QA pack

Everything is exported as one self-contained pack — ready to share with stakeholders, drop into your tracker, or hand off to the next sprint's regression run. Estimation history, run history and metric snapshots stay in the project DB so you can chart trends across releases.

  • Stakeholder-friendly XLSX / MD / CSV / HTML reports
  • Per-project history — estimations, runs, bugs, dashboard trends
  • One-click re-run on a new release
Powered by
Flask 3.1 · Playwright 1.49 · Anthropic Claude · openpyxl · pypdf · Testfort QA process

The framework is engineered in-house at Testfort and battle-tested on real client engagements. Every line of generated content is reviewable, exportable and free of vendor lock-in.

Frequently asked questions

Short answers from real client conversations — open any item to expand.

Is TestForTge self-hosted or a SaaS?
Self-hosted. It's a Flask application you run on your own infrastructure — Docker, a VM, or bare-metal. Specs, generated artefacts and bug reports never leave your network. No data is sent to a third party unless you explicitly turn on the optional AI mode.
What inputs does it accept?
PDF, DOCX, XLSX, CSV, MD, TXT, HTML, PNG/JPG and short videos as evidence — plus a live URL crawler that extracts features straight from a deployed page. The framework normalizes everything into atomic user stories with categories, priorities and a story ↔ test-case traceability matrix.
Does it replace my QA team?
No — it removes the boilerplate hours. TestForTge produces a first draft: structured test cases (positive/negative/edge), a TestFort-format checklist, a PERT-based estimation and Playwright scripts. Your QA Lead reviews and adjusts before signing off. The built-in QA-Lead pass already fixes voice drift, duplicate IDs and PDF artefacts before you see the output.
Is Tedgie's QA knowledge actually trustworthy?
Tedgie is grounded in the ISTQB Foundation Level (CTFL v4.0.1) syllabus and a certification-prep textbook — testing principles, the test process, levels & types, all design techniques (equivalence partitioning, BVA, decision tables, state transition, exploratory, error-guessing), reviews and risk-based testing.
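The design techniques Tedgie teaches are mechanical enough to show in code. Here is a minimal sketch of two of them, boundary value analysis and equivalence partitioning, applied to a hypothetical "cart quantity accepts 1..999" rule (the rule and function names are illustrative, not from the syllabus):

```python
def boundary_values(low: int, high: int) -> list[int]:
    """Two-point BVA for a closed integer range [low, high]:
    the value at each boundary plus the value just outside it."""
    return [low - 1, low, high, high + 1]

def equivalence_partitions(low: int, high: int) -> dict[str, int]:
    """One representative per partition: below, inside and above the valid range."""
    return {
        "invalid_low": low - 1,
        "valid": (low + high) // 2,
        "invalid_high": high + 1,
    }

# e.g. a cart quantity field that accepts 1..999
bva = boundary_values(1, 999)
parts = equivalence_partitions(1, 999)
```

Four boundary checks plus three partition representatives replace hundreds of arbitrary input values — which is the whole point of the techniques.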
Can I use it without sending data to an LLM provider?
Yes. The default mode runs entirely offline: rule-based generators for test cases, checklists, estimation and the Tedgie chat assistant. AI mode is opt-in via an Anthropic API key when you want richer free-form suggestions — turn it on in .env, turn it off any time, no migration required.
Will the exports work with TestRail / Zephyr / qase.io?
Drop-in. Test cases and checklists export to Markdown, HTML, CSV and XLSX in column shapes those tools accept directly. Estimation ships as a separate XLSX workbook with the full breakdown — minutes-per-TC × features × platforms, compatibility, bug rate and PM overhead — ready for client sign-off.
How does the built-in automation work?
One click. The framework synthesises Playwright scripts directly from your test cases, runs them headless in Chromium, and captures video, screenshots and a per-step HAR trace. Async job queue with a per-session concurrency cap means you can launch a new run while a previous one is still finishing.
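The overflow behaviour — 429 with a Retry-After header once a session hits its concurrency cap — can be sketched as a small gate in front of the job queue. This is an illustrative model, not the shipped queue; the class name, cap and retry window are assumptions:

```python
import threading

class RunGate:
    """Per-session concurrency cap; callers over the cap get (429, Retry-After)."""

    def __init__(self, max_concurrent: int = 2, retry_after: int = 30):
        self.max_concurrent = max_concurrent
        self.retry_after = retry_after
        self._active: dict[str, int] = {}
        self._lock = threading.Lock()

    def try_acquire(self, session_id: str) -> tuple[int, dict]:
        """Return (HTTP status, extra headers) for a new run request."""
        with self._lock:
            if self._active.get(session_id, 0) >= self.max_concurrent:
                return 429, {"Retry-After": str(self.retry_after)}
            self._active[session_id] = self._active.get(session_id, 0) + 1
            return 202, {}

    def release(self, session_id: str) -> None:
        with self._lock:
            self._active[session_id] -= 1

gate = RunGate(max_concurrent=1)
first, _ = gate.try_acquire("qa-team")        # accepted for processing
second, headers = gate.try_acquire("qa-team") # over the cap -> back off
```

202 for an accepted async job and 429 with Retry-After for backpressure are the standard HTTP semantics for this pattern, which is why any HTTP client can cooperate with the queue without custom handling.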
What about security & compliance?
CSRF on every state-changing request, HttpOnly + SameSite cookies, strict CSP, path-traversal guards on asset serving, optional Basic-Auth gate for shared deployments. Project data lives in your own Postgres instance (or local SQLite for dev) — never on a third-party server. Threat model and security posture are documented in the framework's REQUIREMENTS.md.
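For Flask deployments, the cookie and header hardening listed above maps onto standard Flask configuration. A minimal sketch under stated assumptions — this shows the built-in Flask knobs for cookies plus an `after_request` hook for headers, not TestForTge's actual setup:

```python
from flask import Flask

def create_app() -> Flask:
    app = Flask(__name__)
    # Session-cookie hardening: no JS access, no cross-site sends, HTTPS only.
    app.config.update(
        SESSION_COOKIE_HTTPONLY=True,
        SESSION_COOKIE_SAMESITE="Lax",
        SESSION_COOKIE_SECURE=True,
    )

    @app.after_request
    def apply_security_headers(response):
        # Strict CSP: same-origin only for every resource type.
        response.headers["Content-Security-Policy"] = "default-src 'self'"
        response.headers["X-Content-Type-Options"] = "nosniff"
        return response

    return app
```

CSRF protection would sit on top of this (e.g. a per-form token checked on every state-changing request); the cookie flags and CSP are the baseline that make stolen-session and injected-script attacks much harder.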
Can I run multiple projects in parallel?
Yes. Every run is scoped to an active project — create as many as you need from the dashboard, each with its own base URL and description. Test cases, checklists, estimations, runs, bugs and dashboard snapshots are stored separately per project, so the team can switch context without polluting another release's history.
Where is my data stored — and can I export it?
In your own Postgres database (Render, your VM, on-prem — your call). All nine tables — projects, test cases, checklists, bug reports, estimations, execution runs, per-case results, dashboard snapshots, Tedgie submissions — are queryable with vanilla SQL. Every artefact also exports to XLSX / MD / CSV / HTML directly from the UI, so a stakeholder hand-off needs zero database access.
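"Queryable with vanilla SQL" means exactly that: no ORM or API layer is required to pull a stakeholder report. A self-contained sketch using SQLite (the dev backend mentioned above) — the table and column names here are illustrative assumptions, not the actual schema:

```python
import sqlite3

# In-memory DB stands in for your Postgres instance; schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE projects (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE bug_reports (
    id INTEGER PRIMARY KEY,
    project_id INTEGER REFERENCES projects(id),
    severity TEXT,
    title TEXT);
INSERT INTO projects VALUES (1, 'checkout-redesign');
INSERT INTO bug_reports VALUES
    (1, 1, 'major', 'Promo code not applied'),
    (2, 1, 'minor', 'Cart badge misaligned');
""")

# Plain SQL: bug counts per project and severity.
rows = conn.execute("""
    SELECT p.name, b.severity, COUNT(*) AS n
    FROM bug_reports b
    JOIN projects p ON p.id = b.project_id
    GROUP BY p.name, b.severity
    ORDER BY b.severity
""").fetchall()
```

Against the real deployment the same query runs unchanged through any Postgres client, which is what keeps the data free of vendor lock-in.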
How do I try it on my project?
Book a 30-minute walk-through with our QA team. Bring a real spec — a PDF, a Confluence page, a URL — and you'll leave the call with a generated draft test plan and an estimation workbook for your stack. Pricing and engagement model are scoped on the call.

Ready to ship QA faster?

Book a 30-minute walk-through with our QA team — bring your spec, leave with a draft test plan.