Requirement ingestion
PDF, DOCX, XLSX, CSV, MD, plain text — or a live URL. Crawls, parses, extracts atomic user stories with categories & priorities.
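As a rough sketch of what "atomic user stories with categories & priorities" means in practice, here is a minimal, illustrative record shape and splitter. The `UserStory` fields and the line-based splitting are assumptions for illustration only; the real ingestion is format-aware (PDF, DOCX, crawled HTML, etc.), not a line splitter.

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    """One atomic requirement extracted from a spec (illustrative shape)."""
    id: str
    text: str
    category: str = "Functional"
    priority: str = "Medium"

def split_into_stories(raw: str) -> list[UserStory]:
    """Naive splitter: one story per non-empty line.
    Real parsing is per-format; this only shows the normalized output shape."""
    stories: list[UserStory] = []
    for line in (s.strip() for s in raw.splitlines()):
        if line:
            stories.append(UserStory(id=f"US-{len(stories) + 1:03d}", text=line))
    return stories

stories = split_into_stories("Login with email\n\nReset password")
```

Whatever the input format, the output is the same list of small, individually traceable stories.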
TestForTge is an AI-assisted QA workspace that ingests your specs, docs and live URLs and turns them into structured test cases, checklists, PERT estimations and one-click manual or Playwright runs across Web, Mobile Web, iOS and Android — all stored per project, with full history and export to the tools your team already uses.
Seven capabilities. One Flask app. Estimation from text, mockups, or a live URL. Crash-safe Playwright execution with a live filmstrip. ISTQB-aligned bug reports. Multi-project workspace backed by Postgres. No vendor lock-in — every artefact exports to the tools your team already uses.
Positive, negative and edge-case test cases reviewed by a built-in QA Lead pass — auto-prefilled from your estimation's feature list with one click. Compact checklists in the TestFort format. Per-stage progress modal with deterministic stages so the user knows what is happening at every moment. Exports to MD / HTML / CSV / XLSX, plus Jira-XML straight to engineering.
Three input tabs in one page: paste a spec, drop PNG/JPG/PDF mockups (Claude vision identifies every testable element on every screen), or feed a live URL — the crawler walks up to 50 pages and infers the architecture (WordPress / SPA / e-commerce / dashboard) for an architecture-aware test budget. PERT-based hours per phase — Test design · Execution · Regression · Bug reporting + buffer — with compatibility, bug rate and PM overhead applied. One click converts the extracted feature list into a real Test-Cases pack. Full XLSX breakdown ready for clients.
One module for both flows: step-by-step manual runs and one-click Playwright automation. Target Web, Mobile Web, iOS or Android in the same run, headless or visible, with optional video capture and per-case screenshot evidence. The Playwright pass runs as a fully detached subprocess — it survives the web server restarting, gets a live filmstrip you can watch frame-by-frame, and salvages partial results if the worker is OOM-killed mid-run. Failed test cases lead the gallery with the annotated failure shot (red box + arrow on the broken element); passed cases stay clean.
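The crash-safe pattern described above (a worker that outlives the web server and leaves salvageable partial results) can be sketched with the standard library alone. Function names, the JSON-lines results file, and the runner-script contract are all assumptions for illustration; this is not the product's actual runner:

```python
import json
import subprocess
import sys
import tempfile
from pathlib import Path

def launch_detached(runner_script: str, results_path: Path) -> int:
    """Start the run in its own session so a web-server restart cannot kill it.

    The worker is expected to append one JSON line per finished case to
    results_path, so a crash mid-run still leaves partial results behind.
    """
    proc = subprocess.Popen(
        [sys.executable, runner_script, str(results_path)],
        start_new_session=True,   # detach from the parent's process group
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return proc.pid

def salvage(results_path: Path) -> list[dict]:
    """Read whatever the worker managed to write before dying."""
    if not results_path.exists():
        return []
    return [json.loads(line) for line in results_path.read_text().splitlines() if line]
```

Because each finished case is flushed to disk immediately, an OOM-killed run still yields every case completed before the kill.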
Auto-generated bugs follow the test.io / softwaretestinghelp / qatestlab standards: title is "[Component]: [Observable defect]" — banned filler phrases like "does not work as expected" are stripped automatically. Severity / Priority are derived from the actual Playwright defect class (click-timeout, text-assertion-fail, navigation-error, server-error, …) and bumped to Critical/Highest when the affected area is a core path (login, checkout, search). Steps to Reproduce numbered atomically, Actual Result includes the verbatim Playwright error, Expected Result uses "should" phrasing. Attachments narrowed to the prior step's frame for context + the failed step's annotated screenshot. Live status, coverage and severity dashboards. Jira-XML / Markdown / CSV export of the whole bug base.
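A minimal sketch of the two rules above (severity derived from the defect class, escalated on core paths; banned filler stripped from titles). The mapping tables and function names are illustrative assumptions, not the shipped rule set:

```python
BANNED = ("does not work as expected",)
SEVERITY_BY_CLASS = {
    "server-error": "Critical",
    "navigation-error": "Major",
    "click-timeout": "Major",
    "text-assertion-fail": "Minor",
}
CORE_PATHS = ("login", "checkout", "search")

def classify(defect_class: str, component: str) -> tuple[str, str]:
    """Severity/Priority from the Playwright defect class, bumped on core paths."""
    severity = SEVERITY_BY_CLASS.get(defect_class, "Minor")
    priority = {"Critical": "Highest", "Major": "High", "Minor": "Medium"}[severity]
    if component.lower() in CORE_PATHS:   # core path: escalate unconditionally
        severity, priority = "Critical", "Highest"
    return severity, priority

def clean_title(component: str, defect: str) -> str:
    """'[Component]: [Observable defect]' with filler phrases stripped."""
    for phrase in BANNED:
        defect = defect.replace(phrase, "").strip(" :.-")
    return f"[{component}]: {defect}"
```

A click-timeout on a profile page stays Major/High; the same defect class on Login is escalated to Critical/Highest.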
Built-in AI chat trained on the ISTQB Foundation Level (CTFL v4.0.1) syllabus, the Advanced-Level test-analyst material and certification-prep textbooks. Answers theory questions with verbatim syllabus wording, walks juniors through test design techniques, helps triage runs ("why is my live view empty?", "summarise the last 10 bugs by component"), and turns free-form bug descriptions from end-users into structured drafts that land directly in your Bug Reports queue for triage.
Every artefact — test cases, checklists, estimations, runs, bugs, dashboard snapshots — is stored per project in Postgres. The 🗂 project picker is on every module (Dashboard, Estimation, Test Cases, Checklist, Test Execution, Bug Reports), so switching context never costs a redirect. Auto-recovery via session ID means a server restart restores your active project + work in flight on the next page load. Browse history of estimations, watch coverage trends across releases, and ship a deployable QA pack for each project independently.
From the first dropped spec to a deployable QA pack — clear roles, shared workspace, real-time iteration.
Paste a feature URL, upload a PDF/DOCX/XLSX, or just type the user stories. TestForTge accepts whatever format the dev team gave you.
The framework breaks the input into atomic user stories, assigns priority and category, and builds a story ↔ test-case traceability matrix automatically.
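In data terms, a story ↔ test-case traceability matrix is just a mapping from story IDs to the test cases that cover them, which also makes uncovered stories trivial to spot. A minimal sketch (the ID formats and helper name are illustrative assumptions):

```python
from collections import defaultdict

def build_traceability(links: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Story-ID -> covering test-case IDs, from (story, case) link pairs."""
    matrix: dict[str, list[str]] = defaultdict(list)
    for story_id, case_id in links:
        matrix[story_id].append(case_id)
    return dict(matrix)

matrix = build_traceability([
    ("US-001", "TC-001"),
    ("US-001", "TC-002"),
    ("US-002", "TC-003"),
])
uncovered = [s for s in ("US-001", "US-002", "US-003") if s not in matrix]
```

Here `uncovered` flags US-003 as a story with no test case yet, which is exactly the gap the matrix exists to surface.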
Test cases (positive / negative / edge), TestFort-format checklists and PERT-based hour estimates. Reviewed in the same pass by a built-in QA Lead module that fixes voice drift, duplicate IDs and page-number leftovers.
Pick one or more environments — Web, Mobile Web, iOS or Android — and run the same set manually with the team or hand it to Playwright headless. Pass/fail/blocked, environment, and per-step evidence are captured for every case in the active project.
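Running "the same set" across several environments amounts to expanding a case list against the chosen targets into one queue entry per (case, environment) pair. A sketch under assumed names; the entry fields are illustrative, not the product's run schema:

```python
from itertools import product

ENVIRONMENTS = ("Web", "Mobile Web", "iOS", "Android")

def expand_run(case_ids: list[str], targets: list[str], headless: bool = True) -> list[dict]:
    """One queued entry per (case, environment) pair; illustrative shape only."""
    assert all(t in ENVIRONMENTS for t in targets), "unknown target environment"
    return [
        {"case": c, "environment": e, "headless": headless, "status": "pending"}
        for c, e in product(case_ids, targets)
    ]

run = expand_run(["TC-001", "TC-002"], ["Web", "Android"])
```

Two cases against two targets yields four entries, each of which later records its own pass/fail/blocked status and evidence.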
Everything is exported as one self-contained pack — ready to share with stakeholders, drop into your tracker, or hand off to the next sprint's regression run. Estimation history, run history and metric snapshots stay in the project DB so you can chart trends across releases.
The framework is engineered in-house at TestFort and battle-tested on real client engagements. Every line of generated content is reviewable, exportable and free of vendor lock-in.
Short answers from real client conversations — open any item to expand.
PDF, DOCX, XLSX, CSV, MD, TXT, HTML, PNG/JPG and short videos as evidence — plus a live URL crawler that extracts features straight from a deployed page. The framework normalizes everything into atomic user stories with categories, priorities and a story ↔ test-case traceability matrix.
.env, turn it off any time, no migration required.
REQUIREMENTS.md.
Book a 30-minute walk-through with our QA team — bring your spec, leave with a draft test plan.