Test Automation Platform
AI • DevTools • Automation • Internal Platform
A full-stack automation platform that enables technical and non-technical users to generate stable, production-grade Playwright tests using either natural language or recorded browser scripts. The system enhances raw scripts with AI, normalizes unstable Salesforce selectors, runs tests on GitLab CI, and returns full video + HTML reports in a clean UI. Built with Node, Playwright, GitLab Pipelines, Docker, and internal design system components.

Project Overview
I designed and built an internal test automation platform that dramatically accelerated how our QA, engineering, and release teams create, modify, and execute UI tests.
The platform supports:
Importing Playwright recorder scripts
Natural-language → Playwright test generation
AI-powered script enhancement and selector cleanup
GitLab CI execution with Docker runners
Full video recordings + HTML reports
Automatic retries, waits, and error-handling
Cross-team visibility into pipeline results
Goal:
Turn a highly technical Playwright + CI workflow into a fast, reliable, and accessible tool that reduces manual effort, stabilizes tests, and provides transparent pipeline feedback.
The Problem
Before this platform:
Test creation required Playwright specialists
Salesforce UI DOM changes made tests extremely flaky
Regression cycles often took hours
QA depended heavily on engineering for script fixes
GitLab CI logs were too technical for non-engineers
Debugging failures required DevTools experience
No unified place to create, enhance, run, and view tests
The result: a high-friction workflow that could not scale across teams.
Users / Audience
QA testers (technical + non-technical)
Engineers
Release managers
Automation leads
User Needs
Generate test scripts without writing code
Normalize unstable Salesforce selectors automatically
Simple UI to run tests on GitLab CI
Easy-to-understand logs and evidence
Video recordings for failure debugging
Modify and re-run scripts without editing raw code
Goals
5–10x faster test creation
Reduce regression failures from selector instability
Make GitLab CI logs human-readable
Unify automation workflows across teams
Support both natural-language and imported recordings
Improve transparency and reduce Dev–QA back-and-forth
Architecture Overview
1. Test Creation UI (Node + React)
Natural-language instruction mode
Script import & preview modal
AI enhancement pipeline trigger
Custom selector injection
Test configuration (request shape sketched below)
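A rough sketch of the request shape this UI could hand to the enhancement engine; the `TestCreationRequest` type and every field name here are illustrative, not the platform's actual contract:

```typescript
// Hypothetical request shape; the type and all field names are illustrative.
type CreationMode = "natural-language" | "imported-recording";

interface TestCreationRequest {
  mode: CreationMode;
  // Plain-English steps or a raw Playwright recorder script, depending on mode.
  source: string;
  // Optional user-injected selectors for spots the AI pass gets wrong.
  customSelectors?: Record<string, string>;
  // Run configuration forwarded to the GitLab pipeline.
  config: {
    browser: "chromium" | "firefox";
    baseUrl: string;
    recordVideo: boolean;
  };
}
```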
2. Script Enhancement Engine (Node)
Selector normalization for Salesforce DOM (see the sketch after this list)
Auto-waiting + retry scaffolds
Form field mapping
Error-handling additions
Final Playwright test generation
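To make the idea concrete, here is a condensed sketch of the normalization pass. It assumes Salesforce Lightning's auto-generated numeric ids are the main flakiness source; the regex, the `lightning-input` fallback, and the helper names are illustrative rather than the production rule set:

```typescript
// Minimal sketch of the selector-normalization pass.
// Salesforce Lightning regenerates numeric id suffixes on re-render,
// so any selector that depends on them is flaky by construction.
const GENERATED_ID = /#[A-Za-z-]*\d{2,}[\w-]*/;

function normalizeSelector(raw: string): string {
  if (!GENERATED_ID.test(raw)) return raw; // already stable, keep as-is
  // Swap the volatile id for a structural Lightning component hook.
  return raw.replace(GENERATED_ID, "lightning-input");
}

// Emit a Playwright step wrapped in an explicit visibility wait, so the
// generated test never acts on a half-rendered Lightning component.
function emitStep(selector: string, action: string): string {
  const sel = JSON.stringify(selector);
  return [
    `await page.locator(${sel}).waitFor({ state: "visible" });`,
    `await page.locator(${sel}).${action};`,
  ].join("\n");
}

// Example: a brittle recorded step becomes a stable, auto-waiting one.
// emitStep(normalizeSelector("#input-1342 input"), 'fill("Acme Corp")')
```

Waiting for visibility before every action is what lets the generated tests survive Lightning's asynchronous re-renders without hand-tuned sleeps.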
3. GitLab CI Integration
Docker-based Playwright execution (pipeline sketched below)
Video recording (Chromium/Firefox)
HTML report generation
Structured logs and result extraction
API callbacks back to the frontend
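A pared-down sketch of the pipeline definition, assuming the official Playwright Docker image; the image tag, job name, and callback step are illustrative, not the production configuration:

```yaml
# Pared-down sketch of the CI job; names and the callback are illustrative.
ui-tests:
  image: mcr.microsoft.com/playwright:v1.44.0-jammy   # assumed image/tag
  stage: test
  script:
    - npm ci
    - npx playwright test --reporter=html   # HTML report; videos via config
  after_script:
    # Illustrative: notify the platform API so the UI can poll for results.
    - 'curl -X POST "$PLATFORM_CALLBACK_URL" -d "pipeline_id=$CI_PIPELINE_ID"'
  artifacts:
    when: always                # keep evidence for failed runs too
    paths:
      - playwright-report/      # step-by-step HTML report
      - test-results/           # videos, traces, screenshots
```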
4. Results Viewer
Pipeline logs (cleaned and summarized)
Video playback
Step-by-step HTML test report
Status and timestamps (fetched via the GitLab API, as sketched below)
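The viewer resolves a pipeline back to its evidence through the GitLab REST API. A sketch, where the instance URL, project id, and `ui-tests` job name are assumptions:

```typescript
// Sketch: find a pipeline's Playwright job and build artifact links for
// the results viewer. Instance URL, project id, and job name are assumed.
const GITLAB = "https://gitlab.example.com/api/v4";
const PROJECT = process.env.GITLAB_PROJECT_ID!;
const TOKEN = process.env.GITLAB_TOKEN!;

interface GitLabJob {
  id: number;
  name: string;
  status: string;
  finished_at: string | null;
}

async function getTestEvidence(pipelineId: number) {
  // List the jobs that ran in this pipeline (real GitLab endpoint).
  const res = await fetch(
    `${GITLAB}/projects/${PROJECT}/pipelines/${pipelineId}/jobs`,
    { headers: { "PRIVATE-TOKEN": TOKEN } },
  );
  const jobs = (await res.json()) as GitLabJob[];
  const job = jobs.find((j) => j.name === "ui-tests"); // assumed job name
  if (!job) throw new Error(`no ui-tests job in pipeline ${pipelineId}`);

  return {
    status: job.status,          // drives the green/red cue in the UI
    finishedAt: job.finished_at, // timestamp column
    // Real endpoint for downloading a job's artifact archive.
    artifactsUrl: `${GITLAB}/projects/${PROJECT}/jobs/${job.id}/artifacts`,
  };
}
```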
Process
1. Research & Pain Points
Mapped DOM instability across Salesforce components
Identified common flakiness patterns
Studied GitLab CI pipeline log structure
Interviewed testers blocked by DevTools-heavy debugging
2. Early Concepts
Import/export script flow
AI enhancement panel for normalization
Pipeline execution widget
Unified result viewer with video + HTML
3. Designing the System
Workflows optimized for non-technical testers
Readable log format that strips DevTools noise (see the sketch after this list)
Green/red pipeline cues
Automatic screenshot + evidence capture
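A minimal sketch of the log-cleaning pass, assuming raw job logs arrive as plain text with ANSI color codes; the noise filters shown are illustrative:

```typescript
// Minimal sketch: turn a raw CI job log into tester-readable lines.
// The ANSI regex is standard; the noise filters are illustrative.

// Strip ANSI escape sequences (colors, cursor movement) from CI output.
const ANSI = /\x1b\[[0-9;]*[A-Za-z]/g;

// Drop lines that only matter to engineers debugging the runner itself.
const NOISE = [
  /^\$ /,                  // echoed shell commands
  /^Downloading artifacts/,
  /^section_(start|end):/, // GitLab's collapsible-section markers
];

export function cleanLog(raw: string): string[] {
  return raw
    .replace(ANSI, "")
    .split("\n")
    .filter((line) => line.trim() !== "" && !NOISE.some((re) => re.test(line)));
}
```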
4. High Fidelity UI
“Create New Test” screen
“Import Recording” modal
AI-enhanced script viewer
Results page with embedded GitLab artifacts
5. Development & Integration
Node service for test enhancement
Playwright library integration
GitLab CI YAML pipeline with Docker executor
Artifact parsing and mapping back to UI
6. Testing & Validation
100+ internal test runs
Cross-browser runs
Pipeline stress tests
Selector verification across Salesforce edge cases
Impact
Quantitative
Test creation: 3 hours → 5 minutes
Regression cycles reduced 50–70%
Script flakiness dropped ~40%
Dev–QA interruptions drastically reduced
Lower engineering maintenance load
Qualitative
Testers felt empowered: no longer blocked by code
Release managers gained visibility with real evidence
Engineering saw fewer broken pipelines
Smoother, faster release cycles
Challenges
Normalizing selectors in unpredictable Salesforce DOM
Making GitLab logs readable for non-technical users
Ensuring AI script changes were safe and non-destructive
Designing one UI for both beginners and power users
Handling pipeline failures gracefully across environments
Reflection
This project strengthened my ability to design technical platforms for mixed audiences. I learned how to:
Abstract complex DevTools workflows
Integrate AI where it creates meaningful value
Improve reliability in automation-heavy environments
Build clear, human-friendly CI feedback loops
Create maintainable internal tooling for long-term use
Flowchart
Screens