Extracted from the GL Platform Knowledge Base for test case generation. Source: gl-platform-kb/context/dev-and-standards.md
A PBI must meet these criteria before development:
- Can be completed in one sprint
- Technical dependencies resolved
- Acceptance criteria documented, clear, testable
- Refined at least once by a suitable quorum
- UX considered for customer-facing changes
- Assessment integrity changes agreed with Product Content and Stats team
- Non-functional requirements (performance, browser support) documented
A PBI is not complete until:
- Requirements and Acceptance Criteria met in full
- Communication requirements agreed with impacted teams
- No further refactoring required
- Test execution complete, all tests passing
- ALL associated bugs addressed
- New automated tests introduced
- Existing automated tests updated if impacted
- Performance testing requirements agreed
- Successfully deployed to DEV-CI and QA (Tardis: also PREB if appropriate)
- Browser compatibility: Chrome primary; sanity-check on other browsers/devices
Team Mightier uses the Given/When/Then (BDD) format:
- Given - Initial context or preconditions
- When - The action or event
- Then - Expected outcome or result
Example:
Given a teacher has logged into Testwise
When they navigate to the Create Sitting page
Then they should see a list of available assessments
Shift-left testing: testing activities begin before development, with test cases written during PBI refinement. The aim is to avoid testing bottlenecks at the end of the sprint.
Cypress is used for the Epoch UI, the Reporting UI, and the component library.
Key rules:
- Cucumber preprocessor with Gherkin syntax (Given/When/Then feature files)
- All network calls mocked via `cy.intercept`; tests run standalone with no live server
- UI elements referenced via `data-cy` attributes, not CSS class selectors
- Steps reusable across feature files; maximise step reuse
- Accessibility tests: Cypress Axe against the `wcag2a`, `wcag2aa`, `wcag21a`, `wcag21aa`, and `best-practice` standards (see the sketch after this list)
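A minimal sketch combining these rules in one spec (shown as a plain Cypress spec rather than a Cucumber feature file for brevity; the route, fixture, page URL, and `data-cy` values are hypothetical):

```typescript
describe('Create Sitting page', () => {
  it('lists available assessments without a live server', () => {
    // Rule: mock all network calls so the test runs standalone
    cy.intercept('GET', '/api/assessments', { fixture: 'assessments.json' }).as('getAssessments');

    cy.visit('/create-sitting');
    cy.wait('@getAssessments');

    // Rule: reference UI elements via data-cy attributes, not CSS class selectors
    cy.get('[data-cy="assessment-list"]').should('be.visible');

    // Rule: run Cypress Axe against the agreed accessibility standards
    cy.injectAxe();
    cy.checkA11y(undefined, {
      runOnly: ['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa', 'best-practice'],
    });
  });
});
```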
BDD automation regression suite:
- Runs against three test suites: Testwise API, Reporting API, Testplayer API
- Reports stored in Azure Blob Storage
- Jasmine-compatible API (see the sketch after this list)
- Scripts: `test:ci` for the CI pipeline (includes coverage)
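A minimal sketch of what a Jasmine-style spec in this suite might look like; the base URL, endpoint, and environment variables are hypothetical:

```typescript
describe('Testwise API', () => {
  it('returns the list of available assessments', async () => {
    // Hypothetical: base URL and auth token supplied via environment variables
    const response = await fetch(`${process.env.TESTWISE_API_URL}/assessments`, {
      headers: { Authorization: `Bearer ${process.env.API_TOKEN}` },
    });

    expect(response.status).toBe(200);

    const body = await response.json();
    expect(Array.isArray(body)).toBe(true);
  });
});
```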
- Used for managing Docker containers directly from test code (see the sketch after this list)
- Allows SQL Server and Azurite containers to spin up before tests run
- Tests must run on Linux/Ubuntu build agents
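A minimal sketch, assuming the Node testcontainers library; the image tags, ports, and password are illustrative only:

```typescript
import { GenericContainer, StartedTestContainer } from 'testcontainers';

let sqlServer: StartedTestContainer;
let azurite: StartedTestContainer;

beforeAll(async () => {
  // Spin up SQL Server before any test runs
  sqlServer = await new GenericContainer('mcr.microsoft.com/mssql/server:2022-latest')
    .withEnvironment({ ACCEPT_EULA: 'Y', MSSQL_SA_PASSWORD: 'Your_password123' })
    .withExposedPorts(1433)
    .start();

  // Spin up Azurite (Azure Storage emulator) alongside it
  azurite = await new GenericContainer('mcr.microsoft.com/azure-storage/azurite')
    .withExposedPorts(10000)
    .start();
});

afterAll(async () => {
  await sqlServer.stop();
  await azurite.stop();
});
```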
- Chrome is the primary browser
- Must perform sanity-check on other browsers/devices
- All front-end changes must be tested across supported browsers
When generating test cases, include:
- **Happy Path / Positive Tests**
  - Standard user flows
  - Expected inputs and outputs
  - Normal system behavior
- **Negative Tests**
  - Invalid inputs
  - Unauthorized access attempts
  - Error handling scenarios
- **Edge Cases**
  - Boundary values
  - Empty/null data
  - Maximum/minimum values
  - Special characters
- **Integration Tests**
  - API interactions
  - Database operations
  - External service integrations (Wonde, Clever)
- **Performance Tests**
  - Load scenarios
  - Response time expectations
  - Resource usage
- **Accessibility Tests**
  - WCAG 2.2 AA compliance
  - Screen reader compatibility
  - Keyboard navigation
  - Color contrast
- **Security Tests**
  - Authentication/authorization
  - Data validation
  - SQL injection prevention
  - XSS prevention
Work item hierarchy: Epics → Features → PBIs (Product Backlog Items) → Tasks / Bugs / Impediments
PBI States: New → Refined → Development In Progress → Development Done → Testing In Progress → Testing Done → UAT In Progress → Done
PBI workflow in detail: New → Approved (Dev+Tester pair write ACs in Given/When/Then) → Committed (ready for refinement) → Refined (test cases created, then estimated) → Dev In Progress → Dev Done (PR raised, peer review) → Testing In Progress (local testing, PR approval) → Done (artefact promoted to PREB, then PROD).
When assessing PBI quality before test case generation, check for these gaps:
- Missing or vague acceptance criteria
- No description or insufficient detail
- Undefined success criteria
- Missing affected components/systems
- No specification for validation rules (e.g., password complexity)
- No defined user personas or roles
- Missing non-functional requirements (performance, security)
- Unclear edge cases or error handling
- No mention of data requirements
- Missing integration points or API contracts
- Missing UI/UX designs or mockups
- No performance benchmarks specified
- Missing API specifications or contracts
- No test data availability information
- Missing browser/device compatibility requirements
Positive tests:
- Verify features work as intended with valid inputs
- Cover core user workflows and business logic
- Ensure acceptance criteria are met
- Test standard use cases
Negative tests:
- Verify error handling and invalid inputs
- Test validation rules and constraints
- Ensure the system fails gracefully
- Test unauthorized access scenarios
Edge cases:
- Boundary conditions and limits
- Unusual but valid scenarios
- Special characters and unicode
- Concurrent operations
- Extremely long or empty inputs
Regression tests:
- Ensure existing functionality isn't broken
- Risk-based selection (use the risk matrix below)
- Focus on high-impact areas
- Related features and integrations
Integration tests. When to use: multiple systems/components interact.
- API integrations with external services
- Database connectivity and operations
- Message queue processing
- Service-to-service communication
Performance tests. When to use: speed, load, or scalability matters.
- Response time under normal load
- System behavior under peak load
- Resource usage and optimization
- Database query performance
Security tests. When to use: handling sensitive data, authentication, or authorization.
- SQL injection prevention
- XSS and CSRF protection
- Authentication and authorization checks
- Data encryption and secure storage
- Session management and timeout
Accessibility tests. When to use: user-facing interfaces.
- WCAG 2.2 AA compliance
- Screen reader compatibility
- Keyboard-only navigation
- Color contrast and visual clarity
- Focus indicators and tab order
Responsiveness tests. When to use: web applications across devices/resolutions (see the sketch after this list).
- Mobile (320px-768px)
- Tablet (768px-1024px)
- Desktop (1024px+)
- Orientation changes (portrait/landscape)
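A minimal sketch of exercising these breakpoints in Cypress; the page URL and `data-cy` selector are hypothetical:

```typescript
// Breakpoints mirror the ranges listed above.
const viewports = [
  { label: 'mobile', width: 320, height: 568 },
  { label: 'tablet', width: 768, height: 1024 },
  { label: 'desktop', width: 1280, height: 800 },
];

describe('report page layout', () => {
  viewports.forEach(({ label, width, height }) => {
    it(`renders the main navigation at ${label} width`, () => {
      cy.viewport(width, height);
      cy.visit('/reports'); // hypothetical page
      cy.get('[data-cy="main-nav"]').should('be.visible'); // hypothetical selector
    });
  });
});
```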
Visual regression tests. When to use: visual validation against designs.
- Layout and positioning
- Style consistency (colors, fonts, spacing)
- Visual elements (buttons, icons, images)
- Loading states and animations
Contract tests. When to use: API changes or new endpoints.
- Request/response schema validation
- Breaking change detection
- Backward compatibility
- Version management
Data tests. When to use: data seeding, migration, or external data consumption.
- Data migration validation
- Data integrity checks
- External data source integration
- Data transformation accuracy
Use a Probability × Severity matrix to prioritize regression areas.
Probability (how likely the change affects an area):
- High: Change directly modifies this area OR tightly coupled component
- Medium: Indirect dependency OR shared data structures
- Low: Loosely coupled OR different domain
Severity (impact if the area breaks):
- Critical: System unusable, data loss, security breach, financial impact
- High: Major feature broken, significant user impact, workflow blocked
- Medium: Minor feature affected, workaround available
- Low: Cosmetic issue, minimal impact
Risk priority mapping (see the sketch after this list):
- Critical Priority: High Probability + Critical/High Severity
- High Priority: High Probability + Medium Severity OR Medium Probability + Critical/High Severity
- Medium Priority: Medium Probability + Medium Severity OR Low Probability + Critical/High Severity
- Low Priority: All other combinations
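A minimal sketch of the mapping above expressed as a function (type and function names are illustrative):

```typescript
type Probability = 'High' | 'Medium' | 'Low';
type Severity = 'Critical' | 'High' | 'Medium' | 'Low';
type Priority = 'Critical' | 'High' | 'Medium' | 'Low';

// Encodes the Probability × Severity rules listed above.
function riskPriority(probability: Probability, severity: Severity): Priority {
  const severe = severity === 'Critical' || severity === 'High';
  if (probability === 'High' && severe) return 'Critical';
  if ((probability === 'High' && severity === 'Medium') || (probability === 'Medium' && severe)) return 'High';
  if ((probability === 'Medium' && severity === 'Medium') || (probability === 'Low' && severe)) return 'Medium';
  return 'Low';
}

// Example: a tightly coupled component whose failure would block a workflow
// riskPriority('High', 'High') === 'Critical'
```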
Assessment process:
1. Identify all areas potentially affected by the change
2. For each area, assess Probability and Severity
3. Determine the Risk Priority from the mapping above
4. Recommend testing for Critical and High risk areas
5. Consider Medium risk areas based on available time/resources
All generated test cases must meet these criteria:
- Titles are descriptive and unique - No generic names like "Test login"
- Steps are numbered and actionable - Each step is executable
- Expected results are specific and measurable - No ambiguous language
- No vague terms - Avoid "should work", "properly", "correctly"
- All fields populated - No TBD or placeholder values
- Prerequisites clearly stated - Required setup or state defined
- Test data specified - Exact data values provided
- Pass/fail criteria defined - Objective success criteria
- Linked to acceptance criteria - Maps to specific AC
- Linked to PBI/work item - Clear parent relationship
- Related test cases cross-referenced - Dependencies noted
- Executable by anyone - Given prerequisites, anyone can run it
- Results are objectively verifiable - No subjective judgment needed
- Independent where possible - Doesn't rely on other tests
- Repeatable - Same inputs produce same results
- Structured format - Easy to update and modify
- Automation tags included - Where applicable
- Clear notes - On risks, dependencies, limitations
- Version controlled - Tracked in test management system
High automation suitability. Characteristics:
- Repetitive tests executed frequently
- Regression test candidates
- Tests with clear, deterministic outcomes
- Tests requiring multiple data sets
- Performance/load tests
- Security tests (injection attempts, etc.)
- API/integration tests
Tools: Selenium, Playwright, Cypress, RestAssured, Pytest, OWASP ZAP
Medium automation suitability. Characteristics:
- UI tests with stable elements
- Tests requiring moderate setup
- Cross-browser/device tests
- Data validation tests
- Visual regression tests
Tools: Selenium with screenshot comparison, Percy, Applitools
Low automation suitability (semi-automated). Characteristics:
- Tests requiring subjective judgment
- Tests with frequently changing UI
- Visual design validation (without visual testing tools)
- Complex user workflows with many variations
Tools: Manual testing with assistance from automation for repetitive parts
Not suitable for automation. Characteristics:
- Exploratory testing scenarios
- Usability tests requiring human judgment
- Ad-hoc testing
- Tests where automation cost exceeds value
Approach: Manual testing only
Valid (positive) test data:
- Typical user inputs that should succeed
- Represent common usage patterns
- Cover different user roles/personas
- Examples: Standard email formats, valid phone numbers, expected date ranges
Invalid and edge-case test data (see the sketch after this list):
- Malformed inputs (invalid email format)
- Missing required fields
- Wrong data types (letters in numeric field)
- Out-of-range values (negative quantity)
- Boundary values (minimum/maximum lengths)
- Special characters (!@#$%^&*)
- Unicode and internationalization (中文, العربية)
- Extremely long inputs (SQL injection length)
- Empty strings and null values
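A minimal data-driven sketch covering several of the inputs above; `validateEmail` is a hypothetical system under test, stubbed here so the example is self-contained:

```typescript
// Hypothetical system under test; stubbed so the sketch runs standalone.
function validateEmail(input: string): boolean {
  return /^[^\s@]{1,64}@[^\s@]+\.[^\s@]+$/.test(input);
}

const invalidInputs: string[] = [
  '',                             // empty string
  'plainaddress',                 // malformed: no @ or domain
  'a'.repeat(500) + '@test.com',  // extremely long input
  'user name@test.com',           // invalid character (space)
];

describe('email validation', () => {
  for (const input of invalidInputs) {
    it(`rejects invalid input: ${JSON.stringify(input).slice(0, 40)}`, () => {
      expect(validateEmail(input)).toBe(false);
    });
  }
});
```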
Security test data (see the sketch after this list):
- SQL injection payloads: `' OR '1'='1`, `admin'--`, `1; DROP TABLE users`
- XSS payloads: `<script>alert('xss')</script>`, `<img src=x onerror=alert('xss')>`
- CSRF tokens and session hijacking attempts
- Authentication bypass attempts
- Path traversal: `../../etc/passwd`
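A sketch of how the injection payloads might be exercised; the endpoint, environment variable, and expected status codes are hypothetical:

```typescript
const sqlInjectionPayloads = [`' OR '1'='1`, `admin'--`, `1; DROP TABLE users`];

describe('login input handling', () => {
  for (const payload of sqlInjectionPayloads) {
    it(`rejects SQL injection payload: ${payload}`, async () => {
      const response = await fetch(`${process.env.API_URL}/login`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ username: payload, password: 'x' }),
      });
      // The payload should produce an ordinary auth/validation failure,
      // never a success or a server error suggesting the input was executed
      expect([400, 401, 422]).toContain(response.status);
    });
  }
});
```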
Quick reference for test type recommendations based on work item type:
| Work Item Type | Core Tests | Additional Tests (if applicable) |
|---|---|---|
| User Story | Functional (pos/neg), Edge Cases, Regression | UI, UX, Accessibility, Responsiveness |
| Bug Fix | Functional (negative), Regression | Related feature tests, root cause validation |
| New Feature | Functional (pos/neg), Edge Cases, Integration | Performance, Security, UI, Accessibility |
| API Change | Functional, Integration, Regression | Performance, Security, Contract Testing |
| UI Change | UI, Responsiveness, Accessibility | Functional, Visual Regression |
| Security Feature | Security, Functional | Penetration Testing, Compliance Validation |
| Performance Improvement | Performance, Functional | Load, Stress, Scalability |
| Data Migration | Data Testing, Functional | Integration, Regression, Rollback Testing |
| Infrastructure | Integration, Performance | Availability, Disaster Recovery, Failover |
Use these standards when generating test cases to ensure comprehensive coverage and alignment with GL Assessment's quality expectations.