
Sunday, 29 June 2025

 In the fast-paced world of web development, functionality is paramount, but so is visual integrity. A button that works perfectly but is misaligned, text that's readable but the wrong font size, or a broken layout can severely impact user experience and brand perception. Functional tests, while essential, often miss these subtle yet critical visual defects.

This is where Visual Regression Testing (VRT) comes into play. VRT ensures that your application's UI remains pixel-perfect and consistent across releases, browsers, and devices. And for modern web automation, Playwright offers powerful, built-in capabilities to make VRT not just possible, but efficient.

This blog post will guide you through mastering visual regression testing with Playwright, ensuring your application always looks exactly as intended.

What is Visual Regression Testing?

Visual Regression Testing is a testing technique that compares screenshots of a web page or component against a "baseline" (or "golden") image. If a new screenshot, taken after code changes, differs from the baseline, the test fails, highlighting the visual discrepancies. This allows QA teams and developers to quickly identify unintended UI changes, layout shifts, or styling regressions that functional tests might overlook.

Why is VRT crucial?

  • Catching Hidden UI Bugs: Detects visual glitches, broken layouts, font changes, and color discrepancies that automated functional tests won't.

  • Ensuring Brand Consistency: Maintains a consistent look and feel across your application, crucial for brand identity.

  • Cross-Browser/Device Consistency: Verifies that your UI renders correctly across different browsers (Chromium, Firefox, WebKit) and viewports.

  • Accelerating Development: Catches visual regressions early in the CI/CD pipeline, reducing costly fixes in later stages or production.

  • Boosting Confidence in Deployments: Provides an extra layer of assurance that new features or bug fixes haven't negatively impacted existing UI elements.

Playwright's Built-in Visual Comparison Power

One of Playwright's standout features is its native support for visual comparisons through the toHaveScreenshot() assertion. This means you don't need to rely on external plugins for basic VRT, simplifying your setup and streamlining your workflow.

Step 1: Set up Your Playwright Project

If you haven't already, set up a Playwright project:

Bash
npm init playwright@latest
# Choose TypeScript, add examples, etc.

Step 2: Write Your First Visual Test

Let's create a simple test that navigates to a page and captures a screenshot for comparison.

Create a new test file, e.g., tests/visual.spec.ts:

TypeScript
import { test, expect } from '@playwright/test';

test.describe('Visual Regression Tests', () => {

  test('homepage should look as expected', async ({ page }) => {
    await page.goto('https://www.example.com'); // Replace with your application's URL

    // Capture a full page screenshot and compare it with the baseline
    await expect(page).toHaveScreenshot('homepage.png', { fullPage: true });
  });

  test('specific element should look consistent', async ({ page }) => {
    await page.goto('https://www.example.com/products'); // Replace with a relevant URL

    // Target a specific element for screenshot comparison
    const productCard = page.locator('.product-card').first();
    await expect(productCard).toHaveScreenshot('first-product-card.png');
  });

});

Step 3: Run for Baseline Snapshots

The first time you run a visual test, Playwright will not find a baseline image and will automatically generate one. The test will initially fail, prompting you to review and approve the generated image.

Run your tests:

Bash
npx playwright test tests/visual.spec.ts

You will see output similar to: Error: A snapshot doesn't exist at tests/visual.spec.ts-snapshots/homepage-chromium-linux.png, writing actual. Note that Playwright names each snapshot per browser and platform (hence the -chromium-linux suffix) and stores it in a folder named after the test file.

Step 4: Review and Update Baselines

After the first run, Playwright saves the screenshots in a snapshots folder next to your test file, named after it (e.g., visual.spec.ts-snapshots). Crucially, you must visually inspect these generated baseline images. If they look correct and reflect the desired state of your UI, commit them as your approved baselines. Whenever the UI changes intentionally, regenerate them with:

Bash
npx playwright test --update-snapshots

Now, future runs will compare against these approved baseline images. If the difference exceeds your configured tolerance (strict by default), the test will fail, and Playwright will generate three images in your test-results folder:

  • [test-name]-actual.png: The screenshot from the current run.

  • [test-name]-expected.png: The baseline image.

  • [test-name]-diff.png: A visual representation of the differences (often highlighted in red/pink).

This diff.png is invaluable for quickly pinpointing exactly what changed.
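If you want to surface these artifacts in your own tooling (for example, attaching the diff image to a pull-request comment), their paths can be derived from the naming pattern above. A minimal sketch, assuming the default test-results directory and the [test-name]-actual/-expected/-diff convention described here:

```typescript
// Derives the three comparison artifacts for a failed visual test,
// following the [test-name]-actual/-expected/-diff.png naming convention.
// The resultsDir default is an assumption; match it to your config.
function diffArtifacts(testName: string, resultsDir = "test-results") {
  const base = `${resultsDir}/${testName}`;
  return {
    actual: `${base}-actual.png`,     // screenshot from the current run
    expected: `${base}-expected.png`, // the baseline image
    diff: `${base}-diff.png`,         // highlighted differences
  };
}
```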

Best Practices for Robust Visual Regression Testing

While simple to implement, making VRT truly effective requires some best practices:

  1. Consistent Test Environments: Browser rendering can vary slightly across different operating systems, browser versions, and even hardware. For reliable results, run your VRT tests in a consistent, controlled environment (e.g., dedicated CI/CD agents, Docker containers, or cloud-based Playwright grids).

  2. Handle Dynamic Content: Dynamic elements (timestamps, ads, user-specific data, animations, loading spinners) are notorious sources of flaky tests in VRT.

    • Masking: Use the mask option to hide specific elements during screenshot capture:

      TypeScript
      await expect(page).toHaveScreenshot('page.png', {
        mask: [page.locator('.dynamic-ad'), page.locator('#current-timestamp')],
      });
      
    • Styling: Apply custom CSS via stylePath to hide or alter dynamic elements before taking the screenshot.

    • Wait for Stability: Ensure all animations have completed and dynamic content has loaded before taking the screenshot using Playwright's intelligent waits.

  3. Define Consistent Viewports: Always specify a viewport in your playwright.config.ts or directly in your test to ensure consistent screenshot dimensions across runs and environments.

    TypeScript
    // playwright.config.ts
    use: {
      viewport: { width: 1280, height: 720 },
    },
    
  4. Manage Snapshots Effectively:

    • Version Control: Store your snapshot folders (e.g., visual.spec.ts-snapshots) in version control (e.g., Git). This allows you to track changes to baselines and collaborate effectively.

    • Cross-Browser/Platform Baselines: Playwright automatically generates separate baselines for each browser/OS combination. Review all of them.

    • Regular Review & Update: When UI changes are intentional, update your baselines (--update-snapshots). Make reviewing diff.png images a mandatory part of your code review process for UI changes.

  5. Threshold Tuning: Playwright's toHaveScreenshot() allows options like maxDiffPixels, maxDiffPixelRatio, and threshold to control the sensitivity of the comparison. Adjust these based on your application's needs to reduce false positives while still catching meaningful regressions.

    TypeScript
    await expect(page).toHaveScreenshot('homepage.png', {
      maxDiffPixelRatio: 0.01, // Allow up to 1% pixel difference
      threshold: 0.2, // Tolerance for color difference
    });
    
  6. Integrate into CI/CD: Make VRT a gate in your DevOps pipeline. Run visual tests on every pull request or significant commit to catch UI regressions before they merge into the main branch.
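Conceptually, the comparison passes when the number of mismatched pixels stays within the bounds you set. The sketch below illustrates that acceptance rule only — Playwright's real comparator additionally handles anti-aliasing and per-pixel color distance (the threshold option), so treat this as a mental model, not the actual algorithm:

```typescript
// Illustrative sketch of the pixel-count acceptance rule behind
// toHaveScreenshot() — NOT Playwright's actual comparator.
interface DiffOptions {
  maxDiffPixels?: number;     // absolute cap on mismatched pixels
  maxDiffPixelRatio?: number; // cap as a fraction of total pixels
}

function comparisonPasses(
  diffPixels: number,
  totalPixels: number,
  opts: DiffOptions
): boolean {
  // With no tolerance configured, any mismatched pixel fails.
  if (opts.maxDiffPixels === undefined && opts.maxDiffPixelRatio === undefined) {
    return diffPixels === 0;
  }
  // Every bound that is set must be satisfied.
  const withinCount =
    opts.maxDiffPixels === undefined || diffPixels <= opts.maxDiffPixels;
  const withinRatio =
    opts.maxDiffPixelRatio === undefined ||
    diffPixels <= opts.maxDiffPixelRatio * totalPixels;
  return withinCount && withinRatio;
}
```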

Beyond Playwright's Built-in Features (When to use external tools)

While Playwright's built-in VRT is excellent, for advanced use cases (like comprehensive visual dashboards, visual review workflows, or advanced AI-powered visual comparisons), consider integrating with specialized tools like:

  • Percy (BrowserStack): Offers a cloud-based visual review platform, intelligent visual diffing, and a collaborative UI for approving/rejecting changes.

  • Applitools Eyes: Provides AI-powered visual testing (Visual AI) that understands UI elements, ignoring dynamic content automatically and focusing on actual layout/content changes.

  • Argos: An open-source alternative for visual review.

These tools often provide more sophisticated diffing algorithms and a dedicated UI for reviewing and approving visual changes, which can be invaluable for larger teams or complex applications.

Conclusion: Visual Quality as a First-Class Citizen

In the pursuit of delivering high-quality software at speed, visual regression testing with Playwright is no longer a luxury but a necessity. By leveraging Playwright's powerful built-in capabilities and adhering to best practices, you can effectively catch visual defects, maintain a consistent user experience, and ensure your application always looks its best. This vital layer of testing complements your functional tests, ultimately contributing to a more robust test suite health and greater confidence in every deployment within your DevOps workflow.

Start making "pixel perfect" a standard in your development process today!

 


In today's digital-first world, your web application isn't truly "done" unless it's accessible to everyone. Accessibility testing (often shortened to A11y testing) ensures that your software can be used by people with a wide range of abilities and disabilities, including visual impairments, hearing loss, motor difficulties, and cognitive disabilities. Beyond legal compliance (like WCAG guidelines), building accessible applications means reaching a broader audience, enhancing user experience for all, and demonstrating ethical design.

While manual accessibility testing (e.g., using screen readers, keyboard navigation) is crucial, automating parts of it can significantly accelerate your efforts and catch common issues early. This is where Playwright, a modern and powerful web automation framework, combined with dedicated accessibility tools, comes in.

This guide will provide a practical approach to integrating automated accessibility checks into your Playwright test suite.

Why Accessibility Testing Matters

  • Legal Compliance: Laws like the Americans with Disabilities Act (ADA) in the US, the European Accessibility Act, and WCAG (Web Content Accessibility Guidelines) set standards for digital accessibility. Non-compliance can lead to significant legal repercussions.

  • Wider User Base: Globally, over a billion people live with some form of disability. An inaccessible website excludes a substantial portion of potential users.

  • Improved User Experience: Features designed for accessibility (e.g., clear navigation, proper headings, keyboard support) often benefit all users, not just those with disabilities.

  • SEO Benefits: Many accessibility best practices (like proper semantic HTML, alt text for images) also contribute positively to Search Engine Optimization.

  • Ethical Responsibility: Building inclusive products is simply the right thing to do.

The Role of Automation vs. Manual Testing in A11y

It's important to understand that automated accessibility testing cannot catch all accessibility issues. Many problems, especially those related to cognitive load, user flow, or assistive technology compatibility, require manual accessibility testing and even testing by real users with disabilities.

However, automated tools are excellent at catching a significant percentage (often cited as 30-50%) of common, programmatic errors quickly and consistently. They are best for:

  • Missing alt text for images

  • Insufficient color contrast

  • Missing form labels

  • Invalid ARIA attributes

  • Structural issues (e.g., empty headings)

Automated tests allow you to shift-left testing for accessibility, finding issues early in the development cycle, when they are cheapest and easiest to fix.
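Each of the issue types above maps to a specific axe-core rule ID, which is what you will see in scan results and can pass to disableRules(). A hedged lookup table (rule IDs taken from axe-core's standard rule set; confirm against the version you install):

```typescript
// Common programmatic a11y failures and the axe-core rule IDs that flag them.
// (Rule IDs per axe-core's documented rule set; verify against your version.)
const axeRuleFor: Record<string, string> = {
  "missing alt text": "image-alt",
  "insufficient color contrast": "color-contrast",
  "missing form label": "label",
  "invalid ARIA attribute": "aria-valid-attr",
  "empty heading": "empty-heading",
};

// e.g., temporarily silence a known, accepted contrast issue:
// new AxeBuilder({ page }).disableRules([axeRuleFor["insufficient color contrast"]])
```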

Integrating Axe-core with Playwright for Automated A11y Checks

The most popular and effective tool for automated accessibility scanning is Axe-core by Deque Systems. It's an open-source library that powers accessibility checks in tools like Lighthouse and Accessibility Insights. Playwright integrates seamlessly with Axe-core via the @axe-core/playwright package.

Step 1: Set up your Playwright Project

If you don't have a Playwright project, set one up:

Bash
npm init playwright@latest
# Choose TypeScript, add examples, etc.

Step 2: Install Axe-core Playwright Package

Install the necessary package:

Bash
npm install @axe-core/playwright axe-html-reporter

  • @axe-core/playwright: The core library to run Axe-core with Playwright.

  • axe-html-reporter: (Optional but highly recommended) Generates beautiful, readable HTML reports for accessibility violations.

Step 3: Write Your First Accessibility Test

Let's create a simple test that navigates to a page and runs an Axe scan.

Create a new test file, e.g., tests/accessibility.spec.ts:

TypeScript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';
import { createHtmlReport } from 'axe-html-reporter';
import * as fs from 'fs';
import * as path from 'path';

test.describe('Accessibility Testing', () => {

  test('should not have any automatically detectable accessibility issues on the homepage', async ({ page }, testInfo) => {
    await page.goto('https://www.google.com'); // Replace with your application's URL

    // Run Axe-core scan
    const accessibilityScanResults = await new AxeBuilder({ page })
      .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa', 'best-practice']) // Define WCAG standards and best practices
      .analyze();

    // Generate HTML report for detailed violations
    if (accessibilityScanResults.violations.length > 0) {
      const reportDir = 'test-results/a11y-reports';
      const reportFileName = `${testInfo.title.replace(/[^a-zA-Z0-9]/g, '_')}_${testInfo.workerIndex}.html`;
      const reportPath = path.join(reportDir, reportFileName);

      if (!fs.existsSync(reportDir)) {
        fs.mkdirSync(reportDir, { recursive: true });
      }

      createHtmlReport({
        results: accessibilityScanResults,
        options: {
          outputDir: reportDir,
          reportFileName: reportFileName,
        },
      });
      console.log(`Accessibility report generated: ${reportPath}`);
      // Attach the report to the Playwright test results via the documented API
      await testInfo.attach('accessibility-report', {
        path: reportPath,
        contentType: 'text/html',
      });
    }

    // Assert that there are no accessibility violations
    expect(accessibilityScanResults.violations).toEqual([]);
  });

  test('should not have accessibility issues on a specific element (e.g., form)', async ({ page }) => {
    await page.goto('https://www.example.com/contact'); // Replace with a page with a form

    const accessibilityScanResults = await new AxeBuilder({ page })
      .include('form#contact-form') // Scan only a specific element
      .withTags(['wcag2a', 'wcag2aa'])
      .analyze();

    expect(accessibilityScanResults.violations).toEqual([]);
  });
});

Step 4: Run Your Tests

Bash
npx playwright test tests/accessibility.spec.ts

If violations are found, the test will fail, and an HTML report will be generated in test-results/a11y-reports showing the exact issues, their WCAG criteria, and suggested fixes.
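When a scan fails in CI, the raw violations array can be long. Before opening the HTML report, a quick summary by impact level helps triage. A small helper, modeled on the id/impact/nodes shape of axe-core's violation objects:

```typescript
// Minimal slice of axe-core's violation result shape.
interface Violation {
  id: string;
  impact: "minor" | "moderate" | "serious" | "critical" | null;
  nodes: unknown[]; // one entry per affected element
}

// Counts affected nodes per impact level, e.g. { critical: 3, serious: 1 }.
function summarizeByImpact(violations: Violation[]): Record<string, number> {
  const summary: Record<string, number> = {};
  for (const v of violations) {
    const impact = v.impact ?? "unknown";
    summary[impact] = (summary[impact] ?? 0) + v.nodes.length;
  }
  return summary;
}
```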

Advanced Accessibility Testing Strategies with Playwright

  1. Scanning Specific Elements (.include() / .exclude()): Focus your scan on a particular component or exclude known inaccessible third-party widgets.

    TypeScript
    await new AxeBuilder({ page }).include('#my-component').analyze();
    await new AxeBuilder({ page }).exclude('.third-party-widget').analyze();
    
  2. Configuring Rules and Standards (.withTags() / .disableRules()): Specify which WCAG standards (e.g., wcag2aa for Level AA, wcag21a for WCAG 2.1 Level A) or best practices to include, or temporarily disable specific rules.

    TypeScript
    // Check for WCAG 2.1 Level AA and best practices
    .withTags(['wcag21aa', 'best-practice'])
    // Disable a specific rule (e.g., for known, accepted issues)
    .disableRules(['color-contrast'])
    
  3. Integrating into E2E Flows: Instead of separate tests, run accessibility scans at crucial points within your existing end-to-end functional tests (e.g., after navigating to a new page, after a modal opens).

    TypeScript
    test('User registration flow should be accessible', async ({ page }) => {
      await page.goto('/register');
      const initialScan = await new AxeBuilder({ page }).analyze();
      expect(initialScan.violations).toEqual([]); // Initial page check
    
      await page.fill('#username', 'testuser');
      await page.fill('#password', 'password');
      await page.click('button[type="submit"]');
    
      await page.waitForURL('/dashboard');
      const dashboardScan = await new AxeBuilder({ page }).analyze();
      expect(dashboardScan.violations).toEqual([]); // Dashboard check
    });
    
  4. CI/CD Integration: Automate these accessibility checks to run with every code commit or nightly build. This ensures continuous quality and helps catch regressions early in your DevOps pipeline. Playwright's integration with CI tools makes this straightforward.

Limitations of Automated A11y Testing

Remember, automation is a powerful first line of defense, but it doesn't replace human judgment:

  • Contextual Issues: Automated tools can't determine if the purpose of a link is clear to a user or if the reading order makes sense.

  • Complex Interactions: They struggle with scenarios requiring user intent, like complex form workflows or keyboard-only navigation for custom components.

  • Assistive Technology Compatibility: True compatibility with screen readers, braille displays, etc., requires manual testing with those devices.

Therefore, a truly robust accessibility testing strategy combines automated checks (for speed and coverage of common issues) with expert manual reviews and, ideally, user testing with individuals with disabilities.

Conclusion: Building a More Inclusive Web

Integrating automated accessibility testing with Playwright using tools like Axe-core is a crucial step towards building inclusive and compliant web applications. By making A11y a consistent part of your continuous testing efforts and shifting quality left, you can proactively identify and resolve issues, reduce your test maintenance burden, and ultimately deliver a better experience for every user. Start making accessibility a core part of your quality strategy today!

Saturday, 28 June 2025



Magento applications, with their rich UIs, extensive JavaScript, and reliance on AJAX, often pose unique challenges for test automation. While Playwright's intelligent auto-waiting handles many scenarios, the dynamic nature of Magento's storefront and admin panels demands more sophisticated waiting strategies.

This guide explores specific Playwright waiting mechanisms that are particularly effective when automating tests on a Magento base application.


1. Embracing Playwright's Auto-Waiting (The Foundation)

First and foremost, always leverage Playwright's built-in auto-waiting for actions. This means that when you perform a click(), fill(), check(), etc., Playwright automatically waits for the element to be visible, enabled, stable, and receive events before attempting the action. This is your primary defense against flakiness.

JavaScript
// Playwright automatically waits for the button to be clickable
await page.getByRole('button', { name: 'Add to Cart' }).click();

// Playwright waits for the input to be editable
await page.getByLabel('Search').fill('product name');

However, Magento's complexity often goes beyond simple element actionability.

2. Waiting for Page Load States (After Navigation)

Magento pages, especially PLPs and PDPs, can be heavy. page.waitForLoadState() is crucial after any navigation or form submission.

  • 'domcontentloaded': The HTML has been fully loaded and parsed. Good for quick checks, but not all JS might have executed or assets loaded.

  • 'load': All resources (images, stylesheets, scripts) have finished loading. A safer bet for general page readiness.

  • 'networkidle': Waits until there are no network connections for at least 500 ms. This is often the most reliable for Magento, especially for pages that load content asynchronously after the initial DOM is ready (e.g., related products, product reviews, price updates). Note that the Playwright docs discourage relying on 'networkidle' in tests; prefer a more specific signal (a response or a visible element) when one exists.

JavaScript
// Navigate to a product page and wait for everything to settle
await page.goto('/product/some-product-sku.html', { waitUntil: 'networkidle' });

// After adding to cart, wait for mini-cart to update its content
await page.getByRole('button', { name: 'Add to Cart' }).click();
await page.waitForLoadState('networkidle'); // Might trigger a cart update via AJAX

3. Waiting for Specific URLs (Post-Navigation)

Many Magento actions trigger redirects or change URLs (e.g., login, checkout steps, category navigation). page.waitForURL() is your best friend here.

JavaScript
// After successful login, wait for the dashboard URL
await page.getByRole('button', { name: 'Sign In' }).click();
await page.waitForURL('**/customer/account/', { timeout: 15000 });

// After proceeding to checkout, wait for the first checkout step URL
await page.getByRole('button', { name: 'Proceed to Checkout' }).click();
await page.waitForURL('**/checkout/index/index/#shipping', { timeout: 20000 });

4. Waiting for Network Activity (AJAX-Heavy Interactions)

Magento heavily uses AJAX for dynamic content updates (e.g., filtering products, updating cart quantity, search suggestions). page.waitForResponse() and page.waitForRequest() are indispensable.

  • Waiting for filtered products: When applying a filter on a PLP, the product list often reloads via AJAX.

    JavaScript
    // Apply a filter (e.g., 'Color: Red'); layered navigation reloads the
    // product list via AJAX. The URL fragment below is a placeholder — inspect
    // your theme's network traffic and match the real request URL.
    const productsResponsePromise = page.waitForResponse(response =>
      response.url().includes('color=') && response.status() === 200
    );
    await page.getByLabel('Color').getByText('Red').click();
    await productsResponsePromise; // Wait for the AJAX response to complete
    // Now, assert on the updated product list
    await expect(page.locator('.product-item')).toHaveCount(5);
    
  • Waiting for add-to-cart confirmation:

    JavaScript
    const addToCartResponsePromise = page.waitForResponse(response =>
      response.url().includes('/checkout/cart/add/') && response.status() === 200
    );
    await page.getByRole('button', { name: 'Add to Cart' }).click();
    await addToCartResponsePromise;
    await expect(page.locator('.message.success')).toBeVisible(); // Or check mini-cart
    

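The inline predicates passed to page.waitForResponse() above are good candidates for extraction into named functions: the Playwright call stays the same, but the URL/status logic becomes unit-testable without a browser. A sketch, reusing the add-to-cart endpoint from the example (adjust paths for your theme):

```typescript
// Pure predicates for page.waitForResponse() — testable without a browser.
// Endpoint paths mirror the examples above; adjust for your Magento theme.
interface ResponseLike {
  url(): string;
  status(): number;
}

const isAddToCartResponse = (r: ResponseLike): boolean =>
  r.url().includes("/checkout/cart/add/") && r.status() === 200;

// Usage in a test:
// const responsePromise = page.waitForResponse(isAddToCartResponse);
// await page.getByRole('button', { name: 'Add to Cart' }).click();
// await responsePromise;
```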
5. Waiting for Specific Elements/Locators (Dynamic Content & Overlays)

Magento often displays loading spinners, overlays (like "Adding to Cart" popups), or dynamically loaded blocks.

  • locator.waitFor(): The most direct way to wait for an element's state change.

    JavaScript
    // Wait for the main content area to be visible after a dynamic load
    await page.locator('#maincontent').waitFor({ state: 'visible' });
    
    // Wait for a loading overlay to disappear
    await page.locator('.loading-mask').waitFor({ state: 'hidden' });
    
  • expect().toBeVisible() / expect().toBeHidden(): These are web-first assertions that automatically retry, effectively acting as intelligent waits for visibility.

    JavaScript
    // Assert that the success message appears and wait for it
    await expect(page.locator('.message.success')).toBeVisible({ timeout: 10000 });
    

6. Waiting for Specific Events (Pop-ups, Alerts)

While less common for core Magento flows, third-party extensions might introduce pop-ups (e.g., newsletter sign-ups, cookie consents) or browser alerts.

JavaScript
// In-page modals (newsletter sign-ups, cookie consents) are ordinary DOM
// overlays — handle them with locators, not events:
const newsletterModal = page.locator('#newsletter-popup');
if (await newsletterModal.isVisible()) {
  await newsletterModal.locator('#newsletter-popup-close-button').click();
}

// page.waitForEvent('popup') is for NEW browser windows/tabs (window.open),
// e.g., a third-party payment window. Set up the listener *before* the
// action that opens the window:
const popupPromise = page.waitForEvent('popup');
await page.getByRole('link', { name: 'Open Size Guide' }).click(); // hypothetical trigger
const popup = await popupPromise;
await popup.waitForLoadState();

// Handle a browser dialog (e.g., 'Are you sure you want to delete?')
page.on('dialog', async dialog => {
  console.log(`Dialog message: ${dialog.message()}`);
  await dialog.accept(); // Or dialog.dismiss()
});
// Trigger the action that causes the dialog
await page.getByRole('button', { name: 'Delete Item' }).click();

7. Waiting for Custom JavaScript Conditions (page.waitForFunction())

For extremely specific and complex Magento scenarios where standard waits don't suffice, you might need to wait for a JavaScript variable to be set, a particular class to be added/removed, or a complex animation to complete.

JavaScript
// Example: Wait for a custom JavaScript flag set by Magento's theme after AJAX update
// (e.g., after mini-cart updates, a global JS var `window.cartUpdated` is set to true)
await page.waitForFunction(() => window.cartUpdated === true, null, { timeout: 15000 });

// Wait for a dynamically calculated price to update after selecting options
const priceLocator = page.locator('.product-info-price .price');
await page.waitForFunction((priceSelector) => {
  const priceElement = document.querySelector(priceSelector);
  // Check if price element exists and its text content is not empty or "Loading..."
  return priceElement && priceElement.textContent.trim() !== '' && !priceElement.textContent.includes('Loading');
}, '.product-info-price .price');

8. Best Practices for Magento Waiting

  • Prioritize Specificity: Always prefer waiting for a specific condition (e.g., waitForURL, waitForResponse, locator.waitFor()) over generic waits like networkidle if a more precise signal is available.

  • Combine Waits: For complex interactions (like "Add to Cart" that updates mini-cart via AJAX and possibly shows a success message), you might combine waitForResponse with expect().toBeVisible().

  • Timeouts are Your Friend (and Foe): Playwright has reasonable default timeouts, but Magento's server response times can vary. Adjust actionTimeout, navigationTimeout, and expect.timeout in your playwright.config.js or per-call if specific actions are consistently slow.

  • Debug with Trace Viewer: When tests are flaky due to waiting issues, use Playwright's Trace Viewer (npx playwright test --trace on) to visually inspect the state of the page and network activity leading up to the failure. This helps identify the exact moment your script gets out of sync.

  • Identify Unique Identifiers: Leverage Magento's semantic HTML (roles, labels) and encourage developers to add data-testid attributes to critical dynamic elements to make locators more robust, which Playwright can then auto-wait on more reliably.

  • Avoid page.waitForTimeout(): This is a hard wait and should be avoided at all costs. It makes tests slow and unreliable, as Magento's dynamic loading times are rarely fixed.

By strategically combining these Playwright waiting mechanisms, you can effectively synchronize your automation scripts with the dynamic and sometimes unpredictable nature of a Magento application, leading to more stable, reliable, and faster test execution.


When you embark on a Playwright test automation journey, you quickly encounter playwright.config.js. This seemingly humble JavaScript file is, in fact, the central control panel for your entire test suite. It's where you configure browsers, define parallel execution, set timeouts, integrate reporters, and manage various test environments.

Understanding playwright.config.js is crucial because it dictates the behavior of your tests without needing to modify individual test files. This makes your framework incredibly flexible, scalable, and adaptable to different testing needs.

Let's unravel the key sections of this powerful configuration file.

What is playwright.config.js?

At its core, playwright.config.js is a Node.js module that exports a configuration object. Playwright's test runner reads this file to understand:

  • Where to find your tests.

  • Which browsers to run tests on.

  • How many tests to run in parallel.

  • How to report test results.

  • Various timeouts and debugging options.

  • And much more!

Basic Structure

When you initialize a Playwright project (e.g., npm init playwright@latest), a playwright.config.js file is generated for you. It typically looks something like this:

JavaScript
// playwright.config.js
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './tests', // Where your test files are located
  fullyParallel: true, // Run tests in files in parallel
  forbidOnly: !!process.env.CI, // Disallow .only on CI
  retries: process.env.CI ? 2 : 0, // Number of retries on CI
  workers: process.env.CI ? 1 : undefined, // Number of parallel workers on CI
  reporter: 'html', // Reporter to use

  use: {
    // Base URL to use in tests like `await page.goto('/')`.
    baseURL: 'http://127.0.0.1:3000',
    trace: 'on-first-retry', // Collect trace when retrying a failed test
  },

  /* Configure projects for browsers */
  projects: [
    {
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
    },
    {
      name: 'firefox',
      use: { ...devices['Desktop Firefox'] },
    },
    {
      name: 'webkit',
      use: { ...devices['Desktop Safari'] },
    },
  ],

  /* Run your local dev server before starting the tests */
  // webServer: {
  //   command: 'npm run start',
  //   url: 'http://127.0.0.1:3000',
  //   reuseExistingServer: !process.env.CI,
  // },
});

Let's break down the most important configuration options.

Key Configuration Options Explained

1. testDir

  • Purpose: Specifies the directory where Playwright should look for your test files.

  • Example: testDir: './tests', (looks for tests in a folder named tests at the root).

2. Execution Control & Parallelization

  • fullyParallel: boolean

    • Purpose: If true, tests in different test files will run in parallel.

    • Default: false

  • forbidOnly: boolean

    • Purpose: If true, fails the test run if any test uses .only(). Essential for CI to prevent accidentally committed focused tests.

    • Example: forbidOnly: process.env.CI ? true : false, (only forbid on CI).

  • retries: number

    • Purpose: The number of times to retry a failed test. Highly recommended for CI environments to mitigate flakiness.

    • Example: retries: 2, (retry twice if a test fails).

  • workers: number

    • Purpose: Defines the maximum number of worker processes that Playwright can use to run tests in parallel.

    • Default: About 1/2 of your CPU cores.

    • Example: workers: 4, or workers: process.env.CI ? 1 : undefined, (run sequentially on CI for specific reasons, like database contention).

3. reporter

  • Purpose: Configures how test results are reported. You can specify single or multiple reporters.

  • Common Built-in Reporters:

    • 'list': Prints a list of tests and their status (default).

    • 'dot': Prints a dot for each test (pass/fail).

    • 'line': A concise reporter that reuses a single line for progress and prints failures in full.

    • 'html': Generates a rich, interactive HTML report (highly recommended for local viewing).

    • 'json': Exports results as a JSON file.

    • 'junit': Exports results in JUnit XML format (common for CI/CD tools).

  • Example (multiple reporters):

    JavaScript
    reporter: [
      ['list'],
      ['html', { open: 'never' }], // Don't open automatically after run
      ['json', { outputFile: 'test-results.json' }],
    ],
    

4. use

This is a global configuration object applied to all tests unless overridden by projects. It contains browser-specific settings and test runtime options.

  • baseURL: string

    • Purpose: The base URL for your application. Allows you to use relative paths like await page.goto('/') in your tests.

    • Example: baseURL: 'http://localhost:8080',

  • headless: boolean

    • Purpose: If true, browsers run in headless mode (without a UI). Ideal for CI. If false, browsers launch with a visible UI.

    • Default: true in CI, false otherwise.

    • Example: headless: true,

  • viewport: { width: number, height: number }

    • Purpose: Sets the browser viewport size.

    • Example: viewport: { width: 1280, height: 720 },

  • Timeouts (actionTimeout, navigationTimeout, expect.timeout)

    • actionTimeout: number: Maximum time for any action (click, fill, etc.) to complete. Includes auto-waiting.

    • navigationTimeout: number: Maximum time for a navigation to occur.

    • expect.timeout: number: Default timeout for expect() assertions (Web-First Assertions).

    • Example:

      JavaScript
      actionTimeout: 10000, // 10 seconds
      navigationTimeout: 30000, // 30 seconds
      expect: { timeout: 5000 }, // 5 seconds for assertions
      
  • Artifacts on Failure (screenshot, video, trace)

    • Purpose: Configure what artifacts Playwright saves when a test fails. Crucial for debugging.

    • screenshot: 'off', 'on', 'only-on-failure'.

    • video: 'off', 'on', 'retain-on-failure'.

    • trace: 'off', 'on', 'retain-on-failure', 'on-first-retry'. on-first-retry is a good balance.

    • Example:

      JavaScript
      screenshot: 'only-on-failure',
      video: 'retain-on-failure',
      trace: 'on-first-retry',
      
  • testIdAttribute: string

    • Purpose: Defines the data-* attribute that Playwright's getByTestId() locator should look for. Connects directly to our previous discussion on robust locators!

    • Default: 'data-testid'

    • Example: testIdAttribute: 'data-qa-id', (if your developers use data-qa-id).

5. projects

  • Purpose: Defines different test configurations (projects). This is how you run tests across multiple browsers, device emulations, or even different environments (e.g., staging vs. production API tests). Each project can override global use settings.

  • Key usage: Often combined with devices (Playwright's predefined device presets).

  • Example (Desktop & Mobile):

    JavaScript
    projects: [
      {
        name: 'desktop_chromium',
        use: { ...devices['Desktop Chrome'] },
      },
      {
        name: 'mobile_safari',
        use: { ...devices['iPhone 12'] }, // Emulate iPhone 12
      },
      // You can also define projects for different environments:
      // {
      //   name: 'api_staging',
      //   testMatch: /.*\.api\.spec\.js/, // Run only API tests
      //   use: { baseURL: 'https://staging.api.example.com' },
      // },
    ],
    

    To run specific projects: npx playwright test --project=desktop_chromium

6. webServer

  • Purpose: Automatically starts a local development server before tests run and stops it afterwards. Ideal for testing front-end applications that need to be served.

  • Example:

    JavaScript
    webServer: {
      command: 'npm run start', // Command to start your dev server
      url: 'http://localhost:3000', // URL the server should be available at
      reuseExistingServer: !process.env.CI, // Don't start if already running (useful locally)
      timeout: 120 * 1000, // Timeout for the server to start (2 minutes)
    },
    

7. defineConfig

  • Purpose: (Used implicitly in the default template) A helper function that provides type safety and better IntelliSense/autocompletion for your configuration object, especially useful in TypeScript. While not strictly required for JavaScript, it's good practice.

  • Example: export default defineConfig({ ... });

Tips and Best Practices

  1. Start Simple: Don't over-configure initially. Add options as your needs evolve.

  2. Leverage projects: Use projects extensively for managing different test dimensions (browsers, devices, environments).

  3. Use Environment Variables: Parameterize sensitive data or environment-specific values using process.env.

  4. Manage Timeouts Wisely: Adjust timeouts based on your application's typical responsiveness, but avoid excessively long timeouts which can hide performance issues.

  5. Artifacts for Debugging: Always configure screenshot, video, and trace on failure, especially for CI runs. They are invaluable for debugging.

  6. testIdAttribute: Collaborate with developers to implement a consistent data-testid strategy in your application and configure it here.

Conclusion

playwright.config.js is much more than just a settings file; it's a powerful tool that enables you to precisely control your test execution, improve debugging, and build a highly adaptable test automation framework. By understanding and effectively utilizing its myriad options, you can tailor Playwright to fit the exact needs of your project, ensuring robust, efficient, and reliable test automation.

Popular Posts