This blog post brings you a 40-question mock test designed to mirror the structure and difficulty of the real ISTQB CTFL exam. Take your time, answer each question to the best of your ability, and then use the provided answer key to check your performance. Aim to complete these 40 questions within 60 minutes, just like the actual exam.
Important Note on Interactivity: While it would be fantastic to offer a fully interactive quiz here with real-time scoring and highlighting, this blog post format primarily delivers text. To experience an interactive version with automated scoring and feedback (like showing marks and highlighting wrong answers in red), you would typically need a dedicated online quiz platform or custom web development using HTML, CSS, and JavaScript.
For now, treat this as a classic paper-based mock test. Grab a pen and paper, mark your answers, and then compare them with our solution at the end!
Instructions:
There are 40 multiple-choice questions.
Each question has only one correct answer.
The passing score is 65% (26 out of 40).
Recommended time: 60 minutes.
Questions:
1. Which of the following is a potential benefit of using an Independent Test Team?
A. To avoid the developer's bias in finding defects.
B. To avoid conflict between developers and testers.
C. To reduce the need for formal test planning.
D. To eliminate the need for retesting after defect fixes.
2. Which of the following is a valid objective for testing?
A. To prove that all defects are removed.
B. To ensure the software is 100% defect-free.
C. To find defects and reduce the risk of failure.
D. To reduce the cost of quality assurance.
3. Which of the following statements about the relationship between testing and debugging is TRUE?
A. Testing and debugging are the same activity.
B. Testing finds defects; debugging removes them.
C. Debugging should always be done before testing.
D. Testing can only be done after debugging is complete.
4. According to the seven testing principles, which statement is true about 'Tests wear out'?
A. Test cases should be discarded after one use.
B. Repeating the same tests will find new defects over time.
C. As more and more tests are run, the likelihood of finding new defects decreases for those same tests.
D. Tests must be performed in new environments to remain effective.
5. Which of the following is NOT a fundamental test activity?
A. Test planning and control.
B. Test analysis and design.
C. Test management and leadership.
D. Test implementation and execution.
6. What is the primary purpose of static testing?
A. To execute the code and observe its behavior.
B. To find defects without executing the code.
C. To measure the performance of the software.
D. To identify security vulnerabilities at runtime.
7. Which of the following is a benefit of early test involvement (Shift-Left)?
A. Defects are found when they are cheapest to fix.
B. Test cases can be designed more quickly.
C. There is no need for retesting.
D. It eliminates the need for detailed requirements.
8. In which phase of the fundamental test process is a test charter typically created?
A. Test planning and control.
B. Test analysis.
C. Test implementation.
D. Test execution.
9. Which of the following is a typical work product of static testing?
A. Test cases.
B. Defect reports.
C. Review reports.
D. Test scripts.
10. What is the main difference between verification and validation?
A. Verification is "Are we building the right product?", validation is "Are we building the product right?".
B. Verification is "Are we building the product right?", validation is "Are we building the right product?".
C. Verification is always manual, validation is always automated.
D. Verification happens after coding, validation happens before coding.
11. Which test level focuses on the interaction between integrated components?
A. Unit testing.
B. Integration testing.
C. System testing.
D. Acceptance testing.
12. Which test type confirms that previously reported defects have been fixed?
A. Regression testing.
B. Sanity testing.
C. Smoke testing.
D. Confirmation testing.
13. Given the following statements about maintenance testing:
1. It is performed on existing software.
2. It is triggered by modifications, migrations, or retirement.
3. It always requires new test cases to be written.
4. It only involves re-running existing regression tests.
Which statements are TRUE?
A. 1, 2, and 3.
B. 1 and 2.
C. 2, 3, and 4.
D. 1 and 4.
14. What is the purpose of exit criteria in a test plan?
A. To define the start conditions for testing.
B. To specify when testing can be stopped.
C. To outline the resources required for testing.
D. To identify the types of tests to be performed.
15. Which of the following is an example of a product risk?
A. Unrealistic project deadlines.
B. High turnover of development staff.
C. Software crashing in production.
D. Inability to get expert advice on testing.
16. Which of the following test techniques is a Black-Box technique?
A. Statement testing.
B. Boundary value analysis.
C. Branch testing.
D. Code coverage analysis.
17. You are testing an input field that accepts values between 1 and 100. Using Equivalence Partitioning, which equivalence partitions would you identify?
A. Less than 1, 1 to 100, Greater than 100.
B. 1, 50, 100.
C. 0, 101.
D. Any number between 1 and 100.
18. Based on the Boundary Value Analysis for an input field that accepts values between 10 and 20 (inclusive), which values would be considered boundary values?
A. 9, 10, 20, 21.
B. 10, 11, 19, 20.
C. 1, 10, 20, 100.
D. 10, 20.
19. Which of the following is a typical defect found by static analysis?
A. Incorrect calculation results.
B. Memory leaks.
C. Inconsistent user interface.
D. Misspellings in error messages.
20. What is the main characteristic of Experience-based testing techniques?
A. They require formal documentation.
B. They rely on the tester's knowledge, intuition, and experience.
C. They are always automated.
D. They are used only for performance testing.
21. A defect report should contain which of the following?
A. Developer's name.
B. Root cause of the defect.
C. Steps to reproduce the defect.
D. Time taken to fix the defect.
22. Which of the following is a K1 level question?
A. Explain why static testing is beneficial.
B. Calculate the number of test cases using equivalence partitioning.
C. Define "test objective".
D. Analyze a given scenario to identify a project risk.
23. What is the primary purpose of a test policy?
A. To provide detailed steps for executing tests.
B. To define the overall goals and approach to testing for an organization.
C. To list all test environments required for a project.
D. To document specific test techniques to be used.
24. Which of the following describes a typical objective for alpha testing?
A. Formal testing conducted to determine if a system satisfies its acceptance criteria.
B. Operational testing by potential users at external sites.
C. Testing by a potential user/customer at the developer's site.
D. Testing to find defects in the interfaces between components.
25. Which of the following is a benefit of having an independent test team?
A. It guarantees that no defects will be missed.
B. It helps identify developer bias in defect reporting.
C. It eliminates communication issues between development and testing.
D. It reduces the need for test tools.
26. Which metric is typically used to monitor test progress?
A. Number of defects found per tester.
B. Test case execution status (e.g., pass/fail percentage).
C. Lines of code written per day.
D. Number of hours worked by the test team.
27. What is the purpose of a test execution schedule?
A. To define what needs to be tested.
B. To specify who will perform which test activities and when.
C. To list the tools required for testing.
D. To detail the conditions for exiting testing.
28. Which type of review is typically led by the author of the work product and is considered the least formal?
A. Inspection.
B. Walkthrough.
C. Informal review.
D. Technical review.
29. What is the main purpose of configuration management in testing?
A. To manage the test team's daily tasks.
B. To ensure that all testware is uniquely identified, version controlled, and traceable.
C. To control the project budget.
D. To manage customer relationships.
30. Which of the following is a characteristic of good testing?
A. Testing should focus on proving that the software works perfectly.
B. Testing should always be performed by independent testers.
C. Testing should be context-dependent.
D. Testing should find all defects.
31. What is the primary reason for performing retesting?
A. To find new defects introduced by the fix.
B. To ensure that the fixed defect does not reappear.
C. To verify that all test cases passed in the previous execution.
D. To check compatibility with different operating systems.
32. Consider the following decision table for a travel booking system:
Condition / Action | Rule 1 | Rule 2 | Rule 3 |
Child < 2 years | Yes | No | No |
Child 2-12 years | No | Yes | No |
Adult | No | No | Yes |
Discount 10% | Yes | No | No |
Discount 5% | No | Yes | No |
Full Price | No | No | Yes |
Which of the following is a valid test case based on this decision table?
A. Child 1 year old, gets 5% discount.
B. Child 8 years old, gets 10% discount.
C. Adult, gets full price.
D. Child 1 year old, gets full price.
33. What is the main benefit of using a risk-based approach to testing?
A. It eliminates the need for detailed test cases.
B. It ensures that all possible defects are found.
C. It focuses testing efforts where they are most needed, based on risk.
D. It always reduces the overall testing effort and time.
34. Which of the following is an example of an operational acceptance test?
A. Checking if the software integrates with third-party systems.
B. Verifying system performance under peak load.
C. Testing the software for usability by end-users.
D. Checking backup and restore procedures.
35. Which testing principle states that "complete testing is impossible"?
A. Exhaustive testing is impossible.
B. Tests wear out.
C. Defect clustering.
D. Pesticide paradox.
36. You are testing a mobile application. Which of the following is a primary concern for maintenance testing in this context?
A. Ensuring the initial development timeline is met.
B. Verifying functionality after an operating system update.
C. Designing new features based on market research.
D. Conducting usability tests for the first time.
37. What is the purpose of traceability between test cases and requirements?
A. To measure the performance of the testers.
B. To ensure that every requirement has at least one corresponding test case.
C. To identify the root cause of defects quickly.
D. To automate the test execution process.
38. Which of the following is NOT a characteristic of good testing?
A. It provides sufficient information to stakeholders to make informed decisions.
B. It is performed only after coding is complete.
C. It focuses on defect prevention.
D. It identifies the root cause of failures.
39. Which of the following is a benefit of static analysis tools?
A. They detect defects early in the SDLC.
B. They execute code to find runtime errors.
C. They are primarily used for performance testing.
D. They eliminate the need for code reviews.
40. What is the objective of component testing?
A. To test interfaces between integrated components.
B. To test individual software components in isolation.
C. To verify the entire system against user requirements.
D. To check non-functional characteristics like performance.
Compare your answers with the correct solutions below.
1. A. To avoid the developer's bias in finding defects.
2. C. To find defects and reduce the risk of failure.
3. B. Testing finds defects; debugging removes them.
4. C. As more and more tests are run, the likelihood of finding new defects decreases for those same tests. (This describes the Pesticide Paradox.)
5. C. Test management and leadership. (While important, it's a role, not one of the fundamental activities: Planning, Analysis, Design, Implementation, Execution, Reporting, Completion.)
6. B. To find defects without executing the code.
7. A. Defects are found when they are cheapest to fix.
8. B. Test analysis.
9. C. Review reports.
10. B. Verification is "Are we building the product right?", validation is "Are we building the right product?".
11. B. Integration testing.
12. D. Confirmation testing.
13. B. 1 and 2.
14. B. To specify when testing can be stopped.
15. C. Software crashing in production.
16. B. Boundary value analysis.
17. A. Less than 1 (invalid), 1 to 100 (valid), Greater than 100 (invalid).
18. A. 9, 10, 20, 21. (Values just outside and on the boundaries.)
19. B. Memory leaks. (Static analysis can detect potential memory leaks in code logic, unlike some other defect types listed that require execution.)
20. B. They rely on the tester's knowledge, intuition, and experience.
21. C. Steps to reproduce the defect.
22. C. Define "test objective". (K1 is about remembering/defining.)
23. B. To define the overall goals and approach to testing for an organization.
24. C. Testing by a potential user/customer at the developer's site.
25. B. It helps identify developer bias in defect reporting.
26. B. Test case execution status (e.g., pass/fail percentage).
27. B. To specify who will perform which test activities and when.
28. C. Informal review.
29. B. To ensure that all testware is uniquely identified, version controlled, and traceable.
30. C. Testing should be context-dependent.
31. B. To ensure that the fixed defect does not reappear.
32. C. Adult, gets full price.
33. C. It focuses testing efforts where they are most needed, based on risk.
34. D. Checking backup and restore procedures.
35. A. Exhaustive testing is impossible.
36. B. Verifying functionality after an operating system update.
37. B. To ensure that every requirement has at least one corresponding test case.
38. B. It is performed only after coding is complete. (Good testing is continuous/shift-left.)
39. A. They detect defects early in the SDLC.
40. B. To test individual software components in isolation.
Count how many answers you got correct.
Divide your correct answers by 40 and multiply by 100 to get your percentage.
Remember, a typical passing score is 65% (26 out of 40).
Playwright Interview Questions
Playwright has rapidly become a favorite among automation engineers for its speed, reliability, and powerful feature set. If you're eyeing a role in test automation, particularly one that leverages Playwright, being prepared for a range of questions is crucial.
This blog post provides a comprehensive list of Playwright interview questions, from fundamental concepts to more advanced topics and real-world problem-solving scenarios, designed to help you showcase your expertise.
Fundamental Questions
These questions assess your basic understanding of Playwright's architecture, key components, and core functionalities.
What is Playwright, and how does it fundamentally differ from Selenium?
Hint: Discuss architecture (WebDriver protocol vs. direct browser interaction), auto-waiting, browser support, isolated contexts, multi-language support.
Explain the relationship between Browser, BrowserContext, and Page in Playwright.
Hint: Hierarchy, isolation, use cases for each (e.g., BrowserContext for user sessions, Page for tabs).
What are Playwright's auto-waiting capabilities, and why are they significant for test stability?
Hint: Explain what it waits for (visible, enabled, stable, detached/attached) and how it reduces explicit waits and flakiness.
Describe the various types of locators in Playwright and when you would choose one over another.
Hint: Discuss getByRole, getByText, getByLabel, getByPlaceholder, getByAltText, getByTitle, getByTestId, CSS, XPath. Emphasize "Web-First" locators.
How do you handle different types of waits in Playwright (beyond auto-waiting)? Provide examples.
Hint: waitForLoadState, waitForURL, waitForSelector, waitForResponse/waitForRequest, waitForEvent, waitForFunction.
What is playwright.config.js used for, and name at least five key configurations you'd typically set there?
Hint: testDir, use (baseURL, headless, viewport, timeouts, trace), projects, reporter, retries, workers, webServer.
Explain Playwright's expect assertions. What are "soft assertions" and when would you use them?
Hint: Auto-retrying nature of expect. Soft assertions (expect.soft) to continue test execution even after an assertion failure.
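A small sketch of the difference (the page route and test IDs are hypothetical):

```ts
import { test, expect } from '@playwright/test';

test('dashboard widgets render', async ({ page }) => {
  await page.goto('/dashboard'); // hypothetical route

  // Soft assertions record failures but let the test keep running,
  // so one broken widget doesn't hide problems in the others.
  await expect.soft(page.getByTestId('orders-widget')).toBeVisible();
  await expect.soft(page.getByTestId('revenue-widget')).toBeVisible();

  // A regular (hard) assertion still aborts the test on failure.
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```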
How do you set up and tear down test environments or data using Playwright's test runner? (Think Hooks and Fixtures)
Hint: beforeEach, afterEach, beforeAll, afterAll, and custom test fixtures for reusable setup/teardown.
Can Playwright be used for API testing? If so, how?
Hint: request fixture, page.route(), mocking.
What is Trace Viewer, and how does it aid in debugging Playwright tests?
Hint: Visual timeline, screenshots, DOM snapshots, network logs, console messages for post-mortem analysis.
Advanced Questions
These questions delve deeper into Playwright's powerful features and challenge your problem-solving abilities.
You need to test an application that requires users to log in. How would you handle authentication efficiently across multiple tests to avoid repeated logins?
Hint: storageState, browserContext.storageState(), reusing authenticated contexts.
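One common pattern, sketched below with hypothetical routes, selectors, and credentials: log in once in a setup test, persist the state with storageState, and point every other test at the saved file via the config.

```ts
// auth.setup.ts — log in once and persist cookies/localStorage
import { test as setup } from '@playwright/test';

setup('authenticate', async ({ page }) => {
  await page.goto('/login');                                  // hypothetical route
  await page.getByLabel('Username').fill('demo-user');        // hypothetical form
  await page.getByLabel('Password').fill('demo-password');
  await page.getByRole('button', { name: 'Log in' }).click();
  await page.waitForURL('**/dashboard');

  // Save the authenticated state for reuse by every other test
  await page.context().storageState({ path: 'playwright/.auth/user.json' });
});

// In playwright.config.ts, other projects then reuse it:
//   use: { storageState: 'playwright/.auth/user.json' }
```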
Explain Network Interception (page.route()) in Playwright. Provide a scenario where it would be indispensable.
Hint: Mocking API responses, simulating network errors/delays, blocking third-party scripts.
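For instance, a mocked product API (the URL pattern and payload are hypothetical) keeps a UI test independent of a flaky backend:

```ts
// Fulfill matching requests with a canned response instead of hitting the server
await page.route('**/api/products*', route =>
  route.fulfill({
    status: 200,
    contentType: 'application/json',
    body: JSON.stringify([{ id: 1, name: 'Mock product' }]), // hypothetical payload
  })
);
await page.goto('/products'); // the UI now renders the mocked data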
How do you perform visual regression testing using Playwright? What are the limitations or common pitfalls?
Hint: toHaveScreenshot() (or toMatchSnapshot()), pixel comparison, handling dynamic content, screenshot stability.
Your application has an iframe for a payment gateway. How would you interact with elements inside this iframe using Playwright?
Hint: frameLocator(), accessing frame content.
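A brief sketch (the iframe selector and field labels are hypothetical):

```ts
// frameLocator() scopes subsequent locators to the iframe's document
const payment = page.frameLocator('#payment-iframe');
await payment.getByLabel('Card number').fill('4111 1111 1111 1111');
await payment.getByRole('button', { name: 'Pay now' }).click();
```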
Describe how Playwright facilitates parallel test execution. What are the benefits and potential considerations?
Hint: workers, fullyParallel, isolated browser contexts, benefits (speed, isolation), considerations (shared resources, reporting).
How would you handle file uploads and downloads in Playwright? Provide a code snippet for each.
Hint: setInputFiles(), waitForEvent('download').
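Sketches of both, with hypothetical selectors and file paths:

```ts
// Upload: set the file(s) directly on a file input
await page.locator('input[type="file"]').setInputFiles('fixtures/report.pdf');

// Download: start waiting for the event *before* triggering it
const downloadPromise = page.waitForEvent('download');
await page.getByRole('link', { name: 'Export CSV' }).click();
const download = await downloadPromise;
await download.saveAs(`downloads/${download.suggestedFilename()}`);
```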
Your tests are running fine locally but consistently fail on CI/CD with "Timeout" errors. What steps would you take to debug and resolve this?
Hint: Check CI logs, use Trace Viewer, adjust timeouts (CI vs. local), check network conditions, ensure webServer is stable.
You need to test a responsive website across different device viewports and mobile emulations. How would you configure your Playwright tests for this?
Hint: projects, devices presets, viewport in use configuration.
How would you debug a Playwright test script interactively in your IDE?
Hint: page.pause(), DEBUG=pw:api environment variable, VS Code debugger integration.
Can you explain the concept of Test Fixtures in Playwright beyond simple beforeEach/afterEach? Provide a scenario for a custom fixture.
Hint: Reusable setup/teardown logic, passing resources (like API clients) to tests, complex setups (e.g., a logged-in user fixture, a database connection fixture).
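As one possible answer, a custom fixture built with test.extend — here a hypothetical logged-in page handed to each test (routes and credentials are illustrative):

```ts
import { test as base, expect, Page } from '@playwright/test';

type MyFixtures = {
  loggedInPage: Page; // a page that is already authenticated
};

export const test = base.extend<MyFixtures>({
  loggedInPage: async ({ page }, use) => {
    // Setup: runs before the test body
    await page.goto('/login');
    await page.getByLabel('Username').fill('demo-user');
    await page.getByLabel('Password').fill('demo-password');
    await page.getByRole('button', { name: 'Log in' }).click();
    await page.waitForURL('**/dashboard');

    await use(page); // hand the ready page to the test

    // Teardown: anything after use() runs once the test finishes
  },
});

test('profile shows the user name', async ({ loggedInPage }) => {
  await expect(loggedInPage.getByText('demo-user')).toBeVisible();
});
```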
Scenario-Based Questions
These questions test your practical application of Playwright knowledge in realistic situations.
Scenario: "Our e-commerce application has a product filter that updates the product list asynchronously without a full page reload. When a filter is applied, a small loading spinner appears for 2-5 seconds, then disappears, and the product count updates. How would you ensure your Playwright test reliably waits for the new product list to load after applying a filter?"
* Expected Answer: Combine waitForResponse (for the filter API call) with locator.waitFor({ state: 'hidden' }) (for the loading spinner) and then expect(page.locator('.product-item')).toHaveCount(...) (which auto-waits for elements).
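Put together, the wait might look like this sketch (the selectors, URL fragment, and expected count are hypothetical):

```ts
// Start listening for the filter API response before triggering it
const filterResponse = page.waitForResponse(
  resp => resp.url().includes('/api/products') && resp.ok()
);
await page.getByRole('checkbox', { name: 'In stock' }).check();
await filterResponse;

// Wait for the spinner to disappear, then assert on the refreshed list
await page.locator('.loading-spinner').waitFor({ state: 'hidden' });
await expect(page.locator('.product-item')).toHaveCount(12);
```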
Scenario: "You need to automate a checkout flow where after clicking 'Place Order,' the page navigates to an order confirmation page, but there's an intermediate redirect and a few seconds of network activity before the final content renders. How would you write a robust wait for the order confirmation to be fully displayed?"
* Expected Answer: Use page.waitForURL('**/order-confirmation-success-url', { timeout: 30000 }) combined with waitUntil: 'networkidle' or waitForLoadState('networkidle'). Then, verify a key element on the confirmation page using expect().toBeVisible().
Scenario: "Your application has a complex form with conditional fields. When you select 'Option A' from a dropdown, 'Field X' becomes visible, and 'Field Y' becomes hidden. How would you automate filling out 'Field X' only after 'Option A' is selected and 'Field Y' is confirmed hidden?"
* Expected Answer: await page.selectOption('#dropdown', 'Option A'); then await expect(page.locator('#fieldX')).toBeVisible(); and await expect(page.locator('#fieldY')).toBeHidden(); before filling fieldX. Playwright's auto-waiting with expect assertions would handle the dynamic visibility.
Scenario: "You're getting intermittent failures on your CI pipeline, specifically when tests interact with a 'Save' button. The error message is often 'Element is not enabled'. What could be the cause, and how would you investigate and fix it?"
* Expected Answer: Discuss auto-waiting not being enough if an element is disabled. Assert await expect(saveButton).toBeEnabled() before the click (note that locator.waitFor() only accepts the attached/detached/visible/hidden states, not 'enabled'). Debug with Trace Viewer (npx playwright test --trace on), video recording, and console logs. Check for JavaScript errors preventing enablement.
Scenario: "Your team wants to implement data-driven testing for user login with 100 different user credentials. How would you structure your Playwright tests and manage this test data effectively?"
* Expected Answer: Use a JSON or CSV file for data. Iterate over the records with a plain for loop that declares one test() per entry (@playwright/test has no built-in test.each; a loop gives the same parameterization). Briefly mention separating data from logic, and the potential need for API-driven data setup if users need to be created dynamically.
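A sketch of the loop-based parameterization (the JSON file, its fields, and the assertions are hypothetical; importing JSON assumes resolveJsonModule is enabled in tsconfig):

```ts
import { test, expect } from '@playwright/test';
import users from './fixtures/users.json'; // hypothetical: [{ username, password }, ...]

for (const user of users) {
  // One test is declared per record; Playwright runs them like any other tests
  test(`login works for ${user.username}`, async ({ page }) => {
    await page.goto('/login');
    await page.getByLabel('Username').fill(user.username);
    await page.getByLabel('Password').fill(user.password);
    await page.getByRole('button', { name: 'Log in' }).click();
    await expect(page.getByText(`Welcome, ${user.username}`)).toBeVisible();
  });
}
```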
This comprehensive list should give automation engineers a strong foundation to confidently approach Playwright interviews!
Manual Testing Interview Questions
Your ultimate guide to cracking QA interviews with confidence!
Manual testing remains a critical skill in the software industry. Whether you're a fresher or an experienced tester, preparing for interviews with a strong set of common and real-world questions is essential.
This blog gives you 50 hand-picked manual testing questions with simple, clear answers, based on real interview scenarios and ISTQB fundamentals.
1. What is software testing?
Answer: Software testing is the process of verifying that the software works as intended and is free from defects. It ensures quality, performance, and reliability.
2. What is the difference between verification and validation?
Answer:
Verification: Are we building the product right? (Reviews, walkthroughs)
Validation: Are we building the right product? (Testing the actual software)
3. What is the difference between an error, a bug/defect, and a failure?
Answer:
Error: Human mistake in coding
Bug/Defect: Deviation from expected behavior
Failure: System behaves unexpectedly due to a defect
4. What is the Software Testing Life Cycle (STLC)?
Answer: Software Testing Life Cycle includes phases like:
Requirement Analysis → Test Planning → Test Case Design → Environment Setup → Test Execution → Closure.
5. What is the difference between a test case and a test scenario?
Answer:
Test Case: Detailed step-by-step instructions.
Test Scenario: High-level functionality to be tested.
6. What is the difference between smoke testing and sanity testing?
Answer:
Smoke: Basic checks to ensure app stability.
Sanity: Focused testing after bug fixes.
7. What is regression testing?
Answer: Testing unchanged parts of the application to ensure new code hasn’t broken existing functionality.
8. What is retesting?
Answer: Testing a specific functionality again after a bug has been fixed.
9. What is the difference between severity and priority?
Answer:
Severity: Impact of defect on functionality.
Priority: Urgency to fix the defect.
10. What is exploratory testing?
Answer: Informal testing where testers explore the application without pre-written test cases.
11. How do you write a good test case?
Answer: Identify the test objective → write clear steps → expected result → actual result → status.
12. What are the main testing approaches based on knowledge of the code?
Answer:
Black-box testing
White-box testing
Gray-box testing
13. What is boundary value analysis?
Answer: Testing at boundary values. E.g., if valid age is 18-60, test 17, 18, 60, 61.
14. What is equivalence partitioning?
Answer: Dividing input data into valid and invalid partitions and testing one from each.
15. What is decision table testing?
Answer: Tabular method for representing and testing complex business rules.
16. What is use case testing?
Answer: Testing based on user’s interaction with the application.
17. What is ad-hoc testing?
Answer: Informal testing without a plan or documentation.
18. What is compatibility testing?
Answer: Ensuring software works across different devices, OS, and browsers.
19. What is usability testing?
Answer: Testing how user-friendly the application is.
20. What is end-to-end testing?
Answer: Testing the complete workflow from start to finish as a user would.
21. What is a defect?
Answer: A deviation from expected behavior or requirement.
22. What is the defect life cycle?
Answer: New → Assigned → Open → Fixed → Retest → Closed/Rejected/Duplicate.
23. What does a defect report typically contain?
Answer: ID, title, steps, expected result, actual result, severity, priority, status.
24. Which defect tracking tools have you used?
Answer: Jira, Bugzilla, Mantis, Redmine.
25. What is defect leakage?
Answer: Defect found by end-user which was not found during testing.
26. What is defect density?
Answer: Number of defects per unit size of code (e.g., defects per 1,000 lines). For example, 30 defects in 15,000 lines of code gives a density of 2 defects per KLOC.
27. How do you decide the priority of a defect?
Answer: Based on business impact and criticality to functionality.
28. What is root cause analysis?
Answer: Finding the origin of the defect to avoid future issues.
29. What is the difference between reproducible and non-reproducible defects?
Answer:
Reproducible: Can be consistently repeated.
Non-reproducible: Happens occasionally or under unknown conditions.
30. What is a blocker defect?
Answer: A critical defect that stops testing or development progress.
31. What is a test plan?
Answer: A document that defines the scope, strategy, resources, and schedule of testing activities.
32. What is a test strategy?
Answer: High-level document describing the testing approach across the organization or project.
33. What is test data?
Answer: Input data used during testing to simulate real-world conditions.
34. What is a test summary report?
Answer: A summary of all testing activities and outcomes at the end of the test cycle.
35. What are entry and exit criteria?
Answer:
Entry: Conditions before testing starts.
Exit: Conditions to stop testing.
36. What is test coverage?
Answer: A measure of the extent of testing performed on the application (code, requirements, test cases).
37. What is a requirement traceability matrix (RTM)?
Answer: A document that maps test cases to requirements to ensure full coverage.
38. Which test management tools have you used?
Answer: TestRail, Zephyr, HP ALM, Xray.
39. How do you manage a large number of test cases?
Answer: Organize by modules/features; maintain version control; use tools like Excel or TestRail.
40. What is risk-based testing?
Answer: Prioritizing test cases based on risk of failure or business impact.
41. Have you worked in an Agile environment?
Answer: Yes. Participated in daily standups, sprint planning, and delivered tests in iterations.
42. How do you handle frequently changing requirements?
Answer: Stay flexible, update test cases and plans, communicate impact clearly.
43. What do you do if a developer rejects your defect report?
Answer: Provide detailed steps and evidence (screenshots/logs); discuss the issue with team leads if needed.
44. How do you ensure that all requirements are tested?
Answer: Use requirement traceability matrix and map each requirement to test cases.
45. How do you estimate testing effort?
Answer: Based on number of features, complexity, past experience, and available resources.
46. How do you keep your testing knowledge up to date?
Answer: Follow QA blogs, take courses, attend webinars, and read documentation.
47. Have you ever missed a critical bug? What did you learn?
Answer: Yes. It taught me the importance of edge case testing and peer reviews.
48. What is shift-left testing?
Answer: Involving testing early in the development life cycle to catch defects sooner.
49. How do you test a mobile application?
Answer: Test on real devices and emulators, check compatibility, UI, and performance.
50. Why should we hire you as a manual tester?
Answer: I have a strong grasp of testing fundamentals, excellent bug reporting skills, and a passion for quality. I ensure user experience and product stability.