Showing posts with label Interview Prep.

Tuesday, 1 July 2025

SDLC Interactive Mock Test

Instructions:

There are 40 multiple-choice questions.
Each question has only one correct answer.
The passing score is 65% (26 out of 40).
Recommended time: 60 minutes.

SDLC Mock Test: Test Your Software Development Knowledge

1. Which phase of the SDLC focuses on understanding and documenting what the system should do?

2. In which SDLC model are phases completed sequentially, with no overlap?

3. What is the primary goal of the Design phase in SDLC?

4. Which SDLC model emphasizes iterative development and frequent collaboration with customers?

5. What is 'Unit Testing' primarily concerned with?

6. Which phase involves writing the actual code based on the design specifications?

7. What is a key characteristic of the Maintenance phase in SDLC?

8. Which SDLC model is best suited for projects with unclear requirements that are likely to change?

9. What is 'Integration Testing' concerned with?

10. In the V-Model, which testing phase corresponds to the Requirements Gathering phase?

11. What is the primary purpose of a Feasibility Study in the initial phase of SDLC?

12. Which document is typically produced during the Requirements Gathering phase?

13. What does CI/CD stand for in the context of modern SDLC practices?

14. Which SDLC model is characterized by its emphasis on risk management and iterative refinement?

15. What is the primary output of the Implementation/Coding phase?

16. Which of the following is a non-functional requirement?

17. What is the purpose of 'User Acceptance Testing' (UAT)?

18. Which SDLC phase typically involves creating flowcharts, data models, and architectural diagrams?

19. What is the main characteristic of a 'prototype' in software development?

20. What is the purpose of 'Version Control Systems' (e.g., Git) in SDLC?

21. Which SDLC model is known for its high risk in large projects due to late defect discovery?

22. What is the 'Deployment' phase of the SDLC?

23. Which of the following is a benefit of adopting DevOps practices in SDLC?

24. What is a 'Sprint' in the Scrum Agile framework?

25. Which SDLC model is a sequential design process in which progress is seen as flowing steadily downwards (like a waterfall) through phases?

26. What is the primary purpose of a 'System Requirements Specification' (SRS)?

27. Which SDLC model includes distinct phases for risk analysis and prototyping at each iteration?

28. What is 'Refactoring' in the context of software development?

29. Which phase of the SDLC involves monitoring the system for performance, security, and user feedback after deployment?

30. What is a 'backlog' in Agile methodologies?

31. Which of the following is a benefit of using an Iterative SDLC model?

32. What is the role of a 'System Analyst' in the SDLC?

33. Which SDLC model explicitly links each development phase with a corresponding testing phase?

34. What is 'Scrum'?

35. What is the primary purpose of a 'Daily Stand-up' meeting in Agile?

36. Which SDLC phase would typically involve creating a 'Test Plan'?

37. What is the concept of 'Technical Debt' in software development?

38. Which of the following is a common challenge in the Requirements Gathering phase?

39. What is the purpose of a 'Post-Implementation Review'?

40. Which of the following best describes 'DevOps'?


Ready to ace your ISTQB Certified Tester Foundation Level (CTFL) exam? Practice is paramount! While studying the official syllabus and glossary is essential, testing your knowledge with mock exams is the best way to prepare for the actual exam format, question types, and time pressure.

This blog post brings you a 40-question mock test designed to mirror the structure and difficulty of the real ISTQB CTFL exam. Take your time, answer each question to the best of your ability, and then use the provided answer key to check your performance. Aim to complete these 40 questions within 60 minutes, just like the actual exam.

Important Note on Interactivity: While it would be fantastic to offer a fully interactive quiz here with real-time scoring and highlighting, this blog post format primarily delivers text. To experience an interactive version with automated scoring and feedback (like showing marks and highlighting wrong answers in red), you would typically need a dedicated online quiz platform or custom web development using HTML, CSS, and JavaScript.

For now, treat this as a classic paper-based mock test. Grab a pen and paper, mark your answers, and then compare them with our solution at the end!


ISTQB Certified Tester Foundation Level (CTFL) Mock Test

Instructions:

  • There are 40 multiple-choice questions.

  • Each question has only one correct answer.

  • The passing score is 65% (26 out of 40).

  • Recommended time: 60 minutes.


Questions:

1. Which of the following is a potential benefit of using an Independent Test Team?

A. To avoid the developer's bias in finding defects. 

B. To avoid conflict between developers and testers. 

C. To reduce the need for formal test planning. 

D. To eliminate the need for retesting after defect fixes.

2. Which of the following is a valid objective for testing? 

A. To prove that all defects are removed. 

B. To ensure the software is 100% defect-free. 

C. To find defects and reduce the risk of failure. 

D. To reduce the cost of quality assurance.

3. Which of the following statements about the relationship between testing and debugging is TRUE? 

A. Testing and debugging are the same activity. 

B. Testing finds defects; debugging removes them. 

C. Debugging should always be done before testing.

 D. Testing can only be done after debugging is complete.

4. According to the seven testing principles, which statement is true about 'Tests wear out'? 

A. Test cases should be discarded after one use. 

B. Repeating the same tests will find new defects over time. 

C. As more and more tests are run, the likelihood of finding new defects decreases for those same tests. 

D. Tests must be performed in new environments to remain effective.

5. Which of the following is NOT a fundamental test activity? 

A. Test planning and control. 

B. Test analysis and design. 

C. Test management and leadership. 

D. Test implementation and execution.

6. What is the primary purpose of static testing? 

A. To execute the code and observe its behavior. 

B. To find defects without executing the code. 

C. To measure the performance of the software. 

D. To identify security vulnerabilities at runtime.

7. Which of the following is a benefit of early test involvement (Shift-Left)? 

A. Defects are found when they are cheapest to fix. 

B. Test cases can be designed more quickly. 

C. There is no need for retesting. 

D. It eliminates the need for detailed requirements.

8. In which phase of the fundamental test process is a test charter typically created? 

A. Test planning and control. 

B. Test analysis. 

C. Test implementation. 

D. Test execution.

9. Which of the following is a typical work product of static testing? 

A. Test cases. 

B. Defect reports. 

C. Review reports. 

D. Test scripts.

10. What is the main difference between verification and validation? 

A. Verification is "Are we building the right product?", validation is "Are we building the product right?". 

B. Verification is "Are we building the product right?", validation is "Are we building the right product?". 

C. Verification is always manual, validation is always automated. 

D. Verification happens after coding, validation happens before coding.

11. Which test level focuses on the interaction between integrated components? 

A. Unit testing. 

B. Integration testing. 

C. System testing. 

D. Acceptance testing.

12. Which test type confirms that defects have been fixed and do not reappear? 

A. Regression testing. 

B. Sanity testing. 

C. Smoke testing. 

D. Confirmation testing.

13. Given the following statements about maintenance testing:

  1. It is performed on existing software.

  2. It is triggered by modifications, migrations, or retirement.

  3. It always requires new test cases to be written.

  4. It only involves re-running existing regression tests. Which statements are TRUE? 

A. 1, 2, and 3. 

B. 1 and 2. 

C. 2, 3, and 4. 

D. 1 and 4.

14. What is the purpose of exit criteria in a test plan? 

A. To define the start conditions for testing.

 B. To specify when testing can be stopped. 

C. To outline the resources required for testing.

 D. To identify the types of tests to be performed.

15. Which of the following is an example of a product risk?

A. Unrealistic project deadlines. 

B. High turnover of development staff. 

C. Software crashing in production. 

D. Inability to get expert advice on testing.

16. Which of the following test techniques is a Black-Box technique? 

A. Statement testing. 

B. Boundary value analysis. 

C. Branch testing. 

D. Code coverage analysis.

17. You are testing an input field that accepts values between 1 and 100. Using Equivalence Partitioning, which are the valid equivalence classes? 

A. Less than 1, 1 to 100, Greater than 100. 

B. 1, 50, 100. 

C. 0, 101. 

D. Any number between 1 and 100.

18. Based on the Boundary Value Analysis for an input field that accepts values between 10 and 20 (inclusive), which values would be considered boundary values? 

A. 9, 10, 20, 21. 

B. 10, 11, 19, 20. 

C. 1, 10, 20, 100. 

D. 10, 20.

19. Which of the following is a typical defect found by static analysis? 

A. Incorrect calculation results.

 B. Memory leaks. 

C. Inconsistent user interface.

 D. Misspellings in error messages.

20. What is the main characteristic of Experience-based testing techniques?

A. They require formal documentation. 

B. They rely on the tester's knowledge, intuition, and experience. 

C. They are always automated. 

D. They are used only for performance testing.

21. A defect report should contain which of the following?

A. Developer's name. 

B. Root cause of the defect. 

C. Steps to reproduce the defect.

 D. Time taken to fix the defect.

22. Which of the following is a K1 level question? 

A. Explain why static testing is beneficial. 

B. Calculate the number of test cases using equivalence partitioning. 

C. Define "test objective".

 D. Analyze a given scenario to identify a project risk.

23. What is the primary purpose of a test policy? 

A. To provide detailed steps for executing tests. 

B. To define the overall goals and approach to testing for an organization. 

C. To list all test environments required for a project. 

D. To document specific test techniques to be used.

24. Which of the following describes a typical objective for alpha testing? 

A. Formal testing conducted to determine if a system satisfies its acceptance criteria. 

B. Operational testing by potential users at external sites. 

C. Testing by a potential user/customer at the developer's site. 

D. Testing to find defects in the interfaces between components.

25. Which of the following is a benefit of having an independent test team? 

A. It guarantees that no defects will be missed. 

B. It helps identify developer bias in defect reporting. 

C. It eliminates communication issues between development and testing. 

D. It reduces the need for test tools.

26. Which metric is typically used to monitor test progress? 

A. Number of defects found per tester. 

B. Test case execution status (e.g., pass/fail percentage). 

C. Lines of code written per day. 

D. Number of hours worked by the test team.

27. What is the purpose of a test execution schedule? 

A. To define what needs to be tested. 

B. To specify who will perform which test activities and when. 

C. To list the tools required for testing. 

D. To detail the conditions for exiting testing.

28. Which type of review is typically led by the author of the work product and is considered the least formal? 

A. Inspection. 

B. Walkthrough. 

C. Informal review. 

D. Technical review.

29. What is the main purpose of configuration management in testing? 

A. To manage the test team's daily tasks. 

B. To ensure that all testware is uniquely identified, version controlled, and traceable.

C. To control the project budget. 

D. To manage customer relationships.

30. Which of the following is a characteristic of good testing? 

A. Testing should focus on proving that the software works perfectly. 

B. Testing should always be performed by independent testers. 

C. Testing should be context-dependent. 

D. Testing should find all defects.

31. What is the primary reason for performing retesting? 

A. To find new defects introduced by the fix. 

B. To ensure that the fixed defect does not reappear. 

C. To verify that all test cases passed in the previous execution. 

D. To check compatibility with different operating systems.

32. Consider the following decision table for a travel booking system:

Condition / Action | Rule 1 | Rule 2 | Rule 3
-------------------|--------|--------|-------
Child < 2 years    | Yes    | No     | No
Child 2-12 years   | No     | Yes    | No
Adult              | No     | No     | Yes
Discount 10%       | Yes    | No     | No
Discount 5%        | No     | Yes    | No
Full Price         | No     | No     | Yes

Which of the following is a valid test case based on this decision table? 

A. Child 1 year old, gets 5% discount. 

B. Child 8 years old, gets 10% discount. 

C. Adult, gets full price. 

D. Child 1 year old, gets full price.
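For intuition, the rules in the table above can be sketched as a small function; this is a hypothetical illustration for study purposes, not part of the exam.

```javascript
// Sketch of the discount rules encoded by the decision table above.
// Each age band maps to exactly one rule and one action.
function fareAction(age) {
  if (age < 2) return '10% discount';              // Rule 1
  if (age >= 2 && age <= 12) return '5% discount'; // Rule 2
  return 'full price';                             // Rule 3
}

console.log(fareAction(1));  // → 10% discount
console.log(fareAction(8));  // → 5% discount
console.log(fareAction(30)); // → full price
```

Tracing option C through the function confirms it is the only consistent test case: an adult falls into Rule 3, whose only "Yes" action is Full Price.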

33. What is the main benefit of using a risk-based approach to testing? 

A. It eliminates the need for detailed test cases. 

B. It ensures that all possible defects are found. 

C. It focuses testing efforts where they are most needed, based on risk. 

D. It always reduces the overall testing effort and time.

34. Which of the following is an example of an operational acceptance test? 

A. Checking if the software integrates with third-party systems. 

B. Verifying system performance under peak load. 

C. Testing the software for usability by end-users. 

D. Checking backup and restore procedures.

35. Which testing principle states that "complete testing is impossible"? 

A. Exhaustive testing is impossible. 

B. Tests wear out. 

C. Defect clustering. 

D. Pesticide paradox.

36. You are testing a mobile application. Which of the following is a primary concern for maintenance testing in this context? 

A. Ensuring the initial development timeline is met. 

B. Verifying functionality after an operating system update. 

C. Designing new features based on market research. 

D. Conducting usability tests for the first time.

37. What is the purpose of traceability between test cases and requirements? 

A. To measure the performance of the testers. 

B. To ensure that every requirement has at least one corresponding test case. 

C. To identify the root cause of defects quickly. 

D. To automate the test execution process.

38. Which of the following is NOT a characteristic of good testing?

A. It provides sufficient information to stakeholders to make informed decisions. 

B. It is performed only after coding is complete. 

C. It focuses on defect prevention. 

D. It identifies the root cause of failures.

39. Which of the following is a benefit of static analysis tools? 

A. They detect defects early in the SDLC. 

B. They execute code to find runtime errors. 

C. They are primarily used for performance testing. 

D. They eliminate the need for code reviews.

40. What is the objective of component testing? 

A. To test interfaces between integrated components. 

B. To test individual software components in isolation. 

C. To verify the entire system against user requirements. 

D. To check non-functional characteristics like performance.


Answer Key

Compare your answers with the correct solutions below. Bolded options are correct.

  1. A. To avoid the developer's bias in finding defects.

  2. C. To find defects and reduce the risk of failure.

  3. B. Testing finds defects; debugging removes them.

  4. C. As more and more tests are run, the likelihood of finding new defects decreases for those same tests. (This describes the Pesticide Paradox)

  5. C. Test management and leadership. (While important, this is a role/skill set, not one of the fundamental test activities: Planning, Monitoring and Control, Analysis, Design, Implementation, Execution, Completion.)

  6. B. To find defects without executing the code.

  7. A. Defects are found when they are cheapest to fix.

  8. B. Test analysis.

  9. C. Review reports.

  10. B. Verification is "Are we building the product right?", validation is "Are we building the right product?".

  11. B. Integration testing.

  12. D. Confirmation testing.

  13. B. 1 and 2.

  14. B. To specify when testing can be stopped.

  15. C. Software crashing in production.

  16. B. Boundary value analysis.

  17. A. Less than 1 (invalid), 1 to 100 (valid), Greater than 100 (invalid).

  18. A. 9, 10, 20, 21. (Values just outside and on the boundaries).

  19. B. Memory leaks. (Static analysis can detect potential memory leaks in code logic, unlike some other defect types listed that require execution).

  20. B. They rely on the tester's knowledge, intuition, and experience.

  21. C. Steps to reproduce the defect.

  22. C. Define "test objective". (K1 is about remembering/defining).

  23. B. To define the overall goals and approach to testing for an organization.

  24. C. Testing by a potential user/customer at the developer's site.

  25. B. It helps identify developer bias in defect reporting.

  26. B. Test case execution status (e.g., pass/fail percentage).

  27. B. To specify who will perform which test activities and when.

  28. C. Informal review.

  29. B. To ensure that all testware is uniquely identified, version controlled, and traceable.

  30. C. Testing should be context-dependent.

  31. B. To ensure that the fixed defect does not reappear.

  32. C. Adult, gets full price.

  33. C. It focuses testing efforts where they are most needed, based on risk.

  34. D. Checking backup and restore procedures.

  35. A. Exhaustive testing is impossible.

  36. B. Verifying functionality after an operating system update.

  37. B. To ensure that every requirement has at least one corresponding test case.

  38. B. It is performed only after coding is complete. (Good testing is continuous/shift-left).

  39. A. They detect defects early in the SDLC.

  40. B. To test individual software components in isolation.


Calculate Your Score!

  • Count how many answers you got correct.

  • Divide your correct answers by 40 and multiply by 100 to get your percentage.

  • Remember, a typical passing score is 65% (26 out of 40).

Saturday, 28 June 2025

Playwright Interview Questions

Playwright has rapidly become a favorite among automation engineers for its speed, reliability, and powerful feature set. If you're eyeing a role in test automation, particularly one that leverages Playwright, being prepared for a range of questions is crucial.

This blog post provides a comprehensive list of Playwright interview questions, from fundamental concepts to more advanced topics and real-world problem-solving scenarios, designed to help you showcase your expertise.

Foundational Playwright Concepts

These questions assess your basic understanding of Playwright's architecture, key components, and core functionalities.

  1. What is Playwright, and how does it fundamentally differ from Selenium?

    • Hint: Discuss architecture (WebDriver protocol vs. direct browser interaction), auto-waiting, browser support, isolated contexts, multi-language support.

  2. Explain the relationship between Browser, BrowserContext, and Page in Playwright.

    • Hint: Hierarchy, isolation, use cases for each (e.g., BrowserContext for user sessions, Page for tabs).

  3. What are Playwright's auto-waiting capabilities, and why are they significant for test stability?

    • Hint: Explain what it waits for (visible, enabled, stable, detached/attached) and how it reduces explicit waits and flakiness.

  4. Describe the various types of locators in Playwright and when you would choose one over another.

    • Hint: Discuss getByRole, getByText, getByLabel, getByPlaceholder, getByAltText, getByTitle, getByTestId, CSS, XPath. Emphasize "Web-First" locators.

  5. How do you handle different types of waits in Playwright (beyond auto-waiting)? Provide examples.

    • Hint: waitForLoadState, waitForURL, waitForSelector, waitForResponse/waitForRequest, waitForEvent, waitForFunction.

  6. What is playwright.config.js used for, and name at least five key configurations you'd typically set there?

    • Hint: testDir, use (baseURL, headless, viewport, timeouts, trace), projects, reporter, retries, workers, webServer.
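As a quick illustration of the hint above, a minimal playwright.config.js might look like the sketch below; every value is an illustrative default, not a recommendation.

```javascript
// playwright.config.js — a minimal sketch; values are illustrative only.
module.exports = {
  testDir: './tests',                      // where spec files live
  retries: process.env.CI ? 2 : 0,         // retry flaky tests on CI only
  workers: process.env.CI ? 4 : undefined, // parallel worker count
  reporter: 'html',
  use: {
    baseURL: 'https://example.com',        // hypothetical app URL
    headless: true,
    viewport: { width: 1280, height: 720 },
    trace: 'on-first-retry',               // capture a trace when a retry happens
  },
  projects: [{ name: 'chromium', use: { browserName: 'chromium' } }],
};
```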

  7. Explain Playwright's expect assertions. What are "soft assertions" and when would you use them?

    • Hint: Auto-retrying nature of expect. Soft assertions (expect.soft) to continue test execution even after an assertion failure.

  8. How do you set up and tear down test environments or data using Playwright's test runner? (Think Hooks and Fixtures)

    • Hint: beforeEach, afterEach, beforeAll, afterAll, and custom test fixtures for reusable setup/teardown.

  9. Can Playwright be used for API testing? If so, how?

    • Hint: request fixture, page.route(), mocking.

  10. What is Trace Viewer, and how does it aid in debugging Playwright tests?

    • Hint: Visual timeline, screenshots, DOM snapshots, network logs, console messages for post-mortem analysis.

Advanced Concepts & Scenarios

These questions delve deeper into Playwright's powerful features and challenge your problem-solving abilities.

  1. You need to test an application that requires users to log in. How would you handle authentication efficiently across multiple tests to avoid repeated logins?

    • Hint: storageState, browserContext.storageState(), reusing authenticated contexts.
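A minimal sketch of the storageState approach, assuming a Playwright browser instance is passed in; the login URL, selectors, and environment variables are hypothetical placeholders.

```javascript
// Sketch: log in once and persist session state for reuse across tests.
// Assumes a Playwright `browser` instance; URL/selectors are hypothetical.
async function saveAuthState(browser) {
  const context = await browser.newContext();
  const page = await context.newPage();
  await page.goto('https://example.com/login');
  await page.fill('#username', process.env.TEST_USER ?? 'demo');
  await page.fill('#password', process.env.TEST_PASS ?? 'secret');
  await page.click('button[type="submit"]');
  // Persist cookies + localStorage to disk for later reuse.
  await context.storageState({ path: 'auth.json' });
  await context.close();
}
// Tests then opt in via use: { storageState: 'auth.json' } in playwright.config.js.
```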

  2. Explain Network Interception (page.route()) in Playwright. Provide a scenario where it would be indispensable.

    • Hint: Mocking API responses, simulating network errors/delays, blocking third-party scripts.

  3. How do you perform visual regression testing using Playwright? What are the limitations or common pitfalls?

    • Hint: toMatchSnapshot(), pixel comparison, handling dynamic content, screenshot stability.

  4. Your application has an iframe for a payment gateway. How would you interact with elements inside this iframe using Playwright?

    • Hint: frameLocator(), accessing frame content.

  5. Describe how Playwright facilitates parallel test execution. What are the benefits and potential considerations?

    • Hint: workers, fullyParallel, isolated browser contexts, benefits (speed, isolation), considerations (shared resources, reporting).

  6. How would you handle file uploads and downloads in Playwright? Provide a code snippet for each.

    • Hint: setInputFiles(), waitForEvent('download').
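Hedged sketches of both flows, assuming a Playwright page object is supplied by the test runner; the URLs, selectors, and file paths are placeholders.

```javascript
// Upload: setInputFiles() attaches a local file to an <input type="file">.
async function uploadReport(page) {
  await page.goto('https://example.com/upload');
  await page.setInputFiles('input[type="file"]', 'fixtures/report.pdf');
  await page.click('#submit');
}

// Download: start listening for the 'download' event BEFORE triggering it,
// otherwise the event can fire before the listener is attached.
async function exportFile(page) {
  await page.goto('https://example.com/files');
  const downloadPromise = page.waitForEvent('download');
  await page.click('a#export');
  const download = await downloadPromise;
  await download.saveAs('output/' + download.suggestedFilename());
}
```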

  7. Your tests are running fine locally but consistently fail on CI/CD with "Timeout" errors. What steps would you take to debug and resolve this?

    • Hint: Check CI logs, use Trace Viewer, adjust timeouts (CI vs. local), check network conditions, ensure webServer is stable.

  8. You need to test a responsive website across different device viewports and mobile emulations. How would you configure your Playwright tests for this?

    • Hint: projects, devices presets, viewport in use configuration.

  9. How would you debug a Playwright test script interactively in your IDE?

    • Hint: page.pause(), DEBUG=pw:api environment variable, VS Code debugger integration.

  10. Can you explain the concept of Test Fixtures in Playwright beyond simple beforeEach/afterEach? Provide a scenario for a custom fixture.

    • Hint: Reusable setup/teardown logic, passing resources (like API clients) to tests, complex setups (e.g., a logged-in user fixture, a database connection fixture).

Real-Time / Scenario-Based Questions

These questions test your practical application of Playwright knowledge in realistic situations.

  1. Scenario: "Our e-commerce application has a product filter that updates the product list asynchronously without a full page reload. When a filter is applied, a small loading spinner appears for 2-5 seconds, then disappears, and the product count updates. How would you ensure your Playwright test reliably waits for the new product list to load after applying a filter?"

    • Expected Answer: Combine waitForResponse (for the filter API call) with locator.waitFor({ state: 'hidden' }) (for the loading spinner) and then expect(page.locator('.product-item')).toHaveCount(...) (which auto-waits for elements).

  2. Scenario: "You need to automate a checkout flow where after clicking 'Place Order,' the page navigates to an order confirmation page, but there's an intermediate redirect and a few seconds of network activity before the final content renders. How would you write a robust wait for the order confirmation to be fully displayed?"

    • Expected Answer: Use page.waitForURL('**/order-confirmation-success-url', { timeout: 30000 }) combined with waitUntil: 'networkidle' or waitForLoadState('networkidle'). Then, verify a key element on the confirmation page using expect().toBeVisible().

  3. Scenario: "Your application has a complex form with conditional fields. When you select 'Option A' from a dropdown, 'Field X' becomes visible, and 'Field Y' becomes hidden. How would you automate filling out 'Field X' only after 'Option A' is selected and 'Field Y' is confirmed hidden?"

    • Expected Answer: await page.selectOption('#dropdown', 'Option A'); then await expect(page.locator('#fieldX')).toBeVisible(); and await expect(page.locator('#fieldY')).toBeHidden(); before filling fieldX. Playwright's auto-waiting with expect assertions would handle the dynamic visibility.

  4. Scenario: "You're getting intermittent failures on your CI pipeline, specifically when tests interact with a 'Save' button. The error message is often 'Element is not enabled'. What could be the cause, and how would you investigate and fix it?"

    • Expected Answer: Discuss auto-waiting not being sufficient if an element stays disabled. Suggest await expect(locator).toBeEnabled() before the click (note that locator.waitFor() only supports attached/detached/visible/hidden states, not 'enabled'). Debug with Trace Viewer (npx playwright test --trace on), video recording, and console logs. Check for JavaScript errors preventing enablement.

  5. Scenario: "Your team wants to implement data-driven testing for user login with 100 different user credentials. How would you structure your Playwright tests and manage this test data effectively?"

    • Expected Answer: Use a JSON or CSV file for data. Loop over the records at module level and call test() once per entry (Playwright's test runner supports generating tests in a loop). Briefly mention separating data from logic, and the potential need for API-driven data setup if users must be created dynamically.


This comprehensive list should give you a strong foundation to approach Playwright interviews with confidence!

 

Top 50 Manual Testing Interview Questions and Answers (2025 Edition)

Your ultimate guide to cracking QA interviews with confidence!



Manual testing remains a critical skill in the software industry. Whether you're a fresher or an experienced tester, preparing for interviews with a strong set of common and real-world questions is essential.

This blog gives you 50 hand-picked manual testing questions with simple, clear answers, based on real interview scenarios and ISTQB fundamentals.


🔥 Core Manual Testing Interview Questions & Answers

1. What is software testing?

Answer: Software testing is the process of verifying that the software works as intended and is free from defects. It ensures quality, performance, and reliability.


2. What is the difference between verification and validation?

Answer:

  • Verification: Are we building the product right? (Reviews, walkthroughs)

  • Validation: Are we building the right product? (Testing the actual software)


3. What is the difference between defect, bug, error, and failure?

Answer:

  • Error: Human mistake in coding

  • Bug/Defect: Deviation from expected behavior

  • Failure: System behaves unexpectedly due to a defect


4. What is STLC?

Answer: Software Testing Life Cycle includes phases like:
Requirement Analysis → Test Planning → Test Case Design → Environment Setup → Test Execution → Closure.


5. What is the difference between test case and test scenario?

Answer:

  • Test Case: Detailed step-by-step instructions.

  • Test Scenario: High-level functionality to be tested.


6. What is the difference between smoke and sanity testing?

Answer:

  • Smoke: Basic checks to ensure app stability.

  • Sanity: Focused testing after bug fixes.


7. What is regression testing?

Answer: Testing unchanged parts of the application to ensure new code hasn’t broken existing functionality.


8. What is retesting?

Answer: Testing a specific functionality again after a bug has been fixed.


9. What is severity and priority?

Answer:

  • Severity: Impact of defect on functionality.

  • Priority: Urgency to fix the defect.


10. What is exploratory testing?

Answer: Informal testing where testers explore the application without pre-written test cases.


🧪 Test Design & Execution Questions

11. How do you write a test case?

Answer: Identify the test objective → write clear steps → expected result → actual result → status.


12. What are test techniques?

Answer:

  • Black-box testing

  • White-box testing

  • Gray-box testing


13. Explain Boundary Value Analysis.

Answer: Testing at boundary values. E.g., if valid age is 18-60, test 17, 18, 60, 61.
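The age example above can be sketched in code using the two-value boundary convention (one value just outside and one on each boundary); the function name is illustrative.

```javascript
// Sketch: derive boundary-value candidates for an inclusive numeric range.
function boundaryValues(min, max) {
  // Just below the lower bound, both bounds, just above the upper bound.
  return [min - 1, min, max, max + 1];
}

console.log(boundaryValues(18, 60)); // → [ 17, 18, 60, 61 ]
```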


14. Explain Equivalence Partitioning.

Answer: Dividing input data into valid and invalid partitions and testing one from each.
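A minimal sketch of partitioning a 1-100 input field; the labels are illustrative, and one representative per partition is enough to cover the class.

```javascript
// Sketch: classify an input into equivalence partitions for a 1–100 field.
function partitionOf(value) {
  if (value < 1) return 'invalid: below range';
  if (value > 100) return 'invalid: above range';
  return 'valid: in range';
}

// Pick one representative from each partition, e.g. 0, 50, and 101.
console.log(partitionOf(0));   // → invalid: below range
console.log(partitionOf(50));  // → valid: in range
console.log(partitionOf(101)); // → invalid: above range
```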


15. What is decision table testing?

Answer: Tabular method for representing and testing complex business rules.


16. What is use case testing?

Answer: Testing based on user’s interaction with the application.


17. What is adhoc testing?

Answer: Informal testing without a plan or documentation.


18. What is compatibility testing?

Answer: Ensuring software works across different devices, OS, and browsers.


19. What is usability testing?

Answer: Testing how user-friendly the application is.


20. What is end-to-end testing?

Answer: Testing the complete workflow from start to finish as a user would.


๐Ÿž Bug Reporting & Defect Life Cycle

21. What is a defect?

Answer: A deviation from expected behavior or requirement.


22. What is the defect life cycle?

Answer: New → Assigned → Open → Fixed → Retest → Closed/Rejected/Duplicate.


23. What are components of a bug report?

Answer: ID, title, steps, expected result, actual result, severity, priority, status.


24. What tools are used for defect tracking?

Answer: Jira, Bugzilla, Mantis, Redmine.


25. What is defect leakage?

Answer: Defect found by end-user which was not found during testing.


26. What is defect density?

Answer: Number of defects per unit size of code (e.g., defects per 1,000 lines).
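As a quick worked example with illustrative numbers:

```javascript
// Sketch: defect density as defects per KLOC (thousand lines of code).
function defectDensity(defects, linesOfCode) {
  return defects / (linesOfCode / 1000);
}

// 30 defects in a 15,000-line module:
console.log(defectDensity(30, 15000)); // → 2 (defects per KLOC)
```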


27. How do you prioritize bugs?

Answer: Based on business impact and criticality to functionality.


28. What is the root cause analysis?

Answer: Finding the origin of the defect to avoid future issues.


29. What is the difference between reproducible and non-reproducible bugs?

Answer:

  • Reproducible: Can be consistently repeated.

  • Non-reproducible: Happens occasionally or under unknown conditions.


30. What is a blocker bug?

Answer: A critical defect that stops testing or development progress.


📘 Testing Documents & Tools

31. What is a test plan?

Answer: A document that defines the scope, strategy, resources, and schedule of testing activities.


32. What is a test strategy?

Answer: High-level document describing the testing approach across the organization or project.


33. What is test data?

Answer: Input data used during testing to simulate real-world conditions.


34. What is a test closure report?

Answer: A summary of all testing activities and outcomes at the end of the test cycle.


35. What are entry and exit criteria?

Answer:

  • Entry: Conditions before testing starts.

  • Exit: Conditions to stop testing.


36. What is test coverage?

Answer: A measure of the extent of testing performed on the application (code, requirements, test cases).


37. What is traceability matrix?

Answer: A document that maps test cases to requirements to ensure full coverage.
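At its simplest, a traceability matrix is a requirement-to-test-case map, which also makes coverage gaps easy to flag; the IDs below are hypothetical.

```javascript
// Sketch: a traceability matrix as data, flagging uncovered requirements.
const matrix = {
  'REQ-1': ['TC-101', 'TC-102'],
  'REQ-2': ['TC-103'],
  'REQ-3': [], // no covering test case — a coverage gap
};

const uncovered = Object.entries(matrix)
  .filter(([, tests]) => tests.length === 0)
  .map(([req]) => req);

console.log(uncovered); // → [ 'REQ-3' ]
```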


38. Which test management tools have you used?

Answer: TestRail, Zephyr, HP ALM, Xray.


39. How do you manage test cases?

Answer: Organize by modules/features; maintain version control; use tools like Excel or TestRail.


40. What is risk-based testing?

Answer: Prioritizing test cases based on risk of failure or business impact.


💼 Real-World & Behavioral Questions

41. Have you tested in Agile projects?

Answer: Yes. Participated in daily standups, sprint planning, and delivered tests in iterations.


42. How do you handle changing requirements?

Answer: Stay flexible, update test cases and plans, communicate impact clearly.


43. What would you do if the developer rejects your bug?

Answer: Provide detailed steps and evidence (screenshots/logs); discuss the issue with team leads if needed.


44. How do you ensure complete test coverage?

Answer: Use requirement traceability matrix and map each requirement to test cases.


45. How do you estimate test efforts?

Answer: Based on number of features, complexity, past experience, and available resources.


46. How do you stay updated with testing trends?

Answer: Follow QA blogs, take courses, attend webinars, and read documentation.


47. Have you ever missed a bug? What did you learn?

Answer: Yes. It taught me the importance of edge case testing and peer reviews.


48. What is shift-left testing?

Answer: Involving testing early in the development life cycle to catch defects sooner.


49. How do you perform mobile testing?

Answer: Test on real devices and emulators, check compatibility, UI, and performance.


50. Why should we hire you as a manual tester?

Answer: I have a strong grasp of testing fundamentals, excellent bug reporting skills, and a passion for quality. I ensure user experience and product stability.

