Every Puppeteer API test case is essentially a JavaScript module. It must define and export a runTest function, like so:

function runTest(api) {
  // Perform page testing here, for example:

  // Grab an element from the page.
  const outer = api.getElement('div.outer', 'div', 'an', 'outer <div> element');

  // Perform an ACTION on the element:
  outer.click();

  // Perform a CHECK on the element:
  outer.expectTextContent('This is some example text.');
}

module.exports = {runTest};

Actions

Whenever you interact with the user's page in a way that could make changes or fire event handlers, we call this an action. Actions include things like clicking on something, typing text into an input or selecting a radio button. We keep a record of the actions that have been performed so that we can write meaningful reports for the student if there is a subsequent test failure.
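
As a minimal sketch, a test case might perform a couple of actions before checking anything. Only getElement and click appear elsewhere in this guide; the selectors and element descriptions below are made up for illustration.

function runTest(api) {
  const agreeRadio = api.getElement('input#agree', 'input', 'an', 'agree radio <input> element');
  const submitButton = api.getElement('button#submit', 'button', 'a', 'submit <button> element');

  // Both of these are recorded as actions, so a later failure report can
  // list exactly what was done before the failing check.
  agreeRadio.click();
  submitButton.click();
}

module.exports = {runTest};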

Test Checks

Most of the time you will want to test things in the user's page. For this we have a variety of functions that start with expect..., which will check the state of the user page and generate a nice failure report if the state does not match what you expect. When a check fails, most of these functions will also report the actions that were taken so far.

Many of the expect... checker functions have matching get... functions that allow you to retrieve information from the page without performing a test. This enables you to perform custom validation: you can manually pass or fail the test case using the api methods pass and fail as needed.
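
As a sketch of custom validation, you might retrieve a value with a get... function and then call pass or fail yourself. The getTextContent method name below is an assumption (a guessed counterpart to the expectTextContent check shown above); the fail call with a hint option follows the navigation example later in this guide.

function runTest(api) {
  const display = api.getElement('p.display', 'p', 'a', 'text display <p> element');

  // Retrieve the text without performing a check, then validate it ourselves.
  const text = display.getTextContent();  // Assumed get... counterpart to expectTextContent.
  if (/submarine/i.test(text)) {
    api.pass();
  } else {
    api.fail('The text in the text display element should mention a submarine.',
             {hint: 'Did you set the text of the display element inside your event listener?'});
  }
}

module.exports = {runTest};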

Failure Reports

When a test fails, we produce a failure report with several different components. The checker function will produce a reason for the failure, which is specific to the failure itself. You can add details to the failure report, as well as control how the actions performed so far are reported. Finally, you can add a hint at the end of the failure report.

Example Failure Report

[reason] The text in the top text display element is incorrect. The text was Yellow Sunflowers when it was meant to be: Yellow Submarine
[details]
[actions != []] We performed these actions before checking:

  • Entered "Hello World" into the top text entry element
  • Entered "Some more text" into the top text entry element

[hint] Did you remember to add an event listener to the top text entry element?

The failure report can be controlled for each individual test check by passing the appropriate FailureReportOptions as part of the options parameter. If you want to change the reason part of the failure message, you will need to perform custom validation and then call fail directly.

To assist you with formatting for failure reports, you can use one of the format... functions in the api, such as formatBlock, formatAttr or formatStyle (see TestCaseAPI for others).
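
For example, a check could be given a formatted hint through its options parameter. This is only a sketch: the hint option mirrors the fail call in the navigation example below, and the exact FailureReportOptions fields are documented in the TestCaseAPI.

function runTest(api) {
  const display = api.getElement('p.display', 'p', 'a', 'text display <p> element');

  // Pass FailureReportOptions as the options parameter so that a failed
  // check includes a nicely formatted hint.
  display.expectTextContent('Yellow Submarine', {
    hint: `Did you update ${api.formatJavascript('textContent')} on the display element in your event listener?`,
  });
}

module.exports = {runTest};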

Please note: all failure reports assume that the failure messages you provide are not safe to be displayed as HTML, and wrap them in the UnsafeString wrapper class. See the BaseString class for detailed information about specifying safe versus unsafe HTML when reporting failures.

Elements vs ElementGroups

An Element allows you to interact with a specific DOM element in the user page, e.g. a button, or a paragraph.

An ElementGroup allows you to interact with a collection of elements that are logically grouped in some way, e.g. all the <img> tags, or all the elements with class "bright". Most checker functions will check that ALL the elements in the group match your expected value, and if the check fails, will not refer to the failing element specifically. For example:

Expected all picture frame div elements to have border set to 4px solid rgb(10, 10, 10) but we found at least one with border set to 1px solid rgb(0, 0, 0).

When you use ElementGroup.getElementAtIndex to refer to a specific Element in an ElementGroup, the description of the Element is adjusted to refer to its ordinal position in the group. This makes it possible to refer to "the first spaceship", for example.
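
As a sketch, a group check followed by a check on one member might look like the following. The getElementGroup and expectStyle names are assumptions here (only getElementAtIndex and the single-element helpers appear elsewhere in this guide), so check the TestCaseAPI for the exact names.

function runTest(api) {
  // Assumed group lookup, mirroring getElement for a single element.
  const frames = api.getElementGroup('div.frame', 'div', 'picture frame <div> elements');

  // Checks every element in the group; the failure message will not single
  // out which element failed.
  frames.expectStyle('border', '4px solid rgb(10, 10, 10)');  // Assumed style check.

  // Refer to one member of the group (assuming 0-based indexing); its
  // description becomes ordinal, e.g. "the first picture frame <div> element".
  const firstFrame = frames.getElementAtIndex(0);
  firstFrame.expectAttr('title', 'Sunflowers');
}

module.exports = {runTest};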

Use caution with this approach: changes to the ElementGroup will not update the Element handles (say, if an element is deleted from the DOM), and your failure reports will be misleading.

Dialogs

Whenever the user page pops up a dialog, the main thread of execution in your test case will be blocked until the dialog is dealt with. Because of this, every time we see a dialog that is not specifically expected, we fail the test case.
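
For dialogs that appear in response to one of your own actions, you can register the expectation from within runTest just before performing the action. This is a sketch built from the expectDialog calls shown below; the selector and message are made up.

function runTest(api) {
  const deleteButton = api.getElement('button#delete', 'button', 'a', 'delete <button> element');

  // Register the expected dialog first, so the confirm() fired by the
  // click does not fail the test case.
  api.expectDialog('confirm', {message: 'Are you sure you want to delete this item?'});
  deleteButton.click();
}

module.exports = {runTest};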

If you expect the student to pop up a dialog whilst the page is loading, your runTest function will not have a chance to register the next expected dialog. To get around this you can specify another module function to register expected dialogs, like so:

function handleDialogs(api) {
  // During page load we expect three consecutive dialogs to pop up.
  api.expectDialog('alert');
  api.expectDialog('confirm', {message: 'Please agree to our Terms and Conditions'});
  api.expectDialog('prompt', {message: 'What is your name?', response: 'Sally Sparrow'});
}
    
function runTest(api) {
  // Normal test functions proceed here ...
}

module.exports = {handleDialogs, runTest};

Please see the TestCaseAPI for other ways to deal with dialogs, including ensuring that all dialogs have been correctly "popped up" by the student.

Animations

The marker supports animations that use requestAnimationFrame. It is important not to rely on timers inside the animation loop, as the marker will compress time or pause the animation for each requestAnimationFrame callback.

To enable marker support for animations, export the following flag in your test case:

module.exports = {runTest, INTERCEPT_RAF: true};

When running inside a test case, the page will pause any animation after the first requestAnimationFrame callback. You can then step through the animation using fastForwardRaf, like so:

function runTest(api) {
  const flipbook = api.getElement('#flipbook', 'img', 'a', 'virtual flipbook <img> element');

  // Test something here, after the first animation frame:
  flipbook.expectAttr('src', 'frame-1.png');

  // Advance the animation 50 frames:
  api.fastForwardRaf(50);

  // Now the state should be different:
  flipbook.expectAttr('src', 'frame-51.png');
}

module.exports = {runTest, INTERCEPT_RAF: true};

Navigation handling

By default, the student page should not cause a navigation to occur. The marker will fail a test case if it detects a navigation about to occur, with the generic message: "Your page navigated to a new URL, when it was not supposed to." You can register a handler for unexpected navigation to fail the test with a more appropriate message. Do this by exporting a handleUnexpectedNavigation function from your test module. For example:

function handleUnexpectedNavigation(api) {
  api.fail("Your page made a form submission when it shouldn't have.", {hint: `Did you remember to call ${api.formatJavascript('preventDefault')} inside your ${api.formatAttr('onsubmit')} event?`});
}

function runTest(api) {
  const submitElement = api.getElement('input[type="submit"]', 'input', 'a', 'submit <input> element');
  submitElement.click();
}

module.exports = {runTest, handleUnexpectedNavigation};

On navigation, all Element handles are invalidated, as they are associated with DOM elements in the previously loaded page. As a result, using some api features after a navigation can lead to system errors from the marker.

We have some limited [experimental!] functionality for testing legitimate page navigations and form submissions, but we have not enabled these features in the current API. Please see the Grok team if you'd like to test for this type of navigation.