Challenge Overview
- Target environment: Jest, Karma (with Chrome, Safari, Firefox, Edge)
- Basic Requirements: Design test cases for different kinds of DOM manipulation; each test case should produce false positive results on JSDOM compared with Karma.
Project Background
The goal of this project is to research and demonstrate the risks involved in using Node.js and Jest (based on JSDOM) for testing Angular 7 applications.
The risks include:
- Which use cases or patterns get false results, or cannot be tested at all, when based on Jest
- Compared with Karma (running real browsers: Chrome, Safari, Firefox, Edge), what kinds of tests written in Jest give invalid results
Technology Stack
- Angular 7
- Jest (JSDOM-based) with Chai and Sinon
- Karma with real browsers (Chrome, Safari, Firefox, Edge)
Individual requirements
The requirement of this challenge is simple: design test cases that produce false results in Jest.
1.1 What should be included in a valid test case
A valid test case should include:
- A simple app written in Angular 7. It can be just a showcase of a specific Angular feature; the look of the UI and the functionality of the app are not important.
- A unit test written with Jest + Chai + Sinon; it should produce a false (incorrect) result in Jest.
- The same unit test run under Karma, where it should produce the correct (positive) result.
- A simple document (in Markdown format) that includes:
  - The detailed steps for running your test case in Jest and Karma
  - Which real browser(s) you tested against
  - A compatibility matrix/table with the results in all browsers (an illustrative sketch follows this list)
  - A short write-up describing the false positive result and what the expected result is
  - A short explanation of which unimplemented feature or issue of Jest/JSDOM caused the false positive result
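For reference, a compatibility matrix might look like the following; the direction of the false result, the browser versions, and the outcomes shown here are purely illustrative. Typical run steps, assuming a standard Angular CLI + Jest setup, would be `npx jest` for Jest and `ng test` for Karma.

| Environment | Test Result | Correct? |
| --- | --- | --- |
| Jest (JSDOM) | Pass | No (false positive) |
| Chrome Desktop 72 | Fail | Yes (expected) |
| Firefox Desktop 65 | Fail | Yes (expected) |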
1.2 The browsers in scope
The following browsers are in scope; a minimal Karma configuration sketch follows the list.
- Jest
  - JSDOM
- Karma
  - Chrome Desktop (70+)
  - Chrome Android (70+)
  - Chrome iOS (70+)
  - Safari Desktop (11+)
  - Safari macOS (11+)
  - Safari iOS (11+)
  - Firefox Desktop (65+, the latest version)
  - Edge Desktop (44+, the latest version)
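As a rough illustration of covering multiple real browsers in Karma, a configuration along the following lines could be used. This is a minimal sketch, not an official challenge config; it assumes the standard Angular CLI Karma setup and that the corresponding `karma-*-launcher` packages are installed.

```js
// karma.conf.js — minimal sketch for running tests in several real browsers.
// Only list the browsers actually installed on your machine.
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine', '@angular-devkit/build-angular'],
    plugins: [
      require('karma-jasmine'),
      require('karma-chrome-launcher'),
      require('karma-firefox-launcher'),
      require('karma-safari-launcher'),
      require('karma-edge-launcher'),
      require('@angular-devkit/build-angular/plugins/karma'),
    ],
    browsers: ['Chrome', 'Firefox', 'Safari', 'Edge'],
    singleRun: true,
  });
};
```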
Important Notes
- All test cases should be written in Angular 7; this is what is important, especially for Angular Elements features.
- All Jest test cases should be implemented with Jest + Chai + Sinon. This means Jest matchers and mocks should not be used; use Chai and Sinon instead: Sinon for mocking HTTP requests and Chai for assertions, because the client does not like the style of Jest matchers and mocks. See the style sketch after this list.
- For the real browsers in Karma: if you don't have certain devices and can't test against some real browsers, it is fine to skip them, but each test case should cover at least one real browser.
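To illustrate the required style, here is a minimal sketch of a test that avoids Jest matchers and mocks entirely. The `UserService` class, its endpoint, and the test data are all hypothetical; only the Chai/Sinon usage pattern is the point.

```ts
import { expect } from 'chai';   // assertions via Chai, not Jest's expect
import * as sinon from 'sinon';  // stubs via Sinon, not jest.fn()
import { of } from 'rxjs';
import { HttpClient } from '@angular/common/http';

// Hypothetical service under test (name and shape are assumptions).
class UserService {
  constructor(private http: HttpClient) {}
  getUserName(id: number) {
    return this.http.get<{ name: string }>(`/api/users/${id}`);
  }
}

describe('UserService (style example)', () => {
  afterEach(() => sinon.restore());

  it('fetches the user name', (done) => {
    // Sinon stub stands in for the HTTP layer.
    const http = { get: sinon.stub().returns(of({ name: 'Alice' })) };
    const service = new UserService(http as unknown as HttpClient);

    service.getUserName(1).subscribe((user) => {
      // Chai assertion instead of expect(user.name).toBe('Alice')
      expect(user.name).to.equal('Alice');
      // Sinon verification instead of Jest mock assertions
      expect(http.get.calledOnceWith('/api/users/1')).to.equal(true);
      done();
    });
  });
});
```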
Hints
- You can start investigating and designing false positive test cases in the following ways:
  - Read through the issues at https://github.com/jsdom/jsdom/issues to find unimplemented DOM features. For example, two major unimplemented parts of JSDOM are layout and navigation (a concrete layout sketch follows this list).
  - Check the spec of the DOM standard: https://dom.spec.whatwg.org/
  - Dive into the source code of JSDOM: https://github.com/jsdom/jsdom
  - Check the comparison wikis of different JavaScript engines:
    - https://en.wikipedia.org/wiki/Comparison_of_JavaScript_engines_(DOM_support)
    - https://en.wikipedia.org/wiki/Comparison_of_JavaScript_engines
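As a concrete example of the first hint: layout is one of the parts of the web platform that JSDOM explicitly does not implement, so geometry APIs such as `getBoundingClientRect()` return zeros under Jest. Below is a sketch of a test exploiting this; the element, style, and assertion direction are illustrative only.

```ts
import { expect } from 'chai';

describe('layout measurement (illustrative sketch)', () => {
  it('reports a zero width for a styled element', () => {
    const div = document.createElement('div');
    div.style.width = '100px';
    document.body.appendChild(div);

    const rect = div.getBoundingClientRect();

    // Passes under Jest/JSDOM, where no layout is performed and the rect is
    // all zeros, but fails in Karma with a real browser, where the width is
    // 100: a false positive in Jest.
    expect(rect.width).to.equal(0);

    document.body.removeChild(div);
  });
});
```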
Submission guideline
To submit a test case, please create a GitLab issue at https://gitlab.com/jest-risk-discovery/test-cases-design/issues. You can grant yourself access via the Topcoder-X link, which is shared in the forum.
The content of the GitLab issue should include:
- Your Topcoder handle
- Your document, pasted as the body of the GitLab issue
- A link to a zip of all your test case files, uploaded to a cloud drive (Google Drive, Dropbox, etc.) and shared in the GitLab issue
- An issue title that briefly describes the feature that caused the false results, or anything else that introduces your test case
I have provided a sample GitLab issue for your reference: https://gitlab.com/jest-risk-discovery/test-cases-design/issues/1
Before the submission deadline, submit to Topcoder a txt file that includes the links to all the issues you created; otherwise, you won't get points or be paid.
Review & prize rules
You will gain points by submitting test cases; the final ranking and payment are based on those points.
- Base: 50 points for each valid test case
- "Valid" means the test case meets all requirements stated in 1.1 What should be included in a valid test case.
- To earn the base points, a test case only needs to include Jest and Karma with 1 real browser. If your Karma test case covers more real browsers, you will get extra bonus points for each additional browser.
- Extra browser bonus: 5 points for each extra browser of a test case
- First submission bonus: 40 points
- If you are the first one to submit a valid test case, you will get an extra 40 points.
If two test cases are the same, only the first submitter gets the base points. However, if your test case covers extra browsers that the first submitter's test case doesn't, you will still get the extra browser bonus.
Here are some examples of the point calculation; a small scoring sketch follows the examples.
- Case 1: Submitter A submitted 5 valid test cases, each covering all browsers. As 8 real browsers are in scope, 7 browsers per test case are extra, so Submitter A gets 5 * 50 + 5 * 7 * 5 = 425 points. If Submitter A also made the first valid submission, he/she gets the extra 40 bonus points, for a final total of 465 points. Where:
- 5 * 50: 50 base points for each of the 5 valid test cases
- 5 * 7 * 5: 5 points for each of the 7 extra browsers in each of the 5 valid test cases
- 40: the first submission bonus
- Case 2: Submitter B submitted 4 valid test cases, each covering 4 browsers, so 3 browsers per test case are extra. Submitter B gets 4 * 50 + 4 * 3 * 5 = 260 points. Where:
- 4 * 50: 50 base points for each of the 4 valid test cases
- 4 * 3 * 5: 5 points for each of the 3 extra browsers in each of the 4 valid test cases
- Case 3: Submitter C submitted 4 test cases, each covering 4 browsers. However, 2 of them are the same as Submitter B's, and Submitter C submitted later than Submitter B, so only the other 2 test cases earn base points. Each of the duplicated test cases covers 2 extra browsers that Submitter B's test cases don't. Submitter C therefore gets 2 * 50 + 2 * 3 * 5 + 2 * 2 * 5 = 150 points. Where:
- 2 * 50: 50 base points for each of the 2 valid test cases
- 2 * 3 * 5: 5 points for each of the 3 extra browsers in each of the 2 valid test cases
- 2 * 2 * 5: 5 points for each of the 2 extra browsers in each of the 2 duplicated test cases
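The arithmetic above can be summarized in a small TypeScript sketch; the function and constant names are mine, not part of the official rules.

```ts
const BASE = 50;                  // base points per valid test case
const EXTRA_BROWSER = 5;          // bonus per browser beyond the first
const FIRST_SUBMISSION_BONUS = 40;

// Points for one test case. A duplicate earns no base points, only the
// bonus for browsers the first submitter's case did not cover.
function testCasePoints(
  browsersCovered: number,
  isDuplicate: boolean,
  extraBrowsersOverOriginal = 0,
): number {
  if (isDuplicate) return extraBrowsersOverOriginal * EXTRA_BROWSER;
  return BASE + (browsersCovered - 1) * EXTRA_BROWSER;
}

// Case 1: 5 cases x all 8 browsers, plus the first-submission bonus.
console.log(5 * testCasePoints(8, false) + FIRST_SUBMISSION_BONUS); // 465
// Case 2: 4 cases x 4 browsers.
console.log(4 * testCasePoints(4, false)); // 260
// Case 3: 2 valid cases x 4 browsers, plus 2 duplicates with 2 extra browsers each.
console.log(2 * testCasePoints(4, false) + 2 * testCasePoints(4, true, 2)); // 150
```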
Finally, all submitters will be ranked by points:
- 1st place: $1400
- 2nd place: $800
- 3rd place: $500
Submitters who are not in the top 3 but earned points will receive the same amount in USD as their points, up to $400.
For example, if you earned 210 points but are not in the top 3, you will receive $210 after the challenge completes. If you are in the top 3, regardless of how many points you have, you will receive the prize for your placement: $1400/$800/$500.
Each test case will be reviewed by the copilot/PM; there is no appeal/appeal response phase.
We expect the test cases to be as thorough as possible. We reserve the right of final interpretation: for example, if we only receive 1 valid test case in total, and not all browsers are covered, we will mark the challenge failed, but the submitters will still receive USD payment equal to their points.