I have an automation framework that uses a grunt task to run multiple spec files. Some are run using a pre-defined suite, others using file naming conventions. Each spec file has on average one "describe" block, each usually containing multiple "it" blocks.
I'm currently using jasmine-spec-reporter, which gives useful, detailed results after each spec file. It looks like this:
------------------------------------
[chrome OS X 10.10 #1-78] PID: 1880
[chrome OS X 10.10 #1-78] Specs: /**/**/jenkins/workspace/Main Suites/tests/User_Management/smoke_student_does_something.js
[chrome OS X 10.10 #1-78]
[chrome OS X 10.10 #1-78] Using SauceLabs selenium server at http://*******
[chrome OS X 10.10 #1-78] Spec started
[chrome OS X 10.10 #1-78]
[chrome OS X 10.10 #1-78] 1 A student can link and unlink to another student account
[chrome OS X 10.10 #1-78] ✓ can link to another student account (33 secs)
[chrome OS X 10.10 #1-78] ✓ can unlink a student account (14 secs)
[chrome OS X 10.10 #1-78]
[chrome OS X 10.10 #1-78] Executed 2 of 2 specs SUCCESS in 46 secs.
[chrome OS X 10.10 #1-78] SauceLabs results available at http://saucelabs.com/jobs/*****************
[launcher] 5 instance(s) of WebDriver still running
After all the jasmine reports, Protractor gives me a fairly useless summary in the console that references taskIds and lists passes and failures. This is what the Protractor summary looks like:
....
[launcher] chrome #1-69 passed
[launcher] chrome #1-70 failed 1 test(s)
[launcher] chrome #1-73 passed
[launcher] chrome #1-71 passed
[launcher] chrome #1-75 passed
[launcher] chrome #1-72 passed
[launcher] chrome #1-79 passed
[launcher] chrome #1-74 passed
[launcher] chrome #1-80 passed
[launcher] chrome #1-81 passed
[launcher] chrome #1-82 passed
[launcher] chrome #1-84 passed
[launcher] chrome #1-83 passed
[launcher] chrome #1-85 passed
[launcher] chrome #1-88 passed
[launcher] chrome #1-87 passed
[launcher] chrome #1-86 passed
[launcher] chrome #1-76 passed
[launcher] chrome #1-89 passed
[launcher] chrome #1-90 passed
[launcher] chrome #1-91 passed
[launcher] chrome #1-92 passed
[launcher] chrome #1-78 passed
[launcher] chrome #1-93 passed
[launcher] chrome #1-95 passed
[launcher] chrome #1-77 passed
[launcher] chrome #1-96 passed
[launcher] chrome #1-94 failed 5 test(s)
[launcher] overall: 12 failed spec(s)
[launcher] Process exited with error code 1
>>
Warning: Tests failed, protractor exited with code: 1 Use --force to continue.
My problem is that I want to see a summary of all spec files (not taskIds), only for failed specs, and only after they're all done running. The MVP would be just the failed spec file names in a list, or written to a file in an easy-to-read format (XML, HTML, etc.). The next iteration would add the "describe" and "it" block descriptions, error messages, and the SauceLabs link next to each failure, similar to the jasmine-spec-reporter output.
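For concreteness, the MVP could be as small as a custom Jasmine reporter that appends each failure to a shared file as it happens, so the list survives across Protractor's child processes. This is only a sketch, assuming the jasmine2 reporter interface; failures.log is a hypothetical path:

// failure-log-reporter.js - sketch only; failures.log is a hypothetical path
var fs = require('fs');

module.exports = {
  // jasmine2 calls specDone after every "it" block finishes
  specDone: function(result) {
    if (result.status === 'failed') {
      var lines = [result.fullName];
      result.failedExpectations.forEach(function(expectation) {
        lines.push('    FAILURE: ' + expectation.message);
      });
      // appending (not overwriting) lets every sharded instance
      // accumulate into the same file
      fs.appendFileSync('failures.log', lines.join('\n') + '\n');
    }
  }
};

It would be registered in onPrepare with jasmine.getEnv().addReporter(require('./failure-log-reporter')). Note the jasmine result only carries the describe/it names, so getting the spec file path itself would take something like browser.getProcessedConfig().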
I normally have hundreds of spec files in a single job, and having to comb through all the individual results just to track down which spec file failed is quite annoying (an automatically generated summary should have that information). I'm literally doing a Ctrl+F on the console output for "failures" just to see which specs failed. I want to avoid going to SauceLabs for the failure list because of how our job runs are organized through Jenkins; the console output should be an easy first reference for failures once a job is done running.
Every third-party reporter's scope seems to stop at the spec file level. After each spec file is executed, the reporter writes its output, then moves on to the next file, overwriting the previous report. When I open the HTML or XML report, it always contains just one spec file's results, and I can't seem to carry any information over into a summary covering multiple spec files. I've tweaked the "consolidate" and "consolidateAll" settings, and none of them build a report for all spec files; they only consider a single file at a time.
I have experimented with the following reporters (see the configuration sketch after this list for how I've been wiring them up):
jasmine-reporters
jasmine-spec-reporter
protractor-jasmine2-html-reporter
jasmine-json-test-reporter
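For reference, this is roughly how I've been registering jasmine-reporters in Protractor's onPrepare (the savePath here is hypothetical). Since shardTestFiles gives each spec file its own process, and each process runs onPrepare and writes its own report to the same path, I suspect that's why the output keeps getting overwritten:

// protractor.conf.js (excerpt) - sketch; savePath is hypothetical
var reporters = require('jasmine-reporters');

exports.config = {
  capabilities: {
    browserName: 'chrome',
    shardTestFiles: true, // one child process per spec file
    maxInstances: 5
  },
  onPrepare: function() {
    // consolidateAll merges all suites seen by THIS process into one
    // file, but every sharded process writes to the same path, so the
    // last process to finish wins
    jasmine.getEnv().addReporter(new reporters.JUnitXmlReporter({
      savePath: 'reports/',
      consolidateAll: true
    }));
  }
};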
I've been able to modify Protractor's "reportSummary" code directly to give me most of what I want (everything except the SauceLabs link), but this isn't ideal since I'd have to convince the Protractor team to adopt the change. Here's what my summary looks like:
********************
* FAILED SPEC(S) *
********************
/Playground/Test_Spec_2.js failed 2 test(s)
can log in once
FAILURE: Expected false to be true, 'Didn't see home page'.
FAILURE: Expected false to be true, 'Didn't see link'.
can log in twice
PASSED
can log in thrice
FAILURE: Expected false to be true, 'Didn't see home page'.
/Playground/Test_Spec_3.js failed 1 test(s)
can login
FAILURE: Expected false to be true, 'Didn't see home page'.
********************
* SUMMARY *
********************
overall: 2/4 failed spec(s)
********************
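One direction that might get the same result without patching Protractor: have each instance append its failures to a shared file (like the reporter sketch above), then print the combined list from the config's afterLaunch hook, which runs once in the launcher process after all instances have exited. A rough sketch, reusing the hypothetical failures.log:

// protractor.conf.js (excerpt) - sketch; failures.log is hypothetical
var fs = require('fs');

exports.config = {
  // ...capabilities, specs, onPrepare as above...

  // afterLaunch runs once, after every child instance has finished
  afterLaunch: function(exitCode) {
    if (fs.existsSync('failures.log')) {
      console.log('********************');
      console.log('*  FAILED SPEC(S)  *');
      console.log('********************');
      console.log(fs.readFileSync('failures.log', 'utf8'));
    }
    return exitCode;
  }
};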
Am I missing something in how I'm using those jasmine reporters? Is my framework just not designed to work with them the way I want? Or should I just try to convince the Protractor team to beef up their summary reporter with some options?