Replies: 8 comments 26 replies
-
I prefer the collapsing way (it sounds easier to implement than filtering, as the usage of …)
-
My main grudge with the test output panel is that it encourages (or sometimes even enforces) writing non-idiomatic/suboptimal test suites. The main features I see as responsible for this are the performance of the UI and the verbosity of the output. Disclaimer: on a daily basis I work with NUnit and JUnit. Sometimes I use them for TDD, sometimes for fine-grained unit tests, but usually to create tests of quite high-level SUTs to provide test coverage and safe refactoring of a legacy code base. I do not have much experience with BDD frameworks or property-based testing frameworks outside of CW. In my work I use things like …
Basically, in many test suites of CW kata, something that is supposed to be a test is actually a set of tests. This is considered a bad practice, an antipattern, and is rarely seen in the wild. I am pretty well aware that tests of a CW kata are (or can be) a specific topic, and not all CW use cases can be easily expressed with unit testing frameworks, but these would probably be the exception rather than the rule. Ideally, I would see the above test suite broken down (see the JUnit sketch below): …

**Verbosity.** Generated test cases make the output very noisy. Authors like looping over inputs in a test because it keeps output small.

**Log entry per assertion.** This one I think is a real WTF. Having a …

**One collapsible header per test** (…
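To make the "broken down" suggestion concrete, here is a minimal sketch in JUnit 5, assuming a hypothetical `Solution.add` and made-up inputs (not taken from any actual kata): a parameterized test reports each input as its own test case, instead of one test silently looping over all of them.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class AddTest {

    // One reported test case per input, instead of a single test
    // that loops over all inputs and stops at the first failure.
    // Solution.add is a hypothetical solution under test.
    @ParameterizedTest(name = "add({0}, {1}) should equal {2}")
    @CsvSource({
        "1, 2, 3",
        "0, 0, 0",
        "-1, 1, 0"
    })
    void addsTwoNumbers(int a, int b, int expected) {
        assertEquals(expected, Solution.add(a, b));
    }
}
```

NUnit offers the same idea via `[TestCase]` attributes, so the pattern carries over to both frameworks mentioned above.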
-
If the performance of the output panel were improved without filtering, I'd take it. I personally am not bothered by the verbosity of the test output, and when creating test suites, the only thing which stops me from creating a test case generator of 10k inputs is performance on the client. Even if the test output panel displayed 10k collapsible blocks for every … The only problem I see is hunting the one failing test case among the long list of passing ones. But I think it's not a very common scenario. In the majority of cases, either everything passes, or a noticeable part fails. Except when there's crappy rounding in the reference solution, or something like that.
-
Also related to the test output panel: test runs which crash, get aborted, or tests which pass but shouldn't: #69
-
I have crashed not just panels, not just browser tabs, but whole browsers and even complete PCs on verbose test results of many tests. Tests really should quit an … Hobo doesn't seem to agree, but I am very much in favour of multiple asserts per test.
-
+1 for filtering.

A suggestion I'd have pertains to logging. 100% of logging is related to debugging, and 99% of debugging needs to happen on failed tests. Usually, I only need to see the inputs. In the best case, inputs are small. Often they're not. If I'm lucky, I won't exceed the 1.5 MB limit logging all inputs of hundreds of tests, most of which pass and are therefore uninteresting, but even in those cases, I now have to scroll and look for that tiny red blip in a sea of white text. In the worst case I can't debug at all, because I can log everything or nothing.

On a related note, it would be nice if the UI clearly grouped the test result (e.g. "xxx should equal yyy") together with any logging I performed. Currently, the log messages are in their own framed box above the result, and even to this day I often have to pause for a minute and remember whether the result applies to the textbox above or below. Extending the frame to include the test, with a divider between them, or something similar, would improve the UX. It's a small thing, but small things take a UX from good to great :)

Final note, regarding @JohanWiltink 's remark on multiple asserts per test: I disagree. It's not helpful when a test tells me "You've failed one or more of 5 things. I won't tell you which though." I thought this was a bad design choice on the test author's part, not related to the test runner itself.
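One partial workaround for "log only on failure" that already works in JUnit 5 (a hedged sketch; `Solution.solve` and the reference implementation are hypothetical): pass the input via a lazy assertion message, which is only evaluated, and therefore only printed, when the assertion fails.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.concurrent.ThreadLocalRandom;
import org.junit.jupiter.api.Test;

class SolveTest {

    @Test
    void randomTests() {
        for (int i = 0; i < 1000; i++) {
            int input = ThreadLocalRandom.current().nextInt(1_000_000);
            int expected = reference(input);
            // The message supplier is only evaluated when the assertion
            // fails, so passing tests produce no log output at all.
            assertEquals(expected, Solution.solve(input),
                () -> "for input " + input);
        }
    }

    // Hypothetical reference implementation, for illustration only.
    private static int reference(int n) { return n * 2; }
}
```

This doesn't fix the UI grouping problem, but it keeps the output of hundreds of passing tests out of the 1.5 MB budget entirely.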
-
@hobovsky Can you elaborate? I know the problem exists, but how can it be improved? The logs from a solution should be shown under the relevant test case when possible (test framework dependent). Logs come before the assertion failure message because tests call the solution before testing the result.
The last log (if any) is associated with the failure.
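As a concrete example of that ordering (hedged; based on my reading of the documented runner output format, with a made-up test name and values), a failing test's stream might look like this, with the solution's stdout appearing between the `<IT::>` header and the `<FAILED::>` verdict:

```
<IT::>should work for basic cases
input: [1, 2, 3]
<FAILED::>expected 6 but got 5
<COMPLETEDIN::>14
```

Unprefixed output is treated as log output of the current test, which is why the last log before a failure is the natural candidate to associate with it.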
-
@JohanWiltink Can you elaborate on this? I'm not sure what you mean by "quit an …".
-
Let me know if you have ideas to improve the Runner UI:
I'll briefly explain how we get the test results rendered: …
We've kept it minimal for years, but it's actually pretty flexible in how the results can be displayed, and it can be extended, for example by adding a step between 3 and 4, or by making it more interactive (see the sketch below).
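For instance, here is a purely illustrative sketch in Java (the node kinds and the filtering step are my assumptions, not the actual client code) of what "adding a step between 3 and 4" could look like: an intermediate pass over the parsed result tree that prunes fully-passing groups before rendering.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of parsed runner output; not the actual client code.
final class ResultNode {
    final String kind; // assumed kinds: "describe", "it", "passed", "failed", "error", "log"
    final String text;
    final List<ResultNode> children = new ArrayList<>();

    ResultNode(String kind, String text) {
        this.kind = kind;
        this.text = text;
    }

    boolean hasFailure() {
        if (kind.equals("failed") || kind.equals("error")) return true;
        return children.stream().anyMatch(ResultNode::hasFailure);
    }

    // A possible "step between 3 and 4": prune fully-passing groups so
    // only failing ones (plus their logs and assertions) reach the renderer.
    ResultNode onlyFailures() {
        ResultNode pruned = new ResultNode(kind, text);
        for (ResultNode child : children) {
            boolean isGroup = child.kind.equals("describe") || child.kind.equals("it");
            if (!isGroup || child.hasFailure()) {
                pruned.children.add(child.onlyFailures());
            }
        }
        return pruned;
    }
}
```

The same hook could host collapsing, grouping logs with their assertion, or any of the other ideas in this thread, since it runs after parsing but before anything hits the DOM.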
Possible ideas: …
If you have any questions on how it works in more detail, feel free to ask in a separate discussion.
Please post one idea per comment, so it's easier to discuss. We can also split ideas out into independent discussions after discussing them here first.