Add an easy way to unit-test strategies in the IDE
I know SUnit exists for Stratego, but:
- the syntax for defining tests could be better in my opinion (there is no extra syntax now, only the testsuite and apply-test (and family) strategies), and
- the only way to run them appears to be from the command line; there is no IDE integration.
I was thinking about something that integrates with JUnit, so you can get a nice graphical overview of your test suite, potentially even integrated with tests written in Java.
(just brainstorming here)
One possibility could be a custom DSL with embedded ATerms and/or concrete syntax that generates Java code implementing a JUnit test case. This might increase turnaround time because of the code generation step…
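To make that first idea a bit more concrete, here is a rough sketch (plain Java) of a generator that turns one testcase into JUnit-style source text. The DSL shape and the emitted `applyStrategy' helper are made up for illustration, not an existing Spoofax or Stratego/J API.

```java
// Illustrative sketch only: what a generator might emit for one DSL
// testcase such as `testcase "Hd on a list" <Hd> [1,2,3] => 1'.
// The helper name `applyStrategy' is a made-up placeholder.
public class TestcaseCodegen {

    /** Renders one JUnit-style test method as Java source text. */
    public static String generate(String name, String strategy,
                                  String input, String expected) {
        // Derive a legal Java identifier from the test description.
        String method = name.replaceAll("[^A-Za-z0-9]", "_");
        return """
               @Test
               public void %s() {
                   assertEquals(%s, applyStrategy("%s", %s));
               }
               """.formatted(method, expected, strategy, input);
    }
}
```

Since the generated code is ordinary Java source, it would show up in the normal JUnit view, at the cost of the extra generation step mentioned above.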
Another possibility could be overriding apply-test in Stratego/J (if running in Eclipse), adding the tests and expected results to a JUnit test suite that is then run after all tests have been collected.
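A minimal sketch of that collect-then-run idea, with strategies modelled as plain Java functions (this is not Stratego/J's actual API, just the shape of the mechanism):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Minimal sketch (not Stratego/J's actual API): an apply-test
// override registers each test, and the whole suite is run only
// after all tests have been collected.
public class CollectingTestSuite {

    public record Result(String name, boolean passed) {}

    private final List<Runnable> pending = new ArrayList<>();
    private final List<Result> results = new ArrayList<>();

    // Analogue of apply-test(!name, s, !input, !expected):
    // register now, evaluate later.
    public <I, O> void applyTest(String name, Function<I, O> strategy,
                                 I input, O expected) {
        pending.add(() -> {
            boolean ok;
            try {
                ok = expected.equals(strategy.apply(input));
            } catch (RuntimeException e) { // a failing strategy fails the test
                ok = false;
            }
            results.add(new Result(name, ok));
        });
    }

    /** Runs every collected test and returns the outcomes in order. */
    public List<Result> runAll() {
        pending.forEach(Runnable::run);
        return results;
    }
}
```

In a real integration, `runAll` would feed each `Result` into a JUnit test suite so the IDE's test view shows them individually.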
It would be an epic win if Spoofax could improve on the standard develop-build-test cycle, for example by continuously executing tests in the background and using editor markers to highlight strategies that failed their test(s). (example-driven development)
If there are already developments in this direction it would be great to hear about them.
Submitted by Tobi Vollebregt on 3 December 2010 at 12:15
Issue Log
Tobi, if you have any ideas for a syntax for SUnit tests I would be interested. Last week I implemented a Spoofax extension which generates JUnit tests from parse-unit tests, and I want to look into something for Stratego tests now.
Cool, will try to find some time to look at that :)
Just some brainstorming again for a syntax:
// Random idea for SUnit syntax.
// Main idea is to use the `<strategy> term => pattern' construct to
// write tests, e.g. `input => expected-output',
// and to put the tests in their own "block", like strategies and rules.

testcases

  // Maybe the code generator can take the first build and last match
  // and extract those to be able to give better diagnostics.
  // (After all, `<Hd> [1,2,3] => 1' == `![1,2,3]; Hd; ?1')
  // For plain SUnit, this could desugar into:
  //   apply-test(!"Hd...", Hd, ![1,2,3], !1)
  // (Doesn't apply anymore when variables or `_' are used in the pattern.)

  testcase "Hd on a list should return the first element"
    <Hd> [1,2,3] => 1

  // Not sure about negative tests.
  // There should also be something to differentiate between failing
  // strategies, and strategies that raise an error from a with clause.

  testcase "Hd on a tuple should fail" should fail // don't like the dup
    <Hd> (1,2,3)

  // This illustrates a testcase works like any strategy, except:
  // - it has no global name
  // - it must have a build and a match

  testcase "repeat-to-list should simply work"
    let s1 = ("a" -> "b") + ("b" -> "c") + ("c" -> "d")
    in
      <repeat-to-list(s1)> "a" => ["a","b","c","d"]
    end

  testcase "wrong test, no match" !1
  testcase "wrong test, no build" ?1

// Alternatively, some way to specify input (and output?) term
// for many testcases at once (inspired a little bit by RSpec):

context
  input [1,2,3]

  testcase "Hd..."
    Hd => 1

  testcase "Tl..."
    Tl => [2,3]

  // etc.

// or to match with JUnit vocabulary:

testsuite
  setup ![1,2,3]

  testcase "Hd..."
    Hd => 1

  // etc.

Then I didn't even think about parameterized tests yet. Maybe allow multiple `input' directives in a testsuite and generate a test for each pair of input+testcase. Probably more things I missed, but this could be a start :-)
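The pairing step of that parameterized-test idea is simple enough to sketch in plain Java (names are illustrative, not Spoofax API): every `input' directive is combined with every testcase, yielding one generated test per pair.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the parameterized-test idea: with several input
// directives in one testsuite, generate one concrete test for each
// (input, testcase) pair. Names are illustrative, not Spoofax API.
public class InputTestcasePairs {

    public record Generated(String input, String testcase) {}

    /** One generated test per combination of input and testcase. */
    public static List<Generated> expand(List<String> inputs,
                                         List<String> testcases) {
        List<Generated> out = new ArrayList<>();
        for (String input : inputs)
            for (String testcase : testcases)
                out.add(new Generated(input, testcase));
        return out;
    }
}
```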
Tried the testing plugin; seems like a major step in the right direction! :-)
Some comments on it:
1. Any strategy that uses import-term can't be used. (Maybe related to the TODOs about setting the working directory before calling target-plugin strategies?)
2. Real test-driven development is not as good as it could be: when you write the test(s) first, you need to rebuild after every iteration of your actual product to see whether the test passes yet. A workaround is of course to use a let binding in the test itself to develop the strategy and only copy it to its real location once it passes, but this only works for a single test. Hence it would be great if the Stratego code of the plugin-under-test were also dynamically reloaded after changes, so you can edit it in a window side by side with the window containing the tests, and see them fail/pass as you edit your plugin's Stratego code :P
3. When using a let binding as in the previous point it is easy to accidentally lock up the editor by writing some kind of infinite-loop strategy. It would be good to have some kind of timeout / maximum number of instructions to execute during a test. (A maximum number of executed instructions is preferable, as it won't make the tests dependent on the platform they run on.)
4. A test type that involves both parsing and Stratego would be nice to have. Or is the idea to use concrete syntax embedding in the testing language to accomplish this? I have some prototype changes that should allow this (i.e. a combined parsing & Stratego test, where the result of the parse is the input for the Stratego test); possibly I can finish that this weekend.
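The instruction-budget idea from the third point can be sketched as a bounded evaluation loop (plain Java; a real interpreter would tick the counter on every executed instruction, here each strategy application counts as one step):

```java
import java.util.Optional;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

// Sketch of a "maximum number of executed instructions" guard: a
// rewriting loop that gives up after a fixed step budget instead of
// hanging the editor on an infinite-loop strategy.
public class StepBudget {

    /** Applies step to seed until done holds; empty if the budget
     *  runs out first (i.e. the test "timed out"). */
    public static <T> Optional<T> runBounded(T seed, UnaryOperator<T> step,
                                             Predicate<T> done, long maxSteps) {
        T current = seed;
        for (long i = 0; i < maxSteps; i++) {
            if (done.test(current)) {
                return Optional.of(current);
            }
            current = step.apply(current);
        }
        // Budget exhausted: report the test as failed/timed out.
        return done.test(current) ? Optional.of(current) : Optional.empty();
    }
}
```

Counting steps instead of wall-clock time keeps the pass/fail outcome deterministic across machines, as suggested above.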
@Tobi:
Thanks for your feedback :)
I don't know about the import-term problem, but it won't be related to the working directory. Imported terms should work exactly as they do normally, asking the language descriptor where to get the attachment. Can you post a bug report?

I don't really get your point about "real test-driven development". You're saying that it shouldn't have a "build" step??
Loops are handled the way they always are: you get a stop button in the progress view and hope for the best. Don’t write loops.
As for your fourth point, yes, we want to add Stratego expressions as conditions for normal syntax-based test cases next.
First point: kk, I’ll file a separate report.
Second point: maybe "real" was a bit of an exaggeration; what I mean is that it would be great if the build step were eliminated for writing code against existing tests, as it now is for writing tests against existing code. (Where code = Stratego code in the plugin-under-test.)
Third point: hmm, I guess the problem in this case was that I entered something that filled the heap a bit too quickly, as I didn't notice a progress view or stop button (just a locked-up Eclipse and an OutOfMemoryError after a while). I'll watch what happens when I try a more memory-friendly infinite loop :-)
Fourth point: OK, does "conditions" imply you can simply use the fragment as input for testing a strategy? In that case it is what I was trying to build :)
@Tobi:
2) Right now, Spoofax plugins still require compilation. It just so happens that the test expressions don't: they use a new technique based on HybridInterpreter.desugar/evaluate and are only dynamically checked. They're actually a bit fragile that way and don't support higher-order strategies or nullary constructors without parentheses. But it could be interesting to see if we could use it for normal Stratego definitions as well, where the editor does all the static checks anyway. We also want incremental compilation at some point.

4) And yes, that's exactly what I meant with conditions.
Implemented as Spoofax/361.