The purpose of this issue is to document a use case of SPT that is currently not supported. Our use case, testing optimisations, usually follows this pattern:

  1. Arrange: write down the input fragment
  2. Act: perform the optimisation to the input fragment
  3. Assert: make one or more assertions about (parts of) the result

SPT falls short in several aspects:

  • With the current run <strategy> to <term> expectation we need to perform the optimisation over and over, once for every assertion. It would be nice to run the optimisation on the full fragment once and use markers to parametrise the strategies that perform the assertions.

    • For example, after optimising the input fragment, I would like to write run get-type(#0) to IntTy(), where #0 is a marker that captures the name of the variable we’re interested in.
  • (A more ambitious goal here would be to link markers in the input program to nodes in the program after optimisation. Some kind of reverse-origin-tracking. Oh god, that sounds daunting…)

  • It would be nice to use concrete syntax in the result of a run strategy test, so tests become less sensitive to changes in the underlying representation. This comes with its own set of challenges.

Submitted by Martijn on 20 April 2017 at 01:47

On 20 April 2017 at 11:18 Volker commented:

Could you elaborate a bit on what this optimisation is and what it produces?

Based on the info provided this is what I think the problem is:

A unit test consists of several elements:
- component under test
- initial state of this component
- test input
- expected output

The test input is the input fragment.
The component under test is (often implicitly) declared by a test expectation.
The initial state can only be expressed through test fixtures and the start symbol header (we might have to take a look at that at some point).
The expected output is also a part of the test expectations.

In your case, both your component under test and your assertions about the expected output are not covered by the common test expectations of SPT.
So you are forced to use the run expectation to specify your custom component under test (the optimisation), but that still leaves the assertion problem.
I would need a clearer picture of what this optimisation returns and what you want to assert about it to see what SPT is missing.

To address your list, you can use run <strategy> on <selection> to run a strategy on a selection of the input fragment.
However, it sounds like you want to first run a strategy on the entire fragment, and then select parts of the output of that strategy and run yet another strategy against these selections.
If the output of the optimisation is also a valid program in your language you could maybe simply create a new test case with this program as input.
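
For illustration, a test along these lines might look as follows (the input language and fragment are hypothetical; get-type and IntTy() are taken from the issue description, and the [[...]] brackets inside the fragment mark the selection referenced as #1):

test selected variable has integer type [[
  int [[x]] = 0;
  print(x);
]] run get-type on #1 to IntTy()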

If the optimisation is like a transformation, Spoofax might actually already track the origin.
Last time I checked, when a new Term is created, the input term is passed as the origin of that new Term.

According to the docs, you can use run <strategy> to <Language?> <Fragment> for concrete syntax.
http://www.metaborg.org/en/latest/source/langdev/meta/lang/spt.html
If that doesn’t work, please report a new Issue.
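
For example, a test using a hypothetical desugar strategy whose output is checked against a concrete-syntax fragment rather than an ATerm (sketch based on the syntax quoted above):

test desugaring of compound assignment [[
  x += 1;
]] run desugar to [[
  x = x + 1;
]]

Presumably the optional language name is only needed when the output fragment should be parsed with a different language than the input fragment.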


On 20 April 2017 at 17:25 Martijn commented:

> In your case, both your component under test and your assertions about the expected output are not covered by the common test expectations of SPT.
> So you are forced to use the run expectation to specify your custom component under test (the optimisation), but that still leaves the assertion problem.
> I would need a clearer picture of what this optimisation returns and what you want to assert about it to see what SPT is missing.

Take, for example, loop merging:

test [[
  for (int x = 0; x < 10; x++) {}
  for (int x = 0; x < 10; x++) {}
]] perform merge-loops then assert count-loops to 1

Where I perform the optimisation (here merge-loops) once and make one or more assertions afterwards.

> However, it sounds like you want to first run a strategy on the entire fragment, and then select parts of the output of that strategy and run yet another strategy against these selections.

Correct. Take this more general example of loop merging, where we want to assert that one loop is being merged but the other isn’t:

test valid loops are merged but invalid ones not [[
  class Foo {
    method bar() {
      // Can be merged
      [[for (int i = 0; i < 10; i++) { ... }
      for (int i = 0; i < 10; i++) { ... }]]

      // Cannot be merged
      [[for (int i = 0; i < 1; i++) { ... }
      for (int i = 5; i < 6; i++) { ... }]]
    }
  }
]] perform merge-loops then assert count-loops on #1 to 1, assert count-loops on #2 to 2

Actually, in this case you could split this into two separate test cases. But then imagine some interaction going on between the mergeable and non-mergeable loops, such that they need to be in the same test case.
