Cross-platform runtime bundle tests

Hi,

we have 3+ bundle/resolver/runtime implementations now, and we don’t share a test suite between them. That’s kinda problematic, so I wondered whether we could do for .bundle something similar to what we’re doing for .syntax. That is, fixtures and reference results.

@stas, @zbraniecki, @spookylukey, have you thought about that yet?

My ad-hoc idea is to have a suite of Fluent files, with semantic comments that formalize the input and output data.

# @returns "value"
simple-1 = value

# @returns None
# @disable-term
-term-1 = simple

# @returns "simple term reference"
simple-2 = {-term-1} term reference

I didn’t bother checking this against the proposal for semantic comments :wink:

+1 from me.

fluent-rs is about to start incorporating some of the tests from fluent.js, and I would love to not have to rewrite them to make them language-agnostic.

This sounds like a good first step to specify the expected behavior. We could even do it before the work on the reference resolver starts.

Ideally, the comments would also include the expected errors. It might be best to add those in a follow-up iteration, however, once we standardize the error codes.

This sounds really good to me. Like Stas, I was wondering about errors - it would be a shame to have to keep parts of our test suite around only for the sake of checking the errors returned. Errors may require care to handle in a language-agnostic way.

For python-fluent, this proposal might also make it easier to have multiple implementations in separate packages. Up until now I’ve been arguing for keeping them in the same package, and the test suite has been a big reason for this, but if a full external test suite is available, that problem goes away.

On the other hand, a difficulty with this proposal is making the test suite complete enough for all implementations, and making it easy for implementers to add to it (perhaps locally for their own needs, though contributing back to a central suite would be better). For example, in my compiler implementation for python-fluent, I had to add more tests for select expressions.

For lots of tests I’ve got a ‘static’ and a ‘runtime’ version. This is because the compiler implementation optimizes static expressions, so an entire select expression becomes a static string where possible. To exercise all the code paths, you have to include a version that can’t be optimized away.
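To make that concrete, here is a sketch of what such a pair could look like in the proposed fixture format. The `@returns` annotation follows the ad-hoc example above; the `@input` annotation, the message names, and the exact output strings are made up for illustration (actual number formatting may differ between runtimes):

```ftl
# Every variant and the selector are literals, so a compiler can
# fold this entire select expression into a constant string.
# @returns "no new messages"
select-static =
    { "none" ->
       *[none] no new messages
        [some] new messages
    }

# The selector is an external variable, so the expression cannot be
# optimized away; this exercises the runtime code path.
# @input $count = 2
# @returns "2 new messages"
select-runtime =
    { $count ->
        [one] { $count } new message
       *[other] { $count } new messages
    }
```

With a pair like this, both a compiling and an interpreting implementation can run the same fixture and still cover their respective code paths.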

When I was in control of the test suite it was easy to just add more tests, right alongside the existing ones so they were ‘equal citizens’ as it were. It would be nice if we could have a similarly hassle free experience for other implementers.

My brain quickly glanced at errors, and then moved on :wink: . There are subtle details about how to encode errors: are we using error IDs or error messages, and if we do the latter, do we end up with problems running tests on localized systems? Etc.
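For what it’s worth, asserting errors by ID rather than by message text would sidestep the localized-systems problem. A hypothetical sketch, where the `@error` annotation, the E-style code, and the fallback output shown are all made up pending standardized error codes:

```ftl
# The referenced message does not exist; the resolver is expected
# to emit fallback output and report an error by ID, never by
# localized message text.
# @returns "{missing}"
# @error E0001 (unknown message reference)
broken-1 = { missing }
```

Implementations would then map their internal error types to the shared IDs once, instead of matching on message strings.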

To Luke’s point, I think of these tests as integration tests of sorts. I do expect that implementations will explicitly disable a few tests, for example when it comes down to platform-dependent formatter support. Not that they would do something different, but some features might just not be feasible.

Implementations should be encouraged to have unit tests as they see fit.