[Avocado-devel] Multi Stream Test Support

Jeff Nelson jen at redhat.com
Mon Apr 10 17:56:21 UTC 2017


On Mon, 3 Apr 2017 09:48:16 -0400
Cleber Rosa <crosa at redhat.com> wrote:

> Note: this document can be view in rendered format at:
> 
> https://github.com/clebergnu/avocado/blob/RFC_multi_stream_v1/docs/source/rfcs/multi_stream.rst
> 

Here's my feedback. Please let me know if you have any questions.


> ===========================
>  Multi Stream Test Support
> ===========================
> 
> Introduction
> ============
> 
> Avocado currently does not provide test writers with standard tools
> or guidelines for developing tests that spawn multiple machines.
> 


s/spawn multiple machines/require multiple, cooperating execution
contexts/ ?


> Since these days the concept of a "machine" is blurring really
> quickly, this proposal for Avocado's version of "multi machine" test
> support is more abstract (that's an early and quick explanation of
> what a "stream" means).  One of the major goal is to be more flexible
> and stand the "test" (pun intended) of time.

This introduction mentions "machine" multiple times. I found this
puzzling, because the proposal is titled "multi stream test support".
The machine view is an old, historical idea and I know next to nothing
about it, so it's not very helpful to me.

I suggest writing an intro for a target audience that knows nothing
about the past but is familiar with how avocado operates today. The
intro should describe the goal or objective of the design. Reference to
older implementations and ideas can be done in the Background section.


> This is a counter proposal to a previous RFC posted and discussed on
> Avocado development mailing list.  Many of the concepts detailed here
> were introduced there:
> 
> * https://www.redhat.com/archives/avocado-devel/2016-March/msg00025.html
> * https://www.redhat.com/archives/avocado-devel/2016-March/msg00035.html
> * https://www.redhat.com/archives/avocado-devel/2016-April/msg00042.html
> * https://www.redhat.com/archives/avocado-devel/2016-April/msg00072.html

Are there other related RFCs that this proposal affects? For example,
is there an RFC that describes the avocado callable test interface? If
so, referencing it might be useful for readers.

 
> Background
> ==========
> 
> The prior art that influences Avocado the most is Autotest.  The
> reason is that many of the Avocado developers worked on Autotest
> before, and both share various common goals.  Let's use Autotest,
> which provided support for multiple machine test support as a basis
> for comparison.
> 
> Back in the Autotest days, a test that would spawn multiple machines
> was a very particular type of test.  To write such a test, one would
> write a **different** type of "control file" (a server one).  Then, by
> running a "server control file" with an **also different** command
> line application (``autoserv``, A.K.A. ``autotest-remote``), the
> server control file would have access to some special variables, such
> as the ``machines`` one.  By using an **also different** type of job
> implementation, the control file could run a given **Python function**
> on these various ``machines``.
> 
> An actual sample server control file (``server/samples/reboot.srv``)
> for Autotest looks like this::
> 
>    1  def run(machine):
>    2     host = hosts.create_host(machine)
>    3     host.reboot()
>    4
>    5  job.parallel_simple(run, machines)
> 
> Line #5 makes use of the different (server) job implementation to run
> function ``run`` (defined in line #1) in parallel on machines given by
> the special variable ``machines`` (made available by the also special
> ``autoserv`` tool).
> 
> This quick background check shows two important facts:

s/two/three/ (or four, if you adopt the suggestion that follows)

 
> 1) The functionality is not scoped to tests.  It's not easy to understand
>    where a test begins or ends by looking at such a control file.
> 
> 2) Users (and most importantly test writers) have to learn about
>    different tools and APIs when writing "multi machine" code;
> 
> 3) The machines are defined outside the test itself (in the form of
>    arguments to the ``autoserv`` command line application);

4. The "machine" abstraction is insufficient.

Basically, the "multiple machines" view has evolved into a "multiple
execution streams" view. With multiple streams, the machine is just one
of several properties.


> Please keep these Autotest characteristics in mind: Avocado's multi
> stream test support goals will be presented shortly, and will detail
> how they contrast with those.
> 
> Avocado's Multi Stream Test Support Goals
> =========================================
> 
> This is a hopefully complete summary of our goals:
> 
> 1) To not require a different type of test, that is, allow users
>    to *write* a plain `avocado.Test` while still having access to
>    multi stream goodies;
> 
> 2) To allow for clear separation between the test itself and its
>    execution environment (focus here on the execution streams
>    environment);
> 
> 3) To allow increased flexibility by abstracting the "machines"
>    concept into "excution streams";
> 
> 4) To allow for even increased flexibility by allowing test writers to
>    use not only Python functions, but other representations of code to
>    be executed on those separate streams;

I'm not sure what this means, specifically the phrase "other
representations of code".


> Comparison with prior art
> -------------------------
> 
> When compared to the Autotest version of multiple machine support for
> tests, Avocado's version is similar in that it keeps the separation of
> machine and test definition.

This is slightly contradictory to the autotest example given above,
where one of the deficiencies was, "it's not easy to understand where a
test begins and ends". In other words, it seems as though you're
claiming avocado has a characteristic similar to autotest, but it's a
characteristic we don't really like.


>                                That means that tests written in
> accordance to the official guidelines, will not contain reference to
> the machines ("execution streams") on which they will have portions of
> themselves executed on.

OK, I see the point.


> But, a major difference from the Autotest version is that this
> proposal attempts to provide the **same basic tools and test APIs** to
> the test writers needing the multiple stream support.  Of course,
> additional tools and APIs will be available, but they will not
> incompatible with traditional Avocado INSTRUMENTED tests.

Yes.

This comparison section is a bit light. One could almost read it as a
restatement of Goal 1. And there are 4 goals in total. If the key
points of this comparison were incorporated into Goal 1, you could
probably drop this section. (Or you could enhance the section by adding
comparisons to the other goals.)


> Core concepts
> =============
> 
> Because the first goal of this RFC is to set the general scope and
> approach to Multi Stream test support, it's important to properly
> describe each of the core concepts (usually abstractions) that will be
> used in later parts of this document.
> 
> Execution Stream
> ----------------
> 
> An *Execution Stream* is defined as a disposable execution environment,
> different and ideally isolated from the main test execution environment.

What about the relationship between execution streams? Is there any
requirement or expectation that the execution streams are different and
ideally isolated from *each other* as well?


> A simplistic but still valid implementation of an execution
> environment could be based on an Operating System level process.
> Another valid implementation would be based on a lightweight
> container.  Yet another valid example could be based on a remote
> execution interface (such as a secure shell connection).
> 
> These examples makes it clear that level of isolation is determined
> solely by the implementation.
> 
>  .. note:: Even though the idea is very similar, the term *thread* was
>            intentionally avoided here, so that readers are not led to think
>            that the architecture is based on an OS level thread.
> 
> An execution stream is the *"where"* to execute a "Block Of Code"
> (which is the *"what"*).
> 
> Block of Code
> -------------
> 
> A *Block of Code* is defined as computer executable code that can run
> from start to finish under a given environment and is able to report
> its outcome.

Self-referential definition (uses the word "code" both in the term
being defined and in the definition). Instead of "computer executable
code", how about something like "a sequence of executable statements"?

 
> For instance, a command such as ``grep -q vmx /proc/cpuinfo; echo $?``
> is valid computer executable code that can run under various shell
> implementations.  A Python function or module, a shell command, or
> even an Avocado INSTRUMENTED test could qualify as a block of code,
> given that an environment knows how to run them.
> 
> Again, this is the *what* to be run on a "Execution Streams" (which,
> in turn, is *"where"* it can be run).
> 
> Basic interface
> ===============
> 
> Without initial implementation attempts, it's unreasonable to document
> interfaces at this point and do not expect them to change.  Still, the
> already existing understanding of use cases suggests an early view of
> the interfaces that would be made available.

Personal preference: The first sentence contains two double-negative
phrases (unreasonable, do not) which make the meaning hard to
understand. I think the statement would read better if one of the
negatives is transformed into a positive.

For example:
  s/and do not expect them/because they are likely/

 
> Execution Stream Interface
> --------------------------
> 
> One individual execution stream, within the context of a test, should
> allow its users (test writers) to control it with a clean interface.

"Control" to me implies being able to manipulate an active, executing
stream. Perhaps a better term instead of "control" is "define"?

[After reading on, I see that in fact the word "control" does make
sense given the actions you've listed below. I guess I was expecting to
see this section describe the "where" and the "what" properties of an
execution stream.]


> Actions that an execution stream implementation should provide:
> 
> * ``run``: Starts the execution of the given block of code (async,
>   non-blocking).
> * ``wait``: Block until the execution of the block of code has
>   finished.  ``run`` can be given a ``wait`` parameter that will
>   automatically block until the execution of code has finished.
> * ``terminate``: Terminate the execution stream, interrupting the
>   execution of the block of code and freeing all resources
>   associated with this disposable environment

I'm trying to think about how these actions are different than the
actions avocado already defines for a test. So one thing that I think
could be clarified is how the multi stream proposal affects the
existing test architecture.

For example:
* execution streams: today, tests have a single execution stream. In the
  future, tests have one or more execution streams.

* run: today, tests run themselves. In the future, (something) runs
  execution streams.

  * what is this "something?" Is it the test runner? Is it a stream?
    If it's a stream, then what launches the first stream? I don't
    have a clear picture of how this looks (I suspect this reflects
    more on me than on you).

  * this question arises in part because I'm trying to understand what
    happens today to the existing code that makes up a test, in
    particular the part that makes up the 'mainline'. Does it turn into
    a stream? If not, then perhaps it should have an official name
    (assuming it doesn't) so that we can reason about it separately
    from streams and to avoid people thinking that it's a stream (in
    case it's not).

* async: today, test (stream) execution is synchronous. In the future,
  streams can execute asynchronously.

* wait: doesn't really exist today and it wouldn't make sense
  anyway: a single test (stream) cannot wait on itself. In the future,
  streams can wait on other streams. However, a stream cannot wait on
  itself and a stream cannot wait on a stream in another test.

  * I realize I made a possibly invalid assumption: that any stream in
    a test can get information about any other stream in the same test.
    That may or may not be true. It implies being able to pass some
    sort of handle to an execution stream so that streams can learn
    about each other. That's another--possibly bad--assumption.

* terminate: doesn't really exist today, but there is a similar
  operation called 'cancel'. Does the meaning of 'cancel' change?
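
To make my mental model concrete (and so you can tell me where it's
wrong), here is a minimal process-based sketch of the interface as I
currently picture it. Every name below is my own guess, not the RFC's
final API:

  import subprocess

  class ProcessStream(object):
      """My guess at a process-backed execution stream."""

      def __init__(self):
          self._proc = None

      def run(self, command, wait=False):
          # start the block of code; asynchronous unless wait=True
          self._proc = subprocess.Popen(command, shell=True,
                                        stdout=subprocess.PIPE,
                                        stderr=subprocess.PIPE)
          if wait:
              self.wait()

      def wait(self):
          # block until the block of code finishes
          if self._proc is not None:
              self._proc.wait()

      def terminate(self):
          # dispose of the environment, interrupting the code if needed
          if self._proc is not None and self._proc.poll() is None:
              self._proc.terminate()

      @property
      def active(self):
          # True while the block of code is still executing
          return self._proc is not None and self._proc.poll() is None

      @property
      def success(self):
          # simplistic outcome: did the code exit with status zero?
          return self._proc is not None and self._proc.poll() == 0

In this sketch the "something" that runs a stream is simply the test's
own code (the mainline), which is part of why I'd like the mainline to
have an official name.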


> The following properties should be provided to let users monitor the
> progress and outcome of the execution:
> 
> * ``active``: Signals with True or False wether the block of code
>   given on ``run`` has finished executing.  This will always return
>   False if ``wait`` is used, but can return either True or False when
>   running in async mode.

s/wether/whether/

I think if you use the word "signal" then the reader will be thinking
in terms of exceptions that are signaled. Using the word "return" seems
better, especially since that's what you use in the rest of the
description.

Consider that 'active' is only one possibility; there could be a whole
set of values that describe the execution state of the stream.


> * ``success``: A simplistic but precise view of the outcome of the
>   execution.

Likewise, 'success' seems to be one of many possible outcomes.
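
If it helps, here is the kind of richer model I had in mind; purely
illustrative, and every name is mine:

  import enum

  class StreamState(enum.Enum):
      # execution states beyond a bare "active" flag
      NOT_STARTED = "not started"
      RUNNING = "running"
      FINISHED = "finished"
      TERMINATED = "terminated"

  class StreamOutcome(enum.Enum):
      # outcomes beyond a bare "success" boolean
      PASSED = "passed"
      FAILED = "failed"
      ERRORED = "errored"
      INTERRUPTED = "interrupted"

A boolean pair (active/success) may well be enough for a first cut; I
just want us to leave room for growth.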


> * ``output``: A dictionary of various outputs that may have been
>   created by ``run``, keyed by a descriptive name.
> 
> The following properties could be provided to transport block of code
> payloads to the execution environment:
> 
> * ``send``: Sends the given content to the execution stream
>   environment.
> 
> Block of Code Interface for test writers
> ----------------------------------------
> 
> When a test writer intends to execute a block code, he must choose from
> one of the available implementations.  Since the test writer must know
> what type of code it's executing, the user inteface with the implementation
> can be much more flexible.

I'm stuck on wording again. Are these equivalent statements?

a) When a test writer intends to execute a block of code
b) When a test writer intends to define a block of code
c) When a test author intends to write a block of code

If so I would prefer either (c) or (b) instead of (a).

I have a similar difficulty parsing the phrase, "the test writer must
know what type of code it's executing". First, machines typically
execute code, not people. Second, "it's" isn't a valid way to refer to
a person in English; the gender-specific terms "he" or "she" are used
instead.

 
> For instance, suppose a Block Of Code implementation called
> ``PythonModule`` exists.  This implementation would possibly run
> something like
> ``python -m <modulename>`` and collect its outcome.
> 
> A user of such an implementation could write a test such as::
> 
>   from avocado import Test
>   from avocado.streams.code import PythonModule
> 
>   class ModuleTest(Test):
>     def test(self):
>         self.streams[1].run(PythonModule("mymodule",
>                                          path=["/opt/myproject"]))
> 
> The ``path`` interface in this example is made available and supported
> by the ``PythonModule`` implementation alone and will not be used the
> execution stream implementations. As a general rule, the "payload"
> should be the first argument to all block of code implementations.
> Other arguments can follow.

This all seems OK to me.


> Another possibility related to parameters is to have the Avocado's own
> test parameters ``self.params`` passed through to the block of code
> implementations, either all of them, or a subset based on path.  This
> could allow for example, a parameter signaling a "debug" condition to
> be passed on to the execution of the block of code.  Example::
> 
>   from avocado import Test
>   from avocado.streams.code import PythonModule
> 
>   class ModuleTest(Test):
>     def test(self):
>         self.streams[1].run(PythonModule("mymodule",
>                                          path=["/opt/myproject"],
>                                          params=self.params))

This makes sense.

 
> Block of Code Interface for Execution Stream usage
> --------------------------------------------------
> 
> Another type of public interface, in the sense that it's well known
> and documented, is the interface that Execution Stream implementations
> will use to interact with Block of Code implementations.  This is not
> intended to be used by test writers, though.

Then who would use it?

 
> Again, it's too early to define a frozen implementation, but this is
> how it could look like:
> 
> * ``send_self``: uses the Execution Stream's ``send`` interface to properly
>   populate the payload or other necessary assets for its execution.

I'm afraid I'm lost again. The above section on Execution Streams says
that its interfaces are used by users (test writers).

Here we have a definition of a send_self interface which is described
as using 'send', one of the interfaces of Execution Streams. So does
that mean the person writing a send_self method (that calls 'send') is
a test writer?

I think an architecture diagram would help me see how these various
layers fit together and interact with each other.
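
In the absence of a diagram, here is how I currently picture the
layering: the Execution Stream implementation, not the test writer, is
the caller of the Block of Code interface. All class and method names
below are my own assumptions:

  import subprocess

  class ShellCode(object):
      """Hypothetical Block of Code implementation."""

      def __init__(self, command):
          self.command = command
          self.success = None
          self.output = {}

      def send_self(self, stream):
          # use the stream's "send" to stage whatever the payload
          # needs; a plain shell command has nothing to stage
          pass

      def run(self):
          # synchronous execution; async handling is the stream's job
          result = subprocess.run(self.command, shell=True,
                                  capture_output=True)
          self.success = (result.returncode == 0)
          self.output = {"stdout": result.stdout,
                         "stderr": result.stderr}

  class LocalStream(object):
      """Hypothetical (local process) Execution Stream."""

      def send(self, content):
          # local execution: nothing to transport
          pass

      def run(self, code):
          code.send_self(self)   # stream -> code: stage the payload
          code.run()             # stream -> code: execute it
          self.success = code.success
          self.output = dict(code.output)

If that's roughly right, then the answer to my earlier "who would use
it?" question is: the Execution Stream implementations themselves, and
test writers never call send_self directly.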


> * ``run``: Starts the execution of the payload, and waits for the outcome
>   in a synchronous way.  The asynchronous support is handled at the
>   Execution Stream side.
> * ``success``: Reports the positive or negative outcome in a
>   simplistic but precise way.
> * ``output``: A dictionary of various outputs that may be generated by the
>   execution of the code.  The Execution Stream implementation may merge this
>   content with its own ``output`` dictionary, given an unified view of the
>   output produced there.
> 
> Advanced topics and internals
> =============================
> 
> Execution Streams
> -----------------
> 
> An execution stream  was defined as a "disposable execution
> environment".  A "disposable execution environment", currently in the
> form of a fresh and separate process, is exactly what the Avocado
> test runner gives to a test in execution.
> 
> While there may be similarities between the Avocado Test Process
> (created by the test runner) and execution streams, please note that
> the execution streams are created *by* one's test code.  The following
> diagram may help to make the roles clearer::
> 
>    +-----------------------------------+
>    |       Avocado Test Process        |  <= created by the test runner
>    | +-------------------------------+ |
>    | | main execution stream         | |  <= executes your `test*()` method
>    | +-------------------------------+ |
>    | | execution stream #1           | |  <= initialized on demand by one's
>    | | ...                           | |     test code.  utilities to do so
>    | | execution stream #n           | |     are provided by the framework
>    | +-------------------------------+ |
>    +-----------------------------------+

Yay, a picture! :-)

Some of my earlier questions about the test (is it a stream or not) are
now coming into focus. In particular, it's now clear that the existing
test code has a distinct status of its own (the "main execution
stream").

 
> Even though the proposed mechanism is to let the framework create the
> execution lazily (on demand), the use of the execution stream is the
> definitive trigger for its creation.  With that in mind, it's accurate
> to say that the execution streams are created by one's test code
> (running on the "main execution stream").
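
The lazy creation clicked for me once I imagined the streams attribute
as a container that instantiates a stream on first access; something
like this (my own sketch, not the proposed API):

  class Streams(object):
      """Hypothetical container creating streams on first access."""

      def __init__(self, factory):
          self._factory = factory   # e.g. a ProcessStream class
          self._streams = {}

      def __getitem__(self, key):
          # the first reference to self.streams[n] creates stream n
          if key not in self._streams:
              self._streams[key] = self._factory()
          return self._streams[key]

If that matches your intent, a sentence along those lines in the RFC
would help.
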
> 
> Synchronous, asynchronous and synchronized execution
> ----------------------------------------------------
> 
> As can be seen in the interface proposal for ``run``, the default
> behavior is to have asynchronous executions, as most observed use
> cases seem to fit this execution mode.
> 
> Still, it may be useful to also have synchronous execution.  For that,
> it'd be a matter of setting the ``wait`` option to ``run``.
> 
> Another valid execution mode is synchronized execution.  This has been
> thoroughly documented by the previous RFCs, under sections named
> "Synchronization".  In theory, both synchronous and asynchronous
> execution modes could be combined with a synchronized execution, since
> the synchronization would happen among the execution streams
> themselves.  The synchronization mechanism, usually called a "barrier",
> won't be given too much focus here, since on the previous RFCs, it was
> considered a somehow agreed and understood point.
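
"Agreed and understood" is probably true for people who followed the
earlier RFCs, but a one-line refresher on the barrier would help new
readers. I picture semantics much like Python's own threading barrier
(my analogy, not the proposed API):

  import threading

  # both synchronized streams reach the barrier and only then proceed,
  # which is the "start only when all are ready" behavior
  barrier = threading.Barrier(2)

  def payload():
      barrier.wait()   # block until every party has arrived
      # ... the actual block of code starts here ...
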
> 
> Termination
> -----------
> 
> By favoring asynchronous execution, execution streams need to also
> have a default behavior for handling termination of termination
> of resources.

Nit: It's the fact that asynchronous execution is offered at all that
causes this problem to arise. It doesn't matter whether it's the
favored execution model or not.

Suggestion: "By favoring asynchronous execution" --> "By offering an
asynchronous execution environment"

"termination of termination" is a circular phrase. Maybe something like
this would be better: "...execution streams need to also have a default
behavior for their termination that includes releasing of resources."

However, please be very explicit about exactly what resources you are
reclaiming. (I recall a recent request to terminate child processes
created by a test; that request was denied. I don't think you mean to
reverse that decision, which is why I'm making this point.)

Here's another suggested wording. It's probably too much,
but maybe you can get something useful from it:

  Synchronous execution streams are designed to terminate when their
  block of code exits; because they are synchronous, the termination of
  the test is blocked until the execution stream has completed.
  However, asynchronous execution streams may not explicitly terminate,
  but run "forever". Therefore, avocado must terminate an asynchronous
  execution stream when the test that created it terminates.


>                For instance, for a process based execution stream,
> if the following code is executed::
> 
>   from avocado import Test
>   from avocado.streams.code import shell
>   import time
> 
>   class MyTest(avocado.Test):
>       def test(self):
>           self.streams[0].run(shell("sleep 100"))
>           time.sleep(10)
> 

Nit: I thought stream numbers started at 1. s/streams[0]/streams[1]/.


> The process created as part of the execution stream would run for
> 10 seconds, and not 100 seconds.  This reflects that execution streams
> are, by definition, **disposable** execution environments.

This is the second time the characteristic "disposable" has been used,
and I'm still not sure what it means.


> Execution streams are thus limited to the scope of one test, so
> implementations will need to terminate and clean up all associated
> resources.

"all associated resources" -- again, I think you need to be precise
about what resources are being discussed. Saying "all" implies more
than I think you mean.

 
> .. note:: based on initial experiments, this will usually mean that a
>           ``__del__`` method will be written to handle the cleanup.
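
A ``__del__``-based cleanup sounds fine to me as long as it is scoped
to what the stream itself created. Continuing my process-based sketch
from earlier, I would expect nothing more than:

      def __del__(self):
          # reclaim only what this stream created: its own child
          # process, not other processes the test may have spawned
          if self._proc is not None and self._proc.poll() is None:
              self._proc.terminate()
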
> 
> Avocado Utility Libraries
> -------------------------
> 
> Based on initial evaluation, it looks like most of the features necessary
> to implement multi stream execution support can be architected as a set
> of utility libraries.
> 
> One example of pseudo code that could be possible with this design::
> 
>   from avocado import Test
>   from avocado.streams import get_implementation
>   from avocado.streams.code import shell
> 
>   class Remote(Test):
> 
>       def test_filtering(self):
>           klass = get_implementation("remote")
>           if klass is not None:
>               stream = klass(host=self.params.get("remote_hostname"),
>                              username=self.params.get("remote_username")
>                              password=self.params.get("remote_password"))
>               cmd = "ping -c 1 %s" % self.params.get("test_host_hostname")
>               stream.run(shell(cmd))
>

> Please note that this is not the intended end result of this proposal, but
> a side effect of implementing it using different software layers.  Most
> users should favor the simplified (higher level) interface.

I am trying to understand the purpose of this illustration. Based on
the remarks in the last paragraph above, I think you are trying to
demonstrate how to construct a test using the stream interface
described by this RFC (one that would be simpler to define using the
existing test interface).

Given this context, I'm trying to see if I understand how this code
behaves. And I remember (from the above definition) that run() defaults
to asynchronous execution, so should the 'wait' option be specified?
The ping must terminate since it has a count of 1. If wait is not
passed, then it's possible for the test to terminate before the ping
has been sent (or the response received).
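
Concretely, with the interface as proposed I would have expected
either an explicit stream.wait() at the end, or something like this
(assuming run() accepts the wait flag described earlier):

  stream.run(shell(cmd), wait=True)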


> Writing a Multi-Stream test
> ===========================
> 
> As mentioned before, users have not yet been given tools **and
> guidelines** for writing multi-host (multi-stream in Avocado lingo)
> tests.  By setting a standard and supported way to use the available
> tools, we can certainly expect advanced multi-stream tests to become
> easier to write and then much more common, robust and better supported
> by Avocado itself.
> 
> Mapping from parameters
> -----------------------
> 
> The separation of stream definitions and test is a very important goal
> of this proposal.  Avocado already has a advanced parameter system, in
> which a test received parameters from various sources.The most common
> way of passing parameters at this point is by means of YAML files, so
> these will be used as the example format.

Suggested:
s/stream definitions and test/stream and test abstractions/

Grammar changes:
s/a advanced/an advanced/
s/a test received parameters/a test receives/


> Parameters that match a predefined schema (based on paths and node
> names) will be by evaluated by a tests' ``streams`` instance
> (available as ``self.streams`` within a test).
> 
> For instance, the following snippet of test code::
> 
>   from avocado import Test
> 
>   class MyTest(Test):
>       def test(self):
>           self.streams[1].run(python("import mylib; mylib.action()"))
> 
> Together with the following YAML file fed as input to the parameter
> system::
> 
>   avocado:
>      streams:
>       - 1:
>           type: remote
>           host: foo.example.com
> 
> Would result in the execution of ``import mylib; mylib.action()``
> in a Python interpreter on host ``foo.example.com``.
> 
> If test environments are refered to on a test, but have not been defined
> in the outlined schema, Avocado's ``streams`` attribute implementation
> can use a default Execution Stream implementation, such as a local process
> based one.  This default implementation can, of course, also be configured
> at the system and user level by means of configuration files, command line
> arguments and so on.

Grammar & typo change:
s/refered to on a test/referred to by a test/
s/process based/process-based/

> Another possibility is an "execution stream strict mode", in which no
> default implementation would be used, but an error condition would be
> generated.  This may be useful on environments or tests that are
> really tied to their execution stream types.
> 
> Intercommunication Test Example
> -------------------------------
> 
> This is a simple example that exercises the most important aspects
> proposed here.  The use case is to check that different hosts can
> communicate among themselves.  To do that, we define two streams as
> parameters (using YAML here), backed by a "remote" implementation::
> 
>   avocado:
>      streams:
>       - 1:
>           type: remote
>           host: foo.example.com
>       - 2:
>           type: remote
>           host: bar.example.com
> 
> Then, the following Avocado Test code makes use of them::
> 
>   from avocado import Test
>   from avocado.streams.code import shell
> 
>   class InterCommunication(Test):
>       def test(self):
>           self.streams[1].run(shell("ping -c 1 %s" % self.streams[2].host))
>           self.streams[2].run(shell("ping -c 1 %s" % self.streams[1].host))
>           self.streams.wait()
>           self.assertTrue(self.streams.success)
> 
> The ``streams`` attribute provide a aggregated interface for all the
> streams.
> Calling ``self.streams.wait()`` waits for all execution streams (and their
> block of code) to finish execution.

Grammar:
s/a aggregated/an aggregated/


> Support for slicing, if execution streams names based on integers only could
> be added, allowing for writing tests such as::

Slicing is a new concept that has not yet been defined. What is it?


>   avocado:
>      streams:
>       - 1:
>           type: remote
>           host: foo.example.com
>       - 2:
>           type: remote
>           host: bar.example.com
>       - 3:
>           type: remote
>           host: blackhat.example.com
>       - 4:
>           type: remote
>           host: pentest.example.com
> 
>   from avocado import Test
>   from avocado.streams.code import shell
> 
>   class InterCommunication(Test):
>       def test(self):
>           self.streams[1].run(shell("ping -c 1 %s" % self.streams[2].host))
>           self.streams[2].run(shell("ping -c 1 %s" % self.streams[1].host))
>           self.streams[3].run(shell("ping -c 1 %s" % self.streams[1].host))
>           self.streams[4].run(shell("ping -c 1 %s" % self.streams[1].host))
>           self.streams.wait()
>           self.assertTrue(self.streams[1:2].success)
>           self.assertFalse(self.streams[3:4].success)
>
> Support for synchronized execution also maps really well to the
> slicing example.  For instance, consider this::
> 
>   from avocado import Test
>   from avocado.streams.code import shell
> 
>   class InterCommunication(Test):
>       def test(self):
>           self.streams[1].run(shell("ping -c 60 %s" % self.streams[2].host)
>           self.streams[2].run(shell("ping -c 60 %s" % self.streams[1].host))
>           ddos = shell("ddos --target %s" self.streams[1].host)
>           self.streams[3:4].run(ddos, synchronized=True)
>           self.streams[1:2].wait()
>           self.assertTrue(self.streams.success)
> 
> This instructs streams 1 and 2 to start connectivity checks as soon as
> they **individually** can, while, for a full DDOS effect, streams 3
> and 4 would start only when they are both ready to do so.
> 
> Feedback and future versions
> ============================
> 
> This being an RFC, feedback is extremely welcome.  Also, exepect new
> versions based on feedback, discussions and further development of the
> ideas initially exposed here.
> 

Final thoughts: I like this proposal and think it has a lot of good
ideas. It really does a lot to advance the architecture and
usefulness of avocado.

One thing I started noticing towards the middle of the RFC is that the
term 'execution stream' seems to be overloaded; it has two meanings.
First, it's used as the overall name given to this entire concept.
Second, it's used as the name of a particular attribute (the "where")
so that it can be reasoned about separately from the block of code (the
"what"). Usually I can determine which meaning is intended by the
context in which it is used. But this was sometimes challenging. I'm
wondering if one or the other concept should have a different name.

Thanks.

-Jeff



