[Libguestfs] [libnbd PATCH v2 2/5] commands: Allow for a command queue

Richard W.M. Jones rjones at redhat.com
Wed May 22 09:23:03 UTC 2019


On Tue, May 21, 2019 at 10:15:49PM -0500, Eric Blake wrote:
> Previously, our 'cmds_to_issue' list was at most 1 element long,
> because we reject all commands except from state READY, but don't get
> back to state READY until the issue-commands sequence has completed.
> However, this is not as friendly on the client - once we are in
> transmission phase, a client may want to queue up another command
> whether or not the state machine is still tied up in processing a
> previous command.  We still want to reject commands sent before the
> first time we reach READY, as well as keep the cmd_issue event for
> kicking the state machine into action when there is no previous
> command being worked on, but otherwise, the state machine itself can
> recognize when the command queue needs draining.

I have a queued series which adds some new calls that may make
this patch simpler:

  int nbd_[unlocked_]aio_is_created (conn)
     Returns true if the connection is in the START state

  int nbd_[unlocked_]aio_is_connecting (conn)
     Returns true if the connection is connecting or handshaking

  int nbd_[unlocked_]aio_is_processing (conn)
     Returns true if the connection is issuing or replying to
     commands (but not in the READY state)

I will post these later this morning.
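
As a rough illustration of how these predicates divide up the state
machine (just a sketch of mine, not code from the queued series, and
I'm assuming the signatures end up as above), you could classify a
connection like this:

  /* Hypothetical helper, for illustration only. */
  static const char *
  connection_phase (struct nbd_connection *conn)
  {
    if (nbd_unlocked_aio_is_created (conn))
      return "created, not yet connected";
    else if (nbd_unlocked_aio_is_connecting (conn))
      return "connecting or handshaking";
    else if (nbd_unlocked_aio_is_ready (conn))
      return "ready and idle";
    else if (nbd_unlocked_aio_is_processing (conn))
      return "issuing or replying to commands";
    else
      return "closed, dead or some other state";
  }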

I believe they will let you get rid of the reached_ready flag, as well
as make this patch more correct, since you probably want to avoid
issuing commands on a connection which is closed or dead.

You'd use a test like:

  if (nbd_unlocked_aio_is_ready (conn)) {
    // call nbd_internal_run
  }
  else if (nbd_unlocked_aio_is_processing (conn)) {
    // can't call nbd_internal_run because we're in the
    // wrong state, but we can enqueue the command and it
    // will be processed next time we get to READY
  }
  else {
    // this is an error
  }
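
Fleshed out a bit, and reusing the names visible in the hunk below
(cmds_to_issue, cmd->next, nbd_internal_run, cmd_issue), the tail of
command_common might end up roughly like this.  It is only a sketch:
I'm guessing at the queue element type name and at set_error/EINVAL
for the error reporting.

  /* Refuse the command unless the connection is ready or is already
   * busy processing earlier commands.  (A real version would also
   * free cmd and its buffers on this path.)
   */
  if (!nbd_unlocked_aio_is_ready (conn) &&
      !nbd_unlocked_aio_is_processing (conn)) {
    set_error (EINVAL, "connection is not able to issue commands");
    return -1;
  }

  /* Append to the tail of the queue, assuming cmd->next was
   * initialized to NULL earlier in command_common.
   */
  if (conn->cmds_to_issue == NULL)
    conn->cmds_to_issue = cmd;
  else {
    struct command_in_flight *it = conn->cmds_to_issue;
    while (it->next != NULL)
      it = it->next;
    it->next = cmd;
  }

  /* If the state machine is idle in READY, kick it so the command is
   * issued immediately; otherwise it will be picked up when the
   * machine next cycles back to READY.
   */
  if (nbd_unlocked_aio_is_ready (conn) &&
      nbd_internal_run (conn->h, conn, cmd_issue) == -1)
    return -1;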

> @@ -296,9 +297,24 @@ command_common (struct nbd_connection *conn,
>    if (conn->structured_replies && cmd->data && type == NBD_CMD_READ)
>      memset (cmd->data, 0, cmd->count);
> 
> -  cmd->next = conn->cmds_to_issue;
> -  conn->cmds_to_issue = cmd;
> -  if (nbd_internal_run (conn->h, conn, cmd_issue) == -1)
> +  /* If we have not reached READY yet, sending the event lets the
> +   * state machine fail for requesting a command too early. If there
> +   * are no queued commands and we are already in state READY, send an
> +   * event to kick the state machine. In all other cases, append our
> +   * command to the end of the queue, and the state machine will
> +   * eventually get to it when it cycles back to READY.
> +   */

I think this test is wrong.  We don't know what the failure was; it
might have failed for a variety of reasons.

It's better simply to test nbd_unlocked_aio_is_ready: if it returns
true, call nbd_internal_run; otherwise queue the command.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines.  Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top



