  1. Jul 21, 2021
    • Peter Maydell's avatar
      Merge remote-tracking branch 'remotes/stefanha-gitlab/tags/block-pull-request' into staging · 29c7daa0
      Peter Maydell authored
      
      Pull request
      
      Stefano's performance regression fix for commit 2558cb8d ("linux-aio:
      increasing MAX_EVENTS to a larger hardcoded value").
      
      # gpg: Signature made Wed 21 Jul 2021 14:12:47 BST
      # gpg:                using RSA key 8695A8BFD3F97CDAAC35775A9CA4ABB381AB73C8
      # gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>" [full]
      # gpg:                 aka "Stefan Hajnoczi <stefanha@gmail.com>" [full]
      # Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35  775A 9CA4 ABB3 81AB 73C8
      
      * remotes/stefanha-gitlab/tags/block-pull-request:
        linux-aio: limit the batch size using `aio-max-batch` parameter
        iothread: add aio-max-batch parameter
        iothread: generalize iothread_set_param/iothread_get_param
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      29c7daa0
    • Stefano Garzarella's avatar
      linux-aio: limit the batch size using `aio-max-batch` parameter · d7ddd0a1
      Stefano Garzarella authored and Stefan Hajnoczi committed
      
      When there are multiple queues attached to the same AIO context,
      some requests may experience high latency, since in the worst case
      the AIO engine queue is only flushed when it is full (MAX_EVENTS)
      or when no more queues are plugged.
      
      Commit 2558cb8d ("linux-aio: increasing MAX_EVENTS to a larger
      hardcoded value") changed MAX_EVENTS from 128 to 1024, to increase
      the number of in-flight requests. But this change also increased
      the potential maximum batch to 1024 elements.
      
      When there is a single queue attached to the AIO context, the issue
      is mitigated by laio_io_unplug(), which flushes the queue every
      time it is invoked, since no other queues can be plugged.
      
      Let's use the new `aio-max-batch` IOThread parameter to mitigate
      this issue, limiting the number of requests in a batch.
      
      We also define a default value (32), obtained by running some
      benchmarks; it represents a good tradeoff between the latency
      increase while a request is queued and the cost of the io_submit(2)
      system call.
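The flush condition described above can be sketched as follows. This is a hypothetical model, not QEMU's actual laio code: the identifiers `effective_batch_limit` and `should_flush` are illustrative, but the constants (MAX_EVENTS = 1024, default batch = 32) and the "0 means use the default" semantics come from the commits in this pull request.

```c
/* Hypothetical sketch of the batch-limit logic, not QEMU's actual code.
 * The batch is submitted with io_submit(2) once the number of queued
 * requests reaches the effective limit: the configured aio-max-batch
 * (or the default, 32, when it is 0), capped at MAX_EVENTS. */
#define MAX_EVENTS 1024
#define DEFAULT_MAX_BATCH 32

unsigned effective_batch_limit(unsigned aio_max_batch)
{
    /* aio-max-batch == 0 selects the engine's default batch size */
    unsigned limit = aio_max_batch ? aio_max_batch : DEFAULT_MAX_BATCH;
    /* the batch can never exceed the AIO engine queue size */
    return limit < MAX_EVENTS ? limit : MAX_EVENTS;
}

int should_flush(unsigned queued, unsigned aio_max_batch)
{
    return queued >= effective_batch_limit(aio_max_batch);
}
```

With the default, a queue of 31 requests waits for more work (or an unplug), while the 32nd request triggers an immediate io_submit(2), bounding the latency a request can accumulate in the queue.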
      
      Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
      Message-id: 20210721094211.69853-4-sgarzare@redhat.com
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      d7ddd0a1
    • Stefano Garzarella's avatar
      iothread: add aio-max-batch parameter · 1793ad02
      Stefano Garzarella authored and Stefan Hajnoczi committed
      
      The `aio-max-batch` parameter is propagated to AIO engines and
      used to control the maximum number of queued requests.
      
      When the number of queued requests reaches `aio-max-batch`, the
      engine invokes the system call that forwards the requests to the kernel.
      
      This parameter allows us to control the maximum batch size to reduce
      the latency that requests might accumulate while queued in the AIO
      engine queue.
      
      If `aio-max-batch` is equal to 0 (default value), the AIO engine will
      use its default maximum batch size value.
      
      Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
      Message-id: 20210721094211.69853-3-sgarzare@redhat.com
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      1793ad02
    • Stefano Garzarella's avatar
      iothread: generalize iothread_set_param/iothread_get_param · 0445409d
      Stefano Garzarella authored and Stefan Hajnoczi committed
      
      These changes prepare for the next patches, which add a new
      parameter not related to the poll mechanism.
      
      Let's add two new generic functions (iothread_set_param and
      iothread_get_param) that we use to set and get IOThread
      parameters.
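The generalization described above can be sketched like this. It is a hypothetical illustration of the pattern (one name-keyed getter/setter pair replacing per-parameter accessors); the struct layout and the `param_ptr` helper are invented for the sketch and are not QEMU's actual API, though the parameter names `poll-max-ns` and `aio-max-batch` are real IOThread properties.

```c
#include <string.h>

/* Hypothetical sketch, not QEMU's actual code: generic set/get functions
 * keyed by parameter name, so adding a parameter (e.g. aio-max-batch)
 * only requires a new entry in the lookup, not new accessor pairs. */
typedef struct IOThreadParams {
    long poll_max_ns;
    long aio_max_batch;
} IOThreadParams;

/* map a parameter name to the field that stores it (illustrative helper) */
long *param_ptr(IOThreadParams *p, const char *name)
{
    if (strcmp(name, "poll-max-ns") == 0) {
        return &p->poll_max_ns;
    }
    if (strcmp(name, "aio-max-batch") == 0) {
        return &p->aio_max_batch;
    }
    return NULL;
}

int iothread_set_param(IOThreadParams *p, const char *name, long value)
{
    long *slot = param_ptr(p, name);
    if (!slot) {
        return -1; /* unknown parameter */
    }
    *slot = value;
    return 0;
}

long iothread_get_param(IOThreadParams *p, const char *name)
{
    long *slot = param_ptr(p, name);
    return slot ? *slot : -1;
}
```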
      
      Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
      Message-id: 20210721094211.69853-2-sgarzare@redhat.com
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      0445409d
    • Peter Maydell's avatar
      Merge remote-tracking branch 'remotes/cleber-gitlab/tags/python-next-pull-request' into staging · 033bd16b
      Peter Maydell authored
      
      Acceptance Tests
      
      - Fix for tests/acceptance/virtio-gpu.py to match the change in device
        name
      - Fix for failure caught by tests/acceptance/multiprocess.py
      
      PS: While not a maintainer for the subsystem in PATCH 7, I'm including
      it as a one-off to facilitate the landing of the fix as discussed in
      the mailing list.
      
      # gpg: Signature made Wed 21 Jul 2021 00:26:17 BST
      # gpg:                using RSA key 7ABB96EB8B46B94D5E0FE9BB657E8D33A5F209F3
      # gpg: Good signature from "Cleber Rosa <crosa@redhat.com>" [marginal]
      # gpg: WARNING: This key is not certified with sufficiently trusted signatures!
      # gpg:          It is not certain that the signature belongs to the owner.
      # Primary key fingerprint: 7ABB 96EB 8B46 B94D 5E0F  E9BB 657E 8D33 A5F2 09F3
      
      * remotes/cleber-gitlab/tags/python-next-pull-request:
        remote/memory: Replace share parameter with ram_flags
        tests/acceptance/virtio-gpu.py: provide kernel and initrd hashes
        tests/acceptance/virtio-gpu.py: use virtio-vga-gl
        tests/acceptance/virtio-gpu.py: combine kernel command line
        tests/acceptance/virtio-gpu.py: combine CPU tags
        tests/acceptance/virtio-gpu.py: combine x86_64 arch tags
        tests/acceptance/virtio-gpu.py: use require_accelerator()
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      033bd16b
  2. Jul 20, 2021