1. 15 Nov 2019 (1 commit)
  2. 09 Nov 2019 (1 commit)
    • Turn on RasterCache based on view hierarchy (#13762) · 8a99d107
      Committed by Michael Klimushyn
      This is a duplicate of flutter/engine#13360 with the test switched to use the software backend instead of the GL backend.
      
      After some debugging and testing on another GL embedder, I think the test failure is caused by a bug specific to the GL implementation in the test harness.
      
      Fixes flutter/flutter#38903
  3. 08 Nov 2019 (1 commit)
    • Create a new picture recorder even when the embedder supplied render target is recycled. (#13744) · 7590336b
      Committed by Chinmay Garde
      The earlier assumption was that the render target would be re-materialized per frame. The render target needs its own picture recorder to be created per frame as well. When render targets are cached in the registry, an existing target is reused, but submitting the previous frame would have already discarded its recorder. The layer tree paint would then attempt to dereference a null canvas, causing a crash at runtime.
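
      A minimal sketch of the shape of the fix, assuming a hypothetical `RenderTargetSlot` wrapper around a cached embedder render target (only `SkPictureRecorder`, `SkSurface`, and `SkCanvas` are real Skia types):

      ```cpp
      #include <memory>

      #include "third_party/skia/include/core/SkCanvas.h"
      #include "third_party/skia/include/core/SkPictureRecorder.h"
      #include "third_party/skia/include/core/SkSurface.h"

      // Hypothetical wrapper around a render target cached in the registry.
      struct RenderTargetSlot {
        sk_sp<SkSurface> surface;                     // recycled across frames
        std::unique_ptr<SkPictureRecorder> recorder;  // must not be reused across frames
      };

      // Begin recording for this frame. Submitting the previous frame consumed the
      // old recorder's picture, so a fresh recorder is created even when the render
      // target itself comes out of the cache.
      SkCanvas* BeginFrame(RenderTargetSlot& slot, const SkRect& bounds) {
        slot.recorder = std::make_unique<SkPictureRecorder>();
        return slot.recorder->beginRecording(bounds);
      }
      ```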
      
      Added tests to ensure that this does not happen, both with and without a custom compositor specified by the embedder. I am going to rework this code so that the external view embedder thinks of render target access on a per-frame basis, but that is a larger change. This smaller patchset should unblock broken builds.
      
      Fixes b/144093523
  4. 07 Nov 2019 (1 commit)
  5. 31 Oct 2019 (2 commits)
  6. 30 Oct 2019 (1 commit)
  7. 25 Oct 2019 (2 commits)
  8. 24 Oct 2019 (1 commit)
    • Add FlutterEngineRunsAOTCompiledDartCode to the embedder API. (#13319) · 1663ac9e
      Committed by Chinmay Garde
      For embedder code that is configured for either AOT or JIT mode Dart execution
      depending on which Flutter engine it is linked against, this runtime check may be
      used to configure the `FlutterProjectArgs` appropriately. In JIT mode execution, the
      kernel snapshots must be present in the Flutter assets directory specified in
      the `FlutterProjectArgs`. For AOT execution, the fields `vm_snapshot_data`,
      `vm_snapshot_instructions`, `isolate_snapshot_data` and
      `isolate_snapshot_instructions` (along with their size fields) must be specified
      in `FlutterProjectArgs`.
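
      A hedged sketch of how an embedder might branch on this check when populating `FlutterProjectArgs`; the snapshot symbols and paths are placeholders, and the snapshot size fields mentioned above are omitted for brevity:

      ```cpp
      #include "flutter/shell/platform/embedder/embedder.h"

      // Placeholder symbols: in a real embedder these come from the AOT snapshot
      // (e.g. a loaded ELF/dylib) and are simply absent in JIT mode.
      extern const uint8_t* g_vm_snapshot_data;
      extern const uint8_t* g_vm_snapshot_instructions;
      extern const uint8_t* g_isolate_snapshot_data;
      extern const uint8_t* g_isolate_snapshot_instructions;

      FlutterProjectArgs MakeProjectArgs() {
        FlutterProjectArgs args = {};
        args.struct_size = sizeof(FlutterProjectArgs);
        args.assets_path = "/path/to/flutter_assets";  // placeholder
        args.icu_data_path = "/path/to/icudtl.dat";    // placeholder

        if (FlutterEngineRunsAOTCompiledDartCode()) {
          // AOT engine: point the args at the AOT snapshot blobs
          // (their corresponding *_size fields must be set as well).
          args.vm_snapshot_data = g_vm_snapshot_data;
          args.vm_snapshot_instructions = g_vm_snapshot_instructions;
          args.isolate_snapshot_data = g_isolate_snapshot_data;
          args.isolate_snapshot_instructions = g_isolate_snapshot_instructions;
        }
        // JIT engine: the kernel snapshots are read from assets_path instead.
        return args;
      }
      ```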
  9. 23 Oct 2019 (1 commit)
  10. 18 Oct 2019 (3 commits)
  11. 16 Oct 2019 (1 commit)
    • Allow embedders to specify a render task runner description. (#13124) · bf81971f
      Committed by Chinmay Garde
      Embedders may use this to specify a thread whose event loop is managed by them
      instead of the engine. In addition, specifying the same task runner for both
      the platform and render task runners allows embedders to effectively perform
      GPU rendering operations on the platform thread.
      
      To effect this change, the following non-breaking changes to the API have been
      made:
      
      * The `FlutterCustomTaskRunners` struct now has a new field `render_task_runner`
        for the specification of a custom render task runner.
      * The `FlutterTaskRunnerDescription` has a new field `identifier`. Embedders
        must supply a unique identifier for each task runner they specify. In
        addition, when describing multiple task runners that run their tasks on the
        same thread, their identifiers must match.
      * The embedder may need to process tasks during `FlutterEngineRun` and
        `FlutterEngineShutdown`. However, the embedder doesn't have the Flutter engine
        handle before `FlutterEngineRun` and is supposed to relinquish the handle right
        before `FlutterEngineShutdown`. Since the embedder needs the Flutter engine
        handle to service tasks on other threads while these calls are underway,
        there exist opportunities for deadlock. To work around this scenario, three
        new calls have been added that allow more deliberate management of the Flutter
        engine instance (a lifecycle sketch follows below).
        * `FlutterEngineRun` can be replaced with `FlutterEngineInitialize` and
          `FlutterEngineRunInitialized`. The embedder can obtain a handle to the
          engine after the first call but the engine will not post any tasks to custom
          task runners specified by the embedder till the
          `FlutterEngineRunInitialized` call. Embedders can guard the Flutter engine
          handle behind a mutex for safe task runner interop.
        * `FlutterEngineShutdown` can be preceded by the `FlutterEngineDeinitialize`
          call. After this call the Flutter engine will no longer post tasks onto
          embedder-managed task runners. It is still the embedder's responsibility to
          collect the Flutter engine handle via `FlutterEngineShutdown`.
      * To maintain backwards compatibility with the old APIs, `FlutterEngineRun` is
        now just a convenience for `FlutterEngineInitialize` and
        `FlutterEngineRunInitialized`. `FlutterEngineShutdown` now implicitly calls
        `FlutterEngineDeinitialize` as well. This allows existing users who don't care
        about custom task runner interop to keep using the old APIs.
      * Adds complete test coverage for both old and new paths.
      
      Fixes https://github.com/flutter/flutter/issues/42460
      Prerequisite for https://github.com/flutter/flutter/issues/17579
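
      A minimal sketch of the split lifecycle described above (the renderer config, project args, and custom task runner wiring are elided and must be fully populated by the embedder):

      ```cpp
      #include "flutter/shell/platform/embedder/embedder.h"

      bool StartEngine(const FlutterRendererConfig* config,
                       const FlutterProjectArgs* args,
                       void* user_data,
                       FLUTTER_API_SYMBOL(FlutterEngine)* engine_out) {
        // Obtain the engine handle first so that task runner callbacks firing
        // during startup can be serviced (guard the handle with a mutex if it is
        // shared across threads).
        if (FlutterEngineInitialize(FLUTTER_ENGINE_VERSION, config, args, user_data,
                                    engine_out) != kSuccess) {
          return false;
        }
        // Only after this call will the engine post tasks to the custom runners.
        return FlutterEngineRunInitialized(*engine_out) == kSuccess;
      }

      void StopEngine(FLUTTER_API_SYMBOL(FlutterEngine) engine) {
        // Stop the engine from posting further tasks onto embedder-managed runners,
        // then collect the handle. For embedders on the old single-call API,
        // FlutterEngineShutdown performs the deinitialize step implicitly.
        FlutterEngineDeinitialize(engine);
        FlutterEngineShutdown(engine);
      }
      ```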
  12. 10 Oct 2019 (1 commit)
  13. 09 Oct 2019 (1 commit)
    • Add a unit-test to verify that root surface transformations affect platform view coordinates. (#12783) · 03d1bba6
      Committed by Chinmay Garde
      
      See b/141980393 for details.
      
      In the issue, the embedder (assumed to render Flutter contents of size 800 x 600 [1]) is meant to be displayed on its side. To achieve this, it specifies a root surface transformation that translates the surface by its width (or height, when it is held in the correct viewing position) and then rotates it counter-clockwise by 90 degrees. This test verifies that the Flutter Engine accounts for those transformations in the custom compositor platform view coordinates.
      
      [1] The actual size is something different. 800x600 is for illustrative purposes.
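
      A hedged sketch of one such root surface transformation, assuming `FlutterTransformation` follows the same row-major affine layout as SkMatrix and is returned from the renderer config's surface transformation callback (the callback field name is an assumption; 800 is the illustrative width from above):

      ```cpp
      #include "flutter/shell/platform/embedder/embedder.h"

      // For W x H Flutter content shown rotated 90 degrees counter-clockwise on an
      // H x W display, map (x, y) -> (y, W - x):
      //   x' = 0*x + 1*y + 0
      //   y' = -1*x + 0*y + W
      FlutterTransformation RotatedRootSurfaceTransformation(double width) {
        FlutterTransformation xform = {};
        xform.scaleX = 0.0;  xform.skewX = 1.0;   xform.transX = 0.0;
        xform.skewY = -1.0;  xform.scaleY = 0.0;  xform.transY = width;
        xform.pert0 = 0.0;   xform.pert1 = 0.0;   xform.pert2 = 1.0;
        return xform;
      }

      // Wired into the embedder's renderer config, e.g. the OpenGL config's
      // surface transformation callback (assumed field name: surface_transformation).
      FlutterTransformation OnRootSurfaceTransformation(void* /* user_data */) {
        return RotatedRootSurfaceTransformation(/*width=*/800.0);
      }
      ```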
  14. 05 Oct 2019 (1 commit)
  15. 28 Sep 2019 (1 commit)
  16. 26 Sep 2019 (1 commit)
    • [fuchsia] Wire up OpacityLayer to Scenic (#11322) · fcc4ab32
      Committed by David Worsham
      On Fuchsia, add a build flag for compositing OpacityLayers using the system
      compositor vs Skia, which exposes a fast path for opacity via Scenic.
      This only works under certain circumstances; in particular, nested
      OpacityLayers will not render correctly!
      
      On Fuchsia, add a build flag for compositing PhysicalShapeLayers using
      the system compositor vs Skia. Set to off by default, which restores
      performant shadows on Fuchsia.
      
      Remove the opacity exposed from ChildView, as that was added mistakenly.
      
      Finally, we centralize the logic for switching between the
      system-composited and in-process-composited paths inside of
      ContainerLayer. We also centralize the logic for computing elevation
      there. This allows the removal of many OS_FUCHSIA-specific code-paths.
      
      Test: Ran workstation on Fuchsia; benchmarked before and after
      Bug: 23711
      Bug: 24163
      
      * Fix broken tests
  17. 24 Sep 2019 (1 commit)
  18. 18 Sep 2019 (2 commits)
    • Account for root surface transformation on the surfaces managed by the external view embedder. (#11384) · 1c7300ed
      Committed by Chinmay Garde
      
      The earlier design speculated that embedders could apply the same
      transformations to the layers after engine compositor presentation but before
      final composition.
      
      However, the linked issue points out that this design is not suitable for use
      with hardware overlay planes. When rendering to such planes, to apply the
      transformation before composition, embedders would have to render to an
      off-screen render target and then apply the transformation before presentation.
      This patch negates the need for that off-screen render pass.
      
      To be clear, the previous architecture is still fully viable. Embedders still
      have full control over layer transformations before composition. This is an
      optimization for the hardware overlay planes use-case.
      
      Fixes b/139758641
    • Shuffle test order and repeat test runs once. (#12275) · b4d81583
      Committed by Chinmay Garde
      The tests we write must be resilient to the order in which they are run in the
      harness. That is, they must not rely on global state set by other tests that
      have already run in the process. Also, these tests must themselves be
      repeatable. That is, they must correctly clean up after themselves and be able
      to run successfully again in the same process.
      
      This patch adds some safeguards against (but does NOT guarantee the absence of)
      tests that violate these requirements.
      
      Additionally, test failures must be easily reproducible for folks investigating
      the test failure. Also, tests that assert correctness of unrelated code must not
      stop progress on the author's patch.
      
      This change does not hinder reproducibility of test failures because the random
      seed is printed in the logs before running each test. Developers attempting to
      reproduce the failure locally can do the same via the following invocation
      `--gtest_shuffle --gtest_repeat=<the count> --gtest_random_seed=<seed from failing run>`.
      
      This change does introduce a potential burden on patch authors, who may see
      failures in unrelated code when a newly failing shuffle seed is used on their
      runs. To ameliorate this, we will formulate guidance for them to aggressively
      mark such tests as disabled and file bugs to enable the same.
      
      The test repeat count is intentionally kept low because its purpose is to test that
      individual tests are repeatable. It must not be used as a replacement for
      fuzzing.
  19. 10 Sep 2019 (1 commit)
  20. 27 Aug 2019 (1 commit)
    • Skip empty platform view overlays. (#11427) · a34f9a81
      Committed by Amir Hardon
      This change sets up a "spying canvas" to try to detect empty canvases.
      When using platform views with a custom embedder, if a platform view
      overlay canvas is known to be empty, we skip creating a compositor layer
      for that overlay.
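
      A conceptual sketch of the idea only (not the engine's actual class, which wraps an SkCanvas): the overlay's canvas records whether anything was drawn into it, and an untouched overlay produces no compositor layer.

      ```cpp
      // Conceptual sketch: any draw routed through the wrapper marks the overlay
      // as non-empty; untouched overlays are skipped at submission time.
      class SpyingOverlayCanvas {
       public:
        template <typename DrawOp>
        void Draw(DrawOp&& op) {
          did_draw_ = true;
          op();  // forward to the real canvas in an actual implementation
        }
        bool DidDrawIntoCanvas() const { return did_draw_; }

       private:
        bool did_draw_ = false;
      };

      // During frame submission, only non-empty overlays get an embedder layer
      // (and hence a backing store) created for them.
      bool ShouldCreateOverlayLayer(const SpyingOverlayCanvas& overlay) {
        return overlay.DidDrawIntoCanvas();
      }
      ```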
  21. 24 Aug 2019 (2 commits)
  22. 23 Aug 2019 (1 commit)
  23. 22 Aug 2019 (1 commit)
  24. 21 Aug 2019 (2 commits)
  25. 20 Aug 2019 (1 commit)
  26. 14 Aug 2019 (1 commit)
    • Allow embedder controlled composition of Flutter layers. (#10195) · e8f95440
      Committed by Chinmay Garde
      This patch allows embedders to split the Flutter layer tree into multiple
      chunks. These chunks are meant to be composed one on top of another. This gives
      embedders a chance to interleave their own contents between these chunks.
      
      The Flutter embedder API already provides hooks for the specification of
      textures for the Flutter engine to compose within its own hierarchy (for camera
      feeds, video, etc.). However, not all embedders can render the contents of such
      sources into textures the Flutter engine can accept. Moreover, this composition
      model may have overheads that are non-trivial for certain use cases. In such
      cases, the embedder may choose to specify multiple render targets for Flutter to
      render into instead of just one.
      
      The use of this API allows embedders to perform composition very similar to the
      iOS embedder. This composition model is used on that platform for the embedding
      of UIKit views such as web views and map views within the Flutter hierarchy.
      However, do note that iOS also has threading configurations that are currently
      not available to custom embedders.
      
      The embedder API updates in this patch are ABI stable and existing embedders
      will continue to work as normal. For embedders that want to enable this
      composition mode, the API is designed to make it easy to opt into the same in an
      incremental manner.
      
      Rendering of contents into the “root” rendering surface remains unchanged.
      However, now the application can push “platform views” via a scene builder.
      These platform views need to be handled by a FlutterCompositor specified in a new
      field at the end of the FlutterProjectArgs struct.
      
      When a new platform view is introduced within the layer tree, the compositor
      will ask the embedder to create a new render target for that platform view.
      Render targets can currently be OpenGL framebuffers, OpenGL textures or software
      buffers. The type of the render target returned by the embedder must be
      compatible with the root render surface. That is, if the root render surface is
      an OpenGL framebuffer, the render target for each platform view must either be a
      texture or a framebuffer in the same OpenGL context. New render target types as
      well as root renderers for newer APIs like Metal & Vulkan can and will be added
      in the future. The addition of these APIs will be done in an ABI & API stable
      manner.
      
      As Flutter renders frames, it gives the embedder a callback with information
      about the position of the various platform views in the effective hierarchy.
      The embedder is then meant to put the contents of the render targets that it
      set up and had previously given to the engine onto the screen (of course
      interleaving the contents of the platform views).
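
      A hedged sketch of the compositor wiring described above (callback bodies elided; field and callback names should be checked against embedder.h):

      ```cpp
      #include "flutter/shell/platform/embedder/embedder.h"

      // Called when the engine needs a render target for a platform view layer.
      // An embedder would create an OpenGL framebuffer/texture or a software
      // buffer compatible with its root render surface and describe it here.
      static bool OnCreateBackingStore(const FlutterBackingStoreConfig* config,
                                       FlutterBackingStore* backing_store_out,
                                       void* user_data) {
        // ... allocate a render target of config->size and fill backing_store_out ...
        return true;
      }

      static bool OnCollectBackingStore(const FlutterBackingStore* backing_store,
                                        void* user_data) {
        // ... release the previously created render target ...
        return true;
      }

      // Called per frame with the layer chunks (root contents and platform views)
      // in paint order; the embedder composites them, interleaving its own views.
      static bool OnPresentLayers(const FlutterLayer** layers,
                                  size_t layer_count,
                                  void* user_data) {
        // ... put each layer's contents on screen, in order ...
        return true;
      }

      void InstallCompositor(FlutterProjectArgs* args, void* user_data) {
        static FlutterCompositor compositor = {};
        compositor.struct_size = sizeof(FlutterCompositor);
        compositor.user_data = user_data;
        compositor.create_backing_store_callback = OnCreateBackingStore;
        compositor.collect_backing_store_callback = OnCollectBackingStore;
        compositor.present_layers_callback = OnPresentLayers;
        args->compositor = &compositor;  // new field at the end of FlutterProjectArgs
      }
      ```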
      
      Unit-tests have been added that test not only the structure and properties of
      layer hierarchy given to the compositor, but also the contents of the texels
      rendered by a test compositor using both the OpenGL and software rendering
      backends.
      
      Fixes b/132812775
      Fixes flutter/flutter#35410
  27. 09 Aug 2019 (1 commit)
  28. 07 Aug 2019 (1 commit)
    • Allow embedders to control Dart VM lifecycle on engine shutdown. (#10652) · b769353c
      Committed by Chinmay Garde
      This exposes the `Settings::leak_vm` flag to custom embedders. All embedder
      unit-tests now shut down the VM on the shutdown of the last engine in the
      process. The mechanics of VM shutdown are already tested in the Shell unit-test
      harness in the DartLifecycleUnittests set of assertions. This just exposes
      that functionality to custom embedders. Since it is part of the public stable
      API, I also switched the name of the field to be something less snarky than the
      field in private shell settings.
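
      A minimal sketch of opting in from an embedder; the public field name used here (`shutdown_dart_vm_when_done` on `FlutterProjectArgs`) is an assumption to be checked against embedder.h, only the behavior is described above:

      ```cpp
      #include "flutter/shell/platform/embedder/embedder.h"

      void ConfigureVmLifecycle(FlutterProjectArgs* args) {
        args->struct_size = sizeof(FlutterProjectArgs);
        // Assumed public counterpart of Settings::leak_vm: when set, the Dart VM is
        // shut down when the last engine in the process shuts down.
        args->shutdown_dart_vm_when_done = true;
      }
      ```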
  29. 17 Jul 2019 (1 commit)
  30. 10 Jul 2019 (1 commit)
    • Rework image & texture management to use concurrent message queues. (#9486) · ad582b50
      Committed by Chinmay Garde
      This patch reworks image decompression and collection in the following ways
      because of misbehavior in the described edge cases.
      
      The current flow for realizing a texture on the GPU from a blob of compressed
      bytes is to first pass it to the IO thread for image decompression and then
      upload to the GPU. The handle to the texture on the GPU is then passed back to
      the UI thread so that it can be included in subsequent layer trees for
      rendering. The GPU contexts on the Render & IO threads are in the same
      sharegroup so the texture ends up being visible to the Render Thread context
      during rendering. This works fine and does not block the UI thread. All
      references to the image are owned on UI thread by Dart objects. When the final
      reference to the image is dropped, the texture cannot be collected on the UI
      thread (because it has not GPU context). Instead, it must be passed to either
      the GPU or IO threads. The GPU thread is usually in the middle of a frame
      workload so we redirect the same to the IO thread for eventual collection. While
      texture collections are usually (comparatively) fast, texture decompression and
      upload are slow (order of magnitude of frame intervals).
      
      For applications that end up creating (but not necessarily using) numerous large
      textures in straight-line execution, it could be the case that texture
      collection tasks are pending on the IO task runner after all the image
      decompressions (and upload) are done. Put simply, the collection of the first
      image could be waiting for the decompression and upload of the last image in the
      queue.
      
      This is exacerbated by two other hacks added to work around unrelated issues.
      * First, creating a codec with a single image frame immediately kicks off
        decompression and upload of that frame image (even if the frame was never
        requested from the codec). This hack was added because we wanted to get rid of
        the compressed image allocation ASAP. The expectation was codecs would only be
        created with the sole purpose of getting the decompressed image bytes.
        However, for applications that only create codecs to get image sizes (but
        never actually decompress the same), we would end up replacing the compressed
        image allocation with a larger allocation (device resident no less) for no
        obvious use. This issue is particularly insidious when you consider that the
        codec is usually asked for the native image size first before the frame is
        requested at a smaller size (usually using a new codec with the same data but a new
        target size). This would cause the creation of a whole extra texture (at 1:1)
        when the caller was trying to “optimize” for memory use by requesting a
        texture of a smaller size.
      * Second, all image collections were delayed by the unref queue by 250ms
        because of observations that the calling thread (the UI thread) was being
        descheduled unnecessarily when a task with a timeout of zero was posted from
        the same (recall that a task has to be posted to the IO thread for the
        collection of that texture). 250ms is multiple frame intervals worth of
        potentially unnecessary textures.
      
      The net result of these issues is that we may end up creating textures when all
      that the application needs is to ask its codec for details about the same (but
      not necessarily access its bytes). Texture collection could also be delayed
      behind other jobs to decompress the textures on the IO thread. Also, all texture
      collections are delayed for an arbitrary amount of time.
      
      These issues cause applications to be susceptible to OOM situations. These
      situations manifest in various ways. Host memory exhaustion causes the usual OOM
      issues. Device memory exhaustion seems to manifest in different ways on iOS and
      Android. On Android, allocation of a new texture seems to be causing an
      assertion (in the driver). On iOS, the call hangs (presumably waiting for
      another thread to release textures which we won’t do because those tasks are
      blocked behind the current task completing).
      
      To address peak memory usage, the following changes have been made:
      * Image decompression and upload/collection no longer happen on the same thread.
        All image decompression will now be handled on a workqueue (a minimal sketch of
        such a workqueue follows after this list). The number of worker threads in this
        workqueue is equal to the number of processors on the device. These threads have
        a lower priority than either the UI or Render threads. These workers are shared
        between all Flutter applications in the process.
      * Both the images and their codec now report the correct allocation size to Dart
        for GC purposes. The Dart VM uses this to pick objects for collection. Earlier
        the image allocation was assumed to be 32bpp with no mipmapping overhead
        reported. Now, the correct image size is reported and the mipmapping overhead
        is accounted for. Image codec sizes were not reported to the VM earlier and
        now are. Expect “External” VM allocations to be higher than previously
        reported and the numbers in Observatory to line up more closely with actual
        memory usage (device and host).
      * Decoding images to a specific size used to decode at 1:1 and then resize to the
        requested dimensions before texture upload. This has now been
        reworked so that images are first decompressed to a smaller size supported
        natively by the codec before final resizing to the requested target size. The
        intermediate copy is now smaller and more promptly collected. Resizing also
        happens on the workqueue worker.
      * The drain interval of the unref queue is now sub-frame-interval. I am hesitant
        to remove the delay entirely because I have not been able to instrument the
        performance overhead of the same. That is next on my list. But now, multiple
        frame intervals worth of textures no longer stick around.
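
      A minimal sketch of such a concurrent workqueue (not the engine's actual implementation; thread priorities and Flutter integration are omitted):

      ```cpp
      #include <algorithm>
      #include <condition_variable>
      #include <deque>
      #include <functional>
      #include <mutex>
      #include <thread>
      #include <vector>

      // Fixed pool of workers, sized to the number of processors, draining a shared
      // task queue. Image decompression/resize jobs would be posted here so they
      // never block the UI, Render, or IO threads.
      class WorkQueue {
       public:
        WorkQueue() {
          const unsigned count = std::max(1u, std::thread::hardware_concurrency());
          for (unsigned i = 0; i < count; ++i) {
            workers_.emplace_back([this] { WorkerMain(); });
          }
        }

        ~WorkQueue() {
          {
            std::lock_guard<std::mutex> lock(mutex_);
            shutdown_ = true;
          }
          cv_.notify_all();
          for (auto& worker : workers_) {
            worker.join();
          }
        }

        void PostTask(std::function<void()> task) {
          {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push_back(std::move(task));
          }
          cv_.notify_one();
        }

       private:
        void WorkerMain() {
          for (;;) {
            std::function<void()> task;
            {
              std::unique_lock<std::mutex> lock(mutex_);
              cv_.wait(lock, [this] { return shutdown_ || !tasks_.empty(); });
              if (shutdown_ && tasks_.empty()) {
                return;
              }
              task = std::move(tasks_.front());
              tasks_.pop_front();
            }
            task();  // run outside the lock so workers stay concurrent
          }
        }

        std::mutex mutex_;
        std::condition_variable cv_;
        std::deque<std::function<void()>> tasks_;
        std::vector<std::thread> workers_;
        bool shutdown_ = false;
      };
      ```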
      
      The following issues have been addressed:
      * https://github.com/flutter/flutter/issues/34070 Since this was the first usage
        of the concurrent message loops, the number of idle wakes was determined to
        be too high, and this component has been rewritten to be simpler and not use
        the existing task runner and MessageLoopImpl interface.
      * Image decoding had no tests. The new `ui_unittests` harness has been added
        that sets up a GPU test harness on the host using SwiftShader. Tests have been
        added for image decompression, upload and resizing.
      * The device memory exhaustion in this benchmark has been addressed. That
        benchmark is still not viable for inclusion in any harness however because it
        creates 9 million codecs in straight-line execution. Because these codecs are
        destroyed in the microtask callbacks, these are referenced till those
        callbacks are executed. So now, instead of device memory exhaustion, this will
        lead to (slower) exhaustion of host memory. This is expected and working as
        intended.
      
      This patch only addresses peak memory use and makes collection of unused images
      and textures more prompt. It does NOT address memory use by images referenced
      strongly by the application or framework.
  31. 07 Jul 2019 (1 commit)
  32. 04 Jul 2019 (1 commit)
  33. 01 Jul 2019 (1 commit)