1. 21 Oct, 2020: 1 commit
  2. 17 Oct, 2020: 2 commits
    • Eliminate unnecessary linter opt-outs (#21935) · 49c35b61
      Committed by Chris Bracken
      Eliminates FLUTTER_NOLINT opt-outs where they can be removed without
      triggering lint failures.
    • Enable loading snapshots with sound null safety enabled. (#21820) · 5bd7260a
      Committed by Chinmay Garde
      Snapshots compiled with sound null-safety enabled require changes to the way in
      which isolates are launched. Specifically, the `Dart_IsolateFlags::null_safety`
      field needs to be known upfront. The value of this field can only be determined
      once the kernel snapshot is available. This poses a problem in the engine
      because the engine used to launch the isolate at shell initialization and only
      needed the kernel mappings later at isolate launch (when transitioning the root
      isolate to the `DartIsolate::Phase::Running` phase). This patch delays launch of
      the isolate on the UI task runner until a kernel mapping is available. The side
      effects of this delay (callers no longer having access to the non-running
      isolate handle) have been addressed in this patch. The DartIsolate API has also
      been amended to hide the method that could return a non-running isolate to the
      caller.  Instead, it has been replaced with a method that requires a valid
      isolate configuration that returns a running root isolate. The isolate will be
      launched by asking the isolate configuration for its null-safety
      characteristics.
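      A minimal sketch of that launch flow, assuming a hypothetical
      `IsolateConfiguration` struct (the engine's real type differs); the
      embedder calls (`Dart_IsolateFlagsInitialize`, `Dart_CreateIsolateGroup`)
      are the ones involved:

      ```cpp
      #include "third_party/dart/runtime/include/dart_api.h"

      // Hypothetical stand-in for the engine's isolate configuration. The key
      // point: null_safety_enabled can only be computed once the kernel
      // snapshot bytes are available.
      struct IsolateConfiguration {
        const char* script_uri;
        const uint8_t* isolate_snapshot_data;
        const uint8_t* isolate_snapshot_instructions;
        bool null_safety_enabled;
      };

      // Sketch of the deferred launch: by the time this runs on the UI task
      // runner, the kernel mapping exists, so the flags can be filled in.
      Dart_Isolate LaunchRunningRootIsolate(const IsolateConfiguration& config,
                                            char** error) {
        Dart_IsolateFlags flags;
        Dart_IsolateFlagsInitialize(&flags);
        flags.null_safety = config.null_safety_enabled;

        Dart_Isolate isolate = Dart_CreateIsolateGroup(
            config.script_uri, /*name=*/"main", config.isolate_snapshot_data,
            config.isolate_snapshot_instructions, &flags,
            /*isolate_group_data=*/nullptr, /*isolate_data=*/nullptr, error);
        // The patch also transitions the isolate to
        // DartIsolate::Phase::Running before handing it to callers (elided).
        return isolate;
      }
      ```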
      
      A side effect of enabling null-safety is that Dart APIs that work with legacy
      types will now terminate the process if used with an isolate that has sound
      null-safety enabled. These APIs may no longer be used in the engine. This
      primarily affects the Dart Converters in Tonic that convert certain C++ objects
      into their Dart counterparts. All known Dart Converters have been updated to
      convert C++ objects to non-nullable Dart types inferred using type traits of the
      corresponding C++ object. The few spots in the engine that used the old Dart
      APIs directly have been manually updated. To ensure that no usage of the legacy
      APIs remain in the engine (as these would cause runtime process terminations),
      the legacy APIs were prefixed with the `DART_LEGACY_API` macro and the macro
      defined to `[[deprecated]]` in all engine translation units. While the engine
      now primarily works with non-nullable Dart types, callers can still use
      `Dart_TypeToNullableType` to acquire nullable types for use directly or with
      Tonic. One use case that is not addressed with the Tonic Dart Converters is the
      creation of non-nullable lists of nullable types. This hasn’t come up so far in
      the engine.
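      The deprecation-macro trick is easy to show in isolation. A toy version
      (not the actual Dart headers; the function names here are made up):

      ```cpp
      #include <cstdio>

      // In engine translation units the macro expands to [[deprecated]], so
      // any lingering call to a legacy-typed API surfaces as a compiler
      // diagnostic (an error under -Werror=deprecated-declarations).
      #define DART_LEGACY_API [[deprecated]]

      DART_LEGACY_API void LegacyToDart() { std::puts("legacy types"); }
      void SoundToDart() { std::puts("non-nullable types"); }

      int main() {
        SoundToDart();
        // LegacyToDart();  // uncommenting warns: 'LegacyToDart' is deprecated
        return 0;
      }
      ```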
      
      A minor related change is reworking tonic to define a single library target.
      This allows the various tonic subsystems to depend on one another. Primarily,
      this is used to make the Dart converters use the logging utilities. This now
      allows errors to be more descriptive as the presence of error handles is caught
      (and logged) earlier.
      
      Fixes https://github.com/flutter/flutter/issues/59879
  3. 12 Sep, 2020: 1 commit
    • Clean up C++ includes (#21127) · 08dabe96
      Committed by Chris Bracken
      Cleans up header order/grouping for consistency: associated header, C/C++ system/standard library headers, library headers, platform-specific #includes.
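      The ordering can be illustrated with a hypothetical engine source file
      (paths below are examples only, not files touched by this patch):

      ```cpp
      // engine.cc (hypothetical): the include grouping this cleanup applies.

      #include "flutter/shell/common/engine.h"  // 1. associated header

      #include <cstring>                        // 2. C/C++ standard library
      #include <memory>

      #include "flutter/fml/logging.h"          // 3. library headers
      #include "rapidjson/document.h"

      #if defined(OS_WIN)
      #include <windows.h>                      // 4. platform-specific last
      #endif
      ```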
      
      Adds <cstring> where strlen and memcpy are used: there are a bunch of places that were relying on transitive includes.
      
      Applies linter-required cleanups. Disables linter on one file due to included RapidJson header. See https://github.com/flutter/flutter/issues/65676
      
      This patch does not cover flutter/shell/platform/darwin. There's a separate, slightly more intensive cleanup for those in progress.
  4. 03 Sep, 2020: 1 commit
  5. 01 Aug, 2020: 1 commit
  6. 23 Jul, 2020: 1 commit
  7. 01 Jul, 2020: 1 commit
  8. 05 Mar, 2020: 1 commit
  9. 31 Jan, 2020: 1 commit
  10. 11 Dec, 2019: 1 commit
    • Create separate objects for isolate state and isolate group state (#14268) · b7d4278b
      Committed by Jason Simmons
      Isolate data may need to be deleted on the same thread where it was allocated.
      In particular, the task observer set up in the UIDartState ctor must be removed
      from the same message loop where it was added.
      
      The engine had been using the same DartIsolate object as the root isolate data
      and as the isolate group data.  This object would be deleted when the isolate
      group was shut down.  However, group shutdown may occur on a thread associated
      with a secondary isolate.  When this happens, cleanup of any state tied to the
      root isolate's thread will fail.
      
      This change adds a DartIsolateGroupData object holding state that is common
      among all isolates in a group.  DartIsolateGroupData can be deleted on any
      thread.
      
      See https://github.com/flutter/flutter/issues/45578
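      A sketch of the shape of the fix, using the Dart embedder's separate
      isolate and isolate-group cleanup callbacks (struct contents and names
      below are illustrative, not the engine's exact classes):

      ```cpp
      #include <string>

      struct DartIsolateGroupData {
        // State shared by every isolate in the group; nothing here is
        // thread-affine, so deletion is safe on any thread.
        std::string advisory_script_uri;
      };

      struct DartIsolateData {
        // Root-isolate state (e.g. the UIDartState task observer) that must
        // be torn down on the thread/message loop that created it.
      };

      // Shape of Dart_IsolateGroupCleanupCallback: may run on a secondary
      // isolate's thread during group shutdown.
      void OnIsolateGroupCleanup(void* isolate_group_data) {
        delete static_cast<DartIsolateGroupData*>(isolate_group_data);
      }

      // Shape of Dart_IsolateShutdownCallback: runs per isolate, so cleanup
      // tied to the isolate's own thread can be performed (or posted) here.
      void OnIsolateShutdown(void* isolate_group_data, void* isolate_data) {
        (void)isolate_group_data;
        delete static_cast<DartIsolateData*>(isolate_data);
      }
      ```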
  11. 16 Nov, 2019: 1 commit
  12. 01 Nov, 2019: 1 commit
  13. 23 Oct, 2019: 1 commit
    • Roll Dart to 6a65ea9cad4b014f88d2f1be1b321db493725a1c. (#13294) · f6900001
      Committed by Ryan Macnak
      Remove dead shared snapshot arguments to Dart_CreateIsolateGroup.
      
      6a65ea9cad4b [vm] Remove shared snapshot and reused instructions features.
      db8370e36147 [gardening] Fix frontend-server dartdevc windows test.
      4601bd7bffea Modified supertype check error message to be more descriptive.
      0449905e2de6 [CFE] Add a serialization-and-unserialization step to strong test
      c8b903c2f94f Update CHANGELOG.md
      2a12a13d9684 [Test] Skips emit_aot_size_info_flag_test on crossword.
      b26127fe01a5 [cfe] Add reachability test skeleton
  14. 22 Oct, 2019: 1 commit
  15. 11 Oct, 2019: 2 commits
  16. 24 Sep, 2019: 1 commit
  17. 24 Aug, 2019: 1 commit
  18. 16 Jul, 2019: 1 commit
  19. 10 Jul, 2019: 1 commit
    • Rework image & texture management to use concurrent message queues. (#9486) · ad582b50
      Committed by Chinmay Garde
      This patch reworks image decompression and collection in the following ways
      because of misbehavior in the described edge cases.
      
      The current flow for realizing a texture on the GPU from a blob of compressed
      bytes is to first pass it to the IO thread for image decompression and then
      upload to the GPU. The handle to the texture on the GPU is then passed back to
      the UI thread so that it can be included in subsequent layer trees for
      rendering. The GPU contexts on the Render & IO threads are in the same
      sharegroup so the texture ends up being visible to the Render Thread context
      during rendering. This works fine and does not block the UI thread. All
      references to the image are owned on UI thread by Dart objects. When the final
      reference to the image is dropped, the texture cannot be collected on the UI
      thread (because it has no GPU context). Instead, it must be passed to either
      the GPU or IO threads. The GPU thread is usually in the middle of a frame
      workload so we redirect the same to the IO thread for eventual collection. While
      texture collections are usually (comparatively) fast, texture decompression and
      upload are slow (order of magnitude of frame intervals).
      
      For applications that end up creating (but not necessarily using) numerous large
      textures in straight-line execution, it could be the case that texture
      collection tasks are pending on the IO task runner after all the image
      decompressions (and upload) are done. Put simply, the collection of the first
      image could be waiting for the decompression and upload of the last image in the
      queue.
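      A toy repro of that head-of-line blocking, with a single-threaded runner
      standing in for the engine's IO task runner (illustrative only, not
      engine code):

      ```cpp
      #include <chrono>
      #include <condition_variable>
      #include <cstdio>
      #include <functional>
      #include <mutex>
      #include <queue>
      #include <thread>

      // Minimal serial task runner: tasks run strictly in posting order.
      class TaskRunner {
       public:
        TaskRunner() : worker_([this] { Loop(); }) {}
        ~TaskRunner() {
          PostTask({});  // empty task = shutdown signal
          worker_.join();
        }
        void PostTask(std::function<void()> task) {
          {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
          }
          cv_.notify_one();
        }

       private:
        void Loop() {
          for (;;) {
            std::function<void()> task;
            {
              std::unique_lock<std::mutex> lock(mutex_);
              cv_.wait(lock, [this] { return !tasks_.empty(); });
              task = std::move(tasks_.front());
              tasks_.pop();
            }
            if (!task) return;
            task();
          }
        }
        std::mutex mutex_;
        std::condition_variable cv_;
        std::queue<std::function<void()>> tasks_;
        std::thread worker_;
      };

      int main() {
        TaskRunner io_runner;
        for (int i = 0; i < 3; i++) {
          io_runner.PostTask([i] {  // slow decompress+upload jobs, queued first
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            std::printf("decompressed and uploaded image %d\n", i);
          });
        }
        // The cheap collection of the *first* image still waits for all of them.
        io_runner.PostTask([] { std::printf("collected first image's texture\n"); });
      }
      ```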
      
      This is exacerbated by two other hacks added to workaround unrelated issues.
      * First, creating a codec with a single image frame immediately kicks off
        decompression and upload of that frame image (even if the frame was never
        requested from the codec). This hack was added because we wanted to get rid of
        the compressed image allocation ASAP. The expectation was codecs would only be
        created with the sole purpose of getting the decompressed image bytes.
        However, for applications that only create codecs to get image sizes (but
        never actually decompress the same), we would end up replacing the compressed
        image allocation with a larger allocation (device resident no less) for no
        obvious use. This issue is particularly insidious when you consider that the
        codec is usually asked for the native image size first before the frame is
        requested at a smaller size (usually using a new codec with the same data but
        a new target size). This would cause the creation of a whole extra texture (at 1:1)
        when the caller was trying to “optimize” for memory use by requesting a
        texture of a smaller size.
      * Second, all image collections were delayed by the unref queue by 250ms
        because of observations that the calling thread (the UI thread) was being
        descheduled unnecessarily when a task with a timeout of zero was posted from
        the same (recall that a task has to be posted to the IO thread for the
        collection of that texture). 250ms is multiple frame intervals worth of
        potentially unnecessary textures.
      
      The net result of these issues is that we may end up creating textures when all
      that the application needs is to ask its codec for details about the same (but
      not necessarily access its bytes). Texture collection could also be delayed
      behind other jobs to decompress the textures on the IO thread. Also, all texture
      collections are delayed for an arbitrary amount of time.
      
      These issues cause applications to be susceptible to OOM situations. These
      situations manifest in various ways. Host memory exhaustion causes the usual OOM
      issues. Device memory exhaustion seems to manifest in different ways on iOS and
      Android. On Android, allocation of a new texture seems to be causing an
      assertion (in the driver). On iOS, the call hangs (presumably waiting for
      another thread to release textures which we won’t do because those tasks are
      blocked behind the current task completing).
      
      To address peak memory usage, the following changes have been made:
      * Image decompression and upload/collection no longer happen on the same thread.
        All image decompression will now be handled on a workqueue. The number of
        worker threads in this workqueue is equal to the number of processors on the
        device. These threads have a lower priority than either the UI or Render
        threads. These workers are shared between all Flutter applications in the
        process (a toy version of such a workqueue is sketched after this list).
      * Both the images and their codec now report the correct allocation size to Dart
        for GC purposes. The Dart VM uses this to pick objects for collection. Earlier
        the image allocation was assumed to be 32bpp with no mipmapping overhead
        reported. Now, the correct image size is reported and the mipmapping overhead
        is accounted for. Image codec sizes were not reported to the VM earlier and
        now are. Expect “External” VM allocations to be higher than previously
        reported and the numbers in Observatory to line up more closely with actual
        memory usage (device and host).
      * Decoding images to a specific size used to decode at 1:1 and then resize to
        the correct dimensions before texture upload. This has now been
        reworked so that images are first decompressed to a smaller size supported
        natively by the codec before final resizing to the requested target size. The
        intermediate copy is now smaller and more promptly collected. Resizing also
        happens on the workqueue worker.
      * The drain interval of the unref queue is now sub-frame-interval. I am hesitant
        to remove the delay entirely because I have not been able to instrument the
        performance overhead of the same. That is next on my list. But now, multiple
        frame intervals worth of textures no longer stick around.
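      A toy version of the shared decompression workqueue described in the
      first bullet above (illustrative, not the engine's implementation):

      ```cpp
      #include <algorithm>
      #include <condition_variable>
      #include <cstdio>
      #include <functional>
      #include <mutex>
      #include <queue>
      #include <thread>
      #include <vector>

      // N worker threads (N = processor count) drain one shared task queue,
      // so image decodes run in parallel, off the UI/Render/IO threads.
      class ConcurrentWorkQueue {
       public:
        ConcurrentWorkQueue() {
          unsigned n = std::max(1u, std::thread::hardware_concurrency());
          for (unsigned i = 0; i < n; ++i) {
            workers_.emplace_back([this] { Loop(); });
          }
        }
        ~ConcurrentWorkQueue() {
          {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
          }
          cv_.notify_all();
          for (auto& worker : workers_) worker.join();
        }
        void PostTask(std::function<void()> task) {
          {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
          }
          cv_.notify_one();
        }

       private:
        void Loop() {
          for (;;) {
            std::function<void()> task;
            {
              std::unique_lock<std::mutex> lock(mutex_);
              cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
              if (tasks_.empty()) return;  // shutting down and drained
              task = std::move(tasks_.front());
              tasks_.pop();
            }
            task();
          }
        }
        std::mutex mutex_;
        std::condition_variable cv_;
        std::queue<std::function<void()>> tasks_;
        bool done_ = false;
        std::vector<std::thread> workers_;
      };

      int main() {
        ConcurrentWorkQueue workers;
        for (int i = 0; i < 8; ++i) {
          workers.PostTask([i] {
            // Stand-in for decompress-to-nearest-supported-size + resize.
            std::printf("decoded image %d on a worker thread\n", i);
          });
        }
      }  // destructor drains the queue, then joins the pool
      ```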
      
      The following issues have been addressed:
      * https://github.com/flutter/flutter/issues/34070 Since this was the first usage
        of the concurrent message loops, the number of idle wakes was determined to
        be too high, and this component has been rewritten to be simpler and not use
        the existing task runner and MessageLoopImpl interface.
      * Image decoding had no tests. A new `ui_unittests` suite has been added
        that sets up a GPU test harness on the host using SwiftShader. Tests have been
        added for image decompression, upload and resizing.
      * The device memory exhaustion in this benchmark has been addressed. That
        benchmark is still not viable for inclusion in any harness however because it
        creates 9 million codecs in straight-line execution. Because these codecs are
        destroyed in the microtask callbacks, these are referenced until those
        callbacks are executed. So now, instead of device memory exhaustion, this will
        lead to (slower) exhaustion of host memory. This is expected and working as
        intended.
      
      This patch only addresses peak memory use and makes collection of unused images
      and textures more prompt. It does NOT address memory use by images referenced
      strongly by the application or framework.
  20. 25 Apr, 2019: 1 commit
  21. 21 Apr, 2019: 1 commit
  22. 20 Apr, 2019: 1 commit
  23. 18 Apr, 2019: 2 commits
  24. 10 Apr, 2019: 2 commits
  25. 09 Apr, 2019: 1 commit
  26. 04 Apr, 2019: 1 commit
  27. 03 Apr, 2019: 1 commit
  28. 02 Apr, 2019: 1 commit
  29. 30 Mar, 2019: 2 commits
  30. 21 Feb, 2019: 1 commit
  31. 20 Feb, 2019: 1 commit
  32. 16 Feb, 2019: 1 commit
    • Shut down and restart the Dart VM as needed. (#7832) · 0d6ff166
      Committed by Chinmay Garde
      The shell was already designed to cleanly shut down the VM, but previously it
      couldn't: |Dart_Initialize| could never be called after a |Dart_Cleanup|. This
      meant that shutting down an engine instance could not shut down the VM to save
      memory because newly created engines in the process after that point couldn't
      restart the VM. There can only be one VM running in a process at a time.
      
      This patch separates the previous DartVM object into one that references a
      running instance of the DartVM and a set of immutable dependencies that
      components can reference even as the VM is shutting down.
      
      Unit tests have been added to assert that non-overlapping engine launches use
      different VM instances.
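      A sketch of that lifecycle (names illustrative, not the engine's exact
      classes): at most one VM runs per process, concurrent engines share it
      via strong references, and a later, non-overlapping launch boots a
      fresh VM.

      ```cpp
      #include <cstdio>
      #include <memory>
      #include <mutex>

      class DartVM {
       public:
        static std::shared_ptr<DartVM> Acquire() {
          static std::mutex mutex;
          static std::weak_ptr<DartVM> running_vm;
          std::lock_guard<std::mutex> lock(mutex);
          if (auto vm = running_vm.lock()) {
            return vm;  // join the VM that is already running
          }
          auto vm = std::shared_ptr<DartVM>(new DartVM());
          running_vm = vm;
          return vm;
        }
        ~DartVM() { std::puts("Dart_Cleanup()"); }  // real shutdown point
       private:
        DartVM() { std::puts("Dart_Initialize()"); }  // real boot point
      };

      int main() {
        auto vm1 = DartVM::Acquire();
        auto vm2 = DartVM::Acquire();  // same instance: one VM per process
        vm1.reset();
        vm2.reset();                   // last reference dropped: VM shuts down
        auto vm3 = DartVM::Acquire();  // non-overlapping launch: a new VM
      }
      ```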
  33. 16 Jan, 2019: 2 commits
  34. 15 Jan, 2019: 1 commit