- 07 Apr 2021, 1 commit
  Committed by Zachary Anderson

- 03 Apr 2021, 1 commit
  Committed by Emmanuel Garcia

- 25 Mar 2021, 1 commit
  Committed by Zachary Anderson

- 23 Jan 2021, 1 commit
  Committed by gaaclarke
- 22 Jan 2021, 3 commits
  Committed by Emmanuel Garcia

  Committed by Dan Field
  This time making sure to deref the native object on GC.

  Committed by Dan Field
- 13 Jan 2021, 1 commit
  Committed by gaaclarke
  Did the plumbing refactor that allows us to call Dart_CreateIsolateInGroup when applicable.
- 23 Dec 2020, 1 commit

- 22 Dec 2020, 1 commit

- 16 Dec 2020, 1 commit
  Committed by Gary Qian

- 15 Dec 2020, 1 commit

- 12 Dec 2020, 1 commit
- 11 Dec 2020, 1 commit
  Committed by Dan Field
  This patch defaults the volatility bit on SkPaths to false, then flips it to true if the path survives at least two frames.
- 03 Dec 2020, 1 commit
  Committed by Gary Qian
- 22 Oct 2020, 2 commits
  Committed by Chinmay Garde
  This regression was introduced in https://github.com/flutter/engine/pull/21820 for sound null-safety. The settings used to launch the VM were incorrectly used to determine the isolate lifecycle callbacks. Since the first shell/engine in the process also starts the VM, these objects are usually identical. However, for subsequent shell/engine launches, the callbacks attached to the new settings object would be ignored. The unit-test harness is also structured so that each test case tears down the VM before the next. Consequently, all existing tests created a bespoke VM for the test run, and the tests that did create multiple isolates did not also test attaching callbacks to the settings object. Fixes https://github.com/flutter/engine/pull/22041

  Committed by Chinmay Garde
  Embedders that have access to the Dart native API (only Fuchsia at present) may perform library setup in the isolate create callback. The engine used to depend on the fact that the root isolate entrypoint is invoked in the next iteration of the message loop (via the `_startIsolate` trampoline in `isolate_patch.dart`) to ensure that library setup occurred before the main entrypoint was invoked. However, due to differences in the way message loops are set up on Fuchsia, this entrypoint was run before the callback could be executed. Dart code on Fuchsia also has the ability to access the underlying event loops directly. This patch moves the invocation of the create callback to before user Dart code has a chance to run. This difference in behavior on Fuchsia became an issue when isolate initialization was reworked in https://github.com/flutter/engine/pull/21820 for null-safety. Another issue discovered along the way was that the callback was being invoked twice; that is fixed here as well, with a test. Fixes https://github.com/flutter/flutter/issues/68732
- 21 Oct 2020, 1 commit
  Committed by Chinmay Garde
- 17 Oct 2020, 2 commits
  Committed by Chris Bracken
  Eliminates FLUTTER_NOLINT annotations where their removal can be landed without triggering lint failures.

  Committed by Chinmay Garde
  Snapshots compiled with sound null-safety enabled require changes to the way isolates are launched. Specifically, the `Dart_IsolateFlags::null_safety` field needs to be known upfront. The value of this field can only be determined once the kernel snapshot is available. This posed a problem in the engine because the engine used to launch the isolate at shell initialization and only needed the kernel mappings later, at isolate launch (when transitioning the root isolate to the `DartIsolate::Phase::Running` phase). This patch delays the launch of the isolate on the UI task runner until a kernel mapping is available. The side effects of this delay (callers no longer having access to the non-running isolate handle) have been addressed in this patch. The DartIsolate API has also been amended to hide the method that could return a non-running isolate to the caller. Instead, it has been replaced with a method that requires a valid isolate configuration and returns a running root isolate. The isolate is launched by asking the isolate configuration for its null-safety characteristics.

  A side effect of enabling null-safety is that Dart APIs that work with legacy types will now terminate the process if used with an isolate that has sound null-safety enabled. These APIs may no longer be used in the engine. This primarily affects the Dart converters in Tonic that convert certain C++ objects into their Dart counterparts. All known Dart converters have been updated to convert C++ objects to non-nullable Dart types inferred using type traits of the corresponding C++ object. The few spots in the engine that used the old Dart APIs directly have been updated manually. To ensure that no usage of the legacy APIs remains in the engine (as these would cause runtime process terminations), the legacy APIs were prefixed with the `DART_LEGACY_API` macro, and the macro is defined to `[[deprecated]]` in all engine translation units.

  While the engine now primarily works with non-nullable Dart types, callers can still use `Dart_TypeToNonNullableType` to acquire nullable types for use directly or with Tonic. One use case not addressed by the Tonic Dart converters is the creation of non-nullable lists of nullable types; this hasn't come up so far in the engine. A minor related change is reworking Tonic to define a single library target. This allows the various Tonic subsystems to depend on one another. Primarily, this is used to make the Dart converters use the logging utilities, which allows errors to be more descriptive since the presence of error handles is caught (and logged) earlier. Fixes https://github.com/flutter/flutter/issues/59879
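The `DART_LEGACY_API` mechanism described above might look roughly like this. This is a sketch of the pattern only; `Dart_SomeLegacyCall` is a hypothetical declaration, not a real Dart API, and the exact macro wiring in the engine sources may differ:

```cpp
// In the API header, legacy declarations carry a macro that expands to
// nothing by default, so ordinary embedders are unaffected:
#if !defined(DART_LEGACY_API)
#define DART_LEGACY_API
#endif

DART_LEGACY_API void Dart_SomeLegacyCall(void);  // hypothetical legacy API

// Engine translation units instead compile with the macro predefined to
// [[deprecated]] (e.g. -D "DART_LEGACY_API=[[deprecated]]"), so any
// remaining use of a legacy API produces a deprecation diagnostic that
// the build can treat as an error.
```

The point of the pattern is that the check is enforced at compile time, before a legacy call could terminate a sound null-safe process at runtime.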
- 12 Sep 2020, 1 commit
  Committed by Chris Bracken
  Cleans up header order/grouping for consistency: associated header, C/C++ system/standard library headers, library headers, platform-specific #includes. Adds <cstring> where strlen and memcpy are used: there are a number of places we used them transitively. Applies linter-required cleanups. Disables the linter on one file due to an included RapidJson header. See https://github.com/flutter/flutter/issues/65676. This patch does not cover flutter/shell/platform/darwin; a separate, slightly more intensive cleanup for those files is in progress.
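The ordering convention described above looks like this in practice. The file and header names below are illustrative, not taken from the patch:

```cpp
// In a hypothetical engine.cc: the associated header comes first,
#include "flutter/shell/common/engine.h"

// then C/C++ system and standard library headers,
#include <cstring>  // added where strlen/memcpy were used transitively
#include <memory>

// then library headers,
#include "third_party/skia/include/core/SkPath.h"

// then platform-specific #includes.
#if defined(OS_FUCHSIA)
#include <zircon/types.h>
#endif
```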
- 03 Sep 2020, 1 commit
  Committed by Dan Field
  Use hint freed specifically for image disposal.

- 01 Aug 2020, 1 commit
  Committed by Zachary Anderson
- 23 Jul 2020, 1 commit
  Committed by gaaclarke

- 01 Jul 2020, 1 commit
- 05 Mar 2020, 1 commit
  Committed by Dan Field
  Make the test harness reusable for other tests that want to launch a Dart VM.
- 31 Jan 2020, 1 commit
  Committed by Chinmay Garde
  These add no value to engine developers anymore and are not visible to external users because of the low log severity.
- 11 Dec 2019, 1 commit
  Committed by Jason Simmons
  Isolate data may need to be deleted on the same thread where it was allocated. In particular, the task observer set up in the UIDartState constructor must be removed from the same message loop where it was added. The engine had been using the same DartIsolate object as both the root isolate data and the isolate group data. This object would be deleted when the isolate group was shut down. However, group shutdown may occur on a thread associated with a secondary isolate. When this happens, cleanup of any state tied to the root isolate's thread will fail. This change adds a DartIsolateGroupData object holding state that is common to all isolates in a group; DartIsolateGroupData can be deleted on any thread. See https://github.com/flutter/flutter/issues/45578
- 16 Nov 2019, 1 commit
  Committed by Alexander Aprelev
  * Revert "Revert "Provide dart vm initalize isolate callback so that children isolates belong to parent's isolate group. (#9888)" (#12327)"
  * Ensure that when an isolate shuts down it calls the isolate_data callback rather than the isolate_group_data callback.
- 01 Nov 2019, 1 commit
  Committed by gaaclarke
  Put `Picture.toImage` back on the GPU thread. Left the unit tests intact.
- 23 Oct 2019, 1 commit
  Committed by Ryan Macnak
  Remove dead shared snapshot arguments to Dart_CreateIsolateGroup. Rolls in:
  * 6a65ea9cad4b [vm] Remove shared snapshot and reused instructions features.
  * db8370e36147 [gardening] Fix frontend-server dartdevc windows test.
  * 4601bd7bffea Modified supertype check error message to be more descriptive.
  * 0449905e2de6 [CFE] Add a serialization-and-unserialization step to strong test
  * c8b903c2f94f Update CHANGELOG.md
  * 2a12a13d9684 [Test] Skips emit_aot_size_info_flag_test on crossword.
  * b26127fe01a5 [cfe] Add reachability test skeleton
- 22 Oct 2019, 1 commit
  Committed by Jason Simmons
  Obtaining the SkiaUnrefQueue through the IOManager is unsafe because UIDartState holds a weak pointer to the IOManager that cannot be dereferenced on the UI thread.
- 11 Oct 2019, 2 commits
  Committed by Chinmay Garde
  This reverts commit e96c7404.

  Committed by Gary Qian
- 24 Sep 2019, 1 commit
  Committed by Alexander Aprelev
  * Update the secondary-isolate-launch test to verify that the secondary isolate is shut down before the root isolate exits.
  * ci/format.sh
- 24 Aug 2019, 1 commit
  Committed by Chinmay Garde
  We will end up creating fewer threads in tests.

- 16 Jul 2019, 1 commit
  Committed by gaaclarke
  Made Picture::toImage happen on the IO thread with no need for a surface.
- 10 Jul 2019, 1 commit
  Committed by Chinmay Garde
  This patch reworks image decompression and collection because of misbehavior in the edge cases described below.

  The current flow for realizing a texture on the GPU from a blob of compressed bytes is to first pass it to the IO thread for image decompression and then upload it to the GPU. The handle to the texture on the GPU is then passed back to the UI thread so that it can be included in subsequent layer trees for rendering. The GPU contexts on the Render and IO threads are in the same sharegroup, so the texture ends up being visible to the Render thread context during rendering. This works fine and does not block the UI thread. All references to the image are owned on the UI thread by Dart objects. When the final reference to the image is dropped, the texture cannot be collected on the UI thread (because it has no GPU context). Instead, it must be passed to either the GPU or IO thread. The GPU thread is usually in the middle of a frame workload, so we redirect the work to the IO thread for eventual collection. While texture collections are usually (comparatively) fast, texture decompression and upload are slow (on the order of frame intervals). For applications that end up creating (but not necessarily using) numerous large textures in straight-line execution, texture collection tasks could be pending on the IO task runner until after all the image decompressions (and uploads) are done. Put simply, the collection of the first image could be waiting on the decompression and upload of the last image in the queue.

  This is exacerbated by two other hacks added to work around unrelated issues:
  * First, creating a codec with a single image frame immediately kicks off decompression and upload of that frame's image (even if the frame was never requested from the codec). This hack was added because we wanted to get rid of the compressed image allocation as soon as possible. The expectation was that codecs would only be created for the sole purpose of getting the decompressed image bytes. However, for applications that only create codecs to get image sizes (but never actually decompress them), we would end up replacing the compressed image allocation with a larger allocation (device resident, no less) for no obvious use. This issue is particularly insidious when you consider that the codec is usually asked for the native image size first, before the frame is requested at a smaller size (usually using a new codec with the same data but a new target size). This would cause the creation of a whole extra texture (at 1:1) when the caller was trying to "optimize" for memory use by requesting a texture of a smaller size.
  * Second, all image collections were delayed in the unref queue by 250 ms because of observations that the calling thread (the UI thread) was being descheduled unnecessarily when a task with a timeout of zero was posted from it (recall that a task has to be posted to the IO thread for the collection of a texture). 250 ms is multiple frame intervals worth of potentially unnecessary textures.

  The net result of these issues is that we may end up creating textures when all the application needs is to ask its codec for details about an image (but not necessarily access its bytes). Texture collection could also be delayed behind other jobs decompressing textures on the IO thread, and all texture collections were delayed for an arbitrary amount of time. These issues make applications susceptible to OOM situations, which manifest in various ways. Host memory exhaustion causes the usual OOM issues. Device memory exhaustion seems to manifest differently on iOS and Android: on Android, allocation of a new texture seems to trigger an assertion (in the driver); on iOS, the call hangs (presumably waiting for another thread to release textures, which we won't do because those tasks are blocked behind the current task completing).

  To address peak memory usage, the following changes have been made:
  * Image decompression and upload/collection no longer happen on the same thread. All image decompression is now handled on a workqueue. The number of worker threads in this workqueue is equal to the number of processors on the device. These threads have a lower priority than either the UI or Render threads, and the workers are shared between all Flutter applications in the process.
  * Both the images and their codecs now report the correct allocation size to Dart for GC purposes. The Dart VM uses this to pick objects for collection. Earlier, the image allocation was assumed to be 32bpp with no mipmapping overhead reported. Now, the correct image size is reported and the mipmapping overhead is accounted for. Image codec sizes were not reported to the VM earlier and now are. Expect "External" VM allocations to be higher than previously reported and the numbers in Observatory to line up more closely with actual memory usage (device and host).
  * Decoding images to a specific size used to decode to 1:1 and then resize to the correct dimensions before texture upload. This has been reworked so that images are first decompressed to a smaller size supported natively by the codec before final resizing to the requested target size. The intermediate copy is now smaller and more promptly collected. Resizing also happens on a workqueue worker.
  * The drain interval of the unref queue is now sub-frame-interval. I am hesitant to remove the delay entirely because I have not been able to instrument the performance overhead of doing so; that is next on my list. But now, multiple frame intervals worth of textures no longer stick around.

  The following issues have been addressed:
  * https://github.com/flutter/flutter/issues/34070 Since this was the first usage of concurrent message loops, the number of idle wakes was determined to be too high, and this component has been rewritten to be simpler and not use the existing task runner and MessageLoopImpl interface.
  * Image decoding had no tests. A new `ui_unittests` harness has been added that sets up a GPU test harness on the host using SwiftShader. Tests have been added for image decompression, upload, and resizing.
  * The device memory exhaustion in this benchmark has been addressed. That benchmark is still not viable for inclusion in any harness, however, because it creates 9 million codecs in straight-line execution. Because these codecs are destroyed in the microtask callbacks, they are referenced until those callbacks are executed. So now, instead of device memory exhaustion, this leads to (slower) exhaustion of host memory. This is expected and working as intended.

  This patch only addresses peak memory use and makes collection of unused images and textures more prompt. It does NOT address memory use by images referenced strongly by the application or framework.
- 25 Apr 2019, 1 commit
  Committed by Zachary Anderson

- 21 Apr 2019, 1 commit
  Committed by Chinmay Garde