Checked with version: 2018.1
Native memory is a key component when optimizing applications, because most of the engine code resides in native memory. When you integrate code via Native Plugins, you can control its memory directly; however, it is not always possible to control and optimize the native memory consumption of Unity's internal systems. Internal systems use various buffers and resources, and it is not always apparent how they influence memory consumption. The following section details Unity's internal systems and explains memory data you often see in a native profiler.
Unity uses many different native allocators and buffers. Some are persistent, such as the constant buffer, while others are dynamic, such as the back buffer. The following subsections describe buffers and their behavior.
Unity stores constants in a 4MB buffer pool and cycles through the pool between frames. The pool is bound to the GPU for its entire lifetime and shows up in frame-capture tools such as Xcode or Snapdragon Profiler.
Unity uses block allocators in some internal systems. There is memory and CPU overhead whenever Unity needs to allocate a new page block of memory. Usually, a page block is large enough that this allocation only happens the first time Unity uses a system; after the first allocation, the page block is reused. There are small differences in how internal systems use the block allocator.
The first time you load an AssetBundle, additional CPU and memory overhead is incurred as the block allocators spin up and the AssetBundle system allocates its first page block of memory.
Unity reuses the pages that the AssetBundle system allocates; however, if you load many AssetBundles at once, you may cause a second or third block to be allocated. All of these blocks stay allocated until the application terminates.
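As a minimal sketch of this first-load cost (the file name "environment.bundle" is a hypothetical example), the first load pays the one-time allocator spin-up, and subsequent loads reuse the already-allocated page blocks:

```csharp
using System.Collections;
using System.IO;
using UnityEngine;

public class BundleLoader : MonoBehaviour
{
    IEnumerator Start()
    {
        // Hypothetical bundle path; this first load also pays the
        // one-time cost of spinning up the AssetBundle block allocators.
        var path = Path.Combine(Application.streamingAssetsPath, "environment.bundle");
        var request = AssetBundle.LoadFromFileAsync(path);
        yield return request;

        AssetBundle bundle = request.assetBundle;
        // Later loads reuse the page blocks allocated here.
    }
}
```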
Resources use a block allocator that is shared with other systems, so there is no extra CPU or memory overhead when you load an Asset from Resources for the first time; the allocation already happened earlier during application startup.
Unity uses a ring buffer to push textures to the GPU. You can adjust the size of this async texture upload buffer via QualitySettings.asyncUploadBufferSize. Note: Unity cannot return ring buffer memory to the system after it has been allocated.
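The buffer size is set in megabytes from script; a short sketch (the values here are illustrative, not recommendations):

```csharp
using UnityEngine;

public class AsyncUploadSetup : MonoBehaviour
{
    void Awake()
    {
        // Size of the async texture upload ring buffer, in megabytes.
        // Illustrative value: size it to fit your largest texture upload,
        // keeping in mind the memory is never returned to the system.
        QualitySettings.asyncUploadBufferSize = 16;

        // CPU time per frame (in ms) spent on texture uploads is also adjustable.
        QualitySettings.asyncUploadTimeSlice = 2;
    }
}
```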
Assets have both native and managed memory implications during runtime. Unlike managed memory, Unity returns native memory to the operating system when it is no longer needed. Since every byte counts, especially on mobile devices, you can try the following to reduce native runtime memory:
Remove unused channels from meshes
Remove redundant keyframes from animations
Check the Editor.log after a build to ensure that the size of each Asset on disk is proportional to its runtime memory use
Reduce the memory uploaded to the GPU by using the Texture Quality setting in the Rendering section of the Quality Settings to force lower texture resolutions via mipmaps
Normal maps need not be the same size as diffuse maps (1:1), so you can use a smaller resolution for normal maps while still achieving high visual fidelity and saving memory and disk space
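The Texture Quality setting above corresponds to QualitySettings.masterTextureLimit and can also be driven from script; a sketch (the 2GB memory threshold is a hypothetical example):

```csharp
using UnityEngine;

public class TextureQualityTuner : MonoBehaviour
{
    void Awake()
    {
        // masterTextureLimit drops the top mipmap levels:
        // 0 = full resolution, 1 = half resolution, 2 = quarter, and so on.
        if (SystemInfo.systemMemorySize < 2048) // hypothetical low-memory threshold
        {
            QualitySettings.masterTextureLimit = 1;
        }
    }
}
```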
Be aware that managed memory problems can often surpass native memory problems, due to heavy fragmentation of the managed heap.
Beware of cloned materials: accessing the material property of any Renderer causes the material to be cloned, even if you assign nothing to it. This cloned material is not garbage collected and is only cleaned up when you change Scenes or call Resources.UnloadUnusedAssets(). Use Renderer.sharedMaterial if you want read-only access to the material.
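A small illustration of the difference (the serialized field reference is hypothetical):

```csharp
using UnityEngine;

public class MaterialAccess : MonoBehaviour
{
    [SerializeField] Renderer targetRenderer; // hypothetical reference set in the Inspector

    void Start()
    {
        // Clones the material on first access; the clone lives until a
        // Scene change or a call to Resources.UnloadUnusedAssets().
        Material instanced = targetRenderer.material;

        // Returns the shared material Asset without cloning; treat it as
        // read-only, since edits affect every Renderer using this material.
        Material shared = targetRenderer.sharedMaterial;
    }
}
```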
Call SceneManager.UnloadSceneAsync() to destroy and unload the GameObjects associated with a Scene. Note: this does not unload the associated Assets. To unload the Assets and free both managed and native memory, call Resources.UnloadUnusedAssets() after the Scene has been unloaded.
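A sketch of the full unload sequence, assuming a Scene named "Level1" (the name is hypothetical):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

public class SceneCleanup : MonoBehaviour
{
    IEnumerator UnloadLevel()
    {
        // Destroys and unloads the GameObjects of the Scene,
        // but not the Assets they referenced.
        yield return SceneManager.UnloadSceneAsync("Level1");

        // Frees the now-unreferenced Assets, releasing both
        // managed and native memory.
        yield return Resources.UnloadUnusedAssets();
    }
}
```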
Unity dynamically sets voices as either virtual or real, depending on their real-time audibility. For example, Unity makes sounds that play far away or at a low volume virtual, but changes them back to real voices if they come closer or become louder. The default values in the Audio Settings work well for mobile devices.
|Max Virtual Voice Count|Max Real Voice Count|
|---|---|
|512|32|
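These values map to Unity's AudioConfiguration and can be read and changed at runtime; a sketch (the counts shown are illustrative):

```csharp
using UnityEngine;

public class VoiceCountSetup : MonoBehaviour
{
    void Awake()
    {
        AudioConfiguration config = AudioSettings.GetConfiguration();

        // Illustrative values: raising the real voice count costs CPU,
        // while virtual voices are comparatively cheap to track.
        config.numVirtualVoices = 512;
        config.numRealVoices = 32;

        // Applies the new configuration; note this restarts the audio system.
        AudioSettings.Reset(config);
    }
}
```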
Unity uses the DSP buffer size to control mixer latency. The underlying audio system, FMOD, defines the platform-dependent DSP buffer sizes. The buffer size influences latency and should be treated carefully. The number of buffers defaults to 4. Unity's audio system uses the following sample counts for the Audio Settings:
|Latency = Samples * Number of Buffers|Samples|Number of Buffers|
|---|---|---|
|Default|iOS & Desktop: 1024; Android: 512|4|
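The current values can be queried from script to estimate output latency using the formula from the table above:

```csharp
using UnityEngine;

public class DspLatency : MonoBehaviour
{
    void Start()
    {
        int bufferLength, numBuffers;
        AudioSettings.GetDSPBufferSize(out bufferLength, out numBuffers);

        // Latency in seconds = samples * number of buffers / sample rate.
        float latencySeconds =
            (float)(bufferLength * numBuffers) / AudioSettings.outputSampleRate;
        Debug.Log("Estimated output latency: " + latencySeconds * 1000f + " ms");
    }
}
```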
Using the correct settings can save runtime memory and CPU performance.
Enable the Force To Mono option on audio files that do not require stereo sound; doing so reduces runtime memory and disk space. This is most useful on mobile platforms with a mono speaker.
Set larger AudioClips to Streaming. In Unity 5.0 and later, streaming has a 200KB overhead per clip, so set audio files smaller than 200KB to Compressed In Memory instead.
For longer clips, import AudioClips as Compressed into Memory to save runtime memory (if the clips are not set to Streaming).
Use Decompress On Load only if you have plenty of memory but are constrained by CPU performance, as this option requires a significant amount of memory.
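These import settings can also be applied with an Editor script via the AudioImporter API; a sketch, assuming a hypothetical asset path and menu entry:

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEngine;

public static class AudioImportTuning
{
    [MenuItem("Tools/Apply Audio Import Settings")] // hypothetical menu entry
    static void Apply()
    {
        // Hypothetical asset path.
        var importer = (AudioImporter)AssetImporter.GetAtPath("Assets/Audio/music.ogg");

        importer.forceToMono = true; // stereo not needed on a mono speaker

        AudioImporterSampleSettings settings = importer.defaultSampleSettings;
        settings.loadType = AudioClipLoadType.Streaming;            // large clip
        settings.compressionFormat = AudioCompressionFormat.Vorbis; // longer clip
        importer.defaultSampleSettings = settings;

        importer.SaveAndReimport();
    }
}
#endif
```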
Various platforms also have preferred Compression Format settings to save runtime memory and disk space:
Set Compression Format to ADPCM for very short clips such as sound effects which are played often. ADPCM offers a fixed 3.5:1 compression ratio and is inexpensive to decompress.
Use Vorbis on Android for longer clips. Unity does not use hardware accelerated decoding.
Use MP3 or Vorbis on iOS for longer clips. Unity does not use hardware accelerated decoding.
MP3 and Vorbis need more resources for decompression but offer significantly smaller file sizes. High-quality MP3s require fewer resources to decompress, while middle- and low-quality files of either format require almost the same CPU time to decompress.
Tip: Use Vorbis for longer looping sounds, since it handles looping better. MP3 stores data in blocks of predetermined sizes, so if the loop is not an exact multiple of the block size, the MP3 encoding adds silence, while Vorbis does not.