My expo app is crashing in prod, how do I troubleshoot it?

I have been struggling too. I use native-base, and that was never a problem with a standalone build; I have released this app before.

I think there might be two things going on: one is having Sentry, and the other is my usage of AsyncStorage. AsyncStorage has been updated, but I feel the usage of one or both of those libraries is what is causing the app to crash on launch. I am going to remove both and see what happens.

EDIT: Disabling both did not solve my crash problems.


I suspect the problem with your app is not the same as @dmitryame’s. Try the suggestions that @notbrent posted in the third message in this thread to see if that gives you more info.

Correct, the problem is probably different. I have a separate thread going with my own crash logs pasted but no help yet from the community - so I keep poking my head in here to see if any suggestions help.

I went through the above suggestions, and none of them are pointing me at the true cause of what is broken with my app when running standalone.

@bycedric I followed your advice and got the prod logs from the device that crashes. I tried to keep it to the bare minimum log of events right around the crash (about 2 seconds’ worth).

The crash happened when I was minimizing the app by clicking the home button.

The lines that look interesting:

Jun  7 20:29:29 kernel[0] <Notice>: EXC_RESOURCE -> backboardd[591] exceeded mem limit: InactiveHard 2098 MB (fatal)

Jun  7 20:29:29 chronod(UIKitCore)[354] <Notice>: Received memory warning.


Jun  7 20:29:29 ComEchowaves(UIKitCore)[680] <Notice>: Received memory warning.

followed by

Jun  7 20:29:29 ComEchowaves(UIKitCore)[680] <Notice>: Received memory warning.


Jun  7 20:29:29 mediaserverd(MediaExperience)[352] <Notice>: -MXSession- -[MXSession(InterfaceImpl) _dealloc]: Deallocating <MXSessionID: 26, CoreSessionID = 28 Name = sid:0x16017b, corespeechd(648), 'prim' ,Muted = NO, Playing = NO, MutePriority = MutePriority_Two, PiP = NO, DoesntActuallyPlayAudio = NO, clientType = 2. Current MXSessions count for CoreSession = 0
Jun  7 20:29:29 kernel[0] <Notice>: 1805.067 memorystatus: killing_idle_process pid 107 [accountsd] (vm-pageshortage 0) 9632KB - memorystatus_available_pages: 7757

As you may have guessed, the application package is com.echowaves, so the log file can be grepped for echowaves.
I totally understand it may be tough to make sense of this pile of information, but in case someone can spot anything useful, I would appreciate any help very much!

Did you intend to paste the same log message twice?

I haven’t looked at the full log yet, but it looks like the device is running out of RAM. So maybe the app is using huge amounts itself, or it is somehow triggering (parts of) the OS to use too much.

Seems like a bad sign for backboardd to hit a fatal memory limit like that.

Good catch, sorry, it’s an unintended dup. Thanks for catching it.


Had a quick look at the log. Definitely out of RAM. The kernel killed a bunch of processes because of it:

Jun  7 20:29:29 kernel[0] <Notice>: 1805.088 memorystatus: killing_top_process pid 604 [securityd] (vm-pageshortage 1) 6480KB - memorystatus_available_pages: 7069
Jun  7 20:29:29 kernel[0] <Notice>: 1805.094 memorystatus: killing_top_process pid 616 [trustd] (vm-pageshortage 1) 3184KB - memorystatus_available_pages: 6715
Jun  7 20:29:29 wifid[353] <Notice>: __WiFiServerClientTerminationCallback: Client carkitd terminated, cleanup state
Jun  7 20:29:29 kernel[0] <Notice>: 1805.096 memorystatus: killing_top_process pid 355 [assistantd] (vm-pageshortage 1) 9696KB - memorystatus_available_pages: 5276
Jun  7 20:29:29 kernel[0] <Notice>: 1805.100 memorystatus: killing_top_process pid 391 [biometrickitd] (vm-pageshortage 1) 2512KB - memorystatus_available_pages: 5013
Jun  7 20:29:29 kernel[0] <Notice>: 1805.102 memorystatus: killing_top_process pid 324 [mediaremoted] (vm-pageshortage 1) 2976KB - memorystatus_available_pages: 5042
Jun  7 20:29:29 kernel[0] <Notice>: 1805.103 memorystatus: killing_top_process pid 383 [cloudpaird] (vm-pageshortage 1) 3760KB - memorystatus_available_pages: 5038
Jun  7 20:29:29 kernel[0] <Notice>: 1805.104 memorystatus: killing_top_process pid 483 [suggestd] (vm-pageshortage 1) 11616KB - memorystatus_available_pages: 5079
Jun  7 20:29:29 kernel[0] <Notice>: 1805.106 memorystatus: killing_top_process pid 384 [searchpartyd] (vm-pageshortage 1) 5952KB - memorystatus_available_pages: 5065
Jun  7 20:29:29 kernel[0] <Notice>: 1805.109 memorystatus: killing_top_process pid 331 [nanomediaremotelinkagent] (vm-pageshortage 3) 2752KB - memorystatus_available_pages: 5078
Jun  7 20:29:29 SpringBoard(FrontBoard)[619] <Notice>: [application<>:656] Workspace connection invalidated.
Jun  7 20:29:29 kernel[0] <Notice>: 1805.109 memorystatus: killing_top_process pid 329 [bulletindistributord] (vm-pageshortage 3) 6592KB - memorystatus_available_pages: 5095
Jun  7 20:29:29 kernel[0] <Notice>: 1805.111 memorystatus: killing_top_process pid 327 [appconduitd] (vm-pageshortage 3) 3104KB - memorystatus_available_pages: 5086
Jun  7 20:29:29 kernel[0] <Notice>: 1805.112 memorystatus: killing_top_process pid 330 [useractivityd] (vm-pageshortage 3) 2832KB - memorystatus_available_pages: 5155
Jun  7 20:29:29 kernel[0] <Notice>: 1805.113 memorystatus: killing_top_process pid 332 [nsurlsessiond] (vm-pageshortage 3) 5088KB - memorystatus_available_pages: 5470
Jun  7 20:29:29 kernel[0] <Notice>: 1805.114 memorystatus: killing_top_process pid 335 [newsd] (vm-pageshortage 3) 3296KB - memorystatus_available_pages: 5626
Jun  7 20:29:29 kernel[0] <Notice>: 1805.115 memorystatus: killing_top_process pid 336 [nptocompaniond] (vm-pageshortage 3) 4512KB - memorystatus_available_pages: 5839
Jun  7 20:29:29 kernel[0] <Notice>: 1805.116 memorystatus: killing_top_process pid 472 [destinationd] (vm-pageshortage 3) 4224KB - memorystatus_available_pages: 5940
Jun  7 20:29:29 kernel[0] <Notice>: 1805.117 memorystatus: killing_top_process pid 334 [parsecd] (vm-pageshortage 3) 4160KB - memorystatus_available_pages: 6150
Jun  7 20:29:29 kernel[0] <Notice>: 1805.118 memorystatus: killing_top_process pid 419 [cloudd] (vm-pageshortage 3) 11008KB - memorystatus_available_pages: 6637
Jun  7 20:29:29 kernel[0] <Notice>: 1805.119 memorystatus: killing_top_process pid 543 [axassetsd] (vm-pageshortage 3) 1152KB - memorystatus_available_pages: 6689
Jun  7 20:29:29 kernel[0] <Notice>: 1805.120 memorystatus: killing_top_process pid 657 [coreauthd] (vm-pageshortage 3) 2096KB - memorystatus_available_pages: 6759
Jun  7 20:29:29 kernel[0] <Notice>: 1805.120 memorystatus: killing_top_process pid 359 [awdd] (vm-pageshortage 3) 1904KB - memorystatus_available_pages: 6727
Jun  7 20:29:29 kernel[0] <Notice>: 1805.121 memorystatus: killing_top_process pid 639 [duetexpertd] (vm-pageshortage 3) 8448KB - memorystatus_available_pages: 6528
Jun  7 20:29:29 kernel[0] <Notice>: 1805.122 memorystatus: killing_top_process pid 354 [chronod] (vm-pageshortage 3) 8224KB - memorystatus_available_pages: 6780
Jun  7 20:29:29 kernel[0] <Notice>: 1805.124 memorystatus: killing_top_process pid 387 [dasd] (vm-pageshortage 3) 5472KB - memorystatus_available_pages: 6725

Before getting to that point it killed a bunch of “idle” processes.

Jun  7 20:29:29 kernel[0] <Notice>: 1804.845 memorystatus: killing_idle_process pid 586 [dprivacyd] (vm-pageshortage 0) 2208KB - memorystatus_available_pages: 17760
Jun  7 20:29:29 kernel[0] <Notice>: 1804.856 memorystatus: killing_idle_process pid 596 [ACCHWComponentAuthService] (vm-pageshortage 0) 1168KB - memorystatus_available_pages: 16931
Jun  7 20:29:29 kernel[0] <Notice>: 1804.858 memorystatus: killing_idle_process pid 597 [coresymbolicationd] (vm-pageshortage 0) 960KB - memorystatus_available_pages: 14687
Jun  7 20:29:29 kernel[0] <Notice>: 1804.866 memorystatus: killing_idle_process pid 592 [remotemanagementd] (vm-pageshortage 0) 1904KB - memorystatus_available_pages: 13868

and also notified processes that there was a shortage of memory:

Jun  7 20:29:29 chronod(UIKitCore)[354] <Notice>: Received memory warning.

How much RAM does the device have?

But I’m not doing anything fancy. The issue was reproduced on my device with 64 GB as well as on some friends’ devices with 128 GB.

Well, I think I may have finally traced it to a particular line of code that is causing the issue.

Here is what’s going on:
I’m implementing an image cache component. My app serves tons of images, so caching is essential.
The imgURI is a string pointing to a file in the cache folder, and I set it like this:

  const filesystemURI = `${CONST.IMAGE_CACHE_FOLDER}${cacheKey}`

  const [imgURI, setImgURI] = useState(filesystemURI)

The cacheKey is passed into the component as a prop.

I suspected for a while that the problem was with the loadImage method, which is invoked on line 21 in a useEffect hook. This method implements some logic that invokes the expo filesystem API, and I thought for a while that might be the cause of the issue, but when I commented out line 21 completely, I was still able to reproduce the crash. Doing this test would only render the cached images; images that had not been previously cached would show blank, which was fine for the purpose of the test.

Interestingly enough, it no longer crashes if I reference the remote URI (rather than the cached file location) on line 60. Instead of this:

        uri: imgURI,

use the uri which is passed in as a prop.


In this case the image will still be cached when calling the loadImage method, so the logic that invokes the filesystem API works reliably and does not cause the crash. It’s the react-native <Image/> that crashes when serving images from the cache, and it only crashes once there is a certain number of files in the cache folder. I tried the <Image/> tag from react-native-elements, and unfortunately it behaves exactly the same way: it crashes my device when serving images from the cache folder once there are lots of cached files.

Where do I take it from here? I would assume iOS should be responsible for cleaning the cache folder when the device gets low on memory? When I check iPhone storage in Settings, my app never shows as an offender, regardless of how many files are stored in the cache folder. I’m guessing the cache folder is not counted as part of the app’s footprint because it can be cleaned by iOS.

So, to sum it all up, there are 3 potential issues here:

  1. there is some issue with how the expo filesystem API interacts with the iOS application cache folder
  2. iOS does not treat the cache folder the way expo (or common sense) intends
  3. the <Image/> tag from react-native gets confused when serving images from the cache folder once there are too many files stored in that folder.

That was a long explanation; let me know if I can provide more details, or if anyone has suggestions on what to do next. Thank you all for being patient with me, and I appreciate everyone’s help.

I don’t know enough about iOS etc. to know what’s wrong, but RAM should not be used up just because you have a lot of files in the cache directory.

Is the app displaying a large number of images at the same time?

Random thought: what happens if you split the files into a bunch of subdirectories under the cache directory?
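To make the subdirectory suggestion concrete: one common scheme is to shard files by a short prefix of the cache key, so no single directory accumulates thousands of entries. This is just a sketch of the idea; shardedCachePath is a hypothetical helper, not part of expo or react-native:

```javascript
// Place each cached file in a subdirectory named after the first few
// characters of its key, e.g. 'ab12cd.img' -> 'ab/ab12cd.img'.
// This caps how many entries any one directory holds.
function shardedCachePath(cacheKey, shardLength = 2) {
  const shard = cacheKey.slice(0, shardLength);
  return `${shard}/${cacheKey}`;
}
```

With hex-hash keys and a 2-character shard, files spread across at most 256 subdirectories.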

but RAM should not be used up just because you have a lot of files in the cache directory.

I’m really not doing anything fancy or non-standard. Isn’t it the OS’s job to manage the cache folder and clean it up when the device runs low on memory? I don’t think the total size of the files in the cache is reflected anywhere in the device stats when I look in Settings/General/iPhone Storage, so the device should never run out of memory just because some app is writing files to the cache folder.

Is the app displaying a large number of images at the same time?

Again, nothing fancy is going on here. The app does not show all the images at once; it shows a few images at a time in a component derived from <FlatList/>, which renders images lazily and discards the ones that are off screen. And, as I mentioned before, it takes a long time to reproduce the crash, anywhere from 1 to 2 months (or longer) of normal usage, which is plenty of time for the OS to clean up the cache if it starts taking too much memory.

what happens if you split the files into a bunch of subdirectories under the cache directory?

I’ll think about it, thanks for the suggestion. I did try storing files at the root of the cache instead of creating a subfolder, and that didn’t help. I’m falling back to my original idea of cleaning up the cache folder myself every time the application starts. Not ideal, but it should work.

Not sure if this is such a rare occurrence that it affects only a handful of apps, but if anyone thinks this issue warrants a bug report to the react-native team, let me know; I’d be happy to submit it.

RAM and files stored on the phone’s flash storage are separate, so whether the phone is running out of RAM should not have anything to do with how many files there are in your app’s cache directory, or how big they are. Nevertheless it seems as if something is somehow using up a lot of RAM when your cache directory has a lot of files in it. I don’t know iOS well enough to answer your question, but I doubt the OS will automatically clear the cache. It might prompt you when your flash starts getting full and clear the cache if you agree to go ahead, though.

Yes, I agree! Yet it seems to be happening. So either our understanding of what is happening is flawed, or there is actually something that is using up more RAM when there are more files in your app’s cache.

Since there’s no inherent link between RAM usage and flash storage the OS would not know that cleaning up the cache would help with a RAM shortage.

I think without a way to easily reproduce the problem it will be difficult for anyone to fix it :-/
So it might be worth trying to get a better understanding of what’s going wrong before reporting a bug.
I assume you weren’t able to reproduce the problem with a simple app that just writes a lot of files to the cache. Did you perhaps try doing that and also displaying them in a FlatList like your app does?

Did you perhaps try doing that and also displaying them in a FlatList like your app does?

My test was rudimentary: I just tried dumping thousands of files into the cache, but that didn’t kill the app.
So no, I was not able to artificially reproduce the problem yet.

Experimentally confirmed: the app starts to crash when I have more than 600 MB of files in the cache folder.
Introducing a cache cleanup procedure that keeps the total file size down to 500 MB seems to work fairly reliably.
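The cleanup boils down to evicting the oldest files until the total size drops below the cap. Here is a simplified sketch of just the selection logic, under the assumption that file metadata (size and modification time) has already been gathered, e.g. via FileSystem.getInfoAsync in the real app; filesToEvict is an illustrative name:

```javascript
const MAX_CACHE_BYTES = 500 * 1024 * 1024; // keep the cache under ~500 MB

// Given an array of { uri, size, modificationTime } entries, return the
// files to delete (oldest first) so the remaining total size fits under
// maxBytes. Returns an empty array when the cache is already small enough.
function filesToEvict(files, maxBytes = MAX_CACHE_BYTES) {
  const total = files.reduce((sum, f) => sum + f.size, 0);
  if (total <= maxBytes) return [];

  // Oldest files first, so recently used images survive.
  const sorted = [...files].sort((a, b) => a.modificationTime - b.modificationTime);
  const evict = [];
  let excess = total - maxBytes;
  for (const f of sorted) {
    if (excess <= 0) break;
    evict.push(f);
    excess -= f.size;
  }
  return evict;
}
```

On app start, the returned entries would each be removed with the filesystem API (e.g. FileSystem.deleteAsync) before the cache is used.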

Does it also happen if e.g. you have 10 files that are each 60MB? Or 6000 files that are each 0.1MB?

I think it’s the total size that matters.