27
votes

I have been running a particular Python script for some time. The entire script (including in Jupyter) had been running perfectly fine for many months. Now, somehow, Jupyter on my system has started showing the following error message at one particular line of the code (the last line of the code shown below). All parts of the code run fine except that last line, where I call a user-defined function to do pair counts. The user-defined function (correlation.polepy) can be found at https://github.com/OMGitsHongyu/N-body-analysis

This is the error message that I am getting:

 Kernel Restarting
 The kernel appears to have died. It will restart automatically.

And, here is the skeleton of my Python Code:

from __future__ import division
import numpy as np
import correlation
from scipy.spatial import cKDTree

# Load the two input data files
File1 = np.loadtxt('/Users/Sidd/Research/fname1.txt')
File2 = np.loadtxt('/Users/Sidd/Research/fname2.txt')

# Keep only the rows whose mass (column 0) exceeds the mass cut
masscut = 1.1*np.power(10,13)
mark1 = (np.where(File1[:,0]>masscut))[0]
mark2 = (np.where(File2[:,0]>masscut))[0]

# Columns 1-7 of the selected rows are passed to the pair counter
Data1 = File1[mark1,1:8]
Data2 = File2[mark2,1:8]

# This is the call that kills the kernel
Xi_masscut = correlation.polepy(p1=Data1, p2=Data2, rlim=150, nbins=150, nhocells=100, blen=1024, dis_f=100)

A similar problem happens (at the last line of the code) when I try to use IPython. When I run the script directly with Python in the terminal, I get an error message at the last line that says "Segmentation fault: 11". I am using Python 2.7.13 :: Anaconda 2.5.0 (x86_64).
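Since a segmentation fault usually comes from compiled code (numpy, MKL, or the C extension behind correlation.polepy) rather than pure Python, a quick sanity check of the arrays being passed in can help rule out malformed input. This is only a minimal diagnostic sketch (Data1 and Data2 are the arrays from the snippet above), not a fix:

import numpy as np

def sanity_check(name, arr):
    # Shape, dtype and memory layout are common culprits when a C extension segfaults
    print('%s: shape=%s dtype=%s C-contiguous=%s' % (name, arr.shape, arr.dtype, arr.flags['C_CONTIGUOUS']))
    # Non-finite values (NaN/inf) slipping through the mass cut can also crash compiled code
    print('%s: all finite = %s' % (name, np.isfinite(arr).all()))

sanity_check('Data1', Data1)
sanity_check('Data2', Data2)

If the arrays look fine, the faulthandler module (built into Python 3 and available as a backport package for Python 2.7) can be enabled before the call so that at least a Python traceback is printed when the crash happens.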

I have already tried the following methods in search of a solution:

1.> I checked some previous Stack Overflow questions where this problem has been asked: The kernel appears to have died. It will restart automatically

I tried the solution given in the link above; sadly, it doesn't seem to work in my case. This is the solution that was mentioned there:

conda update mkl

2.> Just to check whether the system was running out of memory, I closed all memory-heavy applications. My system has 16 GB of physical memory, and this problem happens even when over 9 GB is free. (Again, this problem had not been happening before, even when other tasks were using 14 GB and less than 2 GB was free. It is very surprising that I could run the task with the given inputs before and cannot replicate the calculation with the exact same inputs now. A quick programmatic check of available memory is sketched after this list.)

3.> I saw another link: https://alpine.atlassian.net/wiki/plugins/servlet/mobile?contentId=134545485#content/view/134545485

This one appears to tackle a similar problem and talks about there not being enough memory for the Docker container. I had doubts about how to implement the suggestions mentioned there.
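For point 2, a minimal sketch of a programmatic check of available memory right before the crashing call (assuming the psutil package is installed) would be:

import psutil

# Report available physical memory just before the pair-counting call
mem = psutil.virtual_memory()
print('Available memory: %.1f GB' % (mem.available / 1024.0**3))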

All in all, I am not sure how this problem arose in the first place. How do I solve this problem? Any help will be much appreciated.

9
Could you post the code you are trying to run? It would be especially helpful if you could post a Minimal, Complete, and Verifiable example. – Louise Davies
@LouiseDavies Thanks for the reply. I have posted the skeleton of my code and have mentioned the steps I have taken for troubleshooting. Sadly, none of the steps I could try has worked for me so far. I will very much appreciate any help. – Commoner
Problem: Jupyter "the kernel appears to have died, it will restart automatically". I had the same problem and reinstalled numpy and keras, but to no avail; it seems to be a problem with CUDA being incompatible with macOS 10.13.6 or higher. When I used the Spyder IDE the problem disappeared. – Emerson Moreira

9 Answers

13
votes

This issue happened for me when I imported sklearn's PCA before numpy (I am not sure whether reversing the order would solve the problem).

I later solved the issue by reinstalling numpy and mkl:

conda install numpy
conda install -c intel mkl
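As a quick check after reinstalling (a minimal sketch using numpy's own introspection helpers), you can confirm which version is active and whether it is linked against MKL:

import numpy as np

print(np.__version__)   # confirm the freshly installed version is the one being imported
np.show_config()        # lists the BLAS/LAPACK libraries (e.g. MKL) numpy was built against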

4
votes

I tried conda install tensorflow, which solved my problem.

2
votes

Install the library with conda instead of pip; this worked for me.

1
votes

When this happened to me, I just uploaded my notebook to Google Colab and it started working. It seems, though, that the issue is a bottleneck in compute/memory resources when training these big models, and platforms like Colab have far more resources than your local machine.

0
votes

Reinstall your library with conda instead of pip.

0
votes

To solve this problem, just upgrade the numpy library using one of the following commands:

conda install numpy (if you are using Anaconda)

or

pip install -U numpy (if not)
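To verify that the notebook kernel actually picks up the upgraded package (a quick check run inside Jupyter; useful when conda and pip installations conflict):

import numpy
print(numpy.__version__)   # version the kernel actually imports
print(numpy.__file__)      # the path reveals whether it came from the conda or pip install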

0
votes

For macOS versions 12.0 and above, TensorFlow GPU isn't supported. So try this piece of code; it worked for me:

import os
os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'

0
votes

Using the command:

conda install -c anaconda keras

worked for me.

0
votes

In my case, the error was caused by a version mismatch in the hdf5 library, which suggests that when the kernel dies unexpectedly, any library loaded during imports can be the cause.

In such cases, it is best to first check the command prompt window that was used to launch the Jupyter notebook. It logs these errors and can be used to troubleshoot the issue.

Issue caused by: import tensorflow
Message: version mismatch of the hdf5 library
Resolution: set the environment variable HDF5_DISABLE_VERSION_CHECK=2
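For example, a minimal way to apply that resolution from inside the notebook (assuming the offending import is tensorflow, as in the case above) is to set the variable before the import:

import os
# Must be set before the library that loads hdf5 is imported
os.environ['HDF5_DISABLE_VERSION_CHECK'] = '2'
import tensorflow as tf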