Hi, I am trying to load a Keras model that was created with Keras 2.2.4 into the environment specified below:
- ubuntu : 18.04
- python : 3.6.9
- tensorflow version : 1.13.1
- Keras version : 2.3.1
I tried to load the model as shown below:
import tensorflow as tf
classifierLoad = tf.keras.models.load_model('w.hdf5')
While loading, it shows the following output and error:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2020-02-20 18:17:45.291135: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-02-20 18:17:45.292283: I tensorflow/compiler/xla/service/service.cc:161] XLA service 0x27584260 executing computations on platform Host. Devices:
2020-02-20 18:17:45.292367: I tensorflow/compiler/xla/service/service.cc:168]   StreamExecutor device (0): <undefined>, <undefined>
2020-02-20 18:17:45.438308: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:965] ARM64 does not support NUMA - returning NUMA node zero
2020-02-20 18:17:45.438696: I tensorflow/compiler/xla/service/service.cc:161] XLA service 0x237da820 executing computations on platform CUDA. Devices:
2020-02-20 18:17:45.438755: I tensorflow/compiler/xla/service/service.cc:168]   StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2020-02-20 18:17:45.439077: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
totalMemory: 3.87GiB freeMemory: 569.37MiB
2020-02-20 18:17:45.439136: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2020-02-20 18:17:50.292455: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-02-20 18:17:50.295363: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
2020-02-20 18:17:50.295391: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
2020-02-20 18:17:50.295579: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 105 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
Traceback (most recent call last):
  File "conversion_keras_to_trt.py", line 32, in <module>
    model = load_model(model_fname, custom_objects={'Adam': lambda **kwargs: hvd.DistributedOptimizer(keras.optimizers.Adam(**kwargs))}
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/saving.py", line 249, in load_model
    optimizer_config, custom_objects=custom_objects)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizers.py", line 838, in deserialize
    printable_module_name='optimizer')
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py", line 194, in deserialize_keras_object
    return cls.from_config(cls_config)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizers.py", line 159, in from_config
    return cls(**config)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizers.py", line 471, in __init__
    super(Adam, self).__init__(**kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizers.py", line 68, in __init__
    'passed to optimizer: ' + str(k))
TypeError: Unexpected keyword argument passed to optimizer: name
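Since the error comes from deserializing the optimizer config, I was wondering whether skipping the optimizer entirely and recompiling would sidestep it. This is only a rough sketch of what I have in mind (the loss and metrics below are placeholders, not my real training setup), so I am not sure it is the right approach:

import tensorflow as tf

# Possible workaround sketch: load architecture + weights only (compile=False
# skips the saved optimizer state), then recompile with a fresh optimizer.
# 'w.hdf5' is my model file; the loss and metrics here are placeholders.
classifierLoad = tf.keras.models.load_model('w.hdf5', compile=False)
classifierLoad.compile(optimizer=tf.keras.optimizers.Adam(),
                       loss='categorical_crossentropy',
                       metrics=['accuracy'])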
Would that work, or is there a better way to handle the optimizer config mismatch between the two Keras versions? Any suggestions?