3 votes

I have trained a 3-layer (input, hidden and output) feedforward neural network in Matlab. After training, I would like to simulate the trained network with an input test vector and obtain the response of the neurons of the hidden layer (not the final output layer). How can I go about doing this?

Additionally, after training a neural network, is it possible to "cut away" the final output layer and make the current hidden layer as the new output layer (for any future use)?

Extra-info: I'm building an autoencoder network.

2
I'm not entirely clear on what you're asking. In principle, there's no reason you can't chop off the output layer and see the output of the hidden units. The computation is just a chained matrix multiplication with a transfer function applied, and all you're doing is removing one of the matrix factors. Are you asking if the Matlab NN toolbox exposes that in a way you can access? – deong
Yes, I could always take the trained weights from the desired layers and apply the computations manually, but it's a bit tedious. What I'd like to know is whether the Matlab NN toolbox has built-in, ready-made functions that enable this (clipping away the network's final layers, tracing the response of the network through its layers one by one, etc.). – user1246209
I haven't used it in quite a while, so I can't really say for sure, but I don't recall anything off the top of my head that would do that "automatically". I know the net structures should have enough information exposed to build it yourself, but like you said, that's tedious. – deong

2 Answers

1 vote

The trained weights of a trained network are available in its properties: net.IW (input weights), net.LW (layer-to-layer weights), and net.b (biases). You can use these weights to compute the hidden layer outputs yourself.

From the Matlab documentation, nnproperty.net_LW:

Neural network LW property.

NET.LW

This property defines the weight matrices of weights going to layers from other layers. It is always an Nl x Nl cell array, where Nl is the number of network layers (net.numLayers).

The weight matrix for the weight going to the ith layer from the jth layer (or a null matrix []) is located at net.LW{i,j} if net.layerConnect(i,j) is 1 (or 0).

The weight matrix has as many rows as the size of the layer it goes to (net.layers{i}.size). It has as many columns as the product of the size of the layer it comes from with the number of delays associated with the weight:

  net.layers{j}.size * length(net.layerWeights{i,j}.delays)
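
For example, a minimal sketch of reading out the hidden layer manually (assuming a single hidden layer with a tansig transfer function and the default mapminmax input preprocessing; adapt the transfer and processing functions to your own network):

```matlab
% Sketch: compute the hidden-layer (layer 1) response for a test vector x.
% Assumes the default feedforwardnet setup: tansig hidden transfer function
% and mapminmax input preprocessing. Check net.layers{1}.transferFcn and
% net.inputs{1}.processFcns to see what your network actually uses.
xp = mapminmax('apply', x, net.inputs{1}.processSettings{1}); % preprocess input
a1 = tansig(net.IW{1,1} * xp + net.b{1});                     % hidden activations
```

For an autoencoder, a1 is exactly the encoding you are after.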
0 votes

In addition to using the input and layer weights and biases, you can add an output connection from the desired layer (after training the network). I found this possible and easy, but I haven't examined its correctness.
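
A hedged sketch of that idea (untested; it assumes that changing outputConnect on a trained net leaves the trained weights intact, and the new output may pick up default output processing functions):

```matlab
% Sketch: after training, also connect a network output to layer 1 (the
% hidden layer), so simulating the net returns the hidden activations too.
net.outputConnect(1) = 1;  % expose layer 1 as an additional output
y = sim(net, x);           % outputs of all connected layers
```

Check net.numOutputs afterwards, and see the documentation for the exact format in which multiple outputs are returned.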