Looks nice! You can have 3 hidden layers, but as you experiment you'll see that you rarely need that many. What is your training pattern?
The answer to your question depends on your training pattern and on the purpose of the input neurons. If, for example, some input neuron carries a different type of value, you could use another threshold function, or different parameter settings, in the neurons connected to that input neuron.
But in general, it is better to feed neural network input into separate perceptrons. So the answer is: in theory you could preset individual properties of neurons, but with back-propagation learning it is not needed in practice. There are no fixed "individual properties" of neurons; the weight values that result from your training cycles will differ every run. All initial weights can be set to small random values, and the transfer threshold and learning rate are set per layer.
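
To make that concrete, here is a minimal sketch in plain NumPy (the class and function names are just illustrative, not from any particular library): initial weights are small random values, while the activation ("transfer threshold") function and learning rate are chosen per layer, not per neuron.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Layer:
    def __init__(self, n_in, n_out, activation, learning_rate):
        # Small random initial weights: the trained values will differ
        # every run, so neurons have no fixed "individual properties".
        self.weights = rng.uniform(-0.1, 0.1, size=(n_in, n_out))
        self.activation = activation        # set per layer
        self.learning_rate = learning_rate  # set per layer

    def forward(self, x):
        return self.activation(x @ self.weights)

# Each layer gets its own activation and learning rate.
network = [
    Layer(4, 8, np.tanh, learning_rate=0.05),
    Layer(8, 3, sigmoid, learning_rate=0.01),
]

x = rng.uniform(size=4)
for layer in network:
    x = layer.forward(x)
print(x)
```

After back-propagation training, only the weight matrices change; the per-layer settings stay as configured, which is why presetting per-neuron properties is rarely worth the trouble.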