9
votes

The rule of thumb given in all the books I have read so far is to use non-blocking assignments in always blocks driven by the rising or falling edge of the clock, and blocking assignments for describing combinational logic. This rule makes sense to me, and the authors of the examples follow it consistently.
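A minimal sketch of that convention (signal names are illustrative, not from any particular design):

// Sequential logic: non-blocking assignments in the clocked block
always @(posedge clk) begin
   q <= d;          // all RHS values sampled first, LHS updated together
end

// Combinational logic: blocking assignments
always @* begin
   sum = a + b;     // takes effect immediately
   out = sum & en;  // sees the value of sum computed just above
end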

However, I spotted the following piece of Verilog in some production code:

always @* begin
   in_ready <= out_ready || ~out_valid;
end

Note that the non-blocking assignment <= is used. I don't think it makes any difference in this case, since there are no multiple assignments, but I cannot find any explanation for it. So the question is: does it make any difference, both within this always block and as part of the larger design?


4 Answers

8
votes

Of course this violates my coding guideline #3 (http://www.sunburst-design.com/papers/CummingsSNUG2000SJ_NBA.pdf), but it will work.

The reason to avoid nonblocking assignments for combinational logic is simulation performance. In Munkymorgy's example, after the always block triggers, the simulator evaluates the right-hand side (RHS) of all of the equations, then updates the left-hand side (LHS). Those LHS updates re-trigger the always block, forcing the simulator to evaluate the RHS again and update the LHS again. For larger blocks this can mean multiple iterations through the always block, with a corresponding simulation penalty.
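A sketch of the re-triggering described above (signal names are made up): with nonblocking assignments, the end-of-step update of tmp fires the always @* block a second time before the values settle.

always @* begin
   tmp <= a & b;    // NBA: tmp updates only after the block finishes,
   y   <= tmp | c;  // so this line reads the OLD tmp; the later update
end                 // to tmp re-triggers the block for another pass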

In your simple 1-line example, there is no internal simulation penalty, but there may be cross-assignment penalties elsewhere.

Good coders use consistently good coding habits. I would change the code. If changing the code breaks the simulation results, then there are additional bad coding habits buried elsewhere in the code. The code should not be that fragile.

5
votes

Irrelevant here, but bad practice.

I doubt that the single assignment causes any side effects. The always block will trigger on any change on the right-hand side, updating in_ready. There is nothing to block, so the non-blocking assignment will not cause issues.

If a larger design had:

always @* begin 
  in_ready    <= out_ready || ~out_valid  ;
  other_ready <= in_ready  || other_ready ;
end

I am not completely sure, but since it is combinational it should just take an extra delta cycle to resolve.
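For comparison, the blocking version of the same two lines settles in a single pass, because in_ready is updated before the second line reads it (note that other_ready feeding back into itself is questionable in either style):

always @* begin 
  in_ready    = out_ready || ~out_valid  ;  // takes effect immediately
  other_ready = in_ready  || other_ready ;  // sees the freshly computed in_ready
end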

2
votes

It is not bad as such; it is your choice. If you understand how the circuit will behave, it is perfectly fine. For example:

always @* begin
  b <= a + c;
  a = b;
end

  1. The circuit described inside this always block is activated whenever anything in the sensitivity list (here, all inputs) changes state.
  2. b <= a + c infers an adder with 'a' and 'c' as inputs.
  3. For the next statement, a = b, the synthesized circuit takes the wire from the old value of b, not from the updated one, and feeds it to 'a'.
  4. If that is what you want, you are welcome to do it; no issue will arise in synthesis.
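To see the difference point 3 describes, compare the mixed version above with a consistent blocking version (a sketch using the same signal names; the blocking form changes behavior and introduces combinational feedback a -> b -> a):

// Mixed version from above: a reads the OLD b
always @* begin
  b <= a + c;  // b scheduled to update at the end of the time step
  a = b;       // reads b's pre-update value
end

// Consistent blocking version: a would read the NEW b,
// creating direct combinational feedback
// always @* begin
//   b = a + c;
//   a = b;
// end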
1
vote

Non-blocking assignment in always @(*) is required if one wants to simulate gate delays.

For example, the code below properly simulates an OR gate with a 3 ns delay at the output. A blocking assignment would not work in this case.

always @(*) begin
  a <= #3 b | c;
end  
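For contrast, the blocking form with the same delay behaves differently: a = #3 b | c; suspends the always block for 3 ns, so events on b or c during that window do not re-trigger the block and can be missed (this is the behavior discussed in the Cummings paper linked below):

always @(*) begin
  a = #3 b | c;  // blocks here for 3 ns; input changes during that
end              // window are not seen, so short pulses simulate wrongly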

Further reading:

  1. http://www.sunburst-design.com/papers/CummingsHDLCON1999_BehavioralDelays_Rev1_1.pdf
  2. https://electronics.stackexchange.com/q/572643/238188