2
votes

I use MS SQL Server 2008 R2 and have a C# app as its client. I have hit the following bug (and still cannot reproduce it). The following client code returns a "count of rows affected" greater than it should:

…
DbCommand cmd = db.GetStoredProcCommand("TheStoredProc");
int numberOfAffectedRows = db.ExecuteNonQuery(cmd);
…

The stored proc TheStoredProc in the DB looks like the following:

PROCEDURE TheStoredProc 
as

    declare @Var1 int

    SET NOCOUNT ON;
…
insert into [Workflows]
    (
        WorkflowInstanceID,
        Status
    )
    select WorkflowInstanceID, 1
        from #tmp


    DROP TABLE #tmp

This is inherited code; I am not its author. But from the context I can see that the client expects ExecuteNonQuery to return the count of rows inserted by the "insert into [Workflows] …" statement.

And by that INSERT only. The proc contains one more INSERT before that one, but the client needs numberOfAffectedRows to be filled with the count of only the rows inserted by "insert into [Workflows]…".

The bug is the following: numberOfAffectedRows got the value 464, but from the DB contents I see it should have been 419. And the bug is not reproducible; so far I have seen it only once, on the Production server, and I still cannot reproduce it on my test server. On the test server numberOfAffectedRows is correct.

My theory is the following: the reason for the bug is the "SET NOCOUNT ON" at the beginning of TheStoredProc. Here I have found the same problem described:

… I have heard of a few side effects of using "SET NOCOUNT ON": a. SqlCommand.ExecuteNonQuery returning the wrong number of rows if the stored procedure has SET NOCOUNT ON. …

More often (on other forums) I have met the complaint that ExecuteNonQuery returned (-1) in this situation.

What do you think – am I right?

The bugfix should be the following: insert "SET NOCOUNT OFF" right before the "insert into [Workflows]…". On my test server this works OK, but then the initial code works OK on the test server too…
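For clarity, the proposed fix would look roughly like this (a sketch only; I assume ALTER PROCEDURE and omit the elided statements, which are represented by the comment):

```sql
ALTER PROCEDURE TheStoredProc
as
    declare @Var1 int

    SET NOCOUNT ON;    -- suppresses the "rows affected" counts for everything below

    -- ... the earlier statements, including the first INSERT ...

    SET NOCOUNT OFF;   -- from here on, row counts are reported to the client again

    insert into [Workflows]
    (
        WorkflowInstanceID,
        Status
    )
    select WorkflowInstanceID, 1
        from #tmp

    DROP TABLE #tmp
```

The idea is that ExecuteNonQuery sums the counts of all statements that report one, so with NOCOUNT turned back on only for the final INSERT, the client should receive that INSERT's count alone.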

1
It is difficult to tell if you are asking a question, or already have the answer. – crthompson
Both :-) I have the answer, but I am not 100% sure of it since, as I said, I just cannot reproduce the situation on the test server. So all I can do before putting the fix on the Production server is to ask people: "Am I right, or can you suggest another explanation of what happened? Or maybe you see a defect in my theory?" – Victor Sotnikov
I see. Could your problem be caching? If you changed the sproc, it would force the server to recompile and then give you different results. Perhaps the NOCOUNT is inconsequential and it just needed a refresh. – crthompson
"Could your problem be caching?" – I have seen situations where caching affected performance ("parameter sniffing"). But how could it affect the result itself, I mean, returning 464 instead of 419? – Victor Sotnikov
Caching was not the right term, my apologies. Rather, a previously compiled sproc vs. a newly compiled sproc. – crthompson

1 Answer

3
votes

I would think that having SET NOCOUNT ON would always make ExecuteNonQuery return -1 (there may be cases where that isn't true, but...).

ExecuteNonQuery counts the number of rows affected by the insert/update, but also any rows modified due to triggers. Do you have any triggers on the affected tables in the test or production databases that might behave differently?

Is there any reason why you wouldn't want to explicitly return the count of rows you're interested in? For example, add this after the insert you're interested in:

select @rowcount = @@ROWCOUNT

and then either return that @rowcount value, or pass it back in an output parameter. Seems like doing this would future-proof you, in case a trigger is added later that causes your counts to change without materially affecting the functionality of your system.
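Put together, the output-parameter approach might look like this (a sketch; the parameter name @RowsInserted is illustrative, and I assume ALTER PROCEDURE):

```sql
ALTER PROCEDURE TheStoredProc
    @RowsInserted int OUTPUT
as
    SET NOCOUNT ON;   -- safe to keep ON now: the client no longer relies on the affected-rows count

    -- ... the earlier statements ...

    insert into [Workflows]
    (
        WorkflowInstanceID,
        Status
    )
    select WorkflowInstanceID, 1
        from #tmp

    SELECT @RowsInserted = @@ROWCOUNT   -- capture immediately after the INSERT, before any other statement resets it

    DROP TABLE #tmp
```

On the client side (the question's db.GetStoredProcCommand suggests the Enterprise Library data access block), the value would be read with something like db.AddOutParameter(cmd, "RowsInserted", DbType.Int32, 4) before executing and db.GetParameterValue(cmd, "RowsInserted") afterwards, instead of using the return value of ExecuteNonQuery.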