Repeatable Read puts locks on every row that has
been fetched. When you are working with cursors that fetch
large amounts of data, this can cause contention with other
users: they cannot obtain locks to update any of the rows read
by a cursor under Repeatable Read until the cursor is closed.
The performance risk is that transactions suffer an increased
number of timeouts and/or deadlocks. This risk is proportional
to the probability that two transactions need to read/update the
same rows at the same time. Another factor that can impact your
application is lock granularity. If locks are taken at the page
level, contention can occur whenever the data that different
transactions need to access lies on the same page - not necessarily in the same row.
On the other hand, when you use a lower isolation level -
Cursor Stability, for example - you leave open the possibility
that rows you fetched earlier in your transaction may be
updated by other transactions before your unit of work
completes.
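A hypothetical demo of that anomaly, a non-repeatable read, again using SQLite only because it needs no server (the `account` table is invented for the example). The reader connection runs in autocommit mode, so each SELECT is its own short transaction and no read lock survives between the two fetches - loosely analogous to Cursor Stability rather than Repeatable Read - and a second connection changes the row in between.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
reader = sqlite3.connect(path)
writer = sqlite3.connect(path)

reader.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
reader.execute("INSERT INTO account VALUES (1, 100)")
reader.commit()

# First read: no lock is retained afterwards.
first = reader.execute(
    "SELECT balance FROM account WHERE id = 1").fetchone()[0]

# Another transaction updates the same row and commits in between.
writer.execute("UPDATE account SET balance = 50 WHERE id = 1")
writer.commit()

# Second read within the same "unit of work" sees the new value.
second = reader.execute(
    "SELECT balance FROM account WHERE id = 1").fetchone()[0]

print(first, second)  # the two reads disagree: a non-repeatable read
```

Under Repeatable Read the second transaction would instead have blocked (or timed out) until the reader finished, which is precisely the trade-off the answer is weighing.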
Does SQL Server return an error message saying that a deadlock has been resolved, or do your transactions just hang? The latter is not a deadlock, it's lock contention. – Quassnoi