8
votes

Recently I was updating some code that takes screenshots using the GetWindowDC -> CreateCompatibleDC -> CreateCompatibleBitmap -> SelectObject -> BitBlt -> GetDIBits series of WinAPI functions. Now I check all of those for failure, because they can and sometimes do fail. But then I have to perform cleanup by deleting the created bitmap, deleting the created DC, and releasing the window DC. In every example I've seen -- even on MSDN -- the related cleanup functions (DeleteObject, DeleteDC, ReleaseDC) aren't checked for failure, presumably because if the resources were retrieved/created OK, they will always be deleted/released OK. But they can still fail.
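
Roughly, the pattern looks like this (a simplified sketch -- the 32-bit top-down pixel format and the exact error-handling choices are just for illustration):

    #include <windows.h>
    #include <vector>

    // Simplified sketch of the chain in question: every creation call is
    // checked, while the cleanup calls at the end are the ones examples
    // typically leave unchecked.
    bool CaptureWindow(HWND hwnd, std::vector<BYTE>& pixels)
    {
        HDC windowDC = GetWindowDC(hwnd);
        if (!windowDC) return false;

        HDC memDC = CreateCompatibleDC(windowDC);
        if (!memDC) { ReleaseDC(hwnd, windowDC); return false; }

        RECT rc{};
        if (!GetWindowRect(hwnd, &rc)) { DeleteDC(memDC); ReleaseDC(hwnd, windowDC); return false; }
        const int width  = rc.right - rc.left;
        const int height = rc.bottom - rc.top;

        HBITMAP bmp = CreateCompatibleBitmap(windowDC, width, height);
        if (!bmp) { DeleteDC(memDC); ReleaseDC(hwnd, windowDC); return false; }

        HGDIOBJ old = SelectObject(memDC, bmp);
        bool ok = (old != nullptr)
               && BitBlt(memDC, 0, 0, width, height, windowDC, 0, 0, SRCCOPY);
        SelectObject(memDC, old);   // deselect the bitmap before GetDIBits

        if (ok) {
            BITMAPINFO bmi{};
            bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
            bmi.bmiHeader.biWidth       = width;
            bmi.bmiHeader.biHeight      = -height;   // top-down rows
            bmi.bmiHeader.biPlanes      = 1;
            bmi.bmiHeader.biBitCount    = 32;
            bmi.bmiHeader.biCompression = BI_RGB;
            pixels.resize(static_cast<size_t>(width) * height * 4);
            ok = GetDIBits(memDC, bmp, 0, height, pixels.data(), &bmi, DIB_RGB_COLORS) != 0;
        }

        // These can also fail, but are almost never checked in examples.
        DeleteObject(bmp);
        DeleteDC(memDC);
        ReleaseDC(hwnd, windowDC);
        return ok;
    }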

That's just one notable example, since the calls are all right next to each other. But occasionally there are other functions that can fail but in practice never do, such as GetCursorPos. Or functions that can fail only if passed invalid data, such as FileTimeToSystemTime.

So, is it good practice to check ALL functions that can fail for failure? Or are some OK not to check? And as a corollary, when one of these should-never-fail functions does fail, what is the proper response? Throwing a runtime exception, using an assert, something else?

7
MSDN examples often seem to try to make things look easy rather than showing robust error handling; I wouldn't use them as a benchmark of professionalism. – Tony Delroy

7 Answers

5
votes

Whether to test or not depends on what you would do if the call failed. Most samples exit once cleanup is finished, so verifying that the cleanup succeeded serves no purpose; the program is exiting either way.

Not checking something like GetCursorPos could lead to bugs, but how much code the check requires determines whether it's worth adding. If checking would add three lines around every call, you are probably better off taking the risk. However, if you have a macro set up to handle it, it wouldn't hurt to use that macro just in case.
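
Such a macro might look something like this (the name and the debug-only behaviour are just one possibility):

    #include <windows.h>
    #include <cassert>

    // One possible shape for such a macro: assert in debug builds so the
    // failure is loud, and still evaluate the call in release builds.
    #define VERIFY_WIN32(call)                          \
        do {                                            \
            if (!(call)) {                              \
                assert(!"Win32 call failed: " #call);   \
            }                                           \
        } while (0)

    // Usage:
    //   POINT pt{};
    //   VERIFY_WIN32(GetCursorPos(&pt));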

Whether to check FileTimeToSystemTime depends on what you are passing into it. A file time that came straight from the system? Probably safe to ignore the return value. A value built from user input? Probably better to make sure.
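
For example, a FILETIME assembled from untrusted input is exactly the case where the check pays off (a sketch; the helper name is made up):

    #include <windows.h>
    #include <optional>

    // A FILETIME built from raw user-supplied ticks can be out of range,
    // which is when FileTimeToSystemTime actually fails.
    std::optional<SYSTEMTIME> ToSystemTime(ULONGLONG rawTicks)
    {
        FILETIME ft;
        ft.dwLowDateTime  = static_cast<DWORD>(rawTicks & 0xFFFFFFFFu);
        ft.dwHighDateTime = static_cast<DWORD>(rawTicks >> 32);

        SYSTEMTIME st;
        if (!FileTimeToSystemTime(&ft, &st))
            return std::nullopt;
        return st;
    }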

4
votes

Yes. You never know when a promised service will surprise you by not working. Best to report an error even for the surprises. Otherwise you will find yourself with a customer saying your application doesn't work, and the reason will be a complete mystery; you won't be able to respond in a timely, useful way, and you both lose.

If you organize your code to always do such checks, it isn't that hard to add the next check to that next API you call.

3
votes

It's funny that you mention GetCursorPos since that fails on WOW64 processes when the address passed is above 2 GB. It fails every time. The bug was fixed in Windows 7.

So, yes, I think it's wise to check for errors even when you don't expect them.

2
votes

Yes, you need to check, but if you're using C++ you can take advantage of RAII and leave the cleanup to wrapper objects for the various resources you are using.

The alternative would be to have a jumble of if-else statements, and that's really ugly and error-prone.
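
A rough sketch of what that could look like (the wrapper names are made up, not from any particular library; copy/move suppression omitted for brevity):

    #include <windows.h>

    // Minimal RAII wrappers; the destructors do the cleanup, so early
    // returns after a failed call can't leak handles.
    class WindowDC {
        HWND hwnd_; HDC dc_;
    public:
        explicit WindowDC(HWND hwnd) : hwnd_(hwnd), dc_(GetWindowDC(hwnd)) {}
        ~WindowDC() { if (dc_) ReleaseDC(hwnd_, dc_); }
        operator HDC() const { return dc_; }
        explicit operator bool() const { return dc_ != nullptr; }
    };

    class MemoryDC {
        HDC dc_;
    public:
        explicit MemoryDC(HDC compatibleWith) : dc_(CreateCompatibleDC(compatibleWith)) {}
        ~MemoryDC() { if (dc_) DeleteDC(dc_); }
        operator HDC() const { return dc_; }
        explicit operator bool() const { return dc_ != nullptr; }
    };

    bool Capture(HWND hwnd)
    {
        WindowDC windowDC(hwnd);
        if (!windowDC) return false;       // nothing to clean up yet

        MemoryDC memDC(windowDC);
        if (!memDC) return false;          // windowDC released automatically

        // ... CreateCompatibleBitmap / BitBlt / GetDIBits, each checked ...
        return true;
    }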

1
votes

Yes. Suppose you don't check what a function returned and the program just continues after the failure. What happens next? How will you know why your program misbehaves a long time later?

One quite reliable solution is to throw an exception, but this will require your code to be exception-safe.
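
For instance, a small helper along these lines (the name is made up) keeps the checks terse while preserving the failure reason:

    #include <windows.h>
    #include <stdexcept>
    #include <string>

    // Hypothetical helper: turn a failed Win32 call into an exception that
    // carries the GetLastError() code.
    inline void ThrowIfZero(BOOL result, const char* what)
    {
        if (!result) {
            throw std::runtime_error(std::string(what) + " failed, GetLastError() = "
                                     + std::to_string(GetLastError()));
        }
    }

    // Usage:
    //   POINT pt{};
    //   ThrowIfZero(GetCursorPos(&pt), "GetCursorPos");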

1
votes

Yes. If a function can fail, then you should protect against it.

One helpful way to categorise problems in code is by the potential cause of failure:

  1. invalid operations in your code
  2. invalid operations in client code (code that calls yours, written by someone else)
  3. external dependencies (file system, network connection etc.)

In situation 1, it is enough to detect the error and not perform recovery, as this is a bug that should be fixable by you.

In situation 2, the error should be notified to client code (e.g. by throwing an exception).

In situation 3, your code should recover as far as possible automatically, and notify any client code if necessary.

In both situations 2 and 3, you should endeavour to make sure that your code recovers to a valid state, e.g. you should try to offer the "strong exception guarantee" etc.
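
By way of illustration only, the three situations might map onto mechanisms roughly like this (the file name and the "recovery" here are placeholders):

    #include <windows.h>
    #include <cassert>
    #include <stdexcept>

    void Example(HWND hwnd /* supplied by client code */)
    {
        // 1. A bug in our own code: detect it, don't try to recover.
        assert(hwnd != nullptr);

        // 2. Invalid input from client code: report it, e.g. via an exception.
        if (!IsWindow(hwnd))
            throw std::invalid_argument("hwnd does not identify a window");

        // 3. External dependency (file system): recover if possible, then
        //    notify client code if necessary.
        HANDLE file = CreateFileW(L"settings.ini", GENERIC_READ, FILE_SHARE_READ,
                                  nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (file == INVALID_HANDLE_VALUE) {
            // fall back to defaults, retry, or surface the error to the caller
        } else {
            CloseHandle(file);
        }
    }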

1
votes

The longer I've coded against the WinAPIs in C++, and to a lesser extent via P/Invoke in C#, the more I've gone about it this way:

  1. Design the usage to assume it will fail (eventually) regardless of what the documentation seems to imply
  2. Make sure to know the return value indication for pass/fail, as sometimes 0 means pass, and vice-versa
  3. Check whether GetLastError is documented as being set, and decide what value that information can give your app (see the sketch below)
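
For points 2 and 3, the checking tends to look something like this (illustrative only):

    #include <windows.h>
    #include <cstdio>

    // Different APIs signal failure differently, and GetLastError() is only
    // meaningful for some of them.
    void Examples(HWND hwnd)
    {
        // BOOL-returning API: zero means failure, GetLastError() has the reason.
        POINT pt{};
        if (!GetCursorPos(&pt))
            std::printf("GetCursorPos failed: %lu\n", GetLastError());

        // Handle-returning API: NULL means failure; note that many GDI calls
        // do not set a last-error code at all.
        HDC dc = GetWindowDC(hwnd);
        if (!dc)
            std::printf("GetWindowDC failed\n");
        else
            ReleaseDC(hwnd, dc);

        // And sometimes zero means success: registry functions return
        // ERROR_SUCCESS (0), with a nonzero error code on failure.
        HKEY key;
        LSTATUS status = RegOpenKeyExW(HKEY_CURRENT_USER, L"Software", 0, KEY_READ, &key);
        if (status != ERROR_SUCCESS)
            std::printf("RegOpenKeyExW failed: %ld\n", status);
        else
            RegCloseKey(key);
    }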

If robustness is a serious enough goal, you may consider it a worthwhile time investment to see if you can build a somewhat fault-tolerant design with redundant means of getting whatever it is you need. Many times with the WinAPIs there's more than one way to get to the specific information or functionality you're looking for, and sometimes that means using other Windows libraries/frameworks that work in conjunction with the WinAPIs.

For example, getting screen data can be done with straight WinAPIs, but a popular alternative is to use GDI+, which plays well with WinAPIs.