I've learned that throwing out of a destructor will abort the program if it happens during stack unwinding, because then more than one exception would be propagating at the same time.
Here is an example, with a comment, that demonstrates this:
class Foo
{
public:
    // noexcept(false) is required since C++11: destructors are noexcept by
    // default, so without it the program would terminate at the throw itself,
    // not because of a second in-flight exception.
    ~Foo() noexcept(false)
    {
        ReleaseResources();
    }
private:
    int* pInt = nullptr; // initialized, so the throwing path is taken deterministically
    void ReleaseResources()
    {
        if (!pInt)
            throw 0;
        else
            delete pInt;
    }
};
int main() try
{
    {
        Foo local;
        throw 1;
    } // aborting here, because now 2 exceptions are propagating!
    return 0;
}
catch (int& ex)
{
    return ex;
}
However, I have a class hierarchy where one of the destructors calls a function that may throw, and because of that the entire hierarchy is poisoned, meaning that all destructors are now effectively noexcept(false).
While this is fine for the compiler, which simply inserts the exception-handling code, it is not fine for the users of these classes, because it does not prevent the program from aborting if the scenario from the code sample above happens.
Because I want the destructors to be exception safe, I came up with the idea of marking them all noexcept,
but handling possible exceptions inside the destructor, like this:
The same sample, reworked so that an abort is not possible and the destructors are exception safe:
class Foo
{
public:
    ~Foo() noexcept
    {
        try
        {
            ReleaseResources();
        }
        catch (int&)
        {
            // handle exception here
            return;
        }
    }
private:
    int* pInt = nullptr; // initialized, so the throwing path is taken deterministically
    void ReleaseResources()
    {
        if (!pInt)
            throw 0;
        else
            delete pInt;
    }
};
int main() try
{
    {
        Foo local;
        throw 1;
    } // OK, no abort here...
    return 0;
}
catch (int& ex)
{
    return ex;
}
The question is: is this a normal approach to handling exceptions inside destructors? Are there any scenarios that could make this design go wrong?
The primary goal is to have exception-safe destructors.
Also, a side question: in this second example, during stack unwinding there are seemingly still two exceptions propagating; why is abort not called, if only one exception is allowed during stack unwinding?