Reading from Herb Sutter, I can see this point:

Guideline #4: A base class destructor should be either public and virtual, or protected and nonvirtual.

However, I am struggling to understand the reason behind the latter option, the non-virtual destructor. I do understand why it needs to be protected.

Here is my code:

#include <iostream>
using namespace std;

class base {
public:
    base() { cout << "base ctor" << endl; }
    virtual void foo() = 0;
protected:
    ~base() { cout << "base dtor" << endl; }
};

class derived : public base {
public:
    derived() { cout << "derived ctor" << endl; }
    void foo() override { cout << "derived foo" << endl; }
    virtual ~derived() { cout << "derived dtor" << endl; }
};

int main() {
    derived* b = new derived();
    delete b;
    cout << "done" << endl;
}

What do I gain by making the base dtor non-virtual instead of virtual? I can see the same effect whether it's virtual or non-virtual:

base ctor
derived ctor
derived dtor
base dtor
done
1 Answer


I can see the same effect whether it's virtual or non-virtual.

This is because you call delete through a pointer to the derived type. If you instead make the destructor public, but not virtual, and do

base* b = new derived();
delete b;

it'll (most likely) print

base ctor
derived ctor
base dtor
done

(I can't guarantee what it'll print because the behavior is undefined when deleting an object of derived type through a pointer to the base class if the destructor isn't virtual)

In this case, where the compiler can see both the call to new and the call to delete, it will most likely warn you that you have undefined behavior. But if one translation unit just hands you a pointer to base and you call delete on it in another, the compiler has no way to know. If you want to be sure you can't make that mistake, there are two ways to avoid the problem.

The first is to simply make the destructor virtual; then there is no undefined behavior. But of course this has a small performance cost, since destruction now goes through one additional level of indirection via the vtable.

So if you never intend to store objects behind (smart) pointers to the base class and only use the polymorphism through references, you might not want to pay that additional cost.

Therefore you need another way to prevent someone from accidentally calling delete on a pointer to the base class when the object is actually of a derived class: make it impossible to call delete through a base-class pointer at all. This is precisely what making the destructor protected gives you: the derived classes' destructors can still invoke the base class destructor as they need to, but users holding a pointer to the base class can no longer accidentally delete through it, because the destructor isn't accessible to them.