10
votes

I know that derived classes can simply "redefine" base class member functions, and that when such a function is called on a derived class object, the version defined in the derived class is used. But doesn't this render the "virtual" keyword redundant? I have read of some significant differences between the two cases (i.e., if you have a base class pointer pointing to a derived class object and you call a function through it, the derived class function is called if it is virtual, but the base class function is called if it is not).

Put another way, what is the purpose of being able to redefine member functions as non-virtual functions, and is this a commonly used practice?

Personally, it seems to me like it would just get very confusing.

Thanks!

8
Sounds like you need a good C++ book. - Ben Voigt

8 Answers

7
votes

The most common approach in the most common OOP languages (Java, Smalltalk, Python, etc.) is to make every member function virtual by default.

The drawback is that there is a small performance penalty every time a virtual call is made. For that reason, C++ lets you choose whether your methods are virtual or not.

However, there is a very important difference between a virtual and a non-virtual method. For example:

class SomeClass { ... };
class SomeSubclassOfSomeClass : public SomeClass { ... };
class AnotherSubclassOfSomeClass : public SomeClass { ... };

SomeClass* p = ...;

p->someVirtualMethod();

p->someNonVirtualMethod();

The actual code executed by the someVirtualMethod call depends on the dynamic type of the object that p points to, i.e., on how the SomeClass subclasses redefine it.

But the code executed by the someNonVirtualMethod call is clear: it is always the one in SomeClass, since the static type of p is SomeClass*.

2
votes

It sounds like you already know the difference between virtual and non-virtual methods, so I won't go into that as others have. The question is, when would a non-virtual method be more useful than a virtual one?

There are cases where you don't want the overhead of having a vtable pointer included in every object, so you take pains to make sure there are no virtual methods in the class. Take for example a class that represents a point and has two members, x and y - you might have a very large collection of these points, and a vtable pointer would increase the size of the object by at least 50%.

1
votes

Addressing: "Put another way, what is the purpose of being able to redefine member functions as non-virtual functions, and is this a commonly used practice?"

Well, you can't. If the base class method is virtual, so is the corresponding derived class method, if it exists, whether or not the 'virtual' keyword is used.

So: "Doesn't this render the "virtual" keyword redundant? ". Yes, it is redundant in the derived class method, but not in the base class.

However, note that it is unusual (being polite) to wish to have a non-virtual method and then redefine it in a derived class.

1
votes

It will work for an instance of the derived class and a pointer to the derived class. However, if you pass a pointer to your derived object into a function that takes a pointer to Base, the Base version of the function will be called. This is probably not desirable. For instance, the following will print 5:

#include <iostream>

class Base
{
public:
    int Foo() { return 5; } // non-virtual
};

class Derived : public Base
{
public:
    int Foo() { return 6; } // hides Base::Foo, does not override it
};

int Func(Base* base)
{
    return base->Foo(); // static type is Base*, so Base::Foo is called
}

int main()
{
    Derived asdf;

    std::cout << Func(&asdf); // prints 5

    return 0;
}

This is because of the way virtual works. When you call a virtual function, the correct override is looked up in the v-table at run time. Without virtual you don't really get polymorphic behavior: through a Base pointer, the object acts like the base class, not the derived class.

0
votes

It is important to be careful when destructors are non-virtual and "overridden" in derived classes - the object may not be cleaned up properly if it is deleted through a pointer to the base class.

0
votes

It seems you haven't discovered the power of polymorphism yet. Polymorphism works like this:

You have a function that takes a pointer to a base class; you can call it with derived class objects, and depending on each derived class's implementation, the function behaves differently. This is a wonderful thing, because the function stays stable even as the inheritance hierarchy grows.

Without this mechanism you can't achieve that, and this mechanism needs "virtual".

0
votes

C++ scoping and name-lookup rules allow very strange things, and methods are not alone here. Indeed Hiding (or Shadowing) can occur in many different situations:

int i = 3;
for (int i = 0; i != 5; ++i) { ... } // the `i` in `for` hides the `i` out of it

struct Base
{
  void foo();
  int member;
};

struct Derived: Base
{
  void foo(); // hides Base::foo
  int member; // hides Base::member
};

Why, then? For resiliency.

When modifying a base class, I don't know all of its possible children. Because of the hiding rule (and despite the confusion it may create), I can add an attribute or method and use it without a care in the world:

  • My call will always invoke the Base method, whether or not some child hides it with its own version
  • A call on the child will be unaffected, so I won't suddenly break some other programmer's code

It's evident that if you look at the program as a finished work this does not make sense, but programs evolve, and the hiding rule eases that evolution.

0
votes

The simplest way to explain it is probably this:

Virtual does some lookup for you, by adding a virtual lookup table.

In other words, if you didn't have the virtual keyword, and overrode a method, you would still have to call that method manually [forgive me if my memory for C++ syntax is a little rusty in spots]:

class A { public: void doSomething() { cout << "1"; } };
class B : public A { public: void doSomething() { cout << "2"; } };
class C : public A { public: void doSomething() { cout << "3"; } };

void someOtherFunc(A* thing) {
    // Caveat: typeid(*thing) only reports the dynamic type if A is polymorphic
    // (i.e., has at least one virtual function); without any virtuals, real
    // code would have to store its own type tag to do this.
    if (typeid(*thing) == typeid(B)) {
        static_cast<B*>(thing)->doSomething();
    } else if (typeid(*thing) == typeid(C)) {
        static_cast<C*>(thing)->doSomething();
    } else {
        // not a derived class -- just call A's method
        thing->doSomething();
    }
}

You could optimise this a little (for readability AND performance, most likely), using a lookup table:

typedef void (A::*DoSomethingPtr)();
std::map<std::type_index, DoSomethingPtr> A_doSomethingVTable;

void someOtherFunc(A* thing) {
    DoSomethingPtr methodToCall = A_doSomethingVTable[std::type_index(typeid(*thing))];
    (thing->*methodToCall)();
}

Now, that's more of a high-level approach. The C++ compiler can optimise this much further, since it knows exactly what the types are. So instead of a map and a runtime lookup by typeid, the compiler inserts something much smaller and faster: each object of a polymorphic class carries one hidden pointer to a per-class table of function pointers, and a virtual call compiles down to an indexed load from that table plus an indirect call.

So you're right that C++ doesn't really NEED virtual, but it does add a lot of syntactic sugar to make your life easier, and can optimise it better too.

That said, I still think C++ is a horribly outdated language, with lots of unnecessary complications. Virtual, for example, could (and probably should) be assumed by default, and optimised out where unnecessary. A Scala-like "override" keyword would be much more useful than "virtual".