Aren't algorithms supposed to have the same time complexity regardless of programming language? If so, why do we consider language-specific implementation details, such as passing arguments by value or by reference, when finding the total time complexity of an algorithm? Or am I wrong in saying that we should not consider such implementation differences when finding the time complexity of an algorithm? I cannot think of another instance right now, but for example, if an algorithm in C++ passed its arguments either by reference or by value, would there be a difference in time complexity? If so, why? CLRS, the famous book about algorithms, never discusses this point, so I am unsure whether I should consider it when finding time complexity.
1 Answer
I think what you are referring to is not language differences but compiler differences... This is how I see it (however, keep in mind I am no expert in the matter).
Yes, the complexity of some source code ported to different languages should be the same (if the language's capabilities allow it). The problem is the compiler implementation and the conversion to binary (or asm if you want). Each language usually adds its own runtime engine to the program, which handles things like variables, memory allocation, the heap/stack, ... It is similar to an OS. This stuff is hidden, but it has its own complexities; let's call them hidden terms of complexity.
For example, using a reference instead of a direct (by-value) operand in a function call can have a big impact on speed, because while we are coding we often consider only the complexity of the code itself and forget about the heap/stack. This, however, does not change the complexity; it just affects the n of the hidden terms of complexity (but heavily). See the sketch below.
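To illustrate, here is a minimal C++ sketch (the function names and sizes are made up for this example): both functions run the same O(n) summation, but the by-value version also copies all n elements of its argument on every call, which is extra hidden work that does not change the stated complexity of the summation itself.

    #include <cstdint>
    #include <iostream>
    #include <numeric>
    #include <vector>

    // Hypothetical example: the same O(n) summation written two ways.
    // Passing the vector by value copies all n elements on each call;
    // passing it by const reference copies nothing.
    std::int64_t sum_by_value(std::vector<int> v)        // copy: extra O(n) work per call
    {
        return std::accumulate(v.begin(), v.end(), std::int64_t{0});
    }

    std::int64_t sum_by_ref(const std::vector<int>& v)   // no copy: only the summation
    {
        return std::accumulate(v.begin(), v.end(), std::int64_t{0});
    }

    int main()
    {
        std::vector<int> data(1000000, 1);
        std::cout << sum_by_value(data) << ' ' << sum_by_ref(data) << '\n';
        // Both calls are O(n), but the by-value one does roughly twice the
        // memory traffic; inside a tight loop the copies dominate the runtime.
    }
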
The same goes for compilers specially designed for recursion (functional programming), like LISP. In those, iteration is much slower or even not possible, but recursion is way faster than with standard compilers... These, I think, change the complexity, but only of the hidden parts (related to the language engine).
Some languages, like Python, use bignums internally. These affect complexity directly, as arithmetic on CPU-native datatypes is often considered O(1), but on bignums that is no longer true: depending on the operation it can be O(1), O(log(n)), O(n), O(n·log(n)), O(n^2) or worse. However, when comparing different languages over the same small range of values, the bignums are also considered O(1), although they are usually much slower than CPU-native datatypes.
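As a rough sketch of why (in C++ for consistency; this toy code is not how Python's ints are actually implemented, it just illustrates the principle): adding two numbers stored as arrays of digits costs work proportional to the digit count n, whereas adding two CPU-native integers is a single O(1) instruction.

    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Toy bignum: decimal digits stored least-significant first.
    // Adding two n-digit numbers walks all the digits, so it is O(n),
    // unlike native 'a + b' which is one O(1) machine instruction.
    std::vector<std::uint8_t> add_bignum(const std::vector<std::uint8_t>& a,
                                         const std::vector<std::uint8_t>& b)
    {
        std::vector<std::uint8_t> result;
        std::uint8_t carry = 0;
        const std::size_t n = std::max(a.size(), b.size());
        for (std::size_t i = 0; i < n; ++i)               // O(n) loop over the digits
        {
            std::uint8_t sum = carry;
            if (i < a.size()) sum += a[i];
            if (i < b.size()) sum += b[i];
            result.push_back(sum % 10);
            carry = sum / 10;
        }
        if (carry) result.push_back(carry);
        return result;
    }

    int main()
    {
        std::vector<std::uint8_t> a{9, 9, 9}, b{7, 2};     // 999 and 27, least-significant digit first
        std::vector<std::uint8_t> c = add_bignum(a, b);
        for (auto it = c.rbegin(); it != c.rend(); ++it)   // print most-significant digit first
            std::cout << int(*it);
        std::cout << '\n';                                 // prints 1026
    }

Multiplication on such a representation is even worse: the schoolbook method is O(n^2), which is where the higher terms in the list above come from.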
Interpreted and virtual-machine languages like Python, JAVA, LUA add much bigger overhead, as their hidden terms usually contain not only the language engine but also the interpreter, which must decode the code and then interpret or emulate it (or whatever), which changes the hidden terms of complexity greatly. Interpreters that are not precompiled are even worse, as you first need to parse text, which is way slower...
If I put it all together, it is a matter of perspective whether you consider the hidden terms of complexity as constant time or as part of the complexity. In most cases the average speed difference between languages/compilers/interpreters is constant (so you can treat it as constant time and there is no complexity change), but not always. To my knowledge the only exception is functional programming, where the difference is almost never constant ...