1 vote

The full definition of "Turing Completeness" requires infinite memory.

Is there a better term than Turing Complete for a programming language and implementation that seems usably complete, except for being limited by a finite (say, 100-word, 16-bit, or 32-bit) address space?

Comment: stackoverflow.com/a/449170/110933 might shed some light. – davek

2 Answers

1 vote

I guess you can bring the limited memory into the definition. Something like:

A programming language for a given architecture (!) is Limitedly Turing Complete if for every Turing Machine there exists a program that either

a) simulates the Turing Machine and returns the same result (iff the Turing Machine halts), or

b) at some point exhausts at least one available limited resource (e.g. memory) and returns an arbitrary result.

The question is whether this intuitive definition really helps, or whether it is better to assume that your architecture has unlimited memory (even though it is actually finite). Note that you don't even have to try hard to satisfy Limited Turing Completeness (as defined above): if you simply go into an infinite loop that mallocs one byte each time, you have found your program for all Turing Machines.
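For concreteness, here is a minimal C sketch of that trivial program (returning 42 as the "arbitrary result" is my own arbitrary pick, not part of the definition):

    #include <stdlib.h>

    int main(void) {
        /* Never simulates anything: it just allocates one byte per iteration
           until some limited resource (heap, address space) is used up, which
           satisfies clause b) for every Turing Machine at once. */
        for (;;) {
            if (malloc(1) == NULL) {
                return 42;  /* "an arbitrary result" once memory runs out */
            }
        }
    }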

The problem seems to be that you cannot pin down implementation-specific properties. For instance, with 500K of RAM you may be able to express a program that computes 1+1, or maybe you cannot; who knows.

I'd argue that languages like Haskell and Brainfuck (yes, I'm serious) are actually Turing Complete because they abstract resources away, while languages like C++ are only Limitedly Turing Complete, because at some point the address space of pointers is exhausted and it is not possible to address any more data (e.g. to sort a list of 2^2^2^2^100 items).
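To make that last point concrete, here is a small C sketch (just an illustration; the numbers differ per implementation) that prints the hard ceilings baked into a given implementation: the width of a data pointer and SIZE_MAX, the largest size any single object may have:

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* How many bits does this implementation give a data pointer? */
        printf("pointer bits    : %zu\n", sizeof(void *) * CHAR_BIT);
        /* The largest size any single object may have here. */
        printf("max object size : %zu bytes\n", (size_t)SIZE_MAX);
        /* 2^(pointer bits) distinct addresses is a finite ceiling, vastly
           smaller than the 2^2^2^2^100 items mentioned above. */
        return 0;
    }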

0 votes

You could say that an implementation requires infinite memory to be truly Turing complete, but languages themselves have no concept of a memory limit. You can write a compiler for a million-bit machine or a 4-bit machine without changing the language.
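To illustrate (a sketch of my own, not tied to any particular existing compiler or interpreter): in the tiny Brainfuck-style interpreter below, the language itself — the eight commands — never mentions memory. The only place a limit appears is the single TAPE_CELLS constant of this particular implementation; change it from 30000 to 8 or to a million and the language stays exactly the same.

    #include <stddef.h>
    #include <stdio.h>

    /* Only this constant is implementation-specific; the language it
       interprets says nothing about memory at all. */
    #define TAPE_CELLS 30000

    static void run(const char *prog) {
        unsigned char tape[TAPE_CELLS] = {0};
        size_t ptr = 0;

        for (const char *pc = prog; *pc; pc++) {
            switch (*pc) {
            case '>': if (ptr + 1 < TAPE_CELLS) ptr++; break; /* this implementation clamps at its limit */
            case '<': if (ptr > 0) ptr--; break;
            case '+': tape[ptr]++; break;
            case '-': tape[ptr]--; break;
            case '.': putchar(tape[ptr]); break;
            case ',': { int c = getchar(); if (c != EOF) tape[ptr] = (unsigned char)c; } break;
            case '[':                      /* if cell is zero, skip to matching ] */
                if (tape[ptr] == 0) {
                    int depth = 1;
                    while (depth && pc[1]) {
                        pc++;
                        if (*pc == '[') depth++;
                        else if (*pc == ']') depth--;
                    }
                }
                break;
            case ']':                      /* if cell is nonzero, jump back to matching [ */
                if (tape[ptr] != 0) {
                    int depth = 1;
                    while (depth) {        /* assumes a well-formed program */
                        pc--;
                        if (*pc == ']') depth++;
                        else if (*pc == '[') depth--;
                    }
                }
                break;
            }
        }
    }

    int main(void) {
        run("++++++++[>++++++++<-]>+.");   /* prints 'A' (8 * 8 + 1 = 65) */
        putchar('\n');
        return 0;
    }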