In J. Barkley Rosser's "Logic for Mathematicians", he uses a dot notation to avoid writing too many parentheses. I don't know when logicians started using this notation, but that book was first published in 1957, and J. G. P. Nicod's paper published in 1916 also uses it. So it clearly has a long history, although it is not favoured by modern logicians nowadays.
In the programming world, Lisp-like languages famously challenge programmers to keep track of a (huge!) number of parentheses. Haskell provides the `$` operator, which gives part of this functionality, but since you cannot write `2 * $ 3 + 4`, it is not as powerful as the dots (see the examples below). The C family of languages uses conventional operator precedence, but in some cases deeply nested parentheses are still required. So I wonder why no actual language uses this strategy. I tried, but I was not even able to write a grammar for it!
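To illustrate the limitation: `$` is just low-precedence function application, so it can only drop the parentheses around a function's final argument, not around an arbitrary operand. A small example (the bindings here are mine, just for illustration):

```
-- ($) is low-precedence application: f $ x  =  f x
main :: IO ()
main = do
  print $ 2 * (3 + 4)    -- same as print (2 * (3 + 4)): one pair of parens saved
  -- print (2 * $ 3 + 4) -- parse error: '$' needs a function on its left
  print $ (2 *) $ 3 + 4  -- the closest workaround is an operator section,
                         -- which reintroduces parens around the slice
```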
Let me show some examples in a toy calculator language with only two operators, `+` and `*`, where all terms are integers.
With this notation, a translator should pass the following test cases:
```
1 + 3 .* 2       =  (1 + 3) * 2
1 *. 3 + 2       =  1 * (3 + 2)
1 *. 2 +. 2      =  (1 * 2) + 2
2 *: 2 + 3 .* 4  =  2 * ((2 + 3) * 4)
```
I can't explain every detail of this notation here; it takes almost five pages of Rosser's book. But in general (and in short): a dot `.` before or after an operator acts as a "separator", pushing the two sides apart. A colon `:` is a stronger separator; three dots, written `.:` or `:.`, are stronger still, but weaker than `::`, and so on.
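If I read these rules correctly, the last test case above resolves in two steps: the single dot in `.*` outranks the bare `+`, and the colon in `*:` outranks everything else:

```
2 *: 2 + 3 .* 4
= 2 *: ((2 + 3) * 4)
= 2 * ((2 + 3) * 4)
```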
I wonder how we can write a grammar for the above language and then parse it. Also, although this notation is obsolete, it looks very clear to a programmer's eye, so what are its pros and cons?
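The closest I have come is not a grammar at all but a direct splitting procedure. If dots simply lower an operator's precedence (counting `.` as 1 and `:` as 2, so more dots means wider scope, with ties broken so that `+` splits before `*`, rightmost first for left associativity), then all four test cases come out right. Here is a minimal Haskell sketch of that reading; the rule and all names are my own guesses, not Rosser's exact definition:

```
import Data.Char (isDigit)
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Hypothetical AST and token types, not from Rosser's book.
data Expr = Lit Int | BinOp Char Expr Expr

data Tok = TNum Int | TOp Char Int   -- operator character and total dot count

-- Dot strength: '.' counts as 1 and ':' as 2, so ".:"/":." is 3, "::" is 4.
strength :: String -> Int
strength = sum . map (\c -> if c == ':' then 2 else 1)

-- Tokens are whitespace-separated; dots may sit on either side of an operator.
tokenize :: String -> [Tok]
tokenize = map tok . words
  where
    tok w
      | all isDigit w = TNum (read w)
      | otherwise     = let (pre, c : post) = span (`elem` ".:") w
                        in TOp c (strength pre + strength post)

-- Binding weakness: compare dot count first, then make '+' weaker than '*'.
weakness :: Tok -> (Int, Int)
weakness (TOp c d) = (d, if c == '+' then 1 else 0)
weakness (TNum _)  = (-1, -1)

-- Recursively split the token list at its weakest operator
-- (rightmost on ties, which gives left associativity).
parse :: [Tok] -> Expr
parse [TNum n] = Lit n
parse ts = BinOp c (parse (take i ts)) (parse (drop (i + 1) ts))
  where
    i       = maximumBy (comparing (\j -> (weakness (ts !! j), j)))
                        [j | (j, TOp _ _) <- zip [0 ..] ts]
    TOp c _ = ts !! i

-- Fully parenthesize, as in the test cases.
render :: Expr -> String
render (Lit n)       = show n
render (BinOp c l r) = "(" ++ render l ++ " " ++ [c] ++ " " ++ render r ++ ")"

main :: IO ()
main = mapM_ (putStrLn . render . parse . tokenize)
  [ "1 + 3 .* 2"        -- ((1 + 3) * 2)
  , "1 *. 3 + 2"        -- (1 * (3 + 2))
  , "1 *. 2 +. 2"       -- ((1 * 2) + 2)
  , "2 *: 2 + 3 .* 4"   -- (2 * ((2 + 3) * 4))
  ]
```

Running it prints the four fully parenthesized forms shown above. Whether this "weakest operator splits last" reading matches Rosser's full definition, especially for mixed groups like `.:` and `::`, is exactly what I am unsure about.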