A parser is something that takes an input string and spits out an AST.
A parser generator is something that takes a grammar and spits out a parser.
A static parser generator is something that takes a grammar ahead of time and spits out the source code for a parser.
A dynamic parser generator is something that takes a grammar at runtime and spits out a parser at runtime.
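To pin down what I mean by "dynamic", here is a rough sketch in Python. The grammar format and the `make_parser` name are made up for illustration; it's a naive backtracking recursive-descent interpreter, so it would choke on left-recursive rules:

```python
# Hypothetical toy grammar format: each rule maps a name to a list of
# alternatives; each alternative is a sequence of rule names or tokens.
def make_parser(grammar, start):
    """Take a grammar at runtime; return a parser (a closure)."""

    def parse_rule(rule, tokens, pos):
        # Anything that isn't a rule name is a literal token.
        if rule not in grammar:
            if pos < len(tokens) and tokens[pos] == rule:
                return rule, pos + 1
            return None
        # Try each alternative in order, backtracking on failure.
        for alternative in grammar[rule]:
            children, p = [], pos
            for part in alternative:
                result = parse_rule(part, tokens, p)
                if result is None:
                    break
                node, p = result
                children.append(node)
            else:  # every part of this alternative matched
                return (rule, children), p
        return None

    def parse(tokens):
        result = parse_rule(start, tokens, 0)
        if result is None or result[1] != len(tokens):
            raise SyntaxError("no parse")
        return result[0]  # the AST, as nested tuples

    return parse

# The grammar shows up at runtime, as plain data:
grammar = {
    "expr": [["term", "+", "expr"], ["term"]],
    "term": [["(", "expr", ")"], ["num"]],
    "num":  [["1"], ["2"], ["3"]],
}
parse = make_parser(grammar, "expr")
print(parse(["1", "+", "(", "2", "+", "3", ")"]))
```

Slow and naive, sure, but it takes the grammar at runtime and spits out the AST, which is exactly what I wanted.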
This blows my mind, because metaprogramming is generally harder than the runtime alternative. I understand why it is more efficient, and I understand why it is less error-prone; what I don't understand is how it came to be the norm.
Entering the world of parsers was frustrating. I couldn't understand why everyone kept pointing to Yacc or Bison. I just wanted my program to take an arbitrary EBNF and an arbitrary input string, and spit out the AST.
"Each language has a well defined EBNF available somewhere, in some standard "grammar file" format. I can write an editor to support any language!"
"Okay, not happening. What in the heck are parser combinators? They look cool, but there's no easy way to convert an EBNF to one."
"Okay... so I have the EBNF somehow, how on earth do I parse my text? What?! Generate an entire parser?!"
I've been thinking about this. Here is what I've come up with:
- Computers were slow, so compilers had to be lean. At the time, generating a parser's code ahead of time, and paying no runtime cost to interpret the grammar, seemed saner than writing one by hand.
- Parsers are hard to reason about, so much so that writing a dynamic parser generator is actually harder than writing a static one.
- Some individual decided that static parser generators were the way to go, wrote a successful implementation, and that implementation became the norm through sheer popularity.
I'm probably wrong, hence this question:
Why are static parser generators more prevalent than dynamic ones?