16
votes

Edit 2: For a practical demonstration of why this remains important, look no further than Stack Overflow's own regex-caused outage today (2016-07-20)!

Edit: This question has evolved considerably since I first asked it. See below for two fast and compatible, though not fully featured, implementations. If you know of more or better implementations, please mention them - there still isn't an ideal implementation here yet!

Where can I find reliably fast Regex implementation?

Does anyone know of a non-backtracking, linear-time regex implementation (System.Text.RegularExpressions backtracks), either for .NET or native and reasonably usable from .NET? To be useful, it would need to:

  • have a worst case time-complexity of regex evaluation of O(m*n) where m is the length of the regex, and n the length of the input.
  • have a typical-case time-complexity of O(n), since almost no regular expressions actually trigger the exponential state-space - or, if they do, only on a minute subset of the input.
  • have a reasonable construction speed (i.e. no potentially exponential up-front DFA construction)
  • be intended for use by human beings, not mathematicians - e.g. I don't want to reimplement unicode character classes: .NET or PCRE style character classes are a plus.

Bonus Points:

  • bonus points for practicality if it implements stack-based features which let it handle nesting at the expense of consuming O(n+m) memory rather than O(m) memory.
  • bonus points for either capturing subexpressions or replacements (if there are an exponential number of possible subexpression matches, then enumerating all of them is inherently exponential - but enumerating the first few shouldn't be, and similarly for replacements). You can work around the absence of either feature by using the other, so having either one is sufficient.
  • lotsa bonus points for treating regexes as first class values (so you can take the union, intersection, concatenation, negation - in particular negation and intersection as those are very hard to do by string manipulation of the regex definition)
  • lazy matching, i.e. matching unbounded streams without loading them entirely into memory, is a plus. If the streams don't support seeking, capturing subexpressions and/or replacements aren't (in general) possible in a single pass.
  • Backreferences are out; they are fundamentally unreliable, i.e. matching with backreferences can always exhibit exponential behavior given pathological inputs.

Such algorithms exist (This is basic automata theory...) - but are there any practically usable implementations accessible from .NET?

Background: (you can skip this)

I like using regexes for quick and dirty text clean-ups, but I've repeatedly run into cases where the common backtracking NFA implementation used by Perl/Java/Python/.NET shows exponential behavior. These cases are unfortunately rather easy to trigger as soon as you start automatically generating your regular expressions. Even non-exponential performance can become exceedingly poor when you alternate between regexes that match the same prefix - for instance, in a really basic example, if you take a dictionary and turn it into a regular expression, expect terrible performance.
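To see how quickly this blows up, here is a toy illustration (my own sketch, not any engine's actual code): a hand-rolled backtracking matcher for the single pattern (a|aa)*b, instrumented with a step counter. On inputs like "aaa…a" - which cannot match, so every decomposition of the a's gets tried - the step count grows exponentially, exactly the behavior the backtracking engines above exhibit:

```python
# Toy backtracking matcher for the single regex (a|aa)*b.
# Each call decides whether the (a|aa)* loop consumes one 'a' or two;
# on a non-matching input, every combination ends up being explored.

def backtrack_match(s, i=0, steps=None):
    steps[0] += 1
    if i < len(s) and s[i] == 'b':
        return i == len(s) - 1                       # 'b' must be the last character
    if i < len(s) and s[i] == 'a':
        if backtrack_match(s, i + 1, steps):         # loop consumed "a"
            return True
        if i + 1 < len(s) and s[i + 1] == 'a':
            return backtrack_match(s, i + 2, steps)  # loop consumed "aa"
    return False

for n in (10, 15, 20):
    steps = [0]
    backtrack_match('a' * n, steps=steps)            # no trailing 'b': cannot match
    print(n, steps[0])                               # step count grows like Fibonacci
```

Matching inputs (with a trailing 'b') succeed on the first path in linear time; it is the failing inputs that trigger the full exponential search.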

For a quick overview of why better implementations exist and have since the 60s, see Regular Expression Matching Can Be Simple And Fast.
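The core trick from that article fits in a few lines: instead of recursing on each alternative, track the set of NFA states that are simultaneously alive and advance all of them one character at a time. Work per character is bounded by the number of NFA states, so matching is linear in the input. A minimal sketch (my own, with a hand-built NFA for the same kind of pattern, (a|aa)*b):

```python
# Thompson-style simulation: keep the *set* of live NFA states and step
# them all forward per input character -- no backtracking, linear time.
# Hand-built NFA for (a|aa)*b:  0 -a-> {0,1},  1 -a-> {0},  0 -b-> {2 = accept}

DELTA = {0: {'a': {0, 1}, 'b': {2}}, 1: {'a': {0}}}

def nfa_match(s):
    live = {0}
    for c in s:
        live = {t for st in live for t in DELTA.get(st, {}).get(c, ())}
        if not live:                  # no state alive: fail early
            return False
    return 2 in live                  # accept iff the final state is alive

print(nfa_match('a' * 5000))          # False, instantly -- backtracking would explode
print(nfa_match('a' * 5000 + 'b'))    # True
```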

Not quite practical options:

  • Almost ideal: the FSA toolkit. It can compile regexes to fast C implementations of DFAs and NFAs, allows transducers(!) too, and has first-class regexes (encapsulation, yay!) including syntax for intersection and parametrization. But it's in Prolog... (why is something with this kind of practical feature set not available in a mainstream language???)
  • Fast but impractical: a full parser generator such as the excellent ANTLR generally supports reliably fast regexes. However, ANTLR's syntax is far more verbose, and it of course permits constructs that may not generate valid parsers, so you'd need to find some safe subset.

Good implementations:

  • RE2 - an open-source library from Google aiming for reasonable PCRE compatibility, minus backreferences. Given the author, I think this is the successor to the Unix port of Plan 9's regex library.
  • TRE - also mostly PCRE-compatible, and it even does backreferences, though using them forfeits the speed guarantees. It also has a mega-nifty approximate matching mode!

Unfortunately both implementations are C++ and would require interop to use from .NET.

5
That sounds like you may be writing your regexes in an inefficient manner. – Brad Gilbert
The whole point is that there exist implementations since the 60s (!) for which no regexes are inefficient in this sense. All regular expressions (without backreferences) can be evaluated in linear time - I'm looking for an implementation that dumps backreferences and gives me reliable performance instead. – Eamon Nerbonne
See Regular Expression Matching Can Be Simple And Fast (swtch.com/~rsc/regexp/regexp1.html) for an explanation. – Eamon Nerbonne

5 Answers

11
votes

First, what you're suggesting is possible, and you certainly know your subject. You also know that the trade-off of not using a back-referencing implementation is memory. If you control your environment enough, this is likely a reasonable approach.

The only thing I will comment on before continuing is that I would encourage you to question the choice of using regex at all. You are clearly more familiar with your specific problem and what you're trying to solve, so only you can answer that. I don't think ANTLR would be a good alternative; however, a home-brew rules engine (if limited in scope) can be highly tuned to your specific needs. It all depends on your specific problem.

For those reading this and 'missing the point', here is some background reading:

From the same site, there are a number of implementations linked on this page.

The gist of the entire discussion in the above article is that the best answer is to use both approaches. To that end, the only widely used implementation I'm aware of is the one used by the Tcl language. As I understand it, it was originally written by Henry Spencer and employs this hybrid approach. There have been a few attempts at porting it to a C library, though I'm not aware of any that are in wide use. Walter Waldo's and Thomas Lackner's are both mentioned and linked here. Also mentioned is the Boost library, though I'm not sure of its implementation. You can also look at the Tcl code itself (linked from their site) and work from there.
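For a flavor of what such a hybrid looks like, here is a minimal sketch (my own illustration, not Spencer's code) of lazy DFA construction: the matcher runs the multi-state NFA simulation, but caches each (state-set, character) transition the first time it is computed, so the input effectively flows through a DFA that is only built as far as the input actually demands:

```python
# Lazy DFA: DFA states (= frozensets of NFA states) and their transitions
# are created on demand during matching and cached for reuse.

def make_lazy_matcher(delta, start, accepting):
    cache = {}                                    # frozenset -> {char: frozenset}
    def match(text):
        state = frozenset([start])
        for c in text:
            trans = cache.setdefault(state, {})
            if c not in trans:                    # one step of subset construction
                trans[c] = frozenset(t for s in state
                                     for t in delta.get(s, {}).get(c, ()))
            state = trans[c]
        return bool(state & accepting)
    return match

# Example NFA for (a|b)*ab:  0 loops on a/b,  0 -a-> 1,  1 -b-> 2 (accept)
match = make_lazy_matcher({0: {'a': {0, 1}, 'b': {0}}, 1: {'b': {2}}}, 0, {2})
print(match('abab'), match('abba'))               # True False
```

This keeps the DFA's per-character speed on repetitive input while bounding memory: only reachable, actually-visited states are ever materialized (a real implementation would also cap or evict the cache).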

In short, I'd go with TRE or Plan 9 as these are both actively supported.

Obviously none of these are C#/.NET, and I'm not aware of one that is.

3
votes

If you can handle using unsafe code (and the licensing issue) you could take the implementation from this TRE windows port.

You might be able to use this directly with P/Invoke and explicit layout structs for the following:

typedef int regoff_t;
typedef struct {
  size_t re_nsub;  /* Number of parenthesized subexpressions. */
  void *value;     /* For internal use only. */
} regex_t;

typedef struct {
  regoff_t rm_so;
  regoff_t rm_eo;
} regmatch_t;


typedef enum {
  REG_OK = 0,       /* No error. */
  /* POSIX regcomp() return error codes.  (In the order listed in the
     standard.)  */
  REG_NOMATCH,      /* No match. */
  REG_BADPAT,       /* Invalid regexp. */
  REG_ECOLLATE,     /* Unknown collating element. */
  REG_ECTYPE,       /* Unknown character class name. */
  REG_EESCAPE,      /* Trailing backslash. */
  REG_ESUBREG,      /* Invalid back reference. */
  REG_EBRACK,       /* "[]" imbalance */
  REG_EPAREN,       /* "\(\)" or "()" imbalance */
  REG_EBRACE,       /* "\{\}" or "{}" imbalance */
  REG_BADBR,        /* Invalid content of {} */
  REG_ERANGE,       /* Invalid use of range operator */
  REG_ESPACE,       /* Out of memory.  */
  REG_BADRPT            /* Invalid use of repetition operators. */
} reg_errcode_t;

Then use the exports capable of handling strings with embedded nulls (with wide character support):

/* Versions with a maximum length argument and therefore the capability to
   handle null characters in the middle of the strings (not in POSIX.2). */
int regwncomp(regex_t *preg, const wchar_t *regex, size_t len, int cflags);

int regwnexec(const regex_t *preg, const wchar_t *string, size_t len,
      size_t nmatch, regmatch_t pmatch[], int eflags);

Alternatively, wrap it via a C++/CLI solution for easier translation and more flexibility (I would certainly suggest this is sensible if you are comfortable with C++/CLI).

1
votes

Where can I find reliably fast Regex implementation?

You can't.

Someone has to say it: given the restrictions, the answer is surely that you can't - it's unlikely you will find an implementation matching your constraints.

By the way, I am sure you have already tried it, but have you compiled the regex (with the option that outputs to an assembly, i.e. Regex.CompileToAssembly)? I ask because:

if you have a complex Regex and millions of short strings to test

0
votes

Consider how DFAs are created from regular expressions:

You start with a regular expression. Each operation (concatenation, union, Kleene closure) becomes states and transitions in an NFA. Each state of the resulting DFA corresponds to a set of NFA states (an element of the power set). The number of NFA states is linear in the size of the regular expression, so the number of DFA states is, in the worst case, exponential in the size of the regular expression.

So your first constraint,

have a worst case time-complexity of regex evaluation of O(m*n) where m is the length of the regex, and n the length of the input

is impossible. In the worst case, the regex compiles to a DFA with 2^m states, which cannot be built in linear time.

This is the case with all but the simplest regular expressions - ones so simple that you could more easily just write a quick .Contains call instead.
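The blowup is easy to observe directly. The classic witness is the language (a|b)*a(a|b)^(k-1) - "the k-th character from the end is an a": its NFA needs only k+1 states, but any DFA needs 2^k, since it must remember the last k characters. A small subset-construction sketch (illustration only, hand-built NFA):

```python
# Count reachable DFA states produced by subset construction for the
# NFA of (a|b)*a(a|b)^(k-1): k+1 NFA states become 2^k DFA states.

def nfa_for(k):
    delta = {0: {'a': {0, 1}, 'b': {0}}}         # state 0: the (a|b)* loop
    for i in range(1, k):
        delta[i] = {'a': {i + 1}, 'b': {i + 1}}  # count off k-1 more characters
    delta[k] = {'a': set(), 'b': set()}          # accept state, no transitions out
    return delta

def dfa_state_count(delta, start=0):
    seen = {frozenset([start])}
    stack = [frozenset([start])]
    while stack:
        subset = stack.pop()
        for c in 'ab':
            nxt = frozenset(t for s in subset for t in delta[s][c])
            if nxt and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return len(seen)

for k in range(1, 7):
    print(k, dfa_state_count(nfa_for(k)))        # 2, 4, 8, 16, 32, 64
```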

0
votes

A quick comment: just because you can avoid up-front DFA construction by simulating multiple NFA states at once does not mean you avoid the work of the NFA-to-DFA conversion - you are merely distributing that effort over the search itself. That is, worst-case performance is unchanged.