I'm using list labels to gather tokens, and semantic predicates to validate sequences, in my parser grammar. For example:
line
    : (text+=WORD | text+=NUMBER)+ ((BLANK | SKIP)+ (text+=WORD | text+=NUMBER)+)+
      {Parser.validateContext(_localctx)}?
      (BLANK | SKIP)*
    ;
where
WORD: [\u0021-\u002F\u003A-\u007E]+; // printable ASCII characters (excluding SP and numbers)
NUMBER: [\u0030-\u0039]+; // printable ASCII number characters
BLANK: '\u0020';
SKIP: '\u0020\u0020' | '\t'; // two SPs or an HT symbol
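For context, I run the parser roughly like this (the grammar name Lines and the input string are just placeholders for this post; imports from org.antlr.v4.runtime are omitted):
// Minimal driver sketch; "Lines" stands in for the real grammar name.
CharStream input = CharStreams.fromString("some reference sequence");
LinesLexer lexer = new LinesLexer(input);
LinesParser parser = new LinesParser(new CommonTokenStream(lexer));
LinesParser.LineContext ctx = parser.line(); // the "no viable alternative" error is reported during this call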
The part of Parser.validateContext that validates the line rule is implemented like this:
private static boolean validateContext(ParserRuleContext context) {
    // ... other contexts
    if (context instanceof LineContext)
        return "<reference-sequence>".equals(Parser.joinTokens(((LineContext) context).text, " "));
    return false;
}
where Parser.joinTokens is defined as:
private static String joinTokens(java.util.List<org.antlr.v4.runtime.Token> tokens, String delimiter) {
    StringBuilder builder = new StringBuilder();
    int i = 0, n;
    if ((n = tokens.size()) == 0) return "";
    builder.append(tokens.get(0).getText());
    while (++i < n) builder.append(delimiter).append(tokens.get(i).getText());
    return builder.toString();
}
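(As an aside, the same method could presumably be written more compactly with the Streams API, assuming Java 8 or later is available; fully qualified names keep the grammar's header section free of extra imports:)
// Streams-based equivalent of joinTokens (assumes Java 8+).
private static String joinTokens(java.util.List<org.antlr.v4.runtime.Token> tokens, String delimiter) {
    return tokens.stream()
                 .map(org.antlr.v4.runtime.Token::getText)
                 .collect(java.util.stream.Collectors.joining(delimiter));
}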
Both methods are put in a @parser::members clause at the beginning of the grammar file.
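For reference, the top of the grammar file is laid out roughly like this (the grammar name is again a placeholder):
grammar Lines;

@parser::members {
    // validateContext(...) and joinTokens(...) from above go here
}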
My problem is this: sometimes the _localctx reference is null and I receive "no viable alternative" errors. These are probably caused by the failing predicate guarding the rule while no other alternative is available.
Is there a reason (potentially an error on my part) why _localctx would be null?
UPDATE: The answer to this question seems to suggest that semantic predicates are also evaluated during prediction. Maybe no context object is created during prediction, so _localctx is set to null.
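If that is what happens, one workaround I can imagine (though I'm not sure it is the right fix) would be to let the predicate tolerate a missing context:
{_localctx == null || Parser.validateContext(_localctx)}?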