diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..fc8a5de
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,165 @@
+ GNU LESSER GENERAL PUBLIC LICENSE
+ Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc.
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+
+ This version of the GNU Lesser General Public License incorporates
+the terms and conditions of version 3 of the GNU General Public
+License, supplemented by the additional permissions listed below.
+
+ 0. Additional Definitions.
+
+ As used herein, "this License" refers to version 3 of the GNU Lesser
+General Public License, and the "GNU GPL" refers to version 3 of the GNU
+General Public License.
+
+ "The Library" refers to a covered work governed by this License,
+other than an Application or a Combined Work as defined below.
+
+ An "Application" is any work that makes use of an interface provided
+by the Library, but which is not otherwise based on the Library.
+Defining a subclass of a class defined by the Library is deemed a mode
+of using an interface provided by the Library.
+
+ A "Combined Work" is a work produced by combining or linking an
+Application with the Library. The particular version of the Library
+with which the Combined Work was made is also called the "Linked
+Version".
+
+ The "Minimal Corresponding Source" for a Combined Work means the
+Corresponding Source for the Combined Work, excluding any source code
+for portions of the Combined Work that, considered in isolation, are
+based on the Application, and not on the Linked Version.
+
+ The "Corresponding Application Code" for a Combined Work means the
+object code and/or source code for the Application, including any data
+and utility programs needed for reproducing the Combined Work from the
+Application, but excluding the System Libraries of the Combined Work.
+
+ 1. Exception to Section 3 of the GNU GPL.
+
+ You may convey a covered work under sections 3 and 4 of this License
+without being bound by section 3 of the GNU GPL.
+
+ 2. Conveying Modified Versions.
+
+ If you modify a copy of the Library, and, in your modifications, a
+facility refers to a function or data to be supplied by an Application
+that uses the facility (other than as an argument passed when the
+facility is invoked), then you may convey a copy of the modified
+version:
+
+ a) under this License, provided that you make a good faith effort to
+ ensure that, in the event an Application does not supply the
+ function or data, the facility still operates, and performs
+ whatever part of its purpose remains meaningful, or
+
+ b) under the GNU GPL, with none of the additional permissions of
+ this License applicable to that copy.
+
+ 3. Object Code Incorporating Material from Library Header Files.
+
+ The object code form of an Application may incorporate material from
+a header file that is part of the Library. You may convey such object
+code under terms of your choice, provided that, if the incorporated
+material is not limited to numerical parameters, data structure
+layouts and accessors, or small macros, inline functions and templates
+(ten or fewer lines in length), you do both of the following:
+
+ a) Give prominent notice with each copy of the object code that the
+ Library is used in it and that the Library and its use are
+ covered by this License.
+
+ b) Accompany the object code with a copy of the GNU GPL and this license
+ document.
+
+ 4. Combined Works.
+
+ You may convey a Combined Work under terms of your choice that,
+taken together, effectively do not restrict modification of the
+portions of the Library contained in the Combined Work and reverse
+engineering for debugging such modifications, if you also do each of
+the following:
+
+ a) Give prominent notice with each copy of the Combined Work that
+ the Library is used in it and that the Library and its use are
+ covered by this License.
+
+ b) Accompany the Combined Work with a copy of the GNU GPL and this license
+ document.
+
+ c) For a Combined Work that displays copyright notices during
+ execution, include the copyright notice for the Library among
+ these notices, as well as a reference directing the user to the
+ copies of the GNU GPL and this license document.
+
+ d) Do one of the following:
+
+ 0) Convey the Minimal Corresponding Source under the terms of this
+ License, and the Corresponding Application Code in a form
+ suitable for, and under terms that permit, the user to
+ recombine or relink the Application with a modified version of
+ the Linked Version to produce a modified Combined Work, in the
+ manner specified by section 6 of the GNU GPL for conveying
+ Corresponding Source.
+
+ 1) Use a suitable shared library mechanism for linking with the
+ Library. A suitable mechanism is one that (a) uses at run time
+ a copy of the Library already present on the user's computer
+ system, and (b) will operate properly with a modified version
+ of the Library that is interface-compatible with the Linked
+ Version.
+
+ e) Provide Installation Information, but only if you would otherwise
+ be required to provide such information under section 6 of the
+ GNU GPL, and only to the extent that such information is
+ necessary to install and execute a modified version of the
+ Combined Work produced by recombining or relinking the
+ Application with a modified version of the Linked Version. (If
+ you use option 4d0, the Installation Information must accompany
+ the Minimal Corresponding Source and Corresponding Application
+ Code. If you use option 4d1, you must provide the Installation
+ Information in the manner specified by section 6 of the GNU GPL
+ for conveying Corresponding Source.)
+
+ 5. Combined Libraries.
+
+ You may place library facilities that are a work based on the
+Library side by side in a single library together with other library
+facilities that are not Applications and are not covered by this
+License, and convey such a combined library under terms of your
+choice, if you do both of the following:
+
+ a) Accompany the combined library with a copy of the same work based
+ on the Library, uncombined with any other library facilities,
+ conveyed under the terms of this License.
+
+ b) Give prominent notice with the combined library that part of it
+ is a work based on the Library, and explaining where to find the
+ accompanying uncombined form of the same work.
+
+ 6. Revised Versions of the GNU Lesser General Public License.
+
+ The Free Software Foundation may publish revised and/or new versions
+of the GNU Lesser General Public License from time to time. Such new
+versions will be similar in spirit to the present version, but may
+differ in detail to address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the
+Library as you received it specifies that a certain numbered version
+of the GNU Lesser General Public License "or any later version"
+applies to it, you have the option of following the terms and
+conditions either of that published version or of any later version
+published by the Free Software Foundation. If the Library as you
+received it does not specify a version number of the GNU Lesser
+General Public License, you may choose any version of the GNU Lesser
+General Public License ever published by the Free Software Foundation.
+
+ If the Library as you received it specifies that a proxy can decide
+whether future versions of the GNU Lesser General Public License shall
+apply, that proxy's public statement of acceptance of any version is
+permanent authorization for you to choose that version for the
+Library.
diff --git a/MANIFEST.MF b/MANIFEST.MF
new file mode 100644
index 0000000..262b5dc
--- /dev/null
+++ b/MANIFEST.MF
@@ -0,0 +1,3 @@
+Manifest-Version: 1.0
+Main-Class: net.sourceforge.jFuzzyLogic.JFuzzyLogic
+
diff --git a/README.txt b/README.txt
new file mode 100644
index 0000000..fd4cd21
--- /dev/null
+++ b/README.txt
@@ -0,0 +1,4 @@
+
+Documentation
+
+ http://jfuzzylogic.sourceforge.net
diff --git a/README_release.txt b/README_release.txt
new file mode 100644
index 0000000..9dd1640
--- /dev/null
+++ b/README_release.txt
@@ -0,0 +1,73 @@
+
+
+ Release instructions
+ --------------------
+
+
+Main JAR file
+-------------
+
+ 1) Create jFuzzyLogic.jar file
+
+ Eclipse -> Package explorer -> jFuzzyLogic -> Select file jFuzzyLogic.jardesc -> Right click "Create JAR"
+
+  2) Upload the JAR file to SourceForge (use the sf.net menu)
+
+
+HTML pages
+----------
+
+ 1) Upload HTML pages to SourceForge
+
+ cd ~/workspace/jFuzzyLogic
+ scp index.html pcingola,jfuzzylogic@frs.sourceforge.net:htdocs/
+
+ cd ~/workspace/jFuzzyLogic/html
+ scp *.{html,css} pcingola,jfuzzylogic@frs.sourceforge.net:htdocs/html
+ scp images/*.png pcingola,jfuzzylogic@frs.sourceforge.net:htdocs/html/images/
+ scp videos/*.swf pcingola,jfuzzylogic@frs.sourceforge.net:htdocs/html/videos/
+  scp -r assets dist fcl pdf pcingola,jfuzzylogic@frs.sourceforge.net:htdocs/html/
+
+Eclipse plugin
+--------------
+
+  1) Create a small jFuzzyLogic.jar file (it's better to use a small JAR than the big one that contains all the source files)
+
+ cd ~/workspace/jFuzzyLogic/
+ ant
+
+ # Check the JAR file
+ cd
+ java -jar jFuzzyLogic.jar
+
+
+ 2) Copy jFuzzyLogic.jar file to UI project
+
+ cp jFuzzyLogic.jar net.sourceforge.jFuzzyLogic.Fcl.ui/lib/jFuzzyLogic.jar
+
+ 3) Build eclipse update site
+
+ In Eclipse:
+ - In package explorer, refresh all net.sourceforge.jFuzzyLogic.Fcl.* projects
+
+ - Open the net.sourceforge.jFuzzyLogic.Fcl.updateSite project
+		- Delete the contents of the 'plugins' and 'features' dirs
+
+ cd ~/workspace/net.sourceforge.jFuzzyLogic.Fcl.updateSite
+ rm -vf *.jar plugins/*.jar features/*.jar
+
+ - Open site.xml file
+ - Go to "Site Map" tab
+
+	- Open the jFuzzyLogic category and remove the 'feature' (called something like "net.sourceforge.jFuzzyLogic.Fcl.sdk_1.1.0.201212101535.jar"),
+	  then add it again (just to be sure)
+
+ - Click the "Buid All" button
+
+ - Refresh the project (you should see the JAR files in the plugin folders now).
+
+ 4) Upload Eclipse plugin files to SourceForge (Eclipse update site)
+
+ cd ~/workspace/net.sourceforge.jFuzzyLogic.Fcl.updateSite
+ scp -r . pcingola,jfuzzylogic@frs.sourceforge.net:htdocs/eclipse/
+
diff --git a/antlr_3_1_source/Tool.java b/antlr_3_1_source/Tool.java
new file mode 100644
index 0000000..a2bbe5d
--- /dev/null
+++ b/antlr_3_1_source/Tool.java
@@ -0,0 +1,659 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2008 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr;
+
+import org.antlr.analysis.*;
+import org.antlr.codegen.CodeGenerator;
+import org.antlr.runtime.misc.Stats;
+import org.antlr.tool.*;
+
+import java.io.*;
+import java.util.*;
+
+/** The main ANTLR entry point. Read a grammar and generate a parser. */
+public class Tool {
+ public static final String VERSION = "3.1";
+
+ public static final String UNINITIALIZED_DIR = "";
+
+ // Input parameters / option
+
+ protected List grammarFileNames = new ArrayList();
+ protected boolean generate_NFA_dot = false;
+ protected boolean generate_DFA_dot = false;
+ protected String outputDirectory = UNINITIALIZED_DIR;
+ protected String libDirectory = ".";
+ protected boolean debug = false;
+ protected boolean trace = false;
+ protected boolean profile = false;
+ protected boolean report = false;
+ protected boolean printGrammar = false;
+ protected boolean depend = false;
+ protected boolean forceAllFilesToOutputDir = false;
+ protected boolean deleteTempLexer = true;
+
+ // the internal options are for my use on the command line during dev
+
+ public static boolean internalOption_PrintGrammarTree = false;
+ public static boolean internalOption_PrintDFA = false;
+ public static boolean internalOption_ShowNFAConfigsInDFA = false;
+ public static boolean internalOption_watchNFAConversion = false;
+
+ public static void main(String[] args) {
+ ErrorManager.info("ANTLR Parser Generator Version " +
+ VERSION + " (August 12, 2008) 1989-2008");
+ Tool antlr = new Tool(args);
+ antlr.process();
+ if ( ErrorManager.getNumErrors() > 0 ) {
+ System.exit(1);
+ }
+ System.exit(0);
+ }
+
+ public Tool() {
+ }
+
+ public Tool(String[] args) {
+ processArgs(args);
+ }
+
+ public void processArgs(String[] args) {
+ if ( args==null || args.length==0 ) {
+ help();
+ return;
+ }
+ for (int i = 0; i < args.length; i++) {
+ if (args[i].equals("-o") || args[i].equals("-fo")) {
+ if (i + 1 >= args.length) {
+ System.err.println("missing output directory with -fo/-o option; ignoring");
+ }
+ else {
+ if ( args[i].equals("-fo") ) { // force output into dir
+ forceAllFilesToOutputDir = true;
+ }
+ i++;
+ outputDirectory = args[i];
+ if ( outputDirectory.endsWith("/") ||
+ outputDirectory.endsWith("\\") )
+ {
+ outputDirectory =
+ outputDirectory.substring(0,outputDirectory.length()-1);
+ }
+ File outDir = new File(outputDirectory);
+ if( outDir.exists() && !outDir.isDirectory() ) {
+ ErrorManager.error(ErrorManager.MSG_OUTPUT_DIR_IS_FILE,outputDirectory);
+ libDirectory = ".";
+ }
+ }
+ }
+ else if (args[i].equals("-lib")) {
+ if (i + 1 >= args.length) {
+ System.err.println("missing library directory with -lib option; ignoring");
+ }
+ else {
+ i++;
+ libDirectory = args[i];
+ if ( libDirectory.endsWith("/") ||
+ libDirectory.endsWith("\\") )
+ {
+ libDirectory =
+ libDirectory.substring(0,libDirectory.length()-1);
+ }
+ File outDir = new File(libDirectory);
+ if( !outDir.exists() ) {
+ ErrorManager.error(ErrorManager.MSG_DIR_NOT_FOUND,libDirectory);
+ libDirectory = ".";
+ }
+ }
+ }
+ else if (args[i].equals("-nfa")) {
+ generate_NFA_dot=true;
+ }
+ else if (args[i].equals("-dfa")) {
+ generate_DFA_dot=true;
+ }
+ else if (args[i].equals("-debug")) {
+ debug=true;
+ }
+ else if (args[i].equals("-trace")) {
+ trace=true;
+ }
+ else if (args[i].equals("-report")) {
+ report=true;
+ }
+ else if (args[i].equals("-profile")) {
+ profile=true;
+ }
+ else if (args[i].equals("-print")) {
+ printGrammar = true;
+ }
+ else if (args[i].equals("-depend")) {
+ depend=true;
+ }
+ else if (args[i].equals("-message-format")) {
+ if (i + 1 >= args.length) {
+ System.err.println("missing output format with -message-format option; using default");
+ }
+ else {
+ i++;
+ ErrorManager.setFormat(args[i]);
+ }
+ }
+ else if (args[i].equals("-Xgrtree")) {
+ internalOption_PrintGrammarTree=true; // print grammar tree
+ }
+ else if (args[i].equals("-Xdfa")) {
+ internalOption_PrintDFA=true;
+ }
+ else if (args[i].equals("-Xnoprune")) {
+ DFAOptimizer.PRUNE_EBNF_EXIT_BRANCHES=false;
+ }
+ else if (args[i].equals("-Xnocollapse")) {
+ DFAOptimizer.COLLAPSE_ALL_PARALLEL_EDGES=false;
+ }
+ else if (args[i].equals("-Xdbgconversion")) {
+ NFAToDFAConverter.debug = true;
+ }
+ else if (args[i].equals("-Xmultithreaded")) {
+ NFAToDFAConverter.SINGLE_THREADED_NFA_CONVERSION = false;
+ }
+ else if (args[i].equals("-Xnomergestopstates")) {
+ DFAOptimizer.MERGE_STOP_STATES = false;
+ }
+ else if (args[i].equals("-Xdfaverbose")) {
+ internalOption_ShowNFAConfigsInDFA = true;
+ }
+ else if (args[i].equals("-Xwatchconversion")) {
+ internalOption_watchNFAConversion = true;
+ }
+ else if (args[i].equals("-XdbgST")) {
+ CodeGenerator.EMIT_TEMPLATE_DELIMITERS = true;
+ }
+ else if (args[i].equals("-Xmaxinlinedfastates")) {
+ if (i + 1 >= args.length) {
+ System.err.println("missing max inline dfa states -Xmaxinlinedfastates option; ignoring");
+ }
+ else {
+ i++;
+ CodeGenerator.MAX_ACYCLIC_DFA_STATES_INLINE = Integer.parseInt(args[i]);
+ }
+ }
+ else if (args[i].equals("-Xm")) {
+ if (i + 1 >= args.length) {
+ System.err.println("missing max recursion with -Xm option; ignoring");
+ }
+ else {
+ i++;
+ NFAContext.MAX_SAME_RULE_INVOCATIONS_PER_NFA_CONFIG_STACK = Integer.parseInt(args[i]);
+ }
+ }
+ else if (args[i].equals("-Xmaxdfaedges")) {
+ if (i + 1 >= args.length) {
+ System.err.println("missing max number of edges with -Xmaxdfaedges option; ignoring");
+ }
+ else {
+ i++;
+ DFA.MAX_STATE_TRANSITIONS_FOR_TABLE = Integer.parseInt(args[i]);
+ }
+ }
+ else if (args[i].equals("-Xconversiontimeout")) {
+ if (i + 1 >= args.length) {
+ System.err.println("missing max time in ms -Xconversiontimeout option; ignoring");
+ }
+ else {
+ i++;
+ DFA.MAX_TIME_PER_DFA_CREATION = Integer.parseInt(args[i]);
+ }
+ }
+ else if (args[i].equals("-Xnfastates")) {
+ DecisionProbe.verbose=true;
+ }
+ else if (args[i].equals("-X")) {
+ Xhelp();
+ }
+ else {
+ if (args[i].charAt(0) != '-') {
+ // Must be the grammar file
+ grammarFileNames.add(args[i]);
+ }
+ }
+ }
+ }
+
+ /*
+ protected void checkForInvalidArguments(String[] args, BitSet cmdLineArgValid) {
+ // check for invalid command line args
+ for (int a = 0; a < args.length; a++) {
+ if (!cmdLineArgValid.member(a)) {
+ System.err.println("invalid command-line argument: " + args[a] + "; ignored");
+ }
+ }
+ }
+ */
+
+ public void process() {
+ int numFiles = grammarFileNames.size();
+ boolean exceptionWhenWritingLexerFile = false;
+ String lexerGrammarFileName = null; // necessary at this scope to have access in the catch below
+ for (int i = 0; i < numFiles; i++) {
+ String grammarFileName = (String) grammarFileNames.get(i);
+ if ( numFiles > 1 && !depend ) {
+ System.out.println(grammarFileName);
+ }
+ try {
+ if ( depend ) {
+ BuildDependencyGenerator dep =
+ new BuildDependencyGenerator(this, grammarFileName);
+ List outputFiles = dep.getGeneratedFileList();
+ List dependents = dep.getDependenciesFileList();
+ //System.out.println("output: "+outputFiles);
+ //System.out.println("dependents: "+dependents);
+ System.out.println(dep.getDependencies());
+ continue;
+ }
+ Grammar grammar = getRootGrammar(grammarFileName);
+ // we now have all grammars read in as ASTs
+ // (i.e., root and all delegates)
+ grammar.composite.assignTokenTypes();
+ grammar.composite.defineGrammarSymbols();
+ grammar.composite.createNFAs();
+
+ generateRecognizer(grammar);
+
+ if ( printGrammar ) {
+ grammar.printGrammar(System.out);
+ }
+
+ if ( report ) {
+ GrammarReport report = new GrammarReport(grammar);
+ System.out.println(report.toString());
+ // print out a backtracking report too (that is not encoded into log)
+ System.out.println(report.getBacktrackingReport());
+ // same for aborted NFA->DFA conversions
+ System.out.println(report.getAnalysisTimeoutReport());
+ }
+ if ( profile ) {
+ GrammarReport report = new GrammarReport(grammar);
+ Stats.writeReport(GrammarReport.GRAMMAR_STATS_FILENAME,
+ report.toNotifyString());
+ }
+
+ // now handle the lexer if one was created for a merged spec
+ String lexerGrammarStr = grammar.getLexerGrammar();
+ //System.out.println("lexer grammar:\n"+lexerGrammarStr);
+ if ( grammar.type==Grammar.COMBINED && lexerGrammarStr!=null ) {
+ lexerGrammarFileName = grammar.getImplicitlyGeneratedLexerFileName();
+ try {
+ Writer w = getOutputFile(grammar,lexerGrammarFileName);
+ w.write(lexerGrammarStr);
+ w.close();
+ }
+ catch (IOException e) {
+ // emit different error message when creating the implicit lexer fails
+ // due to write permission error
+ exceptionWhenWritingLexerFile = true;
+ throw e;
+ }
+ try {
+ StringReader sr = new StringReader(lexerGrammarStr);
+ Grammar lexerGrammar = new Grammar();
+ lexerGrammar.composite.watchNFAConversion = internalOption_watchNFAConversion;
+ lexerGrammar.implicitLexer = true;
+ lexerGrammar.setTool(this);
+ File lexerGrammarFullFile =
+ new File(getFileDirectory(lexerGrammarFileName),lexerGrammarFileName);
+ lexerGrammar.setFileName(lexerGrammarFullFile.toString());
+
+ lexerGrammar.importTokenVocabulary(grammar);
+ lexerGrammar.parseAndBuildAST(sr);
+
+ sr.close();
+
+ lexerGrammar.composite.assignTokenTypes();
+ lexerGrammar.composite.defineGrammarSymbols();
+ lexerGrammar.composite.createNFAs();
+
+ generateRecognizer(lexerGrammar);
+ }
+ finally {
+ // make sure we clean up
+ if ( deleteTempLexer ) {
+ File outputDir = getOutputDirectory(lexerGrammarFileName);
+ File outputFile = new File(outputDir, lexerGrammarFileName);
+ outputFile.delete();
+ }
+ }
+ }
+ }
+ catch (IOException e) {
+ if (exceptionWhenWritingLexerFile) {
+ ErrorManager.error(ErrorManager.MSG_CANNOT_WRITE_FILE,
+ lexerGrammarFileName, e);
+ } else {
+ ErrorManager.error(ErrorManager.MSG_CANNOT_OPEN_FILE,
+ grammarFileName);
+ }
+ }
+ catch (Exception e) {
+ ErrorManager.error(ErrorManager.MSG_INTERNAL_ERROR, grammarFileName, e);
+ }
+ /*
+ finally {
+ System.out.println("creates="+ Interval.creates);
+ System.out.println("hits="+ Interval.hits);
+ System.out.println("misses="+ Interval.misses);
+ System.out.println("outOfRange="+ Interval.outOfRange);
+ }
+ */
+ }
+ }
+
+ /** Get a grammar mentioned on the command-line and any delegates */
+ public Grammar getRootGrammar(String grammarFileName)
+ throws IOException
+ {
+ //StringTemplate.setLintMode(true);
+ // grammars mentioned on command line are either roots or single grammars.
+ // create the necessary composite in case it's got delegates; even
+ // single grammar needs it to get token types.
+ CompositeGrammar composite = new CompositeGrammar();
+ Grammar grammar = new Grammar(this,grammarFileName,composite);
+ composite.setDelegationRoot(grammar);
+ FileReader fr = null;
+ fr = new FileReader(grammarFileName);
+ BufferedReader br = new BufferedReader(fr);
+ grammar.parseAndBuildAST(br);
+ composite.watchNFAConversion = internalOption_watchNFAConversion;
+ br.close();
+ fr.close();
+ return grammar;
+ }
+
+ /** Create NFA, DFA and generate code for grammar.
+ * Create NFA for any delegates first. Once all NFA are created,
+ * it's ok to create DFA, which must check for left-recursion. That check
+ * is done by walking the full NFA, which therefore must be complete.
+ * After all NFA, comes DFA conversion for root grammar then code gen for
+ * root grammar. DFA and code gen for delegates comes next.
+ */
+ protected void generateRecognizer(Grammar grammar) {
+ String language = (String)grammar.getOption("language");
+ if ( language!=null ) {
+ CodeGenerator generator = new CodeGenerator(this, grammar, language);
+ grammar.setCodeGenerator(generator);
+ generator.setDebug(debug);
+ generator.setProfile(profile);
+ generator.setTrace(trace);
+
+ // generate NFA early in case of crash later (for debugging)
+ if ( generate_NFA_dot ) {
+ generateNFAs(grammar);
+ }
+
+ // GENERATE CODE
+ generator.genRecognizer();
+
+ if ( generate_DFA_dot ) {
+ generateDFAs(grammar);
+ }
+
+ List delegates = grammar.getDirectDelegates();
+ for (int i = 0; delegates!=null && i < delegates.size(); i++) {
+ Grammar delegate = (Grammar)delegates.get(i);
+ if ( delegate!=grammar ) { // already processing this one
+ generateRecognizer(delegate);
+ }
+ }
+ }
+ }
+
+ public void generateDFAs(Grammar g) {
+ for (int d=1; d<=g.getNumberOfDecisions(); d++) {
+ DFA dfa = g.getLookaheadDFA(d);
+ if ( dfa==null ) {
+ continue; // not there for some reason, ignore
+ }
+ DOTGenerator dotGenerator = new DOTGenerator(g);
+ String dot = dotGenerator.getDOT( dfa.startState );
+ String dotFileName = g.name+"."+"dec-"+d;
+ if ( g.implicitLexer ) {
+ dotFileName = g.name+Grammar.grammarTypeToFileNameSuffix[g.type]+"."+"dec-"+d;
+ }
+ try {
+ writeDOTFile(g, dotFileName, dot);
+ }
+ catch(IOException ioe) {
+ ErrorManager.error(ErrorManager.MSG_CANNOT_GEN_DOT_FILE,
+ dotFileName,
+ ioe);
+ }
+ }
+ }
+
+ protected void generateNFAs(Grammar g) {
+ DOTGenerator dotGenerator = new DOTGenerator(g);
+ Collection rules = g.getAllImportedRules();
+ rules.addAll(g.getRules());
+
+ for (Iterator itr = rules.iterator(); itr.hasNext();) {
+ Rule r = (Rule) itr.next();
+ try {
+ String dot = dotGenerator.getDOT(r.startState);
+ if ( dot!=null ) {
+ writeDOTFile(g, r, dot);
+ }
+ }
+ catch (IOException ioe) {
+ ErrorManager.error(ErrorManager.MSG_CANNOT_WRITE_FILE, ioe);
+ }
+ }
+ }
+
+ protected void writeDOTFile(Grammar g, Rule r, String dot) throws IOException {
+ writeDOTFile(g, r.grammar.name+"."+r.name, dot);
+ }
+
+ protected void writeDOTFile(Grammar g, String name, String dot) throws IOException {
+ Writer fw = getOutputFile(g, name+".dot");
+ fw.write(dot);
+ fw.close();
+ }
+
+ private static void help() {
+ System.err.println("usage: java org.antlr.Tool [args] file.g [file2.g file3.g ...]");
+ System.err.println(" -o outputDir specify output directory where all output is generated");
+ System.err.println(" -fo outputDir same as -o but force even files with relative paths to dir");
+ System.err.println(" -lib dir specify location of token files");
+ System.err.println(" -depend generate file dependencies");
+ System.err.println(" -report print out a report about the grammar(s) processed");
+ System.err.println(" -print print out the grammar without actions");
+ System.err.println(" -debug generate a parser that emits debugging events");
+ System.err.println(" -profile generate a parser that computes profiling information");
+ System.err.println(" -nfa generate an NFA for each rule");
+ System.err.println(" -dfa generate a DFA for each decision point");
+ System.err.println(" -message-format name specify output style for messages");
+ System.err.println(" -X display extended argument list");
+ }
+
+ private static void Xhelp() {
+ System.err.println(" -Xgrtree print the grammar AST");
+ System.err.println(" -Xdfa print DFA as text ");
+ System.err.println(" -Xnoprune test lookahead against EBNF block exit branches");
+ System.err.println(" -Xnocollapse collapse incident edges into DFA states");
+ System.err.println(" -Xdbgconversion dump lots of info during NFA conversion");
+ System.err.println(" -Xmultithreaded run the analysis in 2 threads");
+ System.err.println(" -Xnomergestopstates do not merge stop states");
+ System.err.println(" -Xdfaverbose generate DFA states in DOT with NFA configs");
+ System.err.println(" -Xwatchconversion print a message for each NFA before converting");
+ System.err.println(" -XdbgST put tags at start/stop of all templates in output");
+ System.err.println(" -Xm m max number of rule invocations during conversion");
+ System.err.println(" -Xmaxdfaedges m max \"comfortable\" number of edges for single DFA state");
+ System.err.println(" -Xconversiontimeout t set NFA conversion timeout for each decision");
+ System.err.println(" -Xmaxinlinedfastates m max DFA states before table used rather than inlining");
+ System.err.println(" -Xnfastates for nondeterminisms, list NFA states for each path");
+ }
+
+ public void setOutputDirectory(String outputDirectory) {
+ this.outputDirectory = outputDirectory;
+ }
+
+ /** This method is used by all code generators to create new output
+ * files. If the outputDir set by -o is not present it will be created.
+ * The final filename is sensitive to the output directory and
+ * the directory where the grammar file was found. If -o is /tmp
+ * and the original grammar file was foo/t.g then output files
+ * go in /tmp/foo.
+ *
+ * The output dir -o spec takes precedence if it's absolute.
+ * E.g., if the grammar file dir is absolute the output dir is given
+ * precendence. "-o /tmp /usr/lib/t.g" results in "/tmp/T.java" as
+ * output (assuming t.g holds T.java).
+ *
+ * If no -o is specified, then just write to the directory where the
+ * grammar file was found.
+ *
+ * If outputDirectory==null then write a String.
+ */
+ public Writer getOutputFile(Grammar g, String fileName) throws IOException {
+ if ( outputDirectory==null ) {
+ return new StringWriter();
+ }
+ // output directory is a function of where the grammar file lives
+ // for subdir/T.g, you get subdir here. Well, depends on -o etc...
+ File outputDir = getOutputDirectory(g.getFileName());
+ File outputFile = new File(outputDir, fileName);
+
+ if( !outputDir.exists() ) {
+ outputDir.mkdirs();
+ }
+ FileWriter fw = new FileWriter(outputFile);
+ return new BufferedWriter(fw);
+ }
+
+ public File getOutputDirectory(String fileNameWithPath) {
+ File outputDir = new File(outputDirectory);
+ String fileDirectory = getFileDirectory(fileNameWithPath);
+ if ( outputDirectory!=UNINITIALIZED_DIR ) {
+ // -o /tmp /var/lib/t.g => /tmp/T.java
+ // -o subdir/output /usr/lib/t.g => subdir/output/T.java
+ // -o . /usr/lib/t.g => ./T.java
+ if ( fileDirectory!=null &&
+ (new File(fileDirectory).isAbsolute() ||
+ fileDirectory.startsWith("~")) || // isAbsolute doesn't count this :(
+ forceAllFilesToOutputDir
+ )
+ {
+ // somebody set the dir, it takes precendence; write new file there
+ outputDir = new File(outputDirectory);
+ }
+ else {
+ // -o /tmp subdir/t.g => /tmp/subdir/t.g
+ if ( fileDirectory!=null ) {
+ outputDir = new File(outputDirectory, fileDirectory);
+ }
+ else {
+ outputDir = new File(outputDirectory);
+ }
+ }
+ }
+ else {
+ // they didn't specify a -o dir so just write to location
+ // where grammar is, absolute or relative
+ String dir = ".";
+ if ( fileDirectory!=null ) {
+ dir = fileDirectory;
+ }
+ outputDir = new File(dir);
+ }
+ return outputDir;
+ }
+
+ /** Name a file in the -lib dir. Imported grammars and .tokens files */
+ public String getLibraryFile(String fileName) throws IOException {
+ return libDirectory+File.separator+fileName;
+ }
+
+ public String getLibraryDirectory() {
+ return libDirectory;
+ }
+
+ /** Return the directory containing the grammar file for this grammar.
+ * normally this is a relative path from current directory. People will
+ * often do "java org.antlr.Tool grammars/*.g3" So the file will be
+ * "grammars/foo.g3" etc... This method returns "grammars".
+ */
+ public String getFileDirectory(String fileName) {
+ File f = new File(fileName);
+ return f.getParent();
+ }
+
+ /** Return a File descriptor for vocab file. Look in library or
+ * in -o output path. antlr -o foo T.g U.g where U needs T.tokens
+ * won't work unless we look in foo too.
+ */
+ public File getImportedVocabFile(String vocabName) {
+ File f = new File(getLibraryDirectory(),
+ File.separator+
+ vocabName+
+ CodeGenerator.VOCAB_FILE_EXTENSION);
+ if ( f.exists() ) {
+ return f;
+ }
+
+ return new File(outputDirectory+
+ File.separator+
+ vocabName+
+ CodeGenerator.VOCAB_FILE_EXTENSION);
+ }
+
+ /** If the tool needs to panic/exit, how do we do that? */
+ public void panic() {
+ throw new Error("ANTLR panic");
+ }
+
+ /** Return a time stamp string accurate to sec: yyyy-mm-dd hh:mm:ss */
+ public static String getCurrentTimeStamp() {
+ GregorianCalendar calendar = new java.util.GregorianCalendar();
+ int y = calendar.get(Calendar.YEAR);
+ int m = calendar.get(Calendar.MONTH)+1; // zero-based for months
+ int d = calendar.get(Calendar.DAY_OF_MONTH);
+ int h = calendar.get(Calendar.HOUR_OF_DAY);
+ int min = calendar.get(Calendar.MINUTE);
+ int sec = calendar.get(Calendar.SECOND);
+ String sy = String.valueOf(y);
+ String sm = m<10?"0"+m:String.valueOf(m);
+ String sd = d<10?"0"+d:String.valueOf(d);
+ String sh = h<10?"0"+h:String.valueOf(h);
+ String smin = min<10?"0"+min:String.valueOf(min);
+ String ssec = sec<10?"0"+sec:String.valueOf(sec);
+ return new StringBuffer().append(sy).append("-").append(sm).append("-")
+ .append(sd).append(" ").append(sh).append(":").append(smin)
+ .append(":").append(ssec).toString();
+ }
+
+}
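The manual zero-padding in `getCurrentTimeStamp()` above can be condensed to a single format call. A minimal sketch, assuming nothing beyond the JDK (the class name `TimestampSketch` is invented for illustration; `Calendar.MONTH` is zero-based, hence the `+1` just like the original):

```java
import java.util.Calendar;
import java.util.GregorianCalendar;

public class TimestampSketch {
    // Same yyyy-mm-dd hh:mm:ss layout as getCurrentTimeStamp(), but the
    // zero-padding is done by the %0Nd width flags instead of by hand.
    static String timestamp(Calendar c) {
        return String.format("%04d-%02d-%02d %02d:%02d:%02d",
            c.get(Calendar.YEAR), c.get(Calendar.MONTH) + 1,
            c.get(Calendar.DAY_OF_MONTH), c.get(Calendar.HOUR_OF_DAY),
            c.get(Calendar.MINUTE), c.get(Calendar.SECOND));
    }

    public static void main(String[] args) {
        GregorianCalendar c = new GregorianCalendar(2008, Calendar.MARCH, 5, 9, 7, 2);
        System.out.println(timestamp(c)); // 2008-03-05 09:07:02
    }
}
```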
diff --git a/antlr_3_1_source/analysis/ActionLabel.java b/antlr_3_1_source/analysis/ActionLabel.java
new file mode 100644
index 0000000..1265364
--- /dev/null
+++ b/antlr_3_1_source/analysis/ActionLabel.java
@@ -0,0 +1,56 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2008 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+import org.antlr.tool.GrammarAST;
+import org.antlr.tool.Grammar;
+
+public class ActionLabel extends Label {
+ public GrammarAST actionAST;
+
+ public ActionLabel(GrammarAST actionAST) {
+ super(ACTION);
+ this.actionAST = actionAST;
+ }
+
+ public boolean isEpsilon() {
+ return true; // we are to be ignored by analysis 'cept for predicates
+ }
+
+ public boolean isAction() {
+ return true;
+ }
+
+ public String toString() {
+ return "{"+actionAST+"}";
+ }
+
+ public String toString(Grammar g) {
+ return toString();
+ }
+}
diff --git a/antlr_3_1_source/analysis/AnalysisRecursionOverflowException.java b/antlr_3_1_source/analysis/AnalysisRecursionOverflowException.java
new file mode 100644
index 0000000..6403ea9
--- /dev/null
+++ b/antlr_3_1_source/analysis/AnalysisRecursionOverflowException.java
@@ -0,0 +1,40 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2008 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+/** An NFA configuration context stack overflowed. */
+public class AnalysisRecursionOverflowException extends RuntimeException {
+ public DFAState ovfState;
+ public NFAConfiguration proposedNFAConfiguration;
+ public AnalysisRecursionOverflowException(DFAState ovfState,
+ NFAConfiguration proposedNFAConfiguration)
+ {
+ this.ovfState = ovfState;
+ this.proposedNFAConfiguration = proposedNFAConfiguration;
+ }
+}
diff --git a/antlr_3_1_source/analysis/AnalysisTimeoutException.java b/antlr_3_1_source/analysis/AnalysisTimeoutException.java
new file mode 100644
index 0000000..392b316
--- /dev/null
+++ b/antlr_3_1_source/analysis/AnalysisTimeoutException.java
@@ -0,0 +1,36 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2008 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+/** Analysis took too long; bail out of entire DFA construction. */
+public class AnalysisTimeoutException extends RuntimeException {
+ public DFA abortedDFA;
+ public AnalysisTimeoutException(DFA abortedDFA) {
+ this.abortedDFA = abortedDFA;
+ }
+}
diff --git a/antlr_3_1_source/analysis/DFA.java b/antlr_3_1_source/analysis/DFA.java
new file mode 100644
index 0000000..e69b99e
--- /dev/null
+++ b/antlr_3_1_source/analysis/DFA.java
@@ -0,0 +1,1061 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+import org.antlr.codegen.CodeGenerator;
+import org.antlr.misc.IntSet;
+import org.antlr.misc.IntervalSet;
+import org.antlr.misc.Utils;
+import org.antlr.runtime.IntStream;
+import org.antlr.stringtemplate.StringTemplate;
+import org.antlr.tool.*;
+
+import java.util.*;
+
+/** A DFA (converted from a grammar's NFA).
+ * DFAs are used as prediction machine for alternative blocks in all kinds
+ * of recognizers (lexers, parsers, tree walkers).
+ */
+public class DFA {
+ public static final int REACHABLE_UNKNOWN = -2;
+ public static final int REACHABLE_BUSY = -1; // in process of computing
+ public static final int REACHABLE_NO = 0;
+ public static final int REACHABLE_YES = 1;
+
+ /** Prevent explosion of DFA states during conversion. The max number
+ * of states per alt in a single decision's DFA.
+ public static final int MAX_STATES_PER_ALT_IN_DFA = 450;
+ */
+
+ /** Set to 0 to not terminate early (time in ms) */
+ public static int MAX_TIME_PER_DFA_CREATION = 1*1000;
+
+ /** How many edges can each DFA state have before a "special" state
+ * is created that uses IF expressions instead of a table?
+ */
+ public static int MAX_STATE_TRANSITIONS_FOR_TABLE = 65534;
+
+ /** What's the start state for this DFA? */
+ public DFAState startState;
+
+ /** This DFA is being built for which decision? */
+ public int decisionNumber = 0;
+
+ /** From what NFAState did we create the DFA? */
+ public NFAState decisionNFAStartState;
+
+ /** The printable grammar fragment associated with this DFA */
+ public String description;
+
+ /** A set of all uniquely-numbered DFA states. Maps hash of DFAState
+ * to the actual DFAState object. We use this to detect
+ * existing DFA states. Map&lt;DFAState,DFAState&gt;. Use Map so
+ * we can get old state back (Set only allows you to see if it's there).
+ * Not used during fixed k lookahead as it's a waste to fill it with
+ * a dup of states array.
+ */
+ protected Map uniqueStates = new HashMap();
+
+ /** Maps the state number to the actual DFAState. Use a Vector as it
+ * grows automatically when I set the ith element. This contains all
+ * states, but the states are not unique. s3 might be same as s1 so
+ * s3 -> s1 in this table. This is how cycles occur. If fixed k,
+ * then these states will all be unique as states[i] always points
+ * at state i when no cycles exist.
+ *
+ * This is managed in parallel with uniqueStates and simply provides
+ * a way to go from state number to DFAState rather than via a
+ * hash lookup.
+ */
+ protected Vector states = new Vector();
+
+ /** Unique state numbers per DFA */
+ protected int stateCounter = 0;
+
+ /** count only new states not states that were rejected as already present */
+ protected int numberOfStates = 0;
+
+ /** User specified max fixed lookahead. If 0, nothing specified. -1
+ * implies we have not looked at the options table yet to set k.
+ */
+ protected int user_k = -1;
+
+ /** While building the DFA, track max lookahead depth if not cyclic */
+ protected int max_k = -1;
+
+ /** Is this DFA reduced? I.e., can all states lead to an accept state? */
+ protected boolean reduced = true;
+
+ /** Are there any loops in this DFA?
+ * Computed by doesStateReachAcceptState()
+ */
+ protected boolean cyclic = false;
+
+ /** Track whether this DFA has at least one sem/syn pred encountered
+ * during a closure operation. This is useful for deciding whether
+ * to retry a non-LL(*) with k=1. If no pred, it will not work w/o
+ * a pred so don't bother. It would just give another error message.
+ */
+ public boolean predicateVisible = false;
+
+ public boolean hasPredicateBlockedByAction = false;
+
+ /** Each alt in an NFA derived from a grammar must have a DFA state that
+ * predicts it lest the parser not know what to do. Nondeterminisms can
+ * lead to this situation (assuming no semantic predicates can resolve
+ * the problem) and when for some reason, I cannot compute the lookahead
+ * (which might arise from an error in the algorithm or from
+ * left-recursion etc...). This list starts out with all alts contained
+ * and then in method doesStateReachAcceptState() I remove the alts I
+ * know to be uniquely predicted.
+ */
+ protected List unreachableAlts;
+
+ protected int nAlts = 0;
+
+ /** We only want one accept state per predicted alt; track here */
+ protected DFAState[] altToAcceptState;
+
+ /** Track whether an alt discovers recursion for each alt during
+ * NFA to DFA conversion; >1 alt with recursion implies nonregular.
+ */
+ public IntSet recursiveAltSet = new IntervalSet();
+
+ /** Which NFA are we converting (well, which piece of the NFA)? */
+ public NFA nfa;
+
+ protected NFAToDFAConverter nfaConverter;
+
+ /** This probe tells you a lot about a decision and is useful even
+ * when there is no error such as when a syntactic nondeterminism
+ * is solved via semantic predicates. Perhaps a GUI would want
+ * the ability to show that.
+ */
+ public DecisionProbe probe = new DecisionProbe(this);
+
+ /** Track absolute time of the conversion so we can have a failsafe:
+ * if it takes too long, then terminate. Assume bugs are in the
+ * analysis engine.
+ */
+ protected long conversionStartTime;
+
+ /** Map an edge transition table to a unique set number; ordered so
+ * we can push into the output template as an ordered list of sets
+ * and then ref them from within the transition[][] table. Like this
+ * for C# target:
+ * public static readonly DFA30_transition0 =
+ * new short[] { 46, 46, -1, 46, 46, -1, -1, -1, -1, -1, -1, -1,...};
+ * public static readonly DFA30_transition1 =
+ * new short[] { 21 };
+ * public static readonly short[][] DFA30_transition = {
+ * DFA30_transition0,
+ * DFA30_transition0,
+ * DFA30_transition1,
+ * ...
+ * };
+ */
+ public Map edgeTransitionClassMap = new LinkedHashMap();
+
+ /** The unique edge transition class number; every time we see a new
+ * set of edges emanating from a state, we number it so we can reuse
+ * if it's ever seen again for another state. For the Java grammar,
+ * some of the big edge transition tables are seen about 57 times.
+ */
+ protected int edgeTransitionClass = 0;
+
+ /* This DFA can be converted to a transition[state][char] table and
+ * the following tables are filled by createStateTables upon request.
+ * These are injected into the templates for code generation.
+ * See March 25, 2006 entry for description:
+ * http://www.antlr.org/blog/antlr3/codegen.tml
+ * Often using Vector as can't set ith position in a List and have
+ * it extend list size; bizarre.
+ */
+
+ /** List of special DFAState objects */
+ public List specialStates;
+ /** List of ST for special states. */
+ public List specialStateSTs;
+ public Vector accept;
+ public Vector eot;
+ public Vector eof;
+ public Vector min;
+ public Vector max;
+ public Vector special;
+ public Vector transition;
+ /** just the Vector&lt;Integer&gt; indicating which unique edge table is at
+ * position i.
+ */
+ public Vector transitionEdgeTables; // not used by java yet
+ protected int uniqueCompressedSpecialStateNum = 0;
+
+ /** Which generator to use if we're building state tables */
+ protected CodeGenerator generator = null;
+
+ protected DFA() {;}
+
+ public DFA(int decisionNumber, NFAState decisionStartState) {
+ this.decisionNumber = decisionNumber;
+ this.decisionNFAStartState = decisionStartState;
+ nfa = decisionStartState.nfa;
+ nAlts = nfa.grammar.getNumberOfAltsForDecisionNFA(decisionStartState);
+ //setOptions( nfa.grammar.getDecisionOptions(getDecisionNumber()) );
+ initAltRelatedInfo();
+
+ //long start = System.currentTimeMillis();
+ nfaConverter = new NFAToDFAConverter(this);
+ try {
+ nfaConverter.convert();
+
+ // figure out if there are problems with decision
+ verify();
+
+ if ( !probe.isDeterministic() || probe.analysisOverflowed() ) {
+ probe.issueWarnings();
+ }
+
+ // must be after verify as it computes cyclic, needed by this routine
+ // should be after warnings because early termination or something
+ // will not allow the reset to operate properly in some cases.
+ resetStateNumbersToBeContiguous();
+
+ //long stop = System.currentTimeMillis();
+ //System.out.println("verify cost: "+(int)(stop-start)+" ms");
+ }
+ catch (AnalysisTimeoutException at) {
+ probe.reportAnalysisTimeout();
+ if ( !okToRetryDFAWithK1() ) {
+ probe.issueWarnings();
+ }
+ }
+ catch (NonLLStarDecisionException nonLL) {
+ probe.reportNonLLStarDecision(this);
+ // >1 alt recurses, k=* and no auto backtrack nor manual sem/syn
+ if ( !okToRetryDFAWithK1() ) {
+ probe.issueWarnings();
+ }
+ }
+ }
+
+ /** Walk all states and reset their numbers to be a contiguous sequence
+ * of integers starting from 0. Only cyclic DFA can have unused positions
+ * in states list. State i might be identical to a previous state j and
+ * will result in states[i] == states[j]. We don't want to waste a state
+ * number on this. Useful mostly for code generation in tables.
+ *
+ * At the start of this routine, states[i].stateNumber <= i by definition.
+ * If states[50].stateNumber is 50 then a cycle during conversion may
+ * try to add state 103, but we find that an identical DFA state, named
+ * 50, already exists, hence, states[103]==states[50] and both have
+ * stateNumber 50 as they point at same object. Afterwards, the set
+ * of state numbers from all states should represent a contiguous range
+ * from 0..n-1 where n is the number of unique states.
+ */
+ public void resetStateNumbersToBeContiguous() {
+ if ( getUserMaxLookahead()>0 ) {
+ // all numbers are unique already; no states are thrown out.
+ return;
+ }
+
+ // walk list of DFAState objects by state number,
+ // setting state numbers to 0..n-1
+ int snum=0;
+ for (int i = 0; i <= getMaxStateNumber(); i++) {
+ DFAState s = getState(i);
+ // some states are unused after creation most commonly due to cycles
+ // or conflict resolution.
+ if ( s==null ) {
+ continue;
+ }
+ // state i is mapped to DFAState with state number set to i originally
+ // so if it's less than i, then we renumbered it already; that
+ // happens when states have been merged or cycles occurred I think.
+ // states[50] will point to DFAState with s50 in it but
+ // states[103] might also point at this same DFAState. Since
+ // 50 < 103 then it's already been renumbered as it points downwards.
+ boolean alreadyRenumbered = s.stateNumber&lt;i;
+ if ( !alreadyRenumbered ) {
+ // state i is a valid state, reset its state number
+ s.stateNumber = snum; // rewrite state numbers to be contiguous
+ snum++;
+ }
+ }
+ }
+
+ // JAVA-SPECIFIC functions to generate switch tables etc...
+
+ public List getJavaCompressedAccept() { return getRunLengthEncoding(accept); }
+ public List getJavaCompressedEOT() { return getRunLengthEncoding(eot); }
+ public List getJavaCompressedEOF() { return getRunLengthEncoding(eof); }
+ public List getJavaCompressedMin() { return getRunLengthEncoding(min); }
+ public List getJavaCompressedMax() { return getRunLengthEncoding(max); }
+ public List getJavaCompressedSpecial() { return getRunLengthEncoding(special); }
+
+ public List getJavaCompressedTransition() {
+ List encoded = new ArrayList();
+ // walk Vector&lt;Vector&lt;Integer&gt;&gt; which is the transition[][] table
+ for (int i = 0; i < transition.size(); i++) {
+ Vector transitionsForState = (Vector) transition.elementAt(i);
+ encoded.add(getRunLengthEncoding(transitionsForState));
+ }
+ return encoded;
+ }
+
+ /** Compress the incoming data list so that runs of same number are
+ * encoded as number,value pair sequences. 3 -1 -1 -1 28 is encoded
+ * as 1 3 3 -1 1 28. I am pretty sure this is the lossless compression
+ * that GIF files use. Transition tables are heavily compressed by
+ * this technique. I got the idea from JFlex http://jflex.de/
+ *
+ * Return List&lt;String&gt; where each string is either \xyz for 8bit char
+ * and \uFFFF for 16bit. Hideous and specific to Java, but it is the
+ * only target bad enough to need it.
+ */
+ public List getRunLengthEncoding(List data) {
+ if ( data==null || data.size()==0 ) {
+ // for states with no transitions we want an empty string ""
+ // to hold its place in the transitions array.
+ List empty = new ArrayList();
+ empty.add("");
+ return empty;
+ }
+ int size = Math.max(2,data.size()/2);
+ List encoded = new ArrayList(size); // guess at size
+ // scan values looking for runs
+ int i = 0;
+ Integer emptyValue = Utils.integer(-1);
+ while ( i < data.size() ) {
+ Integer I = (Integer)data.get(i);
+ if ( I==null ) {
+ I = emptyValue;
+ }
+ // count how many contiguous values equal I
+ int n = 0;
+ for (int j = i; j < data.size(); j++) {
+ Integer v = (Integer)data.get(j);
+ if ( v==null ) {
+ v = emptyValue;
+ }
+ if ( I.equals(v) ) {
+ n++;
+ }
+ else {
+ break;
+ }
+ }
+ encoded.add(generator.target.encodeIntAsCharEscape((char)n));
+ encoded.add(generator.target.encodeIntAsCharEscape((char)I.intValue()));
+ i+=n;
+ }
+ return encoded;
+ }
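The (count, value) pairing described in the javadoc above — 3 -1 -1 -1 28 becomes 1 3 3 -1 1 28 — is easy to check in isolation. A minimal sketch using plain integers instead of the target-specific char escapes the real method emits (`RLESketch` is an invented name):

```java
import java.util.ArrayList;
import java.util.List;

public class RLESketch {
    // Encode each run of equal values as the pair (run length, value).
    static List<Integer> encode(List<Integer> data) {
        List<Integer> out = new ArrayList<>();
        int i = 0;
        while (i < data.size()) {
            int v = data.get(i);
            int n = 0;
            while (i + n < data.size() && data.get(i + n) == v) n++;
            out.add(n);   // run length first...
            out.add(v);   // ...then the repeated value
            i += n;
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(encode(List.of(3, -1, -1, -1, 28))); // [1, 3, 3, -1, 1, 28]
    }
}
```

Sparse transition rows are mostly runs of -1 (no transition), which is why this compresses them so well.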
+
+ public void createStateTables(CodeGenerator generator) {
+ //System.out.println("createTables:\n"+this);
+ this.generator = generator;
+ description = getNFADecisionStartState().getDescription();
+ description =
+ generator.target.getTargetStringLiteralFromString(description);
+
+ // create all the tables
+ special = new Vector(this.getNumberOfStates()); // Vector&lt;Integer&gt;
+ special.setSize(this.getNumberOfStates());
+ specialStates = new ArrayList(); // List&lt;DFAState&gt;
+ specialStateSTs = new ArrayList(); // List&lt;StringTemplate&gt;
+ accept = new Vector(this.getNumberOfStates()); // Vector&lt;Integer&gt;
+ accept.setSize(this.getNumberOfStates());
+ eot = new Vector(this.getNumberOfStates()); // Vector&lt;Integer&gt;
+ eot.setSize(this.getNumberOfStates());
+ eof = new Vector(this.getNumberOfStates()); // Vector&lt;Integer&gt;
+ eof.setSize(this.getNumberOfStates());
+ min = new Vector(this.getNumberOfStates()); // Vector&lt;Integer&gt;
+ min.setSize(this.getNumberOfStates());
+ max = new Vector(this.getNumberOfStates()); // Vector&lt;Integer&gt;
+ max.setSize(this.getNumberOfStates());
+ transition = new Vector(this.getNumberOfStates()); // Vector&lt;Vector&lt;Integer&gt;&gt;
+ transition.setSize(this.getNumberOfStates());
+ transitionEdgeTables = new Vector(this.getNumberOfStates()); // Vector&lt;Vector&lt;Integer&gt;&gt;
+ transitionEdgeTables.setSize(this.getNumberOfStates());
+
+ // for each state in the DFA, fill relevant tables.
+ Iterator it = null;
+ if ( getUserMaxLookahead()>0 ) {
+ it = states.iterator();
+ }
+ else {
+ it = getUniqueStates().values().iterator();
+ }
+ while ( it.hasNext() ) {
+ DFAState s = (DFAState)it.next();
+ if ( s==null ) {
+ // ignore null states; some acyclic DFA see this condition
+ // when inlining DFA (due to lack of exit branch pruning?)
+ continue;
+ }
+ if ( s.isAcceptState() ) {
+ // can't compute min,max,special,transition on accepts
+ accept.set(s.stateNumber,
+ Utils.integer(s.getUniquelyPredictedAlt()));
+ }
+ else {
+ createMinMaxTables(s);
+ createTransitionTableEntryForState(s);
+ createSpecialTable(s);
+ createEOTAndEOFTables(s);
+ }
+ }
+
+ // now that we have computed list of specialStates, gen code for 'em
+ for (int i = 0; i < specialStates.size(); i++) {
+ DFAState ss = (DFAState) specialStates.get(i);
+ StringTemplate stateST =
+ generator.generateSpecialState(ss);
+ specialStateSTs.add(stateST);
+ }
+
+ // check that the tables are not messed up by encode/decode
+ /*
+ testEncodeDecode(min);
+ testEncodeDecode(max);
+ testEncodeDecode(accept);
+ testEncodeDecode(special);
+ System.out.println("min="+min);
+ System.out.println("max="+max);
+ System.out.println("eot="+eot);
+ System.out.println("eof="+eof);
+ System.out.println("accept="+accept);
+ System.out.println("special="+special);
+ System.out.println("transition="+transition);
+ */
+ }
+
+ /*
+ private void testEncodeDecode(List data) {
+ System.out.println("data="+data);
+ List encoded = getRunLengthEncoding(data);
+ StringBuffer buf = new StringBuffer();
+ for (int i = 0; i < encoded.size(); i++) {
+ String I = (String)encoded.get(i);
+ int v = 0;
+ if ( I.startsWith("\\u") ) {
+ v = Integer.parseInt(I.substring(2,I.length()), 16);
+ }
+ else {
+ v = Integer.parseInt(I.substring(1,I.length()), 8);
+ }
+ buf.append((char)v);
+ }
+ String encodedS = buf.toString();
+ short[] decoded = org.antlr.runtime.DFA.unpackEncodedString(encodedS);
+ //System.out.println("decoded:");
+ for (int i = 0; i < decoded.length; i++) {
+ short x = decoded[i];
+ if ( x!=((Integer)data.get(i)).intValue() ) {
+ System.err.println("problem with encoding");
+ }
+ //System.out.print(", "+x);
+ }
+ //System.out.println();
+ }
+ */
+
+ protected void createMinMaxTables(DFAState s) {
+ int smin = Label.MAX_CHAR_VALUE + 1;
+ int smax = Label.MIN_ATOM_VALUE - 1;
+ for (int j = 0; j < s.getNumberOfTransitions(); j++) {
+ Transition edge = (Transition) s.transition(j);
+ Label label = edge.label;
+ if ( label.isAtom() ) {
+ if ( label.getAtom()>=Label.MIN_CHAR_VALUE ) {
+ if ( label.getAtom()&lt;smin ) {
+ smin = label.getAtom();
+ }
+ if ( label.getAtom()&gt;smax ) {
+ smax = label.getAtom();
+ }
+ }
+ }
+ else if ( label.isSet() ) {
+ IntervalSet labels = (IntervalSet)label.getSet();
+ int lmin = labels.getMinElement();
+ // if valid char (don't do EOF) and less than current min
+ if ( lmin&lt;smin &amp;&amp; lmin&gt;=Label.MIN_CHAR_VALUE ) {
+ smin = labels.getMinElement();
+ }
+ if ( labels.getMaxElement()>smax ) {
+ smax = labels.getMaxElement();
+ }
+ }
+ }
+
+ if ( smax<0 ) {
+ // must be predicates or pure EOT transition; just zero out min, max
+ smin = Label.MIN_CHAR_VALUE;
+ smax = Label.MIN_CHAR_VALUE;
+ }
+
+ min.set(s.stateNumber, Utils.integer((char)smin));
+ max.set(s.stateNumber, Utils.integer((char)smax));
+
+ if ( smax<0 || smin>Label.MAX_CHAR_VALUE || smin<0 ) {
+ ErrorManager.internalError("messed up: min="+min+", max="+max);
+ }
+ }
+
+ protected void createTransitionTableEntryForState(DFAState s) {
+ /*
+ System.out.println("createTransitionTableEntryForState s"+s.stateNumber+
+ " dec "+s.dfa.decisionNumber+" cyclic="+s.dfa.isCyclic());
+ */
+ int smax = ((Integer)max.get(s.stateNumber)).intValue();
+ int smin = ((Integer)min.get(s.stateNumber)).intValue();
+
+ Vector stateTransitions = new Vector(smax-smin+1);
+ stateTransitions.setSize(smax-smin+1);
+ transition.set(s.stateNumber, stateTransitions);
+ for (int j = 0; j < s.getNumberOfTransitions(); j++) {
+ Transition edge = (Transition) s.transition(j);
+ Label label = edge.label;
+ if ( label.isAtom() && label.getAtom()>=Label.MIN_CHAR_VALUE ) {
+ int labelIndex = label.getAtom()-smin; // offset from 0
+ stateTransitions.set(labelIndex,
+ Utils.integer(edge.target.stateNumber));
+ }
+ else if ( label.isSet() ) {
+ IntervalSet labels = (IntervalSet)label.getSet();
+ int[] atoms = labels.toArray();
+ for (int a = 0; a < atoms.length; a++) {
+ // set the transition if the label is valid (don't do EOF)
+ if ( atoms[a]>=Label.MIN_CHAR_VALUE ) {
+ int labelIndex = atoms[a]-smin; // offset from 0
+ stateTransitions.set(labelIndex,
+ Utils.integer(edge.target.stateNumber));
+ }
+ }
+ }
+ }
+ // track unique state transition tables so we can reuse
+ Integer edgeClass = (Integer)edgeTransitionClassMap.get(stateTransitions);
+ if ( edgeClass!=null ) {
+ //System.out.println("we've seen this array before; size="+stateTransitions.size());
+ transitionEdgeTables.set(s.stateNumber, edgeClass);
+ }
+ else {
+ edgeClass = Utils.integer(edgeTransitionClass);
+ transitionEdgeTables.set(s.stateNumber, edgeClass);
+ edgeTransitionClassMap.put(stateTransitions, edgeClass);
+ edgeTransitionClass++;
+ }
+ }
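The table row built above is windowed to [min..max]: index 0 of the row corresponds to character `smin`, so a lookup is `row[c - smin]`, with anything outside the window treated as "no transition". A minimal sketch of that indexing (the names `TransitionRowSketch` and `lookup` are invented; -1 marks a missing edge as in the real tables):

```java
public class TransitionRowSketch {
    // Look up the target state for input char c in a [smin..smax] window row;
    // -1 means no transition on that character.
    static int lookup(int[] row, int smin, char c) {
        int i = c - smin;
        return (i >= 0 && i < row.length) ? row[i] : -1; // outside window
    }

    public static void main(String[] args) {
        int smin = 'a';
        int[] row = { 2, -1, -1, 5 };                  // window 'a'..'d': 'a'->s2, 'd'->s5
        System.out.println(lookup(row, smin, 'd'));    // 5
        System.out.println(lookup(row, smin, 'z'));    // -1 (outside window)
    }
}
```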
+
+ /** Set up the EOT and EOF tables; we cannot put -1 min/max values so
+ * we need another way to test that in the DFA transition function.
+ */
+ protected void createEOTAndEOFTables(DFAState s) {
+ for (int j = 0; j < s.getNumberOfTransitions(); j++) {
+ Transition edge = (Transition) s.transition(j);
+ Label label = edge.label;
+ if ( label.isAtom() ) {
+ if ( label.getAtom()==Label.EOT ) {
+ // eot[s] points to accept state
+ eot.set(s.stateNumber, Utils.integer(edge.target.stateNumber));
+ }
+ else if ( label.getAtom()==Label.EOF ) {
+ // eof[s] points to accept state
+ eof.set(s.stateNumber, Utils.integer(edge.target.stateNumber));
+ }
+ }
+ else if ( label.isSet() ) {
+ IntervalSet labels = (IntervalSet)label.getSet();
+ int[] atoms = labels.toArray();
+ for (int a = 0; a < atoms.length; a++) {
+ if ( atoms[a]==Label.EOT ) {
+ // eot[s] points to accept state
+ eot.set(s.stateNumber, Utils.integer(edge.target.stateNumber));
+ }
+ else if ( atoms[a]==Label.EOF ) {
+ eof.set(s.stateNumber, Utils.integer(edge.target.stateNumber));
+ }
+ }
+ }
+ }
+ }
+
+ protected void createSpecialTable(DFAState s) {
+ // number all special states from 0...n-1 instead of their usual numbers
+ boolean hasSemPred = false;
+
+ // TODO this code is very similar to canGenerateSwitch. Refactor to share
+ for (int j = 0; j < s.getNumberOfTransitions(); j++) {
+ Transition edge = (Transition) s.transition(j);
+ Label label = edge.label;
+ // can't do a switch if the edges have preds or are going to
+ // require gated predicates
+ if ( label.isSemanticPredicate() ||
+ ((DFAState)edge.target).getGatedPredicatesInNFAConfigurations()!=null)
+ {
+ hasSemPred = true;
+ break;
+ }
+ }
+ // if has pred or too big for table, make it special
+ int smax = ((Integer)max.get(s.stateNumber)).intValue();
+ int smin = ((Integer)min.get(s.stateNumber)).intValue();
+ if ( hasSemPred || smax-smin>MAX_STATE_TRANSITIONS_FOR_TABLE ) {
+ special.set(s.stateNumber,
+ Utils.integer(uniqueCompressedSpecialStateNum));
+ uniqueCompressedSpecialStateNum++;
+ specialStates.add(s);
+ }
+ else {
+ special.set(s.stateNumber, Utils.integer(-1)); // not special
+ }
+ }
+
+ public int predict(IntStream input) {
+ Interpreter interp = new Interpreter(nfa.grammar, input);
+ return interp.predict(this);
+ }
+
+ /** Add a new DFA state to this DFA if not already present.
+ * To force an acyclic, fixed maximum depth DFA, just always
+ * return the incoming state. By not reusing old states,
+ * no cycles can be created. If we're doing fixed k lookahead
+ * don't update uniqueStates, just return incoming state, which
+ * indicates it's a new state.
+ */
+ protected DFAState addState(DFAState d) {
+ if ( getUserMaxLookahead()>0 ) {
+ return d;
+ }
+ // does a DFA state exist already with everything the same
+ // except its state number?
+ DFAState existing = (DFAState)uniqueStates.get(d);
+ if ( existing != null ) {
+ /*
+ System.out.println("state "+d.stateNumber+" exists as state "+
+ existing.stateNumber);
+ */
+ // already there...get the existing DFA state
+ return existing;
+ }
+
+ // if not there, then add new state.
+ uniqueStates.put(d,d);
+ numberOfStates++;
+ return d;
+ }
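`addState()` relies on the Map-as-set trick the `uniqueStates` comment mentions: mapping a state to itself lets a structurally-equal newcomer be swapped for the canonical instance, which a plain `Set` cannot return. A minimal sketch of that interning pattern using `String` keys in place of `DFAState` (`InternSketch` is an invented name; it assumes the key type has proper `equals`/`hashCode`, as `String` does):

```java
import java.util.HashMap;
import java.util.Map;

public class InternSketch {
    // Map each object to itself so get() can hand back the canonical instance.
    static final Map<String, String> unique = new HashMap<>();

    static String intern(String s) {
        String existing = unique.get(s);   // equal object seen before?
        if (existing != null) return existing; // reuse canonical instance
        unique.put(s, s);                  // first sighting becomes canonical
        return s;
    }

    public static void main(String[] args) {
        String a = intern(new String("s1"));
        String b = intern(new String("s1")); // distinct object, equal value
        System.out.println(a == b);          // true: same canonical instance
    }
}
```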
+
+ public void removeState(DFAState d) {
+ DFAState it = (DFAState)uniqueStates.remove(d);
+ if ( it!=null ) {
+ numberOfStates--;
+ }
+ }
+
+ public Map getUniqueStates() {
+ return uniqueStates;
+ }
+
+ /** What is the max state number ever created? This may be beyond
+ * getNumberOfStates().
+ */
+ public int getMaxStateNumber() {
+ return states.size()-1;
+ }
+
+ public DFAState getState(int stateNumber) {
+ return (DFAState)states.get(stateNumber);
+ }
+
+ public void setState(int stateNumber, DFAState d) {
+ states.set(stateNumber, d);
+ }
+
+ /** Is the DFA reduced? I.e., does every state have a path to an accept
+ * state? If not, don't delete as we need to generate an error indicating
+ * which paths are "dead ends". Also tracks list of alts with no accept
+ * state in the DFA. Must call verify() first before this makes sense.
+ */
+ public boolean isReduced() {
+ return reduced;
+ }
+
+ /** Is this DFA cyclic? That is, are there any loops? If not, then
+ * the DFA is essentially an LL(k) predictor for some fixed, max k value.
+ * We can build a series of nested IF statements to match this. In the
+ * presence of cycles, we need to build a general DFA and interpret it
+ * to distinguish between alternatives.
+ */
+ public boolean isCyclic() {
+ return cyclic && getUserMaxLookahead()==0;
+ }
+
+ public boolean canInlineDecision() {
+ return !isCyclic() &&
+ !probe.isNonLLStarDecision() &&
+ getNumberOfStates() < CodeGenerator.MAX_ACYCLIC_DFA_STATES_INLINE;
+ }
+
+ /** Is this DFA derived from the NFA for the Tokens rule? */
+ public boolean isTokensRuleDecision() {
+ if ( nfa.grammar.type!=Grammar.LEXER ) {
+ return false;
+ }
+ NFAState nfaStart = getNFADecisionStartState();
+ Rule r = nfa.grammar.getLocallyDefinedRule(Grammar.ARTIFICIAL_TOKENS_RULENAME);
+ NFAState TokensRuleStart = r.startState;
+ NFAState TokensDecisionStart =
+ (NFAState)TokensRuleStart.transition[0].target;
+ return nfaStart == TokensDecisionStart;
+ }
+
+ /** The user may specify a max, acyclic lookahead for any decision. No
+ * DFA cycles are created when this value, k, is greater than 0.
+ * If this decision has no k lookahead specified, then try the grammar.
+ */
+ public int getUserMaxLookahead() {
+ if ( user_k>=0 ) { // cache for speed
+ return user_k;
+ }
+ user_k = nfa.grammar.getUserMaxLookahead(decisionNumber);
+ return user_k;
+ }
+
+ public boolean getAutoBacktrackMode() {
+ return nfa.grammar.getAutoBacktrackMode(decisionNumber);
+ }
+
+ public void setUserMaxLookahead(int k) {
+ this.user_k = k;
+ }
+
+ /** Return k if decision is LL(k) for some k else return max int */
+ public int getMaxLookaheadDepth() {
+ if ( isCyclic() ) {
+ return Integer.MAX_VALUE;
+ }
+ return max_k;
+ }
+
+ /** Return a list of Integer alt numbers for which no lookahead could
+ * be computed or for which no single DFA accept state predicts those
+ * alts. Must call verify() first before this makes sense.
+ */
+ public List getUnreachableAlts() {
+ return unreachableAlts;
+ }
+
+ /** Once this DFA has been built, need to verify that:
+ *
+ * 1. it's reduced
+ * 2. all alts have an accept state
+ *
+ * Elsewhere, in the NFA converter, we need to verify that:
+ *
+ * 3. alts i and j have disjoint lookahead if no sem preds
+ * 4. if sem preds, nondeterministic alts must be sufficiently covered
+ *
+ * This is avoided if analysis bails out for any reason.
+ */
+ public void verify() {
+ doesStateReachAcceptState(startState);
+ }
+
+ /** figure out if this state eventually reaches an accept state and
+ * modify the instance variable 'reduced' to indicate if we find
+ * at least one state that cannot reach an accept state. This implies
+ * that the overall DFA is not reduced. This algorithm should be
+ * linear in the number of DFA states.
+ *
+ * The algorithm also tracks which alternatives have no accept state,
+ * indicating a nondeterminism.
+ *
+ * Also computes whether the DFA is cyclic.
+ *
+ * TODO: I call getUniquelyPredicatedAlt too much; cache predicted alt
+ */
+ protected boolean doesStateReachAcceptState(DFAState d) {
+ if ( d.isAcceptState() ) {
+ // accept states have no edges emanating from them so we can return
+ d.setAcceptStateReachable(REACHABLE_YES);
+ // this alt is uniquely predicted, remove from nondeterministic list
+ int predicts = d.getUniquelyPredictedAlt();
+ unreachableAlts.remove(Utils.integer(predicts));
+ return true;
+ }
+
+ // avoid infinite loops
+ d.setAcceptStateReachable(REACHABLE_BUSY);
+
+ boolean anEdgeReachesAcceptState = false;
+ // Visit every transition, track if at least one edge reaches stop state
+ // Cannot terminate when we know this state reaches stop state since
+ // all transitions must be traversed to set status of each DFA state.
+		for (int i=0; i<d.getNumberOfTransitions(); i++) {
+			Transition t = d.transition(i);
+			DFAState edgeTarget = (DFAState)t.target;
+			int targetStatus = edgeTarget.getAcceptStateReachable();
+			if ( targetStatus==REACHABLE_BUSY ) { // avoid cycles; they say nothing
+				cyclic = true;
+				continue;
+			}
+			if ( targetStatus==REACHABLE_YES ) { // avoid unnecessary work
+				anEdgeReachesAcceptState = true;
+				continue;
+			}
+			if ( targetStatus==REACHABLE_NO ) { // avoid unnecessary work
+				continue;
+			}
+			// target must be REACHABLE_UNKNOWN (i.e., unvisited)
+			if ( doesStateReachAcceptState(edgeTarget) ) {
+				anEdgeReachesAcceptState = true;
+				// have to proceed; must set status for all states
+			}
+		}
+		if ( anEdgeReachesAcceptState ) {
+			d.setAcceptStateReachable(REACHABLE_YES);
+		}
+		else {
+			d.setAcceptStateReachable(REACHABLE_NO);
+			reduced = false;
+		}
+		return anEdgeReachesAcceptState;
+	}
+
+	public String getReasonForFailure() {
+		StringBuffer buf = new StringBuffer();
+		if ( probe.isNonLLStarDecision() ) {
+			buf.append("non-LL(*)");
+		}
+		if ( probe.analysisTimedOut() ) {
+			if ( buf.length()>0 ) {
+ buf.append(" && ");
+ }
+ buf.append("timed out (>");
+ buf.append(DFA.MAX_TIME_PER_DFA_CREATION);
+ buf.append("ms)");
+ }
+ buf.append("\n");
+ return buf.toString();
+ }
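The reachability walk above can be sketched in isolation. This is an illustrative standalone version (the class name `Reach`, the array-based graph, and the helper `accept` are my own, not ANTLR's API): marking a state BUSY before recursing is what both detects cycles and prevents infinite recursion, and every edge is still visited so every state ends up with a final YES/NO status.

```java
import java.util.Arrays;
import java.util.List;

public class Reach {
    static final int UNKNOWN = 0, BUSY = 1, YES = 2, NO = 3;
    static boolean cyclic = false;

    // Does state s reach an accept state? Marks s BUSY while exploring so
    // that re-entering it signals a cycle instead of recursing forever.
    static boolean reaches(int s, List<int[]> edges, boolean[] accept, int[] status) {
        if (accept[s]) { status[s] = YES; return true; }
        status[s] = BUSY;
        boolean any = false;
        for (int t : edges.get(s)) {
            if (status[t] == BUSY) { cyclic = true; continue; } // cycle says nothing
            if (status[t] == YES)  { any = true; continue; }    // already known good
            if (status[t] == NO)   continue;                    // already known dead
            if (reaches(t, edges, accept, status)) any = true;  // must visit all edges
        }
        status[s] = any ? YES : NO;
        return any;
    }

    static boolean[] accept(int n, int... states) {
        boolean[] a = new boolean[n];
        for (int s : states) a[s] = true;
        return a;
    }

    public static void main(String[] args) {
        // 0 -> 1 (accept), 0 -> 2, 2 -> 2 (dead self-loop)
        List<int[]> edges = Arrays.asList(new int[]{1, 2}, new int[]{}, new int[]{2});
        int[] status = new int[3];
        boolean ok = reaches(0, edges, accept(3, 1), status);
        System.out.println(ok + " cyclic=" + cyclic + " state2dead=" + (status[2] == NO));
    }
}
```

The self-loop on state 2 makes the sketch report both `cyclic` and a non-reduced ("dead end") state, mirroring how the real method sets `cyclic` and `reduced`.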
+
+ /** What GrammarAST node (derived from the grammar) is this DFA
+ * associated with? It will point to the start of a block or
+ * the loop back of a (...)+ block etc...
+ */
+ public GrammarAST getDecisionASTNode() {
+ return decisionNFAStartState.associatedASTNode;
+ }
+
+ public boolean isGreedy() {
+ GrammarAST blockAST = nfa.grammar.getDecisionBlockAST(decisionNumber);
+ Object v = nfa.grammar.getBlockOption(blockAST,"greedy");
+ if ( v!=null && v.equals("false") ) {
+ return false;
+ }
+		return true;
+	}
+
+ public DFAState newState() {
+ DFAState n = new DFAState(this);
+ n.stateNumber = stateCounter;
+ stateCounter++;
+ states.setSize(n.stateNumber+1);
+ states.set(n.stateNumber, n); // track state num to state
+ return n;
+ }
+
+ public int getNumberOfStates() {
+ if ( getUserMaxLookahead()>0 ) {
+ // if using fixed lookahead then uniqueSets not set
+ return states.size();
+ }
+ return numberOfStates;
+ }
+
+ public int getNumberOfAlts() {
+ return nAlts;
+ }
+
+ public boolean analysisTimedOut() {
+ return probe.analysisTimedOut();
+ }
+
+ protected void initAltRelatedInfo() {
+ unreachableAlts = new LinkedList();
+ for (int i = 1; i <= nAlts; i++) {
+ unreachableAlts.add(Utils.integer(i));
+ }
+ altToAcceptState = new DFAState[nAlts+1];
+ }
+
+ public String toString() {
+ FASerializer serializer = new FASerializer(nfa.grammar);
+ if ( startState==null ) {
+ return "";
+ }
+ return serializer.serialize(startState, false);
+ }
+
+ /** EOT (end of token) is a label that indicates when the DFA conversion
+ * algorithm would "fall off the end of a lexer rule". It normally
+ * means the default clause. So for ('a'..'z')+ you would see a DFA
+ * with a state that has a..z and EOT emanating from it. a..z would
+ * jump to a state predicting alt 1 and EOT would jump to a state
+ * predicting alt 2 (the exit loop branch). EOT implies anything other
+ * than a..z. If for some reason, the set is "all char" such as with
+ * the wildcard '.', then EOT cannot match anything. For example,
+ *
+ * BLOCK : '{' (.)* '}'
+ *
+ * consumes all char until EOF when greedy=true. When all edges are
+ * combined for the DFA state after matching '}', you will find that
+ * it is all char. The EOT transition has nothing to match and is
+ * unreachable. The findNewDFAStatesAndAddDFATransitions() method
+ * must know to ignore the EOT, so we simply remove it from the
+ * reachable labels. Later analysis will find that the exit branch
+ * is not predicted by anything. For greedy=false, we leave only
+ * the EOT label indicating that the DFA should stop immediately
+ * and predict the exit branch. The reachable labels are often a
+ * set of disjoint values like: [<EOT>, 42, {0..41, 43..65534}]
+ * due to DFA conversion so must construct a pure set to see if
+ * it is same as Label.ALLCHAR.
+ *
+ * Only do this for Lexers.
+ *
+ * If EOT coexists with ALLCHAR:
+ * 1. If not greedy, modify the labels parameter to be EOT
+ * 2. If greedy, remove EOT from the labels set
+ protected boolean reachableLabelsEOTCoexistsWithAllChar(OrderedHashSet labels)
+ {
+ Label eot = new Label(Label.EOT);
+ if ( !labels.containsKey(eot) ) {
+ return false;
+ }
+ System.out.println("### contains EOT");
+ boolean containsAllChar = false;
+ IntervalSet completeVocab = new IntervalSet();
+ int n = labels.size();
+ for (int i=0; iDFA->codegen pipeline seems very robust
+ * to me which I attribute to a uniform and consistent set of data
+ * structures. Regardless of what I want to "say"/implement, I do so
+ * within the confines of, for example, a DFA. The code generator
+ * can then just generate code--it doesn't have to do much thinking.
+ * Putting optimizations in the code gen code really starts to make
+ * it a spaghetti factory (uh oh, now I'm hungry!). The pipeline is
+ * very testable; each stage has well defined input/output pairs.
+ *
+ * ### Optimization: PRUNE_EBNF_EXIT_BRANCHES
+ *
+ * There is no need to test EBNF block exit branches. Not only is it
+ * an unneeded computation, but counter-intuitively, you actually get
+ * better errors. You can report an error at the missing or extra
+ * token rather than as soon as you've figured out you will fail.
+ *
+ * Imagine optional block "( DOT CLASS )? SEMI". ANTLR generates:
+ *
+ * int alt=0;
+ * if ( input.LA(1)==DOT ) {
+ * alt=1;
+ * }
+ * else if ( input.LA(1)==SEMI ) {
+ * alt=2;
+ * }
+ *
+ * Clearly, since Parser.match() will ultimately find the error, we
+ * do not want to report an error nor do we want to bother testing
+ * lookahead against what follows the (...)? We want to generate
+ * simply "should I enter the subrule?":
+ *
+ * int alt=2;
+ * if ( input.LA(1)==DOT ) {
+ * alt=1;
+ * }
+ *
+ * NOTE 1. Greedy loops cannot be optimized in this way. For example,
+ * "(greedy=false:'x'|.)* '\n'". You specifically need the exit branch
+ * to tell you when to terminate the loop as the same input actually
+ * predicts one of the alts (i.e., staying in the loop).
+ *
+ * NOTE 2. I do not optimize cyclic DFAs at the moment as it doesn't
+ * seem to work. ;) I'll have to investigate later to see what work I
+ * can do on cyclic DFAs to make them have fewer edges. Might have
+ * something to do with the EOT token.
+ *
+ * ### PRUNE_SUPERFLUOUS_EOT_EDGES
+ *
+ * When a token is a subset of another such as the following rules, ANTLR
+ * quietly assumes the first token to resolve the ambiguity.
+ *
+ * EQ : '=' ;
+ * ASSIGNOP : '=' | '+=' ;
+ *
+ * It can yield states that have only a single edge on EOT to an accept
+ * state. This is a waste and messes up my code generation. ;) If
+ * Tokens rule DFA goes
+ *
+ * s0 -'='-> s3 -EOT-> s5 (accept)
+ *
+ * then s5 should be pruned and s3 should be made an accept. Do NOT do this
+ * for keyword versus ID as the state with EOT edge emanating from it will
+ * also have another edge.
+ *
+ * ### Optimization: COLLAPSE_ALL_INCIDENT_EDGES
+ *
+ * Done during DFA construction. See method addTransition() in
+ * NFAToDFAConverter.
+ *
+ * ### Optimization: MERGE_STOP_STATES
+ *
+ * Done during DFA construction. See addDFAState() in NFAToDFAConverter.
+ */
+public class DFAOptimizer {
+ public static boolean PRUNE_EBNF_EXIT_BRANCHES = true;
+ public static boolean PRUNE_TOKENS_RULE_SUPERFLUOUS_EOT_EDGES = true;
+ public static boolean COLLAPSE_ALL_PARALLEL_EDGES = true;
+ public static boolean MERGE_STOP_STATES = true;
+
+ /** Used by DFA state machine generator to avoid infinite recursion
+ * resulting from cycles in the DFA. This is a set of int state #s.
+ * This is a side-effect of calling optimize; can't clear after use
+ * because code gen needs it.
+ */
+ protected Set visited = new HashSet();
+
+ protected Grammar grammar;
+
+ public DFAOptimizer(Grammar grammar) {
+ this.grammar = grammar;
+ }
+
+ public void optimize() {
+ // optimize each DFA in this grammar
+ for (int decisionNumber=1;
+ decisionNumber<=grammar.getNumberOfDecisions();
+ decisionNumber++)
+ {
+ DFA dfa = grammar.getLookaheadDFA(decisionNumber);
+ optimize(dfa);
+ }
+ }
+
+ protected void optimize(DFA dfa) {
+ if ( dfa==null ) {
+ return; // nothing to do
+ }
+ /*
+ System.out.println("Optimize DFA "+dfa.decisionNFAStartState.decisionNumber+
+ " num states="+dfa.getNumberOfStates());
+ */
+ //long start = System.currentTimeMillis();
+ if ( PRUNE_EBNF_EXIT_BRANCHES && dfa.canInlineDecision() ) {
+ visited.clear();
+ int decisionType =
+ dfa.getNFADecisionStartState().decisionStateType;
+ if ( dfa.isGreedy() &&
+ (decisionType==NFAState.OPTIONAL_BLOCK_START ||
+ decisionType==NFAState.LOOPBACK) )
+ {
+ optimizeExitBranches(dfa.startState);
+ }
+ }
+ // If the Tokens rule has syntactically ambiguous rules, try to prune
+ if ( PRUNE_TOKENS_RULE_SUPERFLUOUS_EOT_EDGES &&
+ dfa.isTokensRuleDecision() &&
+ dfa.probe.stateToSyntacticallyAmbiguousTokensRuleAltsMap.size()>0 )
+ {
+ visited.clear();
+ optimizeEOTBranches(dfa.startState);
+ }
+
+ /* ack...code gen needs this, cannot optimize
+ visited.clear();
+ unlinkUnneededStateData(dfa.startState);
+ */
+ //long stop = System.currentTimeMillis();
+ //System.out.println("minimized in "+(int)(stop-start)+" ms");
+ }
+
+ protected void optimizeExitBranches(DFAState d) {
+ Integer sI = Utils.integer(d.stateNumber);
+ if ( visited.contains(sI) ) {
+ return; // already visited
+ }
+ visited.add(sI);
+ int nAlts = d.dfa.getNumberOfAlts();
+ for (int i = 0; i < d.getNumberOfTransitions(); i++) {
+ Transition edge = (Transition) d.transition(i);
+ DFAState edgeTarget = ((DFAState)edge.target);
+ /*
+ System.out.println(d.stateNumber+"-"+
+ edge.label.toString(d.dfa.nfa.grammar)+"->"+
+ edgeTarget.stateNumber);
+ */
+ // if target is an accept state and that alt is the exit alt
+ if ( edgeTarget.isAcceptState() &&
+ edgeTarget.getUniquelyPredictedAlt()==nAlts)
+ {
+ /*
+ System.out.println("ignoring transition "+i+" to max alt "+
+ d.dfa.getNumberOfAlts());
+ */
+ d.removeTransition(i);
+ i--; // back up one so that i++ of loop iteration stays within bounds
+ }
+ optimizeExitBranches(edgeTarget);
+ }
+ }
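The `i--` after `removeTransition(i)` compensates for the list shifting left on removal; without it, the element that slides into slot `i` would be skipped. A minimal standalone illustration of this index-based removal pattern (the class and method names are hypothetical, not ANTLR code):

```java
import java.util.ArrayList;
import java.util.List;

public class RemoveInLoop {
    // Remove every occurrence of `bad`, indexing the way
    // optimizeExitBranches/optimizeEOTBranches do.
    static void removeAll(List<Integer> xs, int bad) {
        for (int i = 0; i < xs.size(); i++) {
            if (xs.get(i) == bad) {
                xs.remove(i); // removes by index; elements shift left
                i--;          // back up so the shifted-in element is examined
            }
        }
    }

    public static void main(String[] args) {
        List<Integer> xs = new ArrayList<>(List.of(1, 2, 2, 3));
        removeAll(xs, 2);
        System.out.println(xs); // [1, 3]
    }
}
```

Dropping the `i--` here would leave the second `2` in place whenever two matches are adjacent, which is exactly the bug the comment in the loop guards against.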
+
+ protected void optimizeEOTBranches(DFAState d) {
+ Integer sI = Utils.integer(d.stateNumber);
+ if ( visited.contains(sI) ) {
+ return; // already visited
+ }
+ visited.add(sI);
+ for (int i = 0; i < d.getNumberOfTransitions(); i++) {
+ Transition edge = (Transition) d.transition(i);
+ DFAState edgeTarget = ((DFAState)edge.target);
+ /*
+ System.out.println(d.stateNumber+"-"+
+ edge.label.toString(d.dfa.nfa.grammar)+"->"+
+ edgeTarget.stateNumber);
+ */
+			// if only one edge coming out, it is EOT, and target is an accept state, prune
+ if ( PRUNE_TOKENS_RULE_SUPERFLUOUS_EOT_EDGES &&
+ edgeTarget.isAcceptState() &&
+ d.getNumberOfTransitions()==1 &&
+ edge.label.isAtom() &&
+ edge.label.getAtom()==Label.EOT )
+ {
+ //System.out.println("state "+d+" can be pruned");
+ // remove the superfluous EOT edge
+ d.removeTransition(i);
+ d.setAcceptState(true); // make it an accept state
+ // force it to uniquely predict the originally predicted state
+ d.cachedUniquelyPredicatedAlt =
+ edgeTarget.getUniquelyPredictedAlt();
+ i--; // back up one so that i++ of loop iteration stays within bounds
+ }
+ optimizeEOTBranches(edgeTarget);
+ }
+ }
+
+ /** Walk DFA states, unlinking the nfa configs and whatever else I
+ * can to reduce memory footprint.
+ protected void unlinkUnneededStateData(DFAState d) {
+ Integer sI = Utils.integer(d.stateNumber);
+ if ( visited.contains(sI) ) {
+ return; // already visited
+ }
+ visited.add(sI);
+ d.nfaConfigurations = null;
+ for (int i = 0; i < d.getNumberOfTransitions(); i++) {
+ Transition edge = (Transition) d.transition(i);
+ DFAState edgeTarget = ((DFAState)edge.target);
+ unlinkUnneededStateData(edgeTarget);
+ }
+ }
+ */
+
+}
diff --git a/antlr_3_1_source/analysis/DFAState.java b/antlr_3_1_source/analysis/DFAState.java
new file mode 100644
index 0000000..4c2085b
--- /dev/null
+++ b/antlr_3_1_source/analysis/DFAState.java
@@ -0,0 +1,776 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+import org.antlr.misc.IntSet;
+import org.antlr.misc.MultiMap;
+import org.antlr.misc.OrderedHashSet;
+import org.antlr.misc.Utils;
+import org.antlr.tool.Grammar;
+
+import java.util.*;
+
+/** A DFA state represents a set of possible NFA configurations.
+ * As Aho, Sethi, Ullman p. 117 says "The DFA uses its state
+ * to keep track of all possible states the NFA can be in after
+ * reading each input symbol. That is to say, after reading
+ * input a1a2..an, the DFA is in a state that represents the
+ * subset T of the states of the NFA that are reachable from the
+ * NFA's start state along some path labeled a1a2..an."
+ * In conventional NFA->DFA conversion, therefore, the subset T
+ * would be a bitset representing the set of states the
+ * NFA could be in. We need to track the alt predicted by each
+ * state as well, however. More importantly, we need to maintain
+ * a stack of states, tracking the closure operations as they
+ * jump from rule to rule, emulating rule invocations (method calls).
+ * Recall that NFAs do not normally have a stack like a pushdown-machine
+ * so I have to add one to simulate the proper lookahead sequences for
+ * the underlying LL grammar from which the NFA was derived.
+ *
+ * I use a list of NFAConfiguration objects. An NFAConfiguration
+ * is both a state (ala normal conversion) and an NFAContext describing
+ * the chain of rules (if any) followed to arrive at that state. There
+ * is also the semantic context, which is the "set" of predicates found
+ * on the path to this configuration.
+ *
+ * A DFA state may have multiple references to a particular state,
+ * but with different NFAContexts (with same or different alts)
+ * meaning that state was reached via a different set of rule invocations.
+ */
+public class DFAState extends State {
+ public static final int INITIAL_NUM_TRANSITIONS = 4;
+ public static final int PREDICTED_ALT_UNSET = NFA.INVALID_ALT_NUMBER-1;
+
+ /** We are part of what DFA? Use this ref to get access to the
+ * context trees for an alt.
+ */
+ public DFA dfa;
+
+ /** Track the transitions emanating from this DFA state. The List
+ * elements are Transition objects.
+ */
+	protected List<Transition> transitions =
+		new ArrayList<Transition>(INITIAL_NUM_TRANSITIONS);
+
+ /** When doing an acyclic DFA, this is the number of lookahead symbols
+ * consumed to reach this state. This value may be nonzero for most
+ * dfa states, but it is only a valid value if the user has specified
+ * a max fixed lookahead.
+ */
+ protected int k;
+
+ /** The NFA->DFA algorithm may terminate leaving some states
+ * without a path to an accept state, implying that upon certain
+ * input, the decision is not deterministic--no decision about
+ * predicting a unique alternative can be made. Recall that an
+ * accept state is one in which a unique alternative is predicted.
+ */
+ protected int acceptStateReachable = DFA.REACHABLE_UNKNOWN;
+
+ /** Rather than recheck every NFA configuration in a DFA state (after
+ * resolving) in findNewDFAStatesAndAddDFATransitions just check
+	 * this boolean. Saves a linear walk per DFA state creation.
+ * Every little bit helps.
+ */
+ protected boolean resolvedWithPredicates = false;
+
+ /** If a closure operation finds that we tried to invoke the same
+ * rule too many times (stack would grow beyond a threshold), it
+	 * marks the state as aborted and notifies the DecisionProbe.
+ */
+ public boolean abortedDueToRecursionOverflow = false;
+
+ /** If we detect recursion on more than one alt, decision is non-LL(*),
+ * but try to isolate it to only those states whose closure operations
+ * detect recursion. There may be other alts that are cool:
+ *
+ * a : recur '.'
+ * | recur ';'
+ * | X Y // LL(2) decision; don't abort and use k=1 plus backtracking
+ * | X Z
+ * ;
+ *
+ * 12/13/2007: Actually this has caused problems. If k=*, must terminate
+ * and throw out entire DFA; retry with k=1. Since recursive, do not
+ * attempt more closure ops as it may take forever. Exception thrown
+ * now and we simply report the problem. If synpreds exist, I'll retry
+ * with k=1.
+ */
+ protected boolean abortedDueToMultipleRecursiveAlts = false;
+
+	/** Build up the hash code for this state as NFA configurations
+	 * are added; it's a monotonically increasing list of configurations.
+ */
+ protected int cachedHashCode;
+
+ protected int cachedUniquelyPredicatedAlt = PREDICTED_ALT_UNSET;
+
+ public int minAltInConfigurations=Integer.MAX_VALUE;
+
+ public boolean atLeastOneConfigurationHasAPredicate = false;
+
+ /** The set of NFA configurations (state,alt,context) for this DFA state */
+	public OrderedHashSet<NFAConfiguration> nfaConfigurations =
+		new OrderedHashSet<NFAConfiguration>();
+
+	public List<NFAConfiguration> configurationsWithLabeledEdges =
+		new ArrayList<NFAConfiguration>();
+
+ /** Used to prevent the closure operation from looping to itself and
+ * hence looping forever. Sensitive to the NFA state, the alt, and
+	 * the stack context. This is just the nfa config set because we want to
+ * prevent closures only on states contributed by closure not reach
+ * operations.
+ *
+ * Two configurations identical including semantic context are
+ * considered the same closure computation. @see NFAToDFAConverter.closureBusy().
+ */
+ protected Set closureBusy = new HashSet();
+
+ /** As this state is constructed (i.e., as NFA states are added), we
+ * can easily check for non-epsilon transitions because the only
+ * transition that could be a valid label is transition(0). When we
+ * process this node eventually, we'll have to walk all states looking
+ * for all possible transitions. That is of the order: size(label space)
+ * times size(nfa states), which can be pretty damn big. It's better
+ * to simply track possible labels.
+ */
+ protected OrderedHashSet reachableLabels;
+
+ public DFAState(DFA dfa) {
+ this.dfa = dfa;
+ }
+
+ public void reset() {
+ //nfaConfigurations = null; // getGatedPredicatesInNFAConfigurations needs
+ configurationsWithLabeledEdges = null;
+ closureBusy = null;
+ reachableLabels = null;
+ }
+
+ public Transition transition(int i) {
+ return (Transition)transitions.get(i);
+ }
+
+ public int getNumberOfTransitions() {
+ return transitions.size();
+ }
+
+ public void addTransition(Transition t) {
+ transitions.add(t);
+ }
+
+ /** Add a transition from this state to target with label. Return
+ * the transition number from 0..n-1.
+ */
+ public int addTransition(DFAState target, Label label) {
+ transitions.add( new Transition(label, target) );
+ return transitions.size()-1;
+ }
+
+ public Transition getTransition(int trans) {
+ return transitions.get(trans);
+ }
+
+ public void removeTransition(int trans) {
+ transitions.remove(trans);
+ }
+
+ /** Add an NFA configuration to this DFA node. Add uniquely
+ * an NFA state/alt/syntactic&semantic context (chain of invoking state(s)
+ * and semantic predicate contexts).
+ *
+ * I don't see how there could be two configurations with same
+ * state|alt|synCtx and different semantic contexts because the
+ * semantic contexts are computed along the path to a particular state
+ * so those two configurations would have to have the same predicate.
+ * Nonetheless, the addition of configurations is unique on all
+ * configuration info. I guess I'm saying that syntactic context
+ * implies semantic context as the latter is computed according to the
+ * former.
+ *
+ * As we add configurations to this DFA state, track the set of all possible
+ * transition labels so we can simply walk it later rather than doing a
+ * loop over all possible labels in the NFA.
+ */
+ public void addNFAConfiguration(NFAState state, NFAConfiguration c) {
+ if ( nfaConfigurations.contains(c) ) {
+ return;
+ }
+
+ nfaConfigurations.add(c);
+
+ // track min alt rather than compute later
+ if ( c.alt < minAltInConfigurations ) {
+ minAltInConfigurations = c.alt;
+ }
+
+ if ( c.semanticContext!=SemanticContext.EMPTY_SEMANTIC_CONTEXT ) {
+ atLeastOneConfigurationHasAPredicate = true;
+ }
+
+ // update hashCode; for some reason using context.hashCode() also
+ // makes the GC take like 70% of the CPU and is slow!
+ cachedHashCode += c.state + c.alt;
+
+ // update reachableLabels
+ // We're adding an NFA state; check to see if it has a non-epsilon edge
+ if ( state.transition[0] != null ) {
+ Label label = state.transition[0].label;
+ if ( !(label.isEpsilon()||label.isSemanticPredicate()) ) {
+ // this NFA state has a non-epsilon edge, track for fast
+ // walking later when we do reach on this DFA state we're
+ // building.
+ configurationsWithLabeledEdges.add(c);
+ if ( state.transition[1] ==null ) {
+ // later we can check this to ignore o-A->o states in closure
+ c.singleAtomTransitionEmanating = true;
+ }
+ addReachableLabel(label);
+ }
+ }
+ }
+
+ public NFAConfiguration addNFAConfiguration(NFAState state,
+ int alt,
+ NFAContext context,
+ SemanticContext semanticContext)
+ {
+ NFAConfiguration c = new NFAConfiguration(state.stateNumber,
+ alt,
+ context,
+ semanticContext);
+ addNFAConfiguration(state, c);
+ return c;
+ }
+
+ /** Add label uniquely and disjointly; intersection with
+ * another set or int/char forces breaking up the set(s).
+ *
+ * Example, if reachable list of labels is [a..z, {k,9}, 0..9],
+ * the disjoint list will be [{a..j,l..z}, k, 9, 0..8].
+ *
+ * As we add NFA configurations to a DFA state, we might as well track
+ * the set of all possible transition labels to make the DFA conversion
+ * more efficient. W/o the reachable labels, we'd need to check the
+ * whole vocabulary space (could be 0..\uFFFF)! The problem is that
+ * labels can be sets, which may overlap with int labels or other sets.
+ * As we need a deterministic set of transitions from any
+ * state in the DFA, we must make the reachable labels set disjoint.
+ * This operation amounts to finding the character classes for this
+ * DFA state whereas with tools like flex, that need to generate a
+ * homogeneous DFA, must compute char classes across all states.
+ * We are going to generate DFAs with heterogeneous states so we
+ * only care that the set of transitions out of a single state are
+ * unique. :)
+ *
+ * The idea for adding a new set, t, is to look for overlap with the
+ * elements of existing list s. Upon overlap, replace
+ * existing set s[i] with two new disjoint sets, s[i]-t and s[i]&t.
+ * (if s[i]-t is nil, don't add). The remainder is t-s[i], which is
+ * what you want to add to the set minus what was already there. The
+ * remainder must then be compared against the i+1..n elements in s
+ * looking for another collision. Each collision results in a smaller
+ * and smaller remainder. Stop when you run out of s elements or
+ * remainder goes to nil. If remainder is non nil when you run out of
+ * s elements, then add remainder to the end.
+ *
+ * Single element labels are treated as sets to make the code uniform.
+ */
+ protected void addReachableLabel(Label label) {
+ if ( reachableLabels==null ) {
+ reachableLabels = new OrderedHashSet();
+ }
+ /*
+ System.out.println("addReachableLabel to state "+dfa.decisionNumber+"."+stateNumber+": "+label.getSet().toString(dfa.nfa.grammar));
+ System.out.println("start of add to state "+dfa.decisionNumber+"."+stateNumber+": " +
+ "reachableLabels="+reachableLabels.toString());
+ */
+ if ( reachableLabels.contains(label) ) { // exact label present
+ return;
+ }
+ IntSet t = label.getSet();
+ IntSet remainder = t; // remainder starts out as whole set to add
+ int n = reachableLabels.size(); // only look at initial elements
+ // walk the existing list looking for the collision
+		for (int i=0; i<n; i++) {
+			Label rl = (Label)reachableLabels.get(i);
+			if ( !Label.intersect(label, rl) ) {
+				continue; // no collision with this element
+			}
+			// Replace existing s_i with the overlap s_i&t; the piece of
+			// s_i outside t, if any, becomes a new element at the end.
+			IntSet s_i = rl.getSet();
+			IntSet intersection = s_i.and(t);
+			reachableLabels.set(i, new Label(intersection));
+			IntSet existingMinusNewElements = s_i.subtract(t);
+			if ( !existingMinusNewElements.isNil() ) {
+				reachableLabels.add(new Label(existingMinusNewElements));
+			}
+			// whatever of t survives must be checked against later elements
+			remainder = t.subtract(s_i);
+			if ( remainder.isNil() ) {
+				break; // nothing left to add to the set; done!
+			}
+			t = remainder;
+		}
+		if ( !remainder.isNil() ) {
+			reachableLabels.add(new Label(remainder));
+		}
+	}
+
+	public void setNFAConfigurations(OrderedHashSet<NFAConfiguration> configs) {
+ this.nfaConfigurations = configs;
+ }
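The splitting scheme that addReachableLabel's comment describes can be sketched with plain `BitSet`s standing in for ANTLR's `Label`/`IntSet` (class, method, and helper names here are illustrative, not the real API): on overlap, `s[i]` becomes `s[i]&t`, the leftover `s[i]-t` is appended, and the shrinking remainder of `t` keeps probing later elements.

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

public class DisjointLabels {
    // Insert t into a list of pairwise-disjoint sets, splitting any
    // overlapping element s[i] into s[i]&t and s[i]-t.
    static void addDisjoint(List<BitSet> labels, BitSet t) {
        BitSet remainder = (BitSet) t.clone(); // part of t not yet placed
        int n = labels.size();                 // only examine pre-existing sets
        for (int i = 0; i < n && !remainder.isEmpty(); i++) {
            BitSet s = labels.get(i);
            BitSet overlap = (BitSet) s.clone();
            overlap.and(remainder);
            if (overlap.isEmpty()) continue;   // disjoint; nothing to split
            BitSet sMinusT = (BitSet) s.clone();
            sMinusT.andNot(remainder);         // what s keeps to itself
            labels.set(i, overlap);            // s[i] := s[i]&t
            if (!sMinusT.isEmpty()) labels.add(sMinusT);
            remainder.andNot(s);               // shrink what's left of t
        }
        if (!remainder.isEmpty()) labels.add(remainder);
    }

    static BitSet range(char lo, char hi) { BitSet b = new BitSet(); b.set(lo, hi + 1); return b; }
    static BitSet bits(char... cs) { BitSet b = new BitSet(); for (char c : cs) b.set(c); return b; }

    public static void main(String[] args) {
        List<BitSet> labels = new ArrayList<>();
        addDisjoint(labels, range('a', 'z'));
        addDisjoint(labels, bits('k', '9'));
        addDisjoint(labels, range('0', '9'));
        // [a..z, {k,9}, 0..9] becomes the disjoint [{k}, {a..j,l..z}, {9}, {0..8}]
        System.out.println(labels.size()); // 4
    }
}
```

Running the javadoc's own example, [a..z, {k,9}, 0..9], yields the four disjoint classes {k}, {a..j,l..z}, {9}, and {0..8}, matching the partition the comment promises.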
+
+ /** A decent hash for a DFA state is the sum of the NFA state/alt pairs.
+ * This is used when we add DFAState objects to the DFA.states Map and
+ * when we compare DFA states. Computed in addNFAConfiguration()
+ */
+ public int hashCode() {
+ if ( cachedHashCode==0 ) {
+ // LL(1) algorithm doesn't use NFA configurations, which
+ // dynamically compute hashcode; must have something; use super
+ return super.hashCode();
+ }
+ return cachedHashCode;
+ }
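Because `cachedHashCode` is built by summing `state + alt` as configurations arrive, two DFA states holding the same configuration set hash identically no matter the insertion order; `equals()` then confirms true equality. A toy demonstration of why an additive hash is order-independent (names are mine, not ANTLR's):

```java
public class ConfigHash {
    // Additive hash over (state, alt) pairs: addition is commutative,
    // so insertion order cannot change the result.
    static int hash(int[][] configs) {
        int h = 0;
        for (int[] c : configs) h += c[0] + c[1]; // c = {nfaState, alt}
        return h;
    }

    public static void main(String[] args) {
        int a = hash(new int[][]{{5, 1}, {7, 2}});
        int b = hash(new int[][]{{7, 2}, {5, 1}});
        System.out.println(a == b); // true
    }
}
```

The trade-off, as with any additive hash, is weak distribution (e.g. {5,1} and {4,2} collide); that is acceptable here because the hash only narrows candidates before the full configuration-set comparison.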
+
+ /** Two DFAStates are equal if their NFA configuration sets are the
+ * same. This method is used to see if a DFA state already exists.
+ *
+ * Because the number of alternatives and number of NFA configurations are
+ * finite, there is a finite number of DFA states that can be processed.
+ * This is necessary to show that the algorithm terminates.
+ *
+ * Cannot test the DFA state numbers here because in DFA.addState we need
+ * to know if any other state exists that has this exact set of NFA
+ * configurations. The DFAState state number is irrelevant.
+ */
+ public boolean equals(Object o) {
+ // compare set of NFA configurations in this set with other
+ DFAState other = (DFAState)o;
+ return this.nfaConfigurations.equals(other.nfaConfigurations);
+ }
+
+ /** Walk each configuration and if they are all the same alt, return
+ * that alt else return NFA.INVALID_ALT_NUMBER. Ignore resolved
+ * configurations, but don't ignore resolveWithPredicate configs
+ * because this state should not be an accept state. We need to add
+ * this to the work list and then have semantic predicate edges
+ * emanating from it.
+ */
+ public int getUniquelyPredictedAlt() {
+ if ( cachedUniquelyPredicatedAlt!=PREDICTED_ALT_UNSET ) {
+ return cachedUniquelyPredicatedAlt;
+ }
+ int alt = NFA.INVALID_ALT_NUMBER;
+ int numConfigs = nfaConfigurations.size();
+ for (int i = 0; i < numConfigs; i++) {
+ NFAConfiguration configuration = (NFAConfiguration) nfaConfigurations.get(i);
+ // ignore anything we resolved; predicates will still result
+ // in transitions out of this state, so must count those
+ // configurations; i.e., don't ignore resolveWithPredicate configs
+ if ( configuration.resolved ) {
+ continue;
+ }
+ if ( alt==NFA.INVALID_ALT_NUMBER ) {
+ alt = configuration.alt; // found first nonresolved alt
+ }
+ else if ( configuration.alt!=alt ) {
+ return NFA.INVALID_ALT_NUMBER;
+ }
+ }
+ this.cachedUniquelyPredicatedAlt = alt;
+ return alt;
+ }
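The scan above reduces to: skip resolved configurations, latch the first alt seen, and bail out the moment a different alt appears. A stripped-down version over parallel arrays (`INVALID_ALT` standing in for `NFA.INVALID_ALT_NUMBER`; all names here are illustrative):

```java
public class UniqueAlt {
    static final int INVALID_ALT = 0;

    // alts[i]/resolved[i] describe configuration i of a DFA state.
    static int uniquelyPredictedAlt(int[] alts, boolean[] resolved) {
        int alt = INVALID_ALT;
        for (int i = 0; i < alts.length; i++) {
            if (resolved[i]) continue;                    // resolved configs don't vote
            if (alt == INVALID_ALT) alt = alts[i];        // first unresolved alt
            else if (alts[i] != alt) return INVALID_ALT;  // conflict: no unique alt
        }
        return alt;
    }

    public static void main(String[] args) {
        // alt 2's configuration was resolved away, so alt 1 is uniquely predicted
        System.out.println(uniquelyPredictedAlt(
            new int[]{1, 1, 2}, new boolean[]{false, false, true})); // 1
    }
}
```

With no configuration resolved, the same input {1, 2} yields `INVALID_ALT`, which is the case where the DFA state cannot be an accept state.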
+
+ /** Return the uniquely mentioned alt from the NFA configurations;
+ * Ignore the resolved bit etc... Return INVALID_ALT_NUMBER
+ * if there is more than one alt mentioned.
+ */
+ public int getUniqueAlt() {
+ int alt = NFA.INVALID_ALT_NUMBER;
+ int numConfigs = nfaConfigurations.size();
+ for (int i = 0; i < numConfigs; i++) {
+ NFAConfiguration configuration = (NFAConfiguration) nfaConfigurations.get(i);
+ if ( alt==NFA.INVALID_ALT_NUMBER ) {
+ alt = configuration.alt; // found first alt
+ }
+ else if ( configuration.alt!=alt ) {
+ return NFA.INVALID_ALT_NUMBER;
+ }
+ }
+ return alt;
+ }
+
+ /** When more than one alternative can match the same input, the first
+ * alternative is chosen to resolve the conflict. The other alts
+ * are "turned off" by setting the "resolved" flag in the NFA
+ * configurations. Return the set of disabled alternatives. For
+ *
+ * a : A | A | A ;
+ *
+ * this method returns {2,3} as disabled. This does not mean that
+ * the alternative is totally unreachable, it just means that for this
+ * DFA state, that alt is disabled. There may be other accept states
+ * for that alt.
+ */
+ public Set getDisabledAlternatives() {
+ Set disabled = new LinkedHashSet();
+ int numConfigs = nfaConfigurations.size();
+ for (int i = 0; i < numConfigs; i++) {
+ NFAConfiguration configuration = (NFAConfiguration) nfaConfigurations.get(i);
+ if ( configuration.resolved ) {
+ disabled.add(Utils.integer(configuration.alt));
+ }
+ }
+ return disabled;
+ }
+
+ protected Set getNonDeterministicAlts() {
+ int user_k = dfa.getUserMaxLookahead();
+ if ( user_k>0 && user_k==k ) {
+ // if fixed lookahead, then more than 1 alt is a nondeterminism
+ // if we have hit the max lookahead
+ return getAltSet();
+ }
+ else if ( abortedDueToMultipleRecursiveAlts || abortedDueToRecursionOverflow ) {
+ // if we had to abort for non-LL(*) state assume all alts are a problem
+ return getAltSet();
+ }
+ else {
+ return getConflictingAlts();
+ }
+ }
+
+ /** Walk each NFA configuration in this DFA state looking for a conflict
+ * where (s|i|ctx) and (s|j|ctx) exist, indicating that state s with
+ * context conflicting ctx predicts alts i and j. Return an Integer set
+ * of the alternative numbers that conflict. Two contexts conflict if
+ * they are equal or one is a stack suffix of the other or one is
+ * the empty context.
+ *
+ * Use a hash table to record the lists of configs for each state
+ * as they are encountered. We need only consider states for which
+ * there is more than one configuration. The configurations' predicted
+ * alt must be different or must have different contexts to avoid a
+ * conflict.
+ *
+ * Don't report conflicts for DFA states that have conflicting Tokens
+ * rule NFA states; they will be resolved in favor of the first rule.
+ */
+ protected Set getConflictingAlts() {
+ // TODO this is called multiple times: cache result?
+ //System.out.println("getNondetAlts for DFA state "+stateNumber);
+ Set nondeterministicAlts = new HashSet();
+
+ // If only 1 NFA conf then no way it can be nondeterministic;
+ // save the overhead. There are many o-a->o NFA transitions
+ // and so we save a hash map and iterator creation for each
+ // state.
+ int numConfigs = nfaConfigurations.size();
+ if ( numConfigs <=1 ) {
+ return null;
+ }
+
+ // First get a list of configurations for each state.
+ // Most of the time, each state will have one associated configuration.
+ MultiMap<Integer, NFAConfiguration> stateToConfigListMap =
+ new MultiMap<Integer, NFAConfiguration>();
+ for (int i = 0; i < numConfigs; i++) {
+ NFAConfiguration configuration = (NFAConfiguration) nfaConfigurations.get(i);
+ Integer stateI = Utils.integer(configuration.state);
+ stateToConfigListMap.map(stateI, configuration);
+ }
+ // potential conflicts are states with > 1 configuration and diff alts
+ Set states = stateToConfigListMap.keySet();
+ int numPotentialConflicts = 0;
+ for (Iterator it = states.iterator(); it.hasNext();) {
+ Integer stateI = (Integer) it.next();
+ boolean thisStateHasPotentialProblem = false;
+ List configsForState = (List)stateToConfigListMap.get(stateI);
+ int alt=0;
+ int numConfigsForState = configsForState.size();
+ for (int i = 0; i < numConfigsForState && numConfigsForState>1 ; i++) {
+ NFAConfiguration c = (NFAConfiguration) configsForState.get(i);
+ if ( alt==0 ) {
+ alt = c.alt;
+ }
+ else if ( c.alt!=alt ) {
+ /*
+ System.out.println("potential conflict in state "+stateI+
+ " configs: "+configsForState);
+ */
+ // 11/28/2005: don't report closures that pinch back
+ // together in Tokens rule. We want to silently resolve
+ // to the first token definition ala lex/flex by ignoring
+ // these conflicts.
+ // Also this ensures that lexers look for more and more
+ // characters (longest match) before resorting to predicates.
+ // TestSemanticPredicates.testLexerMatchesLongestThenTestPred()
+ // for example would terminate at state s1 and test predicate
+ // meaning input "ab" would test preds to decide what to
+ // do but it should match rule C w/o testing preds.
+ if ( dfa.nfa.grammar.type!=Grammar.LEXER ||
+ !dfa.decisionNFAStartState.enclosingRule.name.equals(Grammar.ARTIFICIAL_TOKENS_RULENAME) )
+ {
+ numPotentialConflicts++;
+ thisStateHasPotentialProblem = true;
+ }
+ }
+ }
+ if ( !thisStateHasPotentialProblem ) {
+ // remove NFA state's configurations from
+ // further checking; no issues with it
+ // (can't remove as it's concurrent modification; set to null)
+ stateToConfigListMap.put(stateI, null);
+ }
+ }
+
+ // a fast check for potential issues; most states have none
+ if ( numPotentialConflicts==0 ) {
+ return null;
+ }
+
+ // we have a potential problem, so now go through config lists again
+ // looking for different alts (only states with potential issues
+ // are left in the states set). Now we will check context.
+ // For example, the list of configs for NFA state 3 in some DFA
+ // state might be:
+ // [3|2|[28 18 $], 3|1|[28 $], 3|1, 3|2]
+ // I want to create a map from context to alts looking for overlap:
+ // [28 18 $] -> 2
+ // [28 $] -> 1
+ // [$] -> 1,2
+ // Indeed a conflict exists as same state 3, same context [$], predicts
+ // alts 1 and 2.
+ // walk each state with potential conflicting configurations
+ for (Iterator it = states.iterator(); it.hasNext();) {
+ Integer stateI = (Integer) it.next();
+ List configsForState = (List)stateToConfigListMap.get(stateI);
+ // compare each configuration pair s, t to ensure:
+ // s.ctx different than t.ctx if s.alt != t.alt
+ int numConfigsForState = 0;
+ if ( configsForState!=null ) {
+ numConfigsForState = configsForState.size();
+ }
+ for (int i = 0; i < numConfigsForState; i++) {
+ NFAConfiguration s = (NFAConfiguration) configsForState.get(i);
+ for (int j = i+1; j < numConfigsForState; j++) {
+ NFAConfiguration t = (NFAConfiguration)configsForState.get(j);
+ // conflicts means s.ctx==t.ctx or s.ctx is a stack
+ // suffix of t.ctx or vice versa (if alts differ).
+ // Also a conflict if s.ctx or t.ctx is empty
+ if ( s.alt != t.alt && s.context.conflictsWith(t.context) ) {
+ nondeterministicAlts.add(Utils.integer(s.alt));
+ nondeterministicAlts.add(Utils.integer(t.alt));
+ }
+ }
+ }
+ }
+
+ if ( nondeterministicAlts.size()==0 ) {
+ return null;
+ }
+ return nondeterministicAlts;
+ }
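As a standalone illustration of the group-then-compare check above, the sketch below uses hypothetical minimal types; its conflict test is deliberately simplified to "equal contexts, or either context empty", rather than ANTLR's full stack-suffix test in NFAContext:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Sketch of conflict detection over (state, alt, context) configurations:
// group by NFA state, then flag pairs that predict different alts under
// conflicting contexts. Context conflict is simplified here to "equal or
// either empty" (the real test also accepts stack suffixes).
public class ConflictSketch {
    public static final class Config {
        final int state, alt; final String ctx;
        public Config(int state, int alt, String ctx) {
            this.state = state; this.alt = alt; this.ctx = ctx;
        }
    }

    static boolean conflicts(String a, String b) {
        return a.isEmpty() || b.isEmpty() || a.equals(b);
    }

    public static Set<Integer> conflictingAlts(List<Config> configs) {
        Map<Integer, List<Config>> byState = new HashMap<>();
        for (Config c : configs) {
            byState.computeIfAbsent(c.state, k -> new ArrayList<>()).add(c);
        }
        Set<Integer> bad = new TreeSet<>();
        for (List<Config> group : byState.values()) {
            for (int i = 0; i < group.size(); i++) {
                for (int j = i + 1; j < group.size(); j++) {
                    Config s = group.get(i), t = group.get(j);
                    if (s.alt != t.alt && conflicts(s.ctx, t.ctx)) {
                        bad.add(s.alt);
                        bad.add(t.alt);
                    }
                }
            }
        }
        return bad;
    }
}
```

With the comment's example configs [3|2|[28 18 $], 3|1|[28 $], 3|1, 3|2], the empty contexts overlap with everything, so alts 1 and 2 come back as conflicting.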
+
+ /** Get the set of all alts mentioned by all NFA configurations in this
+ * DFA state.
+ */
+ public Set getAltSet() {
+ int numConfigs = nfaConfigurations.size();
+ Set alts = new HashSet();
+ for (int i = 0; i < numConfigs; i++) {
+ NFAConfiguration configuration = (NFAConfiguration) nfaConfigurations.get(i);
+ alts.add(Utils.integer(configuration.alt));
+ }
+ if ( alts.size()==0 ) {
+ return null;
+ }
+ return alts;
+ }
+
+ public Set getGatedSyntacticPredicatesInNFAConfigurations() {
+ int numConfigs = nfaConfigurations.size();
+ Set synpreds = new HashSet();
+ for (int i = 0; i < numConfigs; i++) {
+ NFAConfiguration configuration = (NFAConfiguration) nfaConfigurations.get(i);
+ SemanticContext gatedPredExpr =
+ configuration.semanticContext.getGatedPredicateContext();
+ // if this is a manual syn pred (gated and syn pred), add
+ if ( gatedPredExpr!=null &&
+ configuration.semanticContext.isSyntacticPredicate() )
+ {
+ synpreds.add(configuration.semanticContext);
+ }
+ }
+ if ( synpreds.size()==0 ) {
+ return null;
+ }
+ return synpreds;
+ }
+
+ /** For gated productions, we need an OR'd list of all predicates for the
+ * target of an edge so we can gate the edge based upon the predicates
+ * associated with taking that path (if any).
+ *
+ * For syntactic predicates, we only want to generate predicate
+ * evaluations as the DFA transitions to an accept state; it's
+ * wasteful to do it earlier. So, only add gated preds derived from
+ * manually-specified syntactic predicates if this is an accept state.
+ *
+ * Also, since configurations w/o gated predicates are like true
+ * gated predicates, finding a configuration whose alt has no gated
+ * predicate implies we should evaluate the predicate to true. This
+ * means the whole edge has to be ungated. Consider:
+ *
+ * X : ('a' | {p}?=> 'a')
+ * | 'a' 'b'
+ * ;
+ *
+ * Here, 'a' gets you from s0 to s1 but you can't test p because
+ * plain 'a' is ok. It's also ok for starting alt 2. Hence, you can't
+ * test p. Even on the edge going to accept state for alt 1 of X, you
+ * can't test p. You can get to the same place with and w/o the context.
+ * Therefore, it is never ok to test p in this situation.
+ *
+ * TODO: cache this as it's called a lot; or at least set bit if >1 present in state
+ */
+ public SemanticContext getGatedPredicatesInNFAConfigurations() {
+ SemanticContext unionOfPredicatesFromAllAlts = null;
+ int numConfigs = nfaConfigurations.size();
+ for (int i = 0; i < numConfigs; i++) {
+ NFAConfiguration configuration = (NFAConfiguration) nfaConfigurations.get(i);
+ SemanticContext gatedPredExpr =
+ configuration.semanticContext.getGatedPredicateContext();
+ if ( gatedPredExpr==null ) {
+ // if we ever find a configuration w/o a gated predicate
+ // (even if it's a nongated predicate), we cannot gate
+ // the incident edges.
+ return null;
+ }
+ else if ( acceptState || !configuration.semanticContext.isSyntacticPredicate() ) {
+ // at this point we have a gated predicate and, due to elseif,
+ // we know it's an accept state or not a syn pred. In this case,
+ // it's safe to add the gated predicate to the union. We
+ // only want to add syn preds if it's an accept state. Other
+ // gated preds can be used with edges leading to accept states.
+ if ( unionOfPredicatesFromAllAlts==null ) {
+ unionOfPredicatesFromAllAlts = gatedPredExpr;
+ }
+ else {
+ unionOfPredicatesFromAllAlts =
+ SemanticContext.or(unionOfPredicatesFromAllAlts,gatedPredExpr);
+ }
+ }
+ }
+ if ( unionOfPredicatesFromAllAlts instanceof SemanticContext.TruePredicate ) {
+ return null;
+ }
+ return unionOfPredicatesFromAllAlts;
+ }
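The union rule above, where one ungated configuration makes the whole edge ungated and otherwise all gated predicates are OR'd together, can be sketched with strings standing in for SemanticContext (names are illustrative, not ANTLR's API):

```java
import java.util.List;

// Sketch of edge gating: an edge is gated by the OR of the gated
// predicates of all configurations reaching it; a single configuration
// without a gated predicate (null here) ungates the whole edge.
public class GatedUnion {
    public static String unionOfGatedPreds(List<String> gatedPreds) {
        String union = null;
        for (String p : gatedPreds) {
            if (p == null) {
                return null; // some path needs no predicate: cannot gate edge
            }
            union = (union == null) ? p : "(" + union + "||" + p + ")";
        }
        return union;
    }
}
```

For the X rule in the javadoc, the plain 'a' alternative contributes no gated predicate, so the edge matching 'a' is left ungated and p is never tested.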
+
+ /** Is an accept state reachable from this state? */
+ public int getAcceptStateReachable() {
+ return acceptStateReachable;
+ }
+
+ public void setAcceptStateReachable(int acceptStateReachable) {
+ this.acceptStateReachable = acceptStateReachable;
+ }
+
+ public boolean isResolvedWithPredicates() {
+ return resolvedWithPredicates;
+ }
+
+ /** Print all NFA states plus what alts they predict */
+ public String toString() {
+ StringBuffer buf = new StringBuffer();
+ buf.append(stateNumber+":{");
+ for (int i = 0; i < nfaConfigurations.size(); i++) {
+ NFAConfiguration configuration = (NFAConfiguration) nfaConfigurations.get(i);
+ if ( i>0 ) {
+ buf.append(", ");
+ }
+ buf.append(configuration);
+ }
+ buf.append("}");
+ return buf.toString();
+ }
+
+ public int getLookaheadDepth() {
+ return k;
+ }
+
+ public void setLookaheadDepth(int k) {
+ this.k = k;
+ if ( k > dfa.max_k ) { // track max k for entire DFA
+ dfa.max_k = k;
+ }
+ }
+
+}
diff --git a/antlr_3_1_source/analysis/DecisionProbe.java b/antlr_3_1_source/analysis/DecisionProbe.java
new file mode 100644
index 0000000..762ee6d
--- /dev/null
+++ b/antlr_3_1_source/analysis/DecisionProbe.java
@@ -0,0 +1,915 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+import org.antlr.tool.ErrorManager;
+import org.antlr.tool.Grammar;
+import org.antlr.tool.GrammarAST;
+import org.antlr.tool.ANTLRParser;
+import org.antlr.misc.Utils;
+import org.antlr.misc.MultiMap;
+
+import java.util.*;
+
+import antlr.Token;
+
+/** Collection of information about what is wrong with a decision as
+ * discovered while building the DFA predictor.
+ *
+ * The information is collected during NFA->DFA conversion and, while
+ * some of this is available elsewhere, it is nice to have it all tracked
+ * in one spot so a great error message can be easily had. I also like
+ * the fact that this object tracks it all for later perusing to make an
+ * excellent error message instead of lots of imprecise on-the-fly warnings
+ * (during conversion).
+ *
+ * A decision normally only has one problem; e.g., some input sequence
+ * can be matched by multiple alternatives. Unfortunately, some decisions
+ * such as
+ *
+ * a : ( A | B ) | ( A | B ) | A ;
+ *
+ * have multiple problems. So in general, you should approach a decision
+ * as having multiple flaws each one uniquely identified by a DFAState.
+ * For example, statesWithSyntacticallyAmbiguousAltsSet tracks the set of
+ * all DFAStates where ANTLR has discovered a problem. Recall that a decision
+ * is represented internally with a DFA composed of multiple states, each of
+ * which could potentially have problems.
+ *
+ * Because of this, you need to iterate over this list of DFA states. You'll
+ * note that most of the informational methods like
+ * getSampleNonDeterministicInputSequence() require a DFAState. This state
+ * will be one of the iterated states from stateToSyntacticallyAmbiguousAltsSet.
+ *
+ * This class is not thread safe due to shared use of visited maps etc...
+ * Only one thread should really need to access one DecisionProbe anyway.
+ */
+public class DecisionProbe {
+ public DFA dfa;
+
+ /** Track all DFA states with nondeterministic alternatives.
+ * By reaching the same DFA state, a path through the NFA for some input
+ * is able to reach the same NFA state by starting at more than one
+ * alternative's left edge. We may later find that predicates
+ * resolve the issue, but we track the info anyway.
+ * Note that from the DFA state, you can ask for
+ * which alts are nondeterministic.
+ */
+ protected Set statesWithSyntacticallyAmbiguousAltsSet = new HashSet();
+
+ /** Track just like stateToSyntacticallyAmbiguousAltsMap, but only
+ * for nondeterminisms that arise in the Tokens rule such as keyword vs
+ * ID rule. The state maps to the list of Tokens rule alts that are
+ * in conflict.
+ */
+ protected Map<DFAState, Set<Integer>> stateToSyntacticallyAmbiguousTokensRuleAltsMap =
+ new HashMap<DFAState, Set<Integer>>();
+
+ /** Was a syntactic ambiguity resolved with predicates? Any DFA
+ * state that predicts more than one alternative, must be resolved
+ * with predicates or it should be reported to the user.
+ */
+ protected Set statesResolvedWithSemanticPredicatesSet = new HashSet();
+
+ /** Track the predicates for each alt per DFA state;
+ * more than one DFA state might have syntactically ambig alt prediction.
+ * Maps DFA state to another map, mapping alt number to a
+ * SemanticContext (pred(s) to execute to resolve syntactic ambiguity).
+ */
+ protected Map<DFAState, Map<Integer, SemanticContext>> stateToAltSetWithSemanticPredicatesMap =
+ new HashMap<DFAState, Map<Integer, SemanticContext>>();
+
+ /** Tracks alts insufficiently covered.
+ * For example, p1||true gets reduced to true and so leaves
+ * whole alt uncovered. This maps DFA state to the set of alts
+ */
+ protected Map<DFAState, Map<Integer, Set<Token>>> stateToIncompletelyCoveredAltsMap =
+ new HashMap<DFAState, Map<Integer, Set<Token>>>();
+
+ /** The set of states w/o emanating edges and w/o resolving sem preds. */
+ protected Set danglingStates = new HashSet();
+
+ /** The overall list of alts within the decision that have at least one
+ * conflicting input sequence.
+ */
+ protected Set altsWithProblem = new HashSet();
+
+ /** If decision with > 1 alt has recursion in > 1 alt, it's nonregular
+ * lookahead. The decision cannot be made with a DFA.
+ * The alts are stored in altsWithProblem.
+ */
+ protected boolean nonLLStarDecision = false;
+
+ /** Recursion is limited to a particular depth. If that limit is exceeded
+ * the proposed new NFAConfiguration is recorded for the associated DFA state.
+ */
+ protected MultiMap<Integer, NFAConfiguration> stateToRecursionOverflowConfigurationsMap =
+ new MultiMap<Integer, NFAConfiguration>();
+ /*
+ protected Map<Integer, List<NFAConfiguration>> stateToRecursionOverflowConfigurationsMap =
+ new HashMap<Integer, List<NFAConfiguration>>();
+ */
+
+ /** Left recursion discovered. The proposed new NFAConfiguration
+ * is recorded for the associated DFA state.
+ protected Map<Integer, List<NFAConfiguration>> stateToLeftRecursiveConfigurationsMap =
+ new HashMap<Integer, List<NFAConfiguration>>();
+ */
+
+ /** Did ANTLR have to terminate early on the analysis of this decision? */
+ protected boolean timedOut = false;
+
+ /** Used to find paths through syntactically ambiguous DFA. If we've
+ * seen this state number before, what did we learn?
+ */
+ protected Map stateReachable;
+
+ public static final Integer REACHABLE_BUSY = Utils.integer(-1);
+ public static final Integer REACHABLE_NO = Utils.integer(0);
+ public static final Integer REACHABLE_YES = Utils.integer(1);
+
+ /** Used while finding a path through an NFA whose edge labels match
+ * an input sequence. Tracks the input position
+ * we were at the last time at this node. If same input position, then
+ * we'd have reached same state without consuming input...probably an
+ * infinite loop. Stop. Set<String> whose elements look like
+ * stateNumber_labelIndex.
+ */
+ protected Set statesVisitedAtInputDepth;
+
+ protected Set statesVisitedDuringSampleSequence;
+
+ public static boolean verbose = false;
+
+ public DecisionProbe(DFA dfa) {
+ this.dfa = dfa;
+ }
+
+ // I N F O R M A T I O N A B O U T D E C I S I O N
+
+ /** Return a string like "3:22: ( A {;} | B )" that describes this
+ * decision.
+ */
+ public String getDescription() {
+ return dfa.getNFADecisionStartState().getDescription();
+ }
+
+ public boolean isReduced() {
+ return dfa.isReduced();
+ }
+
+ public boolean isCyclic() {
+ return dfa.isCyclic();
+ }
+
+ /** If no states are dead-ends, no alts are unreachable, and there are
+ * no nondeterminisms unresolved by sem preds, all is ok with the decision.
+ */
+ public boolean isDeterministic() {
+ if ( danglingStates.size()==0 &&
+ statesWithSyntacticallyAmbiguousAltsSet.size()==0 &&
+ dfa.getUnreachableAlts().size()==0 )
+ {
+ return true;
+ }
+
+ if ( statesWithSyntacticallyAmbiguousAltsSet.size()>0 ) {
+ Iterator it =
+ statesWithSyntacticallyAmbiguousAltsSet.iterator();
+ while ( it.hasNext() ) {
+ DFAState d = (DFAState) it.next();
+ if ( !statesResolvedWithSemanticPredicatesSet.contains(d) ) {
+ return false;
+ }
+ }
+ // no syntactically ambig alts were left unresolved by predicates
+ return true;
+ }
+ return false;
+ }
+
+ /** Did the analysis complete its work? */
+ public boolean analysisTimedOut() {
+ return timedOut;
+ }
+
+ /** Did a recursion overflow abort analysis of the DFA? */
+ public boolean analysisOverflowed() {
+ return stateToRecursionOverflowConfigurationsMap.size()>0;
+ }
+
+ /** Found recursion in > 1 alt */
+ public boolean isNonLLStarDecision() {
+ return nonLLStarDecision;
+ }
+
+ /** How many states does the DFA predictor have? */
+ public int getNumberOfStates() {
+ return dfa.getNumberOfStates();
+ }
+
+ /** Get a list of all unreachable alternatives for this decision. There
+ * may be multiple alternatives with ambiguous input sequences, but this
+ * is the overall list of unreachable alternatives (either due to
+ * conflict resolution or alts w/o accept states).
+ */
+ public List getUnreachableAlts() {
+ return dfa.getUnreachableAlts();
+ }
+
+ /** Return the set of states w/o emanating edges and w/o resolving sem preds.
+ * These states come about because the analysis algorithm had to
+ * terminate early to avoid infinite recursion for example (due to
+ * left recursion perhaps).
+ */
+ public Set getDanglingStates() {
+ return danglingStates;
+ }
+
+ public Set getNonDeterministicAlts() {
+ return altsWithProblem;
+ }
+
+ /** Return the sorted list of alts that conflict within a single state.
+ * Note that predicates may resolve the conflict.
+ */
+ public List getNonDeterministicAltsForState(DFAState targetState) {
+ Set nondetAlts = targetState.getNonDeterministicAlts();
+ if ( nondetAlts==null ) {
+ return null;
+ }
+ List sorted = new LinkedList();
+ sorted.addAll(nondetAlts);
+ Collections.sort(sorted); // make sure it's 1, 2, ...
+ return sorted;
+ }
+
+ /** Return all DFA states in this DFA that have NFA configurations that
+ * conflict. You must report a problem for each state in this set
+ * because each state represents a different input sequence.
+ */
+ public Set getDFAStatesWithSyntacticallyAmbiguousAlts() {
+ return statesWithSyntacticallyAmbiguousAltsSet;
+ }
+
+ /** Which alts were specifically turned off to resolve nondeterminisms?
+ * This is different than the unreachable alts. Disabled doesn't mean that
+ * the alternative is necessarily totally unreachable; it just means
+ * that for this DFA state, that alt is disabled. There may be other
+ * accept states for that alt that make an alt reachable.
+ */
+ public Set getDisabledAlternatives(DFAState d) {
+ return d.getDisabledAlternatives();
+ }
+
+ /** If a recursion overflow is resolved with predicates, then we need
+ * to shut off the warning that would be generated.
+ */
+ public void removeRecursiveOverflowState(DFAState d) {
+ Integer stateI = Utils.integer(d.stateNumber);
+ stateToRecursionOverflowConfigurationsMap.remove(stateI);
+ }
+
+ /** Return a List<Label> indicating an input sequence that can be matched
+ * from the start state of the DFA to the targetState (which is known
+ * to have a problem).
+ */
+ public List getSampleNonDeterministicInputSequence(DFAState targetState) {
+ Set dfaStates = getDFAPathStatesToTarget(targetState);
+ statesVisitedDuringSampleSequence = new HashSet();
+ List labels = new ArrayList(); // may access ith element; use array
+ if ( dfa==null || dfa.startState==null ) {
+ return labels;
+ }
+ getSampleInputSequenceUsingStateSet(dfa.startState,
+ targetState,
+ dfaStates,
+ labels);
+ return labels;
+ }
+
+ /** Given a List<Label>, return a String with a useful representation
+ * of the associated input string. One could show something different
+ * for lexers and parsers, for example.
+ */
+ public String getInputSequenceDisplay(List labels) {
+ Grammar g = dfa.nfa.grammar;
+ StringBuffer buf = new StringBuffer();
+ for (Iterator it = labels.iterator(); it.hasNext();) {
+ Label label = (Label) it.next();
+ buf.append(label.toString(g));
+ if ( it.hasNext() && g.type!=Grammar.LEXER ) {
+ buf.append(' ');
+ }
+ }
+ return buf.toString();
+ }
+
+ /** Given an alternative associated with a nondeterministic DFA state,
+ * find the path of NFA states associated with the labels sequence.
+ * Useful for tracing where in the NFA a single input sequence can be
+ * matched. For different alts, you should get different NFA paths.
+ *
+ * The first NFA state for all NFA paths will be the same: the starting
+ * NFA state of the first nondeterministic alt. Imagine (A|B|A|A):
+ *
+ * 5->9-A->o
+ * |
+ * 6->10-B->o
+ * |
+ * 7->11-A->o
+ * |
+ * 8->12-A->o
+ *
+ * There are 3 nondeterministic alts. The paths should be:
+ * 5 9 ...
+ * 5 6 7 11 ...
+ * 5 6 7 8 12 ...
+ *
+ * The NFA path matching the sample input sequence (labels) is computed
+ * using states 9, 11, and 12 rather than 5, 7, 8 because state 5, for
+ * example, can get to all ambig paths. We must isolate each alt (hence,
+ * the extra state beginning each alt in my NFA structures). Here,
+ * firstAlt=1.
+ */
+ public List getNFAPathStatesForAlt(int firstAlt,
+ int alt,
+ List labels)
+ {
+ NFAState nfaStart = dfa.getNFADecisionStartState();
+ List path = new LinkedList();
+ // first add all NFA states leading up to altStart state
+ for (int a=firstAlt; a<=alt; a++) {
+ NFAState s =
+ dfa.nfa.grammar.getNFAStateForAltOfDecision(nfaStart,a);
+ path.add(s);
+ }
+
+ // add first state of actual alt
+ NFAState altStart = dfa.nfa.grammar.getNFAStateForAltOfDecision(nfaStart,alt);
+ NFAState isolatedAltStart = (NFAState)altStart.transition[0].target;
+ path.add(isolatedAltStart);
+
+ // add the actual path now
+ statesVisitedAtInputDepth = new HashSet();
+ getNFAPath(isolatedAltStart,
+ 0,
+ labels,
+ path);
+ return path;
+ }
+
+ /** Each state in the DFA represents a different input sequence for an
+ * alt of the decision. Given a DFA state, what is the semantic
+ * predicate context for a particular alt.
+ */
+ public SemanticContext getSemanticContextForAlt(DFAState d, int alt) {
+ Map altToPredMap = (Map)stateToAltSetWithSemanticPredicatesMap.get(d);
+ if ( altToPredMap==null ) {
+ return null;
+ }
+ return (SemanticContext)altToPredMap.get(Utils.integer(alt));
+ }
+
+ /** At least one alt refs a sem or syn pred */
+ public boolean hasPredicate() {
+ return stateToAltSetWithSemanticPredicatesMap.size()>0;
+ }
+
+ public Set getNondeterministicStatesResolvedWithSemanticPredicate() {
+ return statesResolvedWithSemanticPredicatesSet;
+ }
+
+ /** Return a map from alt to the set of predicate locations that were
+ * insufficient to resolve a nondeterminism for state d.
+ */
+ public Map<Integer, Set<Token>> getIncompletelyCoveredAlts(DFAState d) {
+ return stateToIncompletelyCoveredAltsMap.get(d);
+ }
+
+ public void issueWarnings() {
+ // NONREGULAR DUE TO RECURSION > 1 ALTS
+ // Issue this before aborted analysis, which might also occur
+ // if we take too long to terminate
+ if ( nonLLStarDecision && !dfa.getAutoBacktrackMode() ) {
+ ErrorManager.nonLLStarDecision(this);
+ }
+
+ if ( analysisTimedOut() ) {
+ // only report early termination errors if !backtracking
+ if ( !dfa.getAutoBacktrackMode() ) {
+ ErrorManager.analysisAborted(this);
+ }
+ // now just return...if we bailed out, don't spew other messages
+ return;
+ }
+
+ issueRecursionWarnings();
+
+ // generate a separate message for each problem state in DFA
+ Set resolvedStates = getNondeterministicStatesResolvedWithSemanticPredicate();
+ Set problemStates = getDFAStatesWithSyntacticallyAmbiguousAlts();
+ if ( problemStates.size()>0 ) {
+ Iterator it =
+ problemStates.iterator();
+ while ( it.hasNext() && !dfa.nfa.grammar.NFAToDFAConversionExternallyAborted() ) {
+ DFAState d = (DFAState) it.next();
+ Map<Integer, Set<Token>> insufficientAltToLocations = getIncompletelyCoveredAlts(d);
+ if ( insufficientAltToLocations!=null && insufficientAltToLocations.size()>0 ) {
+ ErrorManager.insufficientPredicates(this,d,insufficientAltToLocations);
+ }
+ // don't report problem if resolved
+ if ( resolvedStates==null || !resolvedStates.contains(d) ) {
+ // first strip last alt from disableAlts if it's wildcard
+ // then don't print error if no more disable alts
+ Set disabledAlts = getDisabledAlternatives(d);
+ stripWildCardAlts(disabledAlts);
+ if ( disabledAlts.size()>0 ) {
+ ErrorManager.nondeterminism(this,d);
+ }
+ }
+ }
+ }
+
+ Set danglingStates = getDanglingStates();
+ if ( danglingStates.size()>0 ) {
+ //System.err.println("no emanating edges for states: "+danglingStates);
+ for (Iterator it = danglingStates.iterator(); it.hasNext();) {
+ DFAState d = (DFAState) it.next();
+ ErrorManager.danglingState(this,d);
+ }
+ }
+
+ if ( !nonLLStarDecision ) {
+ List unreachableAlts = dfa.getUnreachableAlts();
+ if ( unreachableAlts!=null && unreachableAlts.size()>0 ) {
+ // give different msg if it's an empty Tokens rule from delegate
+ boolean isInheritedTokensRule = false;
+ if ( dfa.isTokensRuleDecision() ) {
+ for (Integer altI : unreachableAlts) {
+ GrammarAST decAST = dfa.getDecisionASTNode();
+ GrammarAST altAST = decAST.getChild(altI-1);
+ GrammarAST delegatedTokensAlt =
+ altAST.getFirstChildWithType(ANTLRParser.DOT);
+ if ( delegatedTokensAlt !=null ) {
+ isInheritedTokensRule = true;
+ ErrorManager.grammarWarning(ErrorManager.MSG_IMPORTED_TOKENS_RULE_EMPTY,
+ dfa.nfa.grammar,
+ null,
+ dfa.nfa.grammar.name,
+ delegatedTokensAlt.getFirstChild().getText());
+ }
+ }
+ }
+ if ( !isInheritedTokensRule ) {
+ ErrorManager.unreachableAlts(this,unreachableAlts);
+ }
+ }
+ }
+ }
+
+ /** Get the last disabled alt number and check in the grammar to see
+ * if that alt is a simple wildcard. If so, treat like an else clause
+ * and don't emit the error. Strip out the last alt if it's wildcard.
+ */
+ protected void stripWildCardAlts(Set disabledAlts) {
+ List sortedDisableAlts = new ArrayList(disabledAlts);
+ Collections.sort(sortedDisableAlts);
+ Integer lastAlt =
+ (Integer)sortedDisableAlts.get(sortedDisableAlts.size()-1);
+ GrammarAST blockAST =
+ dfa.nfa.grammar.getDecisionBlockAST(dfa.decisionNumber);
+ //System.out.println("block with error = "+blockAST.toStringTree());
+ GrammarAST lastAltAST = null;
+ if ( blockAST.getChild(0).getType()==ANTLRParser.OPTIONS ) {
+ // if options, skip first child: ( options { ( = greedy false ) )
+ lastAltAST = blockAST.getChild(lastAlt.intValue());
+ }
+ else {
+ lastAltAST = blockAST.getChild(lastAlt.intValue()-1);
+ }
+ //System.out.println("last alt is "+lastAltAST.toStringTree());
+ // if last alt looks like ( ALT . ) then wildcard
+ // Avoid looking at optional blocks etc... that have last alt
+ // as the EOB:
+ // ( BLOCK ( ALT 'else' statement ) )
+ if ( lastAltAST.getType()!=ANTLRParser.EOB &&
+ lastAltAST.getChild(0).getType()== ANTLRParser.WILDCARD &&
+ lastAltAST.getChild(1).getType()== ANTLRParser.EOA )
+ {
+ //System.out.println("wildcard");
+ disabledAlts.remove(lastAlt);
+ }
+ }
+
+ protected void issueRecursionWarnings() {
+ // RECURSION OVERFLOW
+ Set dfaStatesWithRecursionProblems =
+ stateToRecursionOverflowConfigurationsMap.keySet();
+ // now walk truly unique (unaliased) list of dfa states with inf recur
+ // Goal: create a map from alt to Map<target rule, Set<call sites>>
+ // i.e., Map<Integer, Map<String, Set<NFAState>>>
+ Map altToTargetToCallSitesMap = new HashMap();
+ // track a single problem DFA state for each alt
+ Map altToDFAState = new HashMap();
+ computeAltToProblemMaps(dfaStatesWithRecursionProblems,
+ stateToRecursionOverflowConfigurationsMap,
+ altToTargetToCallSitesMap, // output param
+ altToDFAState); // output param
+
+ // walk each alt with recursion overflow problems and generate error
+ Set alts = altToTargetToCallSitesMap.keySet();
+ List sortedAlts = new ArrayList(alts);
+ Collections.sort(sortedAlts);
+ for (Iterator altsIt = sortedAlts.iterator(); altsIt.hasNext();) {
+ Integer altI = (Integer) altsIt.next();
+ Map targetToCallSiteMap =
+ (Map)altToTargetToCallSitesMap.get(altI);
+ Set targetRules = targetToCallSiteMap.keySet();
+ Collection callSiteStates = targetToCallSiteMap.values();
+ DFAState sampleBadState = (DFAState)altToDFAState.get(altI);
+ ErrorManager.recursionOverflow(this,
+ sampleBadState,
+ altI.intValue(),
+ targetRules,
+ callSiteStates);
+ }
+ }
+
+ private void computeAltToProblemMaps(Set dfaStatesUnaliased,
+ Map configurationsMap,
+ Map altToTargetToCallSitesMap,
+ Map altToDFAState)
+ {
+ for (Iterator it = dfaStatesUnaliased.iterator(); it.hasNext();) {
+ Integer stateI = (Integer) it.next();
+ // walk this DFA's config list
+ List configs = (List)configurationsMap.get(stateI);
+ for (int i = 0; i < configs.size(); i++) {
+ NFAConfiguration c = (NFAConfiguration) configs.get(i);
+ NFAState ruleInvocationState = dfa.nfa.getState(c.state);
+ Transition transition0 = ruleInvocationState.transition[0];
+ RuleClosureTransition ref = (RuleClosureTransition)transition0;
+ String targetRule = ((NFAState) ref.target).enclosingRule.name;
+ Integer altI = Utils.integer(c.alt);
+ Map targetToCallSiteMap =
+ (Map)altToTargetToCallSitesMap.get(altI);
+ if ( targetToCallSiteMap==null ) {
+ targetToCallSiteMap = new HashMap();
+ altToTargetToCallSitesMap.put(altI, targetToCallSiteMap);
+ }
+ Set callSites =
+ (HashSet)targetToCallSiteMap.get(targetRule);
+ if ( callSites==null ) {
+ callSites = new HashSet();
+ targetToCallSiteMap.put(targetRule, callSites);
+ }
+ callSites.add(ruleInvocationState);
+ // track one problem DFA state per alt
+ if ( altToDFAState.get(altI)==null ) {
+ DFAState sampleBadState = dfa.getState(stateI.intValue());
+ altToDFAState.put(altI, sampleBadState);
+ }
+ }
+ }
+ }
+
+ private Set getUnaliasedDFAStateSet(Set dfaStatesWithRecursionProblems) {
+ Set dfaStatesUnaliased = new HashSet();
+ for (Iterator it = dfaStatesWithRecursionProblems.iterator(); it.hasNext();) {
+ Integer stateI = (Integer) it.next();
+ DFAState d = dfa.getState(stateI.intValue());
+ dfaStatesUnaliased.add(Utils.integer(d.stateNumber));
+ }
+ return dfaStatesUnaliased;
+ }
+
+
+ // T R A C K I N G M E T H O D S
+
+ /** Report the fact that DFA state d is not a state resolved with
+ * predicates and yet it has no emanating edges. Usually this
+ * is a result of the closure/reach operations being unable to proceed.
+ */
+ public void reportDanglingState(DFAState d) {
+ danglingStates.add(d);
+ }
+
+ public void reportAnalysisTimeout() {
+ timedOut = true;
+ dfa.nfa.grammar.setOfDFAWhoseAnalysisTimedOut.add(dfa);
+ }
+
+ /** Report that at least 2 alts have recursive constructs. There is
+ * no way to build a DFA so we terminated.
+ */
+ public void reportNonLLStarDecision(DFA dfa) {
+ /*
+ System.out.println("non-LL(*) DFA "+dfa.decisionNumber+", alts: "+
+ dfa.recursiveAltSet.toList());
+ */
+ nonLLStarDecision = true;
+ altsWithProblem.addAll(dfa.recursiveAltSet.toList());
+ }
+
+ public void reportRecursionOverflow(DFAState d,
+ NFAConfiguration recursionNFAConfiguration)
+ {
+ // track the state number rather than the state as d will change
+ // out from underneath us; hash wouldn't return any value
+
+ // left-recursion is detected in start state. Since we can't
+ // call resolveNondeterminism() on the start state (it would
+ // not look k=1 to get min single token lookahead), we must
+ // prevent errors derived from this state. Avoid start state
+ if ( d.stateNumber > 0 ) {
+ Integer stateI = Utils.integer(d.stateNumber);
+ stateToRecursionOverflowConfigurationsMap.map(stateI, recursionNFAConfiguration);
+ }
+ }
+
+ public void reportNondeterminism(DFAState d, Set nondeterministicAlts) {
+ altsWithProblem.addAll(nondeterministicAlts); // track overall list
+ statesWithSyntacticallyAmbiguousAltsSet.add(d);
+ dfa.nfa.grammar.setOfNondeterministicDecisionNumbers.add(
+ Utils.integer(dfa.getDecisionNumber())
+ );
+ }
+
+ /** Currently the analysis reports issues between token definitions, but
+ * we don't print out warnings in favor of just picking the first token
+ * definition found in the grammar, a la lex/flex.
+ */
+ public void reportLexerRuleNondeterminism(DFAState d, Set nondeterministicAlts) {
+ stateToSyntacticallyAmbiguousTokensRuleAltsMap.put(d,nondeterministicAlts);
+ }
+
+ public void reportNondeterminismResolvedWithSemanticPredicate(DFAState d) {
+ // First, prevent a recursion warning on this state due to
+ // pred resolution
+ if ( d.abortedDueToRecursionOverflow ) {
+ d.dfa.probe.removeRecursiveOverflowState(d);
+ }
+ statesResolvedWithSemanticPredicatesSet.add(d);
+ //System.out.println("resolved with pred: "+d);
+ dfa.nfa.grammar.setOfNondeterministicDecisionNumbersResolvedWithPredicates.add(
+ Utils.integer(dfa.getDecisionNumber())
+ );
+ }
+
+ /** Report the list of predicates found for each alternative; copy
+ * the list because this set gets altered later by the method
+ * tryToResolveWithSemanticPredicates() while flagging NFA configurations
+ * in d as resolved.
+ */
+ public void reportAltPredicateContext(DFAState d, Map altPredicateContext) {
+ Map copy = new HashMap();
+ copy.putAll(altPredicateContext);
+ stateToAltSetWithSemanticPredicatesMap.put(d,copy);
+ }
+
+ public void reportIncompletelyCoveredAlts(DFAState d,
+ Map altToLocationsReachableWithoutPredicate)
+ {
+ stateToIncompletelyCoveredAltsMap.put(d, altToLocationsReachableWithoutPredicate);
+ }
+
+ // S U P P O R T
+
+ /** Given a start state and a target state, return true if start can reach
+ * target state. Also, compute the set of DFA states
+ * that are on a path from start to target; return in states parameter.
+ */
+ protected boolean reachesState(DFAState startState,
+ DFAState targetState,
+ Set states) {
+ if ( startState==targetState ) {
+ states.add(targetState);
+ //System.out.println("found target DFA state "+targetState.getStateNumber());
+ stateReachable.put(startState.stateNumber, REACHABLE_YES);
+ return true;
+ }
+
+ DFAState s = startState;
+ // avoid infinite loops
+ stateReachable.put(s.stateNumber, REACHABLE_BUSY);
+
+ // look for a path to targetState among transitions for this state
+ // stop when you find the first one; I'm pretty sure there is
+ // at most one path to any DFA state with conflicting predictions
+ for (int i=0; i<s.getNumberOfTransitions(); i++) {
+ Transition t = s.transition(i);
+ DFAState edgeTarget = (DFAState)t.target;
+ Integer targetStatus = (Integer)stateReachable.get(edgeTarget.stateNumber);
+ if ( targetStatus==REACHABLE_BUSY ) { // avoid cycles; they say nothing
+ continue;
+ }
+ if ( targetStatus==REACHABLE_YES ) { // return success!
+ stateReachable.put(s.stateNumber, REACHABLE_YES);
+ return true;
+ }
+ if ( targetStatus==REACHABLE_NO ) { // try another transition
+ continue;
+ }
+ // target must be REACHABLE_UNKNOWN (i.e., unvisited)
+ if ( reachesState(edgeTarget, targetState, states) ) {
+ states.add(s); // s is on the path from start to target
+ stateReachable.put(s.stateNumber, REACHABLE_YES);
+ return true;
+ }
+ }
+
+ stateReachable.put(s.stateNumber, REACHABLE_NO);
+ return false;
+ }
+
+ public Set getDFAPathStatesToTarget(DFAState startState,
+ DFAState targetState)
+ {
+ Set dfaStates = new HashSet();
+ stateReachable = new HashMap();
+ reachesState(startState, targetState, dfaStates);
+ return dfaStates;
+ }
+
+ /** Given a start state and a final state, find a list of edge labels
+ * between the two ignoring epsilon. Limit your scan to a set of states
+ * passed in. This is used to show a sample input sequence that is
+ * nondeterministic with respect to this decision. Return List<Label> as
+ * a parameter. The incoming states set must be all states that lead
+ * from startState to targetState and no others so this algorithm doesn't
+ * take a path that eventually leads to a state other than targetState.
+ * Don't follow loops, leading to short (possibly shortest) path.
+ */
+ protected void getSampleInputSequenceUsingStateSet(State startState,
+ State targetState,
+ Set states,
+ List labels)
+ {
+ statesVisitedDuringSampleSequence.add(startState.stateNumber);
+
+ // pick the first edge in states as the one to traverse
+ for (int i=0; i<startState.getNumberOfTransitions(); i++) {
+ Transition t = startState.transition(i);
+ // find the first edge in states
+ State edgeTarget = (State)t.target;
+ if ( states.contains(edgeTarget) &&
+ !statesVisitedDuringSampleSequence.contains(edgeTarget.stateNumber) )
+ {
+ labels.add(t.label); // traverse edge and track label
+ if ( edgeTarget!=targetState ) {
+ // get more labels if not at target
+ getSampleInputSequenceUsingStateSet(edgeTarget,
+ targetState,
+ states,
+ labels);
+ }
+ // done with this state; we found a path to target
+ return;
+ }
+ }
+ labels.add(new Label(Label.EPSILON)); // indicate no input found
+ }
+
+ /** Given a sample input sequence, you usually would like to know the
+ * path taken through the NFA. Return the list of NFA states visited
+ * while matching a list of labels. This cannot use the usual
+ * interpreter, which does a deterministic walk. We need to be able to
+ * take paths that are turned off during nondeterminism resolution. So,
+ * just do a depth-first walk traversing edges labeled with the current
+ * label. Return true if a path was found emanating from state s.
+ */
+ protected boolean getNFAPath(NFAState s, // starting where?
+ int labelIndex, // 0..labels.size()-1
+ List labels, // input sequence
+ List path) // output list of NFA states
+ {
+ // avoid infinite loops: track state visits per input position
+ String thisStateKey = getStateLabelIndexKey(s.stateNumber, labelIndex);
+ if ( statesVisitedAtInputDepth.contains(thisStateKey) ) {
+ return false;
+ }
+ statesVisitedAtInputDepth.add(thisStateKey);
+
+ // pick the first edge whose target is in states and whose
+ // label matches labels[labelIndex]
+ for (int i=0; i<s.getNumberOfTransitions(); i++) {
+ Transition t = s.transition[i];
+ NFAState edgeTarget = (NFAState)t.target;
+ Label label = (Label)labels.get(labelIndex);
+ /*
+ System.out.println(s.stateNumber+"-"+
+ t.label.toString(dfa.nfa.grammar)+"->"+
+ edgeTarget.stateNumber+" =="+
+ label.toString(dfa.nfa.grammar)+"?");
+ */
+ if ( t.label.isEpsilon() || t.label.isSemanticPredicate() ) {
+ // nondeterministically backtrack down epsilon edges
+ path.add(edgeTarget);
+ boolean found =
+ getNFAPath(edgeTarget, labelIndex, labels, path);
+ if ( found ) {
+ statesVisitedAtInputDepth.remove(thisStateKey);
+ return true; // return to "calling" state
+ }
+ path.remove(path.size()-1); // remove; didn't work out
+ continue; // look at the next edge
+ }
+ if ( t.label.matches(label) ) {
+ path.add(edgeTarget);
+ /*
+ System.out.println("found label "+
+ t.label.toString(dfa.nfa.grammar)+
+ " at state "+s.stateNumber+"; labelIndex="+labelIndex);
+ */
+ if ( labelIndex==labels.size()-1 ) {
+ // found last label; done!
+ statesVisitedAtInputDepth.remove(thisStateKey);
+ return true;
+ }
+ // otherwise try to match remaining input
+ boolean found =
+ getNFAPath(edgeTarget, labelIndex+1, labels, path);
+ if ( found ) {
+ statesVisitedAtInputDepth.remove(thisStateKey);
+ return true;
+ }
+ /*
+ System.out.println("backtrack; path from "+s.stateNumber+"->"+
+ t.label.toString(dfa.nfa.grammar)+" didn't work");
+ */
+ path.remove(path.size()-1); // remove; didn't work out
+ continue; // keep looking for a path for labels
+ }
+ }
+ //System.out.println("no epsilon or matching edge; removing "+thisStateKey);
+ // no edge was found matching label; is ok, some state will have it
+ statesVisitedAtInputDepth.remove(thisStateKey);
+ return false;
+ }
+
+ protected String getStateLabelIndexKey(int s, int i) {
+ StringBuffer buf = new StringBuffer();
+ buf.append(s);
+ buf.append('_');
+ buf.append(i);
+ return buf.toString();
+ }
+
+ /** From an alt number associated with the artificial Tokens rule, return
+ * the name of the token that is associated with that alt.
+ */
+ public String getTokenNameForTokensRuleAlt(int alt) {
+ NFAState decisionState = dfa.getNFADecisionStartState();
+ NFAState altState =
+ dfa.nfa.grammar.getNFAStateForAltOfDecision(decisionState,alt);
+ NFAState decisionLeft = (NFAState)altState.transition[0].target;
+ RuleClosureTransition ruleCallEdge =
+ (RuleClosureTransition)decisionLeft.transition[0];
+ NFAState ruleStartState = (NFAState)ruleCallEdge.target;
+ //System.out.println("alt = "+decisionLeft.getEnclosingRule());
+ return ruleStartState.enclosingRule.name;
+ }
+
+ public void reset() {
+ stateToRecursionOverflowConfigurationsMap.clear();
+ }
+}
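The `reachesState` walk above memoizes a per-state status (REACHABLE_BUSY while a state is on the current path, then YES or NO) so that cycles in the DFA cannot loop the search. The same scheme in miniature, over a plain adjacency map; the `Reach` class and its integer status codes are illustrative only, not ANTLR API:

```java
import java.util.*;

// Memoized reachability walk in the style of DecisionProbe.reachesState:
// status 1 = busy (on the current path), 2 = reachable, 3 = not reachable.
class Reach {
    static boolean reaches(Map<Integer, List<Integer>> edges,
                           int start, int target,
                           Map<Integer, Integer> status) {
        if (start == target) {
            status.put(start, 2);
            return true;
        }
        status.put(start, 1); // mark busy to cut off cycles
        for (int next : edges.getOrDefault(start, Collections.emptyList())) {
            Integer st = status.get(next);
            if (st != null && st == 1) continue; // cycle; proves nothing
            if (st != null && st == 2) { status.put(start, 2); return true; }
            if (st != null && st == 3) continue; // known dead end
            if (reaches(edges, next, target, status)) {
                status.put(start, 2);
                return true;
            }
        }
        status.put(start, 3); // exhausted all edges; no path from here
        return false;
    }
}
```

Marking a state busy before descending is what makes the `continue` on a busy target safe: a cycle says nothing about reachability, so the walk simply tries the next edge.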
diff --git a/antlr_3_1_source/analysis/LL1Analyzer.java b/antlr_3_1_source/analysis/LL1Analyzer.java
new file mode 100644
index 0000000..d3c97ac
--- /dev/null
+++ b/antlr_3_1_source/analysis/LL1Analyzer.java
@@ -0,0 +1,444 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2008 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+import org.antlr.tool.Rule;
+import org.antlr.tool.ANTLRParser;
+import org.antlr.tool.Grammar;
+import org.antlr.misc.IntervalSet;
+import org.antlr.misc.IntSet;
+
+import java.util.*;
+
+/** Computes FIRST and FOLLOW lookahead sets and detects predicates
+ * visible from NFA states; used to build LL(1) decisions and
+ * error-recovery sets without full LL(*) analysis.
+ */
+public class LL1Analyzer {
+ /** 0 if we hit end of rule and invoker should keep going (epsilon) */
+ public static final int DETECT_PRED_EOR = 0;
+ /** 1 if we found a nonautobacktracking pred */
+ public static final int DETECT_PRED_FOUND = 1;
+ /** 2 if we didn't find such a pred */
+ public static final int DETECT_PRED_NOT_FOUND = 2;
+
+ public Grammar grammar;
+
+ /** Used during LOOK to detect computation cycles */
+ protected Set<NFAState> lookBusy = new HashSet<NFAState>();
+
+ public Map<NFAState, LookaheadSet> FIRSTCache = new HashMap<NFAState, LookaheadSet>();
+ public Map<Rule, LookaheadSet> FOLLOWCache = new HashMap<Rule, LookaheadSet>();
+
+ public LL1Analyzer(Grammar grammar) {
+ this.grammar = grammar;
+ }
+
+ /*
+ public void computeRuleFIRSTSets() {
+ if ( getNumberOfDecisions()==0 ) {
+ createNFAs();
+ }
+ for (Iterator it = getRules().iterator(); it.hasNext();) {
+ Rule r = (Rule)it.next();
+ if ( r.isSynPred ) {
+ continue;
+ }
+ LookaheadSet s = FIRST(r);
+ System.out.println("FIRST("+r.name+")="+s);
+ }
+ }
+ */
+
+ /*
+ public Set getOverriddenRulesWithDifferentFIRST() {
+ // walk every rule in this grammar and compare FIRST set with
+ // those in imported grammars.
+ Set rules = new HashSet();
+ for (Iterator it = getRules().iterator(); it.hasNext();) {
+ Rule r = (Rule)it.next();
+ //System.out.println(r.name+" FIRST="+r.FIRST);
+ for (int i = 0; i < delegates.size(); i++) {
+ Grammar g = delegates.get(i);
+ Rule importedRule = g.getRule(r.name);
+ if ( importedRule != null ) { // exists in imported grammar
+ // System.out.println(r.name+" exists in imported grammar: FIRST="+importedRule.FIRST);
+ if ( !r.FIRST.equals(importedRule.FIRST) ) {
+ rules.add(r.name);
+ }
+ }
+ }
+ }
+ return rules;
+ }
+
+ public Set getImportedRulesSensitiveToOverriddenRulesDueToLOOK() {
+ Set diffFIRSTs = getOverriddenRulesWithDifferentFIRST();
+ Set rules = new HashSet();
+ for (Iterator it = diffFIRSTs.iterator(); it.hasNext();) {
+ String r = (String) it.next();
+ for (int i = 0; i < delegates.size(); i++) {
+ Grammar g = delegates.get(i);
+ Set callers = g.ruleSensitivity.get(r);
+ // somebody invokes rule whose FIRST changed in subgrammar?
+ if ( callers!=null ) {
+ rules.addAll(callers);
+ //System.out.println(g.name+" rules "+callers+" sensitive to "+r+"; dup 'em");
+ }
+ }
+ }
+ return rules;
+ }
+*/
+
+ /*
+ public LookaheadSet LOOK(Rule r) {
+ if ( r.FIRST==null ) {
+ r.FIRST = FIRST(r.startState);
+ }
+ return r.FIRST;
+ }
+*/
+
+ /** From an NFA state, s, find the set of all labels reachable from s.
+ * Used to compute follow sets for error recovery. Never computes
+ * a FOLLOW operation. FIRST stops at end of rules, returning EOR, unless
+ * invoked from another rule. I.e., the routine properly handles
+ *
+ * a : b A ;
+ *
+ * where b is nullable.
+ *
+ * We record with EOR_TOKEN_TYPE if we hit the end of a rule so we can
+ * know at runtime (when these sets are used) to start walking up the
+ * follow chain to compute the real, correct follow set (as opposed to
+ * the FOLLOW, which is a superset).
+ *
+ * This routine will only be used on parser and tree parser grammars.
+ */
+ public LookaheadSet FIRST(NFAState s) {
+ //System.out.println("> FIRST("+s+") in rule "+s.enclosingRule);
+ lookBusy.clear();
+ LookaheadSet look = _FIRST(s, false);
+ //System.out.println("< FIRST("+s+") in rule "+s.enclosingRule+"="+look.toString(this));
+ return look;
+ }
+
+ public LookaheadSet FOLLOW(Rule r) {
+ LookaheadSet f = FOLLOWCache.get(r);
+ if ( f!=null ) {
+ return f;
+ }
+ f = _FIRST(r.stopState, true);
+ FOLLOWCache.put(r, f);
+ return f;
+ }
+
+ public LookaheadSet LOOK(NFAState s) {
+ if ( NFAToDFAConverter.debug ) {
+ System.out.println("> LOOK("+s+")");
+ }
+ lookBusy.clear();
+ LookaheadSet look = _FIRST(s, true);
+ // FOLLOW makes no sense (at the moment!) for lexical rules.
+ if ( grammar.type!=Grammar.LEXER && look.member(Label.EOR_TOKEN_TYPE) ) {
+ // avoid altering FIRST reset as it is cached
+ LookaheadSet f = FOLLOW(s.enclosingRule);
+ f.orInPlace(look);
+ f.remove(Label.EOR_TOKEN_TYPE);
+ look = f;
+ //look.orInPlace(FOLLOW(s.enclosingRule));
+ }
+ else if ( grammar.type==Grammar.LEXER && look.member(Label.EOT) ) {
+ // if this has EOT, lookahead is all char (all char can follow rule)
+ //look = new LookaheadSet(Label.EOT);
+ look = new LookaheadSet(IntervalSet.COMPLETE_SET);
+ }
+ if ( NFAToDFAConverter.debug ) {
+ System.out.println("< LOOK("+s+")="+look.toString(grammar));
+ }
+ return look;
+ }
+
+ protected LookaheadSet _FIRST(NFAState s, boolean chaseFollowTransitions) {
+ //System.out.println("_LOOK("+s+") in rule "+s.enclosingRule);
+ /*
+ if ( s.transition[0] instanceof RuleClosureTransition ) {
+ System.out.println("go to rule "+((NFAState)s.transition[0].target).enclosingRule);
+ }
+ */
+ if ( !chaseFollowTransitions && s.isAcceptState() ) {
+ if ( grammar.type==Grammar.LEXER ) {
+ // FOLLOW makes no sense (at the moment!) for lexical rules.
+ // assume all char can follow
+ return new LookaheadSet(IntervalSet.COMPLETE_SET);
+ }
+ return new LookaheadSet(Label.EOR_TOKEN_TYPE);
+ }
+
+ if ( lookBusy.contains(s) ) {
+ // return a copy of an empty set; we may modify set inline
+ return new LookaheadSet();
+ }
+ lookBusy.add(s);
+
+ Transition transition0 = s.transition[0];
+ if ( transition0==null ) {
+ return null;
+ }
+
+ if ( transition0.label.isAtom() ) {
+ int atom = transition0.label.getAtom();
+ return new LookaheadSet(atom);
+ }
+ if ( transition0.label.isSet() ) {
+ IntSet sl = transition0.label.getSet();
+ return new LookaheadSet(sl);
+ }
+
+ // compute FIRST of transition 0
+ LookaheadSet tset = null;
+ // if transition 0 is a rule call and we don't want FOLLOW, check cache
+ if ( !chaseFollowTransitions && transition0 instanceof RuleClosureTransition ) {
+ LookaheadSet prev = FIRSTCache.get((NFAState)transition0.target);
+ if ( prev!=null ) {
+ tset = prev;
+ }
+ }
+
+ // if not in cache, must compute
+ if ( tset==null ) {
+ tset = _FIRST((NFAState)transition0.target, chaseFollowTransitions);
+ // save FIRST cache for transition 0 if rule call
+ if ( !chaseFollowTransitions && transition0 instanceof RuleClosureTransition ) {
+ FIRSTCache.put((NFAState)transition0.target, tset);
+ }
+ }
+
+ // did we fall off the end?
+ if ( grammar.type!=Grammar.LEXER && tset.member(Label.EOR_TOKEN_TYPE) ) {
+ if ( transition0 instanceof RuleClosureTransition ) {
+ // we called a rule that found the end of the rule.
+ // That means the rule is nullable and we need to
+ // keep looking at what follows the rule ref. E.g.,
+ // a : b A ; where b is nullable means that LOOK(a)
+ // should include A.
+ RuleClosureTransition ruleInvocationTrans =
+ (RuleClosureTransition)transition0;
+ // remove the EOR and get what follows
+ //tset.remove(Label.EOR_TOKEN_TYPE);
+ NFAState following = (NFAState) ruleInvocationTrans.followState;
+ LookaheadSet fset = _FIRST(following, chaseFollowTransitions);
+ fset.orInPlace(tset); // tset cached; or into new set
+ fset.remove(Label.EOR_TOKEN_TYPE);
+ tset = fset;
+ }
+ }
+
+ Transition transition1 = s.transition[1];
+ if ( transition1!=null ) {
+ LookaheadSet tset1 =
+ _FIRST((NFAState)transition1.target, chaseFollowTransitions);
+ tset1.orInPlace(tset); // tset cached; or into new set
+ tset = tset1;
+ }
+
+ return tset;
+ }
+
+ /** Is there a non-syn-pred predicate visible from s that is not in
+ * the rule enclosing s? This accounts for most predicate situations
+ * and lets ANTLR do a simple LL(1)+pred computation.
+ *
+ * TODO: what about gated vs regular preds?
+ */
+ public boolean detectConfoundingPredicates(NFAState s) {
+ lookBusy.clear();
+ Rule r = s.enclosingRule;
+ return _detectConfoundingPredicates(s, r, false) == DETECT_PRED_FOUND;
+ }
+
+ protected int _detectConfoundingPredicates(NFAState s,
+ Rule enclosingRule,
+ boolean chaseFollowTransitions)
+ {
+ //System.out.println("_detectNonAutobacktrackPredicates("+s+")");
+ if ( !chaseFollowTransitions && s.isAcceptState() ) {
+ if ( grammar.type==Grammar.LEXER ) {
+ // FOLLOW makes no sense (at the moment!) for lexical rules.
+ // assume all char can follow
+ return DETECT_PRED_NOT_FOUND;
+ }
+ return DETECT_PRED_EOR;
+ }
+
+ if ( lookBusy.contains(s) ) {
+ // return a copy of an empty set; we may modify set inline
+ return DETECT_PRED_NOT_FOUND;
+ }
+ lookBusy.add(s);
+
+ Transition transition0 = s.transition[0];
+ if ( transition0==null ) {
+ return DETECT_PRED_NOT_FOUND;
+ }
+
+ if ( !(transition0.label.isSemanticPredicate()||
+ transition0.label.isEpsilon()) ) {
+ return DETECT_PRED_NOT_FOUND;
+ }
+
+ if ( transition0.label.isSemanticPredicate() ) {
+ //System.out.println("pred "+transition0.label);
+ SemanticContext ctx = transition0.label.getSemanticContext();
+ SemanticContext.Predicate p = (SemanticContext.Predicate)ctx;
+ if ( p.predicateAST.getType() != ANTLRParser.BACKTRACK_SEMPRED ) {
+ return DETECT_PRED_FOUND;
+ }
+ }
+
+ /*
+ if ( transition0.label.isSemanticPredicate() ) {
+ System.out.println("pred "+transition0.label);
+ SemanticContext ctx = transition0.label.getSemanticContext();
+ SemanticContext.Predicate p = (SemanticContext.Predicate)ctx;
+ // if a non-syn-pred found not in enclosingRule, say we found one
+ if ( p.predicateAST.getType() != ANTLRParser.BACKTRACK_SEMPRED &&
+ !p.predicateAST.enclosingRuleName.equals(enclosingRule.name) )
+ {
+ System.out.println("found pred "+p+" not in "+enclosingRule.name);
+ return DETECT_PRED_FOUND;
+ }
+ }
+ */
+
+ int result = _detectConfoundingPredicates((NFAState)transition0.target,
+ enclosingRule,
+ chaseFollowTransitions);
+ if ( result == DETECT_PRED_FOUND ) {
+ return DETECT_PRED_FOUND;
+ }
+
+ if ( result == DETECT_PRED_EOR ) {
+ if ( transition0 instanceof RuleClosureTransition ) {
+ // we called a rule that found the end of the rule.
+ // That means the rule is nullable and we need to
+ // keep looking at what follows the rule ref. E.g.,
+ // a : b A ; where b is nullable means that LOOK(a)
+ // should include A.
+ RuleClosureTransition ruleInvocationTrans =
+ (RuleClosureTransition)transition0;
+ NFAState following = (NFAState) ruleInvocationTrans.followState;
+ int afterRuleResult =
+ _detectConfoundingPredicates(following,
+ enclosingRule,
+ chaseFollowTransitions);
+ if ( afterRuleResult == DETECT_PRED_FOUND ) {
+ return DETECT_PRED_FOUND;
+ }
+ }
+ }
+
+ Transition transition1 = s.transition[1];
+ if ( transition1!=null ) {
+ int t1Result =
+ _detectConfoundingPredicates((NFAState)transition1.target,
+ enclosingRule,
+ chaseFollowTransitions);
+ if ( t1Result == DETECT_PRED_FOUND ) {
+ return DETECT_PRED_FOUND;
+ }
+ }
+
+ return DETECT_PRED_NOT_FOUND;
+ }
+
+ /** Return predicate expression found via epsilon edges from s. Do
+ * not look into other rules for now. Do something simple. Include
+ * backtracking synpreds.
+ */
+ public SemanticContext getPredicates(NFAState altStartState) {
+ lookBusy.clear();
+ return _getPredicates(altStartState, altStartState);
+ }
+
+ protected SemanticContext _getPredicates(NFAState s, NFAState altStartState) {
+ //System.out.println("_getPredicates("+s+")");
+ if ( s.isAcceptState() ) {
+ return null;
+ }
+
+ // avoid infinite loops from (..)* etc...
+ if ( lookBusy.contains(s) ) {
+ return null;
+ }
+ lookBusy.add(s);
+
+ Transition transition0 = s.transition[0];
+ // no transitions
+ if ( transition0==null ) {
+ return null;
+ }
+
+ // not a predicate and not even an epsilon
+ if ( !(transition0.label.isSemanticPredicate()||
+ transition0.label.isEpsilon()) ) {
+ return null;
+ }
+
+ SemanticContext p = null;
+ SemanticContext p0 = null;
+ SemanticContext p1 = null;
+ if ( transition0.label.isSemanticPredicate() ) {
+ //System.out.println("pred "+transition0.label);
+ p = transition0.label.getSemanticContext();
+ // ignore backtracking preds not on left edge for this decision
+ if ( ((SemanticContext.Predicate)p).predicateAST.getType() ==
+ ANTLRParser.BACKTRACK_SEMPRED &&
+ s == altStartState.transition[0].target )
+ {
+ p = null; // don't count
+ }
+ }
+
+ // get preds from beyond this state
+ p0 = _getPredicates((NFAState)transition0.target, altStartState);
+
+ // get preds from other transition
+ Transition transition1 = s.transition[1];
+ if ( transition1!=null ) {
+ p1 = _getPredicates((NFAState)transition1.target, altStartState);
+ }
+
+ // join this&following-right|following-down
+ return SemanticContext.and(p,SemanticContext.or(p0,p1));
+ }
+}
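`_FIRST` above keeps scanning past a rule invocation when the invoked rule turns out to be nullable (the EOR_TOKEN_TYPE handling), so that in `a : b A ;` with nullable `b`, FIRST(a) includes `A`. A toy version of that nullable-rule logic, detached from the NFA machinery; the `FirstSets` class and its string-encoded grammar are hypothetical:

```java
import java.util.*;

// Toy FIRST computation: lowercase chars are nonterminals, uppercase are
// terminals, "" is an epsilon alternative. Mirrors the nullable-rule case
// in LL1Analyzer._FIRST: if a referenced rule can derive epsilon, the scan
// continues with whatever follows the reference.
// Assumes the grammar is not left-recursive (no cycle guard here).
class FirstSets {
    static Set<Character> first(Map<Character, List<String>> rules, String seq) {
        Set<Character> result = new HashSet<>();
        for (char sym : seq.toCharArray()) {
            if (Character.isUpperCase(sym)) { // terminal stops the scan
                result.add(sym);
                return result;
            }
            boolean nullable = false; // nonterminal: union its alts
            for (String alt : rules.get(sym)) {
                if (alt.isEmpty()) { nullable = true; continue; }
                result.addAll(first(rules, alt));
            }
            if (!nullable) return result; // rule can't fall off its end
            // nullable: keep scanning what follows, like FIRST past EOR
        }
        return result; // entire sequence was nullable
    }
}
```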
diff --git a/antlr_3_1_source/analysis/LL1DFA.java b/antlr_3_1_source/analysis/LL1DFA.java
new file mode 100644
index 0000000..e9ac316
--- /dev/null
+++ b/antlr_3_1_source/analysis/LL1DFA.java
@@ -0,0 +1,179 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2008 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+import org.antlr.misc.IntervalSet;
+import org.antlr.misc.MultiMap;
+import org.antlr.tool.ANTLRParser;
+
+import java.util.Iterator;
+import java.util.List;
+import java.util.Collections;
+
+/** A special DFA that is exactly LL(1) or LL(1) with backtracking mode
+ * predicates to resolve edge set collisions.
+ */
+public class LL1DFA extends DFA {
+ /** From list of lookahead sets (one per alt in decision), create
+ * an LL(1) DFA. One edge per set.
+ *
+ * s0-{alt1}->:o=>1
+ * | \
+ * | -{alt2}->:o=>2
+ * |
+ * ...
+ */
+ public LL1DFA(int decisionNumber, NFAState decisionStartState, LookaheadSet[] altLook) {
+ DFAState s0 = newState();
+ startState = s0;
+ nfa = decisionStartState.nfa;
+ nAlts = nfa.grammar.getNumberOfAltsForDecisionNFA(decisionStartState);
+ this.decisionNumber = decisionNumber;
+ this.decisionNFAStartState = decisionStartState;
+ initAltRelatedInfo();
+ unreachableAlts = null;
+ for (int alt=1; alt<altLook.length; alt++) {
+ DFAState acceptAltState = newState();
+ acceptAltState.acceptState = true;
+ acceptAltState.k = 1;
+ acceptAltState.cachedUniquelyPredicatedAlt = alt;
+ setAcceptState(alt, acceptAltState);
+ Label e = getLabelForSet(altLook[alt].tokenTypeSet);
+ s0.addTransition(acceptAltState, e);
+ }
+ }
+
+ /** From a set of edgeset->list-of-alts mappings, create a DFA
+ * that uses syn preds for all |list-of-alts|>1.
+ */
+ public LL1DFA(int decisionNumber,
+ NFAState decisionStartState,
+ MultiMap<IntervalSet, Integer> edgeMap)
+ {
+ DFAState s0 = newState();
+ startState = s0;
+ nfa = decisionStartState.nfa;
+ nAlts = nfa.grammar.getNumberOfAltsForDecisionNFA(decisionStartState);
+ this.decisionNumber = decisionNumber;
+ this.decisionNFAStartState = decisionStartState;
+ initAltRelatedInfo();
+ unreachableAlts = null;
+ for (Iterator it = edgeMap.keySet().iterator(); it.hasNext();) {
+ IntervalSet edge = (IntervalSet)it.next();
+ List<Integer> alts = edgeMap.get(edge);
+ Collections.sort(alts); // make sure alts are attempted in order
+ //System.out.println(edge+" -> "+alts);
+ DFAState s = newState();
+ s.k = 1;
+ Label e = getLabelForSet(edge);
+ s0.addTransition(s, e);
+ if ( alts.size()==1 ) {
+ s.acceptState = true;
+ int alt = ((Integer)alts.get(0)).intValue();
+ setAcceptState(alt, s);
+ s.cachedUniquelyPredicatedAlt = alt;
+ }
+ else {
+ // resolve with syntactic predicates. Add edges from
+ // state s that test predicates.
+ s.resolvedWithPredicates = true;
+ for (int i = 0; i < alts.size(); i++) {
+ int alt = ((Integer)alts.get(i)).intValue();
+ s.cachedUniquelyPredicatedAlt = NFA.INVALID_ALT_NUMBER;
+ DFAState predDFATarget = getAcceptState(alt);
+ if ( predDFATarget==null ) {
+ predDFATarget = newState(); // create if not there.
+ predDFATarget.acceptState = true;
+ predDFATarget.cachedUniquelyPredicatedAlt = alt;
+ setAcceptState(alt, predDFATarget);
+ }
+ // add a transition to pred target from d
+ /*
+ int walkAlt =
+ decisionStartState.translateDisplayAltToWalkAlt(alt);
+ NFAState altLeftEdge = nfa.grammar.getNFAStateForAltOfDecision(decisionStartState, walkAlt);
+ NFAState altStartState = (NFAState)altLeftEdge.transition[0].target;
+ SemanticContext ctx = nfa.grammar.ll1Analyzer.getPredicates(altStartState);
+ System.out.println("sem ctx = "+ctx);
+ if ( ctx == null ) {
+ ctx = new SemanticContext.TruePredicate();
+ }
+ s.addTransition(predDFATarget, new Label(ctx));
+ */
+ SemanticContext.Predicate synpred =
+ getSynPredForAlt(decisionStartState, alt);
+ if ( synpred == null ) {
+ synpred = new SemanticContext.TruePredicate();
+ }
+ s.addTransition(predDFATarget, new PredicateLabel(synpred));
+ }
+ }
+ }
+ //System.out.println("dfa for preds=\n"+this);
+ }
+
+ protected Label getLabelForSet(IntervalSet edgeSet) {
+ Label e = null;
+ int atom = edgeSet.getSingleElement();
+ if ( atom != Label.INVALID ) {
+ e = new Label(atom);
+ }
+ else {
+ e = new Label(edgeSet);
+ }
+ return e;
+ }
+
+ protected SemanticContext.Predicate getSynPredForAlt(NFAState decisionStartState,
+ int alt)
+ {
+ int walkAlt =
+ decisionStartState.translateDisplayAltToWalkAlt(alt);
+ NFAState altLeftEdge =
+ nfa.grammar.getNFAStateForAltOfDecision(decisionStartState, walkAlt);
+ NFAState altStartState = (NFAState)altLeftEdge.transition[0].target;
+ //System.out.println("alt "+alt+" start state = "+altStartState.stateNumber);
+ if ( altStartState.transition[0].isSemanticPredicate() ) {
+ SemanticContext ctx = altStartState.transition[0].label.getSemanticContext();
+ if ( ctx.isSyntacticPredicate() ) {
+ SemanticContext.Predicate p = (SemanticContext.Predicate)ctx;
+ if ( p.predicateAST.getType() == ANTLRParser.BACKTRACK_SEMPRED ) {
+ /*
+ System.out.println("syn pred for alt "+walkAlt+" "+
+ ((SemanticContext.Predicate)altStartState.transition[0].label.getSemanticContext()).predicateAST);
+ */
+ if ( ctx.isSyntacticPredicate() ) {
+ nfa.grammar.synPredUsedInDFA(this, ctx);
+ }
+ return (SemanticContext.Predicate)altStartState.transition[0].label.getSemanticContext();
+ }
+ }
+ }
+ return null;
+ }
+}
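The second `LL1DFA` constructor above consumes an edge-set-to-alts MultiMap and treats any edge predicting more than one alt as a collision to resolve with synpreds, trying alts in sorted order. The inversion that produces such a map can be sketched with plain int tokens instead of `IntervalSet` edges; the `EdgeMapSketch` class is hypothetical:

```java
import java.util.*;

// Invert per-alt lookahead sets into a token -> sorted-alt-list map.
// Lists longer than 1 are exactly the collisions LL1DFA resolves with
// (syntactic) predicates, attempting alts in sorted (declaration) order.
class EdgeMapSketch {
    static Map<Integer, List<Integer>> invert(Map<Integer, Set<Integer>> altLook) {
        Map<Integer, List<Integer>> tokenToAlts = new TreeMap<>();
        for (Map.Entry<Integer, Set<Integer>> e : altLook.entrySet()) {
            for (int token : e.getValue()) {
                tokenToAlts.computeIfAbsent(token, t -> new ArrayList<>())
                           .add(e.getKey());
            }
        }
        tokenToAlts.values().forEach(Collections::sort); // try alts in order
        return tokenToAlts;
    }
}
```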
diff --git a/antlr_3_1_source/analysis/Label.java b/antlr_3_1_source/analysis/Label.java
new file mode 100644
index 0000000..161250c
--- /dev/null
+++ b/antlr_3_1_source/analysis/Label.java
@@ -0,0 +1,382 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+import org.antlr.tool.Grammar;
+import org.antlr.tool.GrammarAST;
+import org.antlr.misc.IntervalSet;
+import org.antlr.misc.IntSet;
+
+/** A state machine transition label. A label can be either a simple
+ * label such as a token or character. A label can be a set of char or
+ * tokens. It can be an epsilon transition. It can be a semantic predicate
+ * (which assumes an epsilon transition) or a tree of predicates (in a DFA).
+ */
+public class Label implements Comparable, Cloneable {
+ public static final int INVALID = -7;
+
+ public static final int ACTION = -6;
+
+ public static final int EPSILON = -5;
+
+ public static final String EPSILON_STR = "<EPSILON>";
+
+ /** label is a semantic predicate; implies label is epsilon also */
+ public static final int SEMPRED = -4;
+
+ /** label is a set of tokens or char */
+ public static final int SET = -3;
+
+ /** End of Token is like EOF for lexer rules. It implies that no more
+ * characters are available and that NFA conversion should terminate
+ * for this path. For example
+ *
+ * A : 'a' 'b' | 'a' ;
+ *
+ * yields a DFA predictor:
+ *
+ * o-a->o-b->1 predict alt 1
+ * |
+ * |-EOT->o predict alt 2
+ *
+ * To generate code for EOT, treat it as the "default" path, which
+ * implies there is no way to mismatch a char for the state from
+ * which the EOT emanates.
+ */
+ public static final int EOT = -2;
+
+ public static final int EOF = -1;
+
+ /** We have labels like EPSILON that are below 0; it's hard to
+ * store them in an array with negative index so use this
+ * constant as an index shift when accessing arrays based upon
+ * token type. If real token type is i, then array index would be
+ * NUM_FAUX_LABELS + i.
+ */
+ public static final int NUM_FAUX_LABELS = -INVALID;
+
+ /** Anything at this value or larger can be considered a simple atom int
+ * for easy comparison during analysis only; faux labels are not used
+ * during parse time for real token types or char values.
+ */
+ public static final int MIN_ATOM_VALUE = EOT;
+
+ // TODO: is 0 a valid unicode char? max is FFFF -1, right?
+ public static final int MIN_CHAR_VALUE = '\u0000';
+ public static final int MAX_CHAR_VALUE = '\uFFFE';
+
+ /** End of rule token type; imaginary token type used only for
+ * local, partial FOLLOW sets to indicate that the local FOLLOW
+ * hit the end of rule. During error recovery, the local FOLLOW
+ * of a token reference may go beyond the end of the rule and have
+ * to use FOLLOW(rule). I have to just shift the token types to 2..n
+ * rather than 1..n to accommodate this imaginary token in my bitsets.
+ * If I didn't use a bitset implementation for runtime sets, I wouldn't
+ * need this. EOF is another candidate for a run time token type for
+ * parsers. Follow sets are not computed for lexers so we do not have
+ * this issue.
+ */
+ public static final int EOR_TOKEN_TYPE =
+ org.antlr.runtime.Token.EOR_TOKEN_TYPE;
+
+ public static final int DOWN = org.antlr.runtime.Token.DOWN;
+ public static final int UP = org.antlr.runtime.Token.UP;
+
+ /** tokens and char range overlap; tokens are MIN_TOKEN_TYPE..n */
+ public static final int MIN_TOKEN_TYPE =
+ org.antlr.runtime.Token.MIN_TOKEN_TYPE;
+
+ /** The wildcard '.' char atom implies all valid characters==UNICODE */
+ //public static final IntSet ALLCHAR = IntervalSet.of(MIN_CHAR_VALUE,MAX_CHAR_VALUE);
+
+ /** The token type or character value; or, signifies special label. */
+ protected int label;
+
+ /** A set of token types or character codes if label==SET */
+ // TODO: try IntervalSet for everything
+ protected IntSet labelSet;
+
+ public Label(int label) {
+ this.label = label;
+ }
+
+ /** Make a set label */
+ public Label(IntSet labelSet) {
+ if ( labelSet==null ) {
+ this.label = SET;
+ this.labelSet = IntervalSet.of(INVALID);
+ return;
+ }
+ int singleAtom = labelSet.getSingleElement();
+ if ( singleAtom!=INVALID ) {
+ // convert back to a single atomic element if |labelSet|==1
+ label = singleAtom;
+ return;
+ }
+ this.label = SET;
+ this.labelSet = labelSet;
+ }
+
+ public Object clone() {
+ Label l;
+ try {
+ l = (Label)super.clone();
+ l.label = this.label;
+ l.labelSet = new IntervalSet();
+ l.labelSet.addAll(this.labelSet);
+ }
+ catch (CloneNotSupportedException e) {
+ throw new InternalError();
+ }
+ return l;
+ }
+
+ public void add(Label a) {
+ if ( isAtom() ) {
+ labelSet = IntervalSet.of(label);
+ label=SET;
+ if ( a.isAtom() ) {
+ labelSet.add(a.getAtom());
+ }
+ else if ( a.isSet() ) {
+ labelSet.addAll(a.getSet());
+ }
+ else {
+ throw new IllegalStateException("can't add element to Label of type "+label);
+ }
+ return;
+ }
+ if ( isSet() ) {
+ if ( a.isAtom() ) {
+ labelSet.add(a.getAtom());
+ }
+ else if ( a.isSet() ) {
+ labelSet.addAll(a.getSet());
+ }
+ else {
+ throw new IllegalStateException("can't add element to Label of type "+label);
+ }
+ return;
+ }
+ throw new IllegalStateException("can't add element to Label of type "+label);
+ }
+
+ public boolean isAtom() {
+ return label>=MIN_ATOM_VALUE;
+ }
+
+ public boolean isEpsilon() {
+ return label==EPSILON;
+ }
+
+ public boolean isSemanticPredicate() {
+ return false;
+ }
+
+ public boolean isAction() {
+ return false;
+ }
+
+ public boolean isSet() {
+ return label==SET;
+ }
+
+ /** return the single atom label or INVALID if not a single atom */
+ public int getAtom() {
+ if ( isAtom() ) {
+ return label;
+ }
+ return INVALID;
+ }
+
+ public IntSet getSet() {
+ if ( label!=SET ) {
+ // convert single element to a set if they ask for it.
+ return IntervalSet.of(label);
+ }
+ return labelSet;
+ }
+
+ public void setSet(IntSet set) {
+ label=SET;
+ labelSet = set;
+ }
+
+ public SemanticContext getSemanticContext() {
+ return null;
+ }
+
+ public boolean matches(int atom) {
+ if ( label==atom ) {
+ return true; // handle the single atom case efficiently
+ }
+ if ( isSet() ) {
+ return labelSet.member(atom);
+ }
+ return false;
+ }
+
+ public boolean matches(IntSet set) {
+ if ( isAtom() ) {
+ return set.member(getAtom());
+ }
+ if ( isSet() ) {
+ // matches if intersection non-nil
+ return !getSet().and(set).isNil();
+ }
+ return false;
+ }
+
+
+ public boolean matches(Label other) {
+ if ( other.isSet() ) {
+ return matches(other.getSet());
+ }
+ if ( other.isAtom() ) {
+ return matches(other.getAtom());
+ }
+ return false;
+ }
+
+ public int hashCode() {
+ if (label==SET) {
+ return labelSet.hashCode();
+ }
+ else {
+ return label;
+ }
+ }
+
+    // TODO: do we care about comparing set {A} with atom A? It doesn't now.
+ public boolean equals(Object o) {
+ if ( o==null ) {
+ return false;
+ }
+ if ( this == o ) {
+ return true; // equals if same object
+ }
+ // labels must be the same even if epsilon or set or sempred etc...
+ if ( label!=((Label)o).label ) {
+ return false;
+ }
+ if ( label==SET ) {
+ return this.labelSet.equals(((Label)o).labelSet);
+ }
+ return true; // label values are same, so true
+ }
+
+ public int compareTo(Object o) {
+ return this.label-((Label)o).label;
+ }
+
+ /** Predicates are lists of AST nodes from the NFA created from the
+ * grammar, but the same predicate could be cut/paste into multiple
+ * places in the grammar. I must compare the text of all the
+ * predicates to truly answer whether {p1,p2} .equals {p1,p2}.
+ * Unfortunately, I cannot rely on the AST.equals() to work properly
+ * so I must do a brute force O(n^2) nested traversal of the Set
+ * doing a String compare.
+ *
+ * At this point, Labels are not compared for equals when they are
+ * predicates, but here's the code for future use.
+ */
+ /*
+ protected boolean predicatesEquals(Set others) {
+ Iterator iter = semanticContext.iterator();
+ while (iter.hasNext()) {
+ AST predAST = (AST) iter.next();
+ Iterator inner = semanticContext.iterator();
+ while (inner.hasNext()) {
+ AST otherPredAST = (AST) inner.next();
+ if ( !predAST.getText().equals(otherPredAST.getText()) ) {
+ return false;
+ }
+ }
+ }
+ return true;
+ }
+ */
+
+ public String toString() {
+ switch (label) {
+ case SET :
+ return labelSet.toString();
+ default :
+ return String.valueOf(label);
+ }
+ }
+
+ public String toString(Grammar g) {
+ switch (label) {
+ case SET :
+ return labelSet.toString(g);
+ default :
+ return g.getTokenDisplayName(label);
+ }
+ }
+
+ /*
+ public String predicatesToString() {
+ if ( semanticContext==NFAConfiguration.DEFAULT_CLAUSE_SEMANTIC_CONTEXT ) {
+ return "!other preds";
+ }
+ StringBuffer buf = new StringBuffer();
+ Iterator iter = semanticContext.iterator();
+ while (iter.hasNext()) {
+ AST predAST = (AST) iter.next();
+ buf.append(predAST.getText());
+ if ( iter.hasNext() ) {
+ buf.append("&");
+ }
+ }
+ return buf.toString();
+ }
+ */
+
+ public static boolean intersect(Label label, Label edgeLabel) {
+ boolean hasIntersection = false;
+ boolean labelIsSet = label.isSet();
+ boolean edgeIsSet = edgeLabel.isSet();
+ if ( !labelIsSet && !edgeIsSet && edgeLabel.label==label.label ) {
+ hasIntersection = true;
+ }
+ else if ( labelIsSet && edgeIsSet &&
+ !edgeLabel.getSet().and(label.getSet()).isNil() ) {
+ hasIntersection = true;
+ }
+ else if ( labelIsSet && !edgeIsSet &&
+ label.getSet().member(edgeLabel.label) ) {
+ hasIntersection = true;
+ }
+ else if ( !labelIsSet && edgeIsSet &&
+ edgeLabel.getSet().member(label.label) ) {
+ hasIntersection = true;
+ }
+ return hasIntersection;
+ }
+}
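
The `NUM_FAUX_LABELS` comment above describes an index-shift trick: faux labels live below zero, so array indices are shifted by `-INVALID`. A self-contained sketch of that idea (not part of the diff; the constant values here are assumed to mirror `Label.java`, and the class/method names are made up for illustration):

```java
// Sketch of the faux-label index shift described in the NUM_FAUX_LABELS
// comment. INVALID is assumed to be the smallest faux label; all values
// are redeclared here so the example compiles on its own.
public class FauxLabelShift {
    static final int INVALID = -7;               // assumed smallest faux label
    static final int EOT = -2;
    static final int NUM_FAUX_LABELS = -INVALID; // = 7

    /** If the real token type is i, the array index is NUM_FAUX_LABELS + i. */
    static int index(int label) {
        return NUM_FAUX_LABELS + label;
    }

    public static void main(String[] args) {
        int maxTokenType = 10;
        // one slot per faux label plus one per real token type
        int[] seen = new int[NUM_FAUX_LABELS + maxTokenType + 1];
        seen[index(EOT)]++;   // faux label -2 maps to a valid index
        seen[index(4)]++;     // real token type 4 maps above the faux range
        System.out.println(seen[index(EOT)] + " " + seen[index(4)]);
    }
}
```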
diff --git a/antlr_3_1_source/analysis/LookaheadSet.java b/antlr_3_1_source/analysis/LookaheadSet.java
new file mode 100644
index 0000000..d4aa84e
--- /dev/null
+++ b/antlr_3_1_source/analysis/LookaheadSet.java
@@ -0,0 +1,104 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+import org.antlr.misc.IntervalSet;
+import org.antlr.misc.IntSet;
+import org.antlr.tool.Grammar;
+
+/** An LL(1) lookahead set; contains a set of token types and a "hasEOF"
+ * condition when the set contains EOF. Since EOF is -1 everywhere and -1
+ * cannot be stored in my BitSet, I set a condition here. There may be other
+ * reasons in the future to abstract a LookaheadSet over a raw BitSet.
+ */
+public class LookaheadSet {
+ public IntervalSet tokenTypeSet;
+
+ public LookaheadSet() {
+ tokenTypeSet = new IntervalSet();
+ }
+
+ public LookaheadSet(IntSet s) {
+ this();
+ tokenTypeSet.addAll(s);
+ }
+
+ public LookaheadSet(int atom) {
+ tokenTypeSet = IntervalSet.of(atom);
+ }
+
+ public void orInPlace(LookaheadSet other) {
+ this.tokenTypeSet.addAll(other.tokenTypeSet);
+ }
+
+ public LookaheadSet or(LookaheadSet other) {
+ return new LookaheadSet(tokenTypeSet.or(other.tokenTypeSet));
+ }
+
+ public LookaheadSet subtract(LookaheadSet other) {
+ return new LookaheadSet(this.tokenTypeSet.subtract(other.tokenTypeSet));
+ }
+
+ public boolean member(int a) {
+ return tokenTypeSet.member(a);
+ }
+
+ public LookaheadSet intersection(LookaheadSet s) {
+ IntSet i = this.tokenTypeSet.and(s.tokenTypeSet);
+ LookaheadSet intersection = new LookaheadSet(i);
+ return intersection;
+ }
+
+ public boolean isNil() {
+ return tokenTypeSet.isNil();
+ }
+
+ public void remove(int a) {
+ tokenTypeSet = (IntervalSet)tokenTypeSet.subtract(IntervalSet.of(a));
+ }
+
+ public int hashCode() {
+ return tokenTypeSet.hashCode();
+ }
+
+ public boolean equals(Object other) {
+ return tokenTypeSet.equals(((LookaheadSet)other).tokenTypeSet);
+ }
+
+ public String toString(Grammar g) {
+ if ( tokenTypeSet==null ) {
+ return "";
+ }
+ String r = tokenTypeSet.toString(g);
+ return r;
+ }
+
+ public String toString() {
+ return toString(null);
+ }
+}
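
The set algebra `LookaheadSet` exposes (`or`, `subtract`, `intersection`, `member`, `isNil`) can be sketched without ANTLR's `IntervalSet` by backing it with `java.util.TreeSet`, which stores negative token types such as EOF (-1) directly. This is a minimal illustration under assumed names, not ANTLR code:

```java
// Minimal stand-in for LookaheadSet's set operations, backed by TreeSet
// so negative token types (e.g. EOF = -1) need no special handling.
import java.util.TreeSet;

public class MiniLookahead {
    final TreeSet<Integer> tokenTypes = new TreeSet<>();

    public MiniLookahead(int... atoms) {
        for (int a : atoms) tokenTypes.add(a);
    }

    public MiniLookahead or(MiniLookahead other) {
        MiniLookahead r = new MiniLookahead();
        r.tokenTypes.addAll(this.tokenTypes);
        r.tokenTypes.addAll(other.tokenTypes);
        return r;
    }

    public MiniLookahead subtract(MiniLookahead other) {
        MiniLookahead r = new MiniLookahead();
        r.tokenTypes.addAll(this.tokenTypes);
        r.tokenTypes.removeAll(other.tokenTypes);
        return r;
    }

    public MiniLookahead intersection(MiniLookahead other) {
        MiniLookahead r = new MiniLookahead();
        r.tokenTypes.addAll(this.tokenTypes);
        r.tokenTypes.retainAll(other.tokenTypes);
        return r;
    }

    public boolean member(int a) { return tokenTypes.contains(a); }
    public boolean isNil()      { return tokenTypes.isEmpty(); }
}
```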
diff --git a/antlr_3_1_source/analysis/NFA.java b/antlr_3_1_source/analysis/NFA.java
new file mode 100644
index 0000000..426c4ce
--- /dev/null
+++ b/antlr_3_1_source/analysis/NFA.java
@@ -0,0 +1,73 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+import org.antlr.tool.Grammar;
+import org.antlr.tool.NFAFactory;
+
+/** An NFA (collection of NFAStates) constructed from a grammar. This
+ *  NFA is one big machine for the entire grammar.  Decision points are recorded
+ * by the Grammar object so we can, for example, convert to DFA or simulate
+ * the NFA (interpret a decision).
+ */
+public class NFA {
+ public static final int INVALID_ALT_NUMBER = -1;
+
+ /** This NFA represents which grammar? */
+ public Grammar grammar;
+
+ /** Which factory created this NFA? */
+ protected NFAFactory factory = null;
+
+ public boolean complete;
+
+ public NFA(Grammar g) {
+ this.grammar = g;
+ }
+
+ public int getNewNFAStateNumber() {
+ return grammar.composite.getNewNFAStateNumber();
+ }
+
+ public void addState(NFAState state) {
+ grammar.composite.addState(state);
+ }
+
+ public NFAState getState(int s) {
+ return grammar.composite.getState(s);
+ }
+
+ public NFAFactory getFactory() {
+ return factory;
+ }
+
+ public void setFactory(NFAFactory factory) {
+ this.factory = factory;
+ }
+}
+
diff --git a/antlr_3_1_source/analysis/NFAConfiguration.java b/antlr_3_1_source/analysis/NFAConfiguration.java
new file mode 100644
index 0000000..6cf9734
--- /dev/null
+++ b/antlr_3_1_source/analysis/NFAConfiguration.java
@@ -0,0 +1,152 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+import org.antlr.misc.Utils;
+
+/** An NFA state, predicted alt, and syntactic/semantic context.
+ * The syntactic context is a pointer into the rule invocation
+ * chain used to arrive at the state. The semantic context is
+ *  the unordered set of semantic predicates encountered before reaching
+ * an NFA state.
+ */
+public class NFAConfiguration {
+ /** The NFA state associated with this configuration */
+ public int state;
+
+ /** What alt is predicted by this configuration */
+ public int alt;
+
+ /** What is the stack of rule invocations that got us to state? */
+ public NFAContext context;
+
+ /** The set of semantic predicates associated with this NFA
+ * configuration. The predicates were found on the way to
+ * the associated NFA state in this syntactic context.
+ * Set: track nodes in grammar containing the predicate
+ * for error messages and such (nice to know where the predicate
+ * came from in case of duplicates etc...). By using a set,
+ * the equals() method will correctly show {pred1,pred2} as equals()
+ * to {pred2,pred1}.
+ */
+ public SemanticContext semanticContext = SemanticContext.EMPTY_SEMANTIC_CONTEXT;
+
+ /** Indicate that this configuration has been resolved and no further
+ * DFA processing should occur with it. Essentially, this is used
+ * as an "ignore" bit so that upon a set of nondeterministic configurations
+ * such as (s|2) and (s|3), I can set (s|3) to resolved=true (and any
+ * other configuration associated with alt 3).
+ */
+ protected boolean resolved;
+
+ /** This bit is used to indicate a semantic predicate will be
+ * used to resolve the conflict. Method
+ * DFA.findNewDFAStatesAndAddDFATransitions will add edges for
+ * the predicates after it performs the reach operation. The
+ * nondeterminism resolver sets this when it finds a set of
+ * nondeterministic configurations (as it does for "resolved" field)
+     *  that have enough predicates to resolve the conflict.
+ */
+ protected boolean resolveWithPredicate;
+
+ /** Lots of NFA states have only epsilon edges (1 or 2). We can
+ * safely consider only n>0 during closure.
+ */
+ protected int numberEpsilonTransitionsEmanatingFromState;
+
+ /** Indicates that the NFA state associated with this configuration
+ * has exactly one transition and it's an atom (not epsilon etc...).
+ */
+ protected boolean singleAtomTransitionEmanating;
+
+ //protected boolean addedDuringClosure = true;
+
+ public NFAConfiguration(int state,
+ int alt,
+ NFAContext context,
+ SemanticContext semanticContext)
+ {
+ this.state = state;
+ this.alt = alt;
+ this.context = context;
+ this.semanticContext = semanticContext;
+ }
+
+ /** An NFA configuration is equal to another if both have
+     *  the same state, they predict the same alternative, and
+ * syntactic/semantic contexts are the same. I don't think
+ * the state|alt|ctx could be the same and have two different
+ * semantic contexts, but might as well define equals to be
+ * everything.
+ */
+ public boolean equals(Object o) {
+ if ( o==null ) {
+ return false;
+ }
+ NFAConfiguration other = (NFAConfiguration)o;
+ return this.state==other.state &&
+ this.alt==other.alt &&
+ this.context.equals(other.context)&&
+ this.semanticContext.equals(other.semanticContext);
+ }
+
+ public int hashCode() {
+ int h = state + alt + context.hashCode();
+ return h;
+ }
+
+ public String toString() {
+ return toString(true);
+ }
+
+ public String toString(boolean showAlt) {
+ StringBuffer buf = new StringBuffer();
+ buf.append(state);
+ if ( showAlt ) {
+ buf.append("|");
+ buf.append(alt);
+ }
+ if ( context.parent!=null ) {
+ buf.append("|");
+ buf.append(context);
+ }
+ if ( semanticContext!=null &&
+ semanticContext!=SemanticContext.EMPTY_SEMANTIC_CONTEXT ) {
+ buf.append("|");
+ String escQuote = Utils.replace(semanticContext.toString(), "\"", "\\\"");
+ buf.append(escQuote);
+ }
+ if ( resolved ) {
+ buf.append("|resolved");
+ }
+ if ( resolveWithPredicate ) {
+ buf.append("|resolveWithPredicate");
+ }
+ return buf.toString();
+ }
+}
diff --git a/antlr_3_1_source/analysis/NFAContext.java b/antlr_3_1_source/analysis/NFAContext.java
new file mode 100644
index 0000000..9ffec39
--- /dev/null
+++ b/antlr_3_1_source/analysis/NFAContext.java
@@ -0,0 +1,294 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+/** A tree node for tracking the call chains for NFAs that invoke
+ * other NFAs. These trees only have to point upwards to their parents
+ * so we can walk back up the tree (i.e., pop stuff off the stack). We
+ *  never walk down from the stack through the children.
+ *
+ * Each alt predicted in a decision has its own context tree,
+ * representing all possible return nodes. The initial stack has
+ * EOF ("$") in it. So, for m alternative productions, the lookahead
+ * DFA will have m NFAContext trees.
+ *
+ * To "push" a new context, just do "new NFAContext(context-parent, state)"
+ * which will add itself to the parent. The root is NFAContext(null, null).
+ *
+ * The complete context for an NFA configuration is the set of invoking states
+ * on the path from this node thru the parent pointers to the root.
+ */
+public class NFAContext {
+ /** This is similar to Bermudez's m constant in his LAR(m) where
+ * you bound the stack so your states don't explode. The main difference
+ * is that I bound only recursion on the stack, not the simple stack size.
+ * This looser constraint will let the conversion roam further to find
+ * lookahead to resolve a decision.
+ *
+     *  Bermudez's m operates differently as it is his LR stack depth;
+ * I'm pretty sure it therefore includes all stack symbols. Here I
+ * restrict the size of an NFA configuration to be finite because a
+ * stack component may mention the same NFA invocation state at
+ * most m times. Hence, the number of DFA states will not grow forever.
+ * With recursive rules like
+ *
+ * e : '(' e ')' | INT ;
+ *
+ * you could chase your tail forever if somebody said "s : e '.' | e ';' ;"
+ * This constant prevents new states from being created after a stack gets
+ * "too big". Actually (12/14/2007) I realize that this example is
+ * trapped by the non-LL(*) detector for recursion in > 1 alt. Here is
+ * an example that trips stack overflow:
+ *
+ * s : a Y | A A A A A X ; // force recursion past m=4
+ * a : A a | Q;
+ *
+ * If that were:
+ *
+ * s : a Y | A+ X ;
+ *
+ * it could loop forever.
+ *
+ * Imagine doing a depth-first search on the e DFA...as you chase an input
+     *  sequence you can recurse to the same rule such as e above.  You'd have a
+     *  chain of ((((.  When you get to some point, you have to give up.  The
+ * states in the chain will have longer and longer NFA config stacks.
+ * Must limit size.
+ *
+ * max=0 implies you cannot ever jump to another rule during closure.
+ * max=1 implies you can make as many calls as you want--you just
+ * can't ever visit a state that is on your rule invocation stack.
+ * I.e., you cannot ever recurse.
+ * max=2 implies you are able to recurse once (i.e., call a rule twice
+ * from the same place).
+ *
+ * This tracks recursion to a rule specific to an invocation site!
+ * It does not detect multiple calls to a rule from different rule
+ * invocation states. We are guaranteed to terminate because the
+ * stack can only grow as big as the number of NFA states * max.
+ *
+ * I noticed that the Java grammar didn't work with max=1, but did with
+ * max=4. Let's set to 4. Recursion is sometimes needed to resolve some
+ * fixed lookahead decisions.
+ */
+ public static int MAX_SAME_RULE_INVOCATIONS_PER_NFA_CONFIG_STACK = 4;
+
+ public NFAContext parent;
+
+ /** The NFA state that invoked another rule's start state is recorded
+ * on the rule invocation context stack.
+ */
+ public NFAState invokingState;
+
+ /** Computing the hashCode is very expensive and closureBusy()
+ * uses it to track when it's seen a state|ctx before to avoid
+ * infinite loops. As we add new contexts, record the hash code
+ * as this.invokingState + parent.cachedHashCode. Avoids walking
+ * up the tree for every hashCode(). Note that this caching works
+ * because a context is a monotonically growing tree of context nodes
+ * and nothing on the stack is ever modified...ctx just grows
+ * or shrinks.
+ */
+ protected int cachedHashCode;
+
+ public NFAContext(NFAContext parent, NFAState invokingState) {
+ this.parent = parent;
+ this.invokingState = invokingState;
+ if ( invokingState!=null ) {
+ this.cachedHashCode = invokingState.stateNumber;
+ }
+ if ( parent!=null ) {
+ this.cachedHashCode += parent.cachedHashCode;
+ }
+ }
+
+ /** Two contexts are equals() if both have
+ * same call stack; walk upwards to the root.
+     *  Recall that the root sentinel node has no invoking state and no parent.
+ * Note that you may be comparing contexts in different alt trees.
+ *
+ * The hashCode is now cheap as it's computed once upon each context
+ * push on the stack. Use it to make equals() more efficient.
+ */
+ public boolean equals(Object o) {
+ NFAContext other = ((NFAContext)o);
+ if ( this.cachedHashCode != other.cachedHashCode ) {
+ return false; // can't be same if hash is different
+ }
+ if ( this==other ) {
+ return true;
+ }
+ // System.out.println("comparing "+this+" with "+other);
+ NFAContext sp = this;
+ while ( sp.parent!=null && other.parent!=null ) {
+ if ( sp.invokingState != other.invokingState ) {
+ return false;
+ }
+ sp = sp.parent;
+ other = other.parent;
+ }
+ if ( !(sp.parent==null && other.parent==null) ) {
+ return false; // both pointers must be at their roots after walk
+ }
+ return true;
+ }
+
+ /** Two contexts conflict() if they are equals() or one is a stack suffix
+ * of the other. For example, contexts [21 12 $] and [21 9 $] do not
+ * conflict, but [21 $] and [21 12 $] do conflict. Note that I should
+ * probably not show the $ in this case. There is a dummy node for each
+ * stack that just means empty; $ is a marker that's all.
+ *
+ * This is used in relation to checking conflicts associated with a
+ * single NFA state's configurations within a single DFA state.
+ * If there are configurations s and t within a DFA state such that
+ * s.state=t.state && s.alt != t.alt && s.ctx conflicts t.ctx then
+ * the DFA state predicts more than a single alt--it's nondeterministic.
+ * Two contexts conflict if they are the same or if one is a suffix
+ * of the other.
+ *
+ * When comparing contexts, if one context has a stack and the other
+ * does not then they should be considered the same context. The only
+ * way for an NFA state p to have an empty context and a nonempty context
+     *  is the case when closure falls off the end of a rule without a call stack
+ * and re-enters the rule with a context. This resolves the issue I
+ * discussed with Sriram Srinivasan Feb 28, 2005 about not terminating
+ * fast enough upon nondeterminism.
+ */
+ public boolean conflictsWith(NFAContext other) {
+ return this.suffix(other); // || this.equals(other);
+ }
+
+ /** [$] suffix any context
+ * [21 $] suffix [21 12 $]
+ * [21 12 $] suffix [21 $]
+ * [21 18 $] suffix [21 18 12 9 $]
+ * [21 18 12 9 $] suffix [21 18 $]
+ * [21 12 $] not suffix [21 9 $]
+ *
+ * Example "[21 $] suffix [21 12 $]" means: rule r invoked current rule
+ * from state 21. Rule s invoked rule r from state 12 which then invoked
+ * current rule also via state 21. While the context prior to state 21
+ * is different, the fact that both contexts emanate from state 21 implies
+ * that they are now going to track perfectly together. Once they
+ * converged on state 21, there is no way they can separate. In other
+ * words, the prior stack state is not consulted when computing where to
+     *  go in the closure operation.  α$ and αβ$ are considered the same stack.
+     *  If α is popped off then $ and β$ remain; they are now an empty and
+ * nonempty context comparison. So, if one stack is a suffix of
+ * another, then it will still degenerate to the simple empty stack
+ * comparison case.
+ */
+ protected boolean suffix(NFAContext other) {
+ NFAContext sp = this;
+ // if one of the contexts is empty, it never enters loop and returns true
+ while ( sp.parent!=null && other.parent!=null ) {
+ if ( sp.invokingState != other.invokingState ) {
+ return false;
+ }
+ sp = sp.parent;
+ other = other.parent;
+ }
+ //System.out.println("suffix");
+ return true;
+ }
+
+ /** Walk upwards to the root of the call stack context looking
+ * for a particular invoking state.
+ public boolean contains(int state) {
+ NFAContext sp = this;
+ int n = 0; // track recursive invocations of state
+ System.out.println("this.context is "+sp);
+ while ( sp.parent!=null ) {
+ if ( sp.invokingState.stateNumber == state ) {
+ return true;
+ }
+ sp = sp.parent;
+ }
+ return false;
+ }
+ */
+
+ /** Given an NFA state number, how many times has the NFA-to-DFA
+ * conversion pushed that state on the stack? In other words,
+ * the NFA state must be a rule invocation state and this method
+ * tells you how many times you've been to this state. If none,
+ * then you have not called the target rule from this state before
+ * (though another NFA state could have called that target rule).
+ * If n=1, then you've been to this state before during this
+ * DFA construction and are going to invoke that rule again.
+ *
+ * Note that many NFA states can invoke rule r, but we ignore recursion
+ * unless you hit the same rule invocation state again.
+ */
+ public int recursionDepthEmanatingFromState(int state) {
+ NFAContext sp = this;
+ int n = 0; // track recursive invocations of target from this state
+ //System.out.println("this.context is "+sp);
+ while ( sp.parent!=null ) {
+ if ( sp.invokingState.stateNumber == state ) {
+ n++;
+ }
+ sp = sp.parent;
+ }
+ return n;
+ }
+
+ public int hashCode() {
+ return cachedHashCode;
+ /*
+ int h = 0;
+ NFAContext sp = this;
+ while ( sp.parent!=null ) {
+ h += sp.invokingState.getStateNumber();
+ sp = sp.parent;
+ }
+ return h;
+ */
+ }
+
+    /** A context is empty if there is no parent, meaning nobody pushed
+ * anything on the call stack.
+ */
+ public boolean isEmpty() {
+ return parent==null;
+ }
+
+ public String toString() {
+ StringBuffer buf = new StringBuffer();
+ NFAContext sp = this;
+ buf.append("[");
+ while ( sp.parent!=null ) {
+ buf.append(sp.invokingState.stateNumber);
+ buf.append(" ");
+ sp = sp.parent;
+ }
+ buf.append("$]");
+ return buf.toString();
+ }
+}
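
The `suffix()` walk documented in `NFAContext` above (climb both parent chains toward the root, stopping when either runs out; any mismatched invoking state means "not a suffix") can be sketched with a stripped-down context chain. This is a simplified illustration with assumed names, using plain `int` state numbers instead of `NFAState`:

```java
// Simplified model of NFAContext's suffix() comparison. A context is a
// parent-linked chain; the root sentinel (parent == null) plays the "$".
public class MiniContext {
    final MiniContext parent;
    final int invokingState;   // ignored on the root sentinel

    public MiniContext(MiniContext parent, int invokingState) {
        this.parent = parent;
        this.invokingState = invokingState;
    }

    public boolean suffix(MiniContext other) {
        MiniContext sp = this;
        // if one of the contexts is empty, the loop never runs: true
        while (sp.parent != null && other.parent != null) {
            if (sp.invokingState != other.invokingState) return false;
            sp = sp.parent;
            other = other.parent;
        }
        return true; // one chain exhausted: the shorter stack is a suffix
    }

    /** Build a stack such as [21 12 $] from top-of-stack-first arguments. */
    public static MiniContext stack(int... statesTopFirst) {
        MiniContext ctx = new MiniContext(null, -1); // the "$" sentinel
        for (int i = statesTopFirst.length - 1; i >= 0; i--) {
            ctx = new MiniContext(ctx, statesTopFirst[i]);
        }
        return ctx;
    }
}
```

With this model, the examples from the javadoc check out: `[21 $]` is a suffix of `[21 12 $]` and vice versa, while `[21 12 $]` and `[21 9 $]` diverge at the second invoking state.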
diff --git a/antlr_3_1_source/analysis/NFAConversionThread.java b/antlr_3_1_source/analysis/NFAConversionThread.java
new file mode 100644
index 0000000..d1d0d92
--- /dev/null
+++ b/antlr_3_1_source/analysis/NFAConversionThread.java
@@ -0,0 +1,65 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2008 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+import org.antlr.misc.Barrier;
+import org.antlr.tool.Grammar;
+import org.antlr.tool.ErrorManager;
+
+/** Convert all decisions i..j inclusive in a thread */
+public class NFAConversionThread implements Runnable {
+ Grammar grammar;
+ int i, j;
+ Barrier barrier;
+ public NFAConversionThread(Grammar grammar,
+ Barrier barrier,
+ int i,
+ int j)
+ {
+ this.grammar = grammar;
+ this.barrier = barrier;
+ this.i = i;
+ this.j = j;
+ }
+ public void run() {
+ for (int decision=i; decision<=j; decision++) {
+ NFAState decisionStartState = grammar.getDecisionNFAStartState(decision);
+ if ( decisionStartState.getNumberOfTransitions()>1 ) {
+ grammar.createLookaheadDFA(decision,true);
+ }
+ }
+ // now wait for others to finish
+ try {
+ barrier.waitForRelease();
+ }
+ catch(InterruptedException e) {
+ ErrorManager.internalError("what the hell? DFA interruptus", e);
+ }
+ }
+}
+
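The worker pattern above (convert decisions i..j inclusive, then block on a barrier) can be sketched with standard JDK classes. This is a minimal, hypothetical analogy only: `CyclicBarrier` stands in for `org.antlr.misc.Barrier`, `done.add(d)` stands in for `grammar.createLookaheadDFA(...)`, and the names `ConversionSketch`/`convertAll` are invented for illustration.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CyclicBarrier;

// Range-partition decisions 1..numDecisions across worker threads,
// each of which "converts" its slice and then waits on a shared barrier,
// mirroring NFAConversionThread's i..j loop + waitForRelease().
public class ConversionSketch {
    public static Set<Integer> convertAll(int numDecisions, int numThreads)
            throws Exception {
        Set<Integer> done = ConcurrentHashMap.newKeySet();
        // numThreads workers + the coordinating thread all meet here
        CyclicBarrier barrier = new CyclicBarrier(numThreads + 1);
        int chunk = (numDecisions + numThreads - 1) / numThreads; // ceil
        for (int t = 0; t < numThreads; t++) {
            final int i = t * chunk + 1;                         // first decision (1-based)
            final int j = Math.min(numDecisions, i + chunk - 1); // last decision, inclusive
            new Thread(() -> {
                for (int d = i; d <= j; d++) {
                    done.add(d); // stand-in for createLookaheadDFA(d, true)
                }
                try {
                    barrier.await(); // like barrier.waitForRelease()
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }).start();
        }
        barrier.await(); // released once every worker has finished its range
        return done;
    }

    public static void main(String[] args) throws Exception {
        Set<Integer> done = convertAll(10, 3);
        if (done.size() != 10) throw new AssertionError(done);
        System.out.println("converted " + done.size() + " decisions");
    }
}
```

Because the coordinator also waits on the barrier, `convertAll` returns only after every decision range has been processed, which is exactly the property the real converter needs before using the DFAs.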
diff --git a/antlr_3_1_source/analysis/NFAState.java b/antlr_3_1_source/analysis/NFAState.java
new file mode 100644
index 0000000..80bd534
--- /dev/null
+++ b/antlr_3_1_source/analysis/NFAState.java
@@ -0,0 +1,259 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+import org.antlr.tool.GrammarAST;
+import org.antlr.tool.Rule;
+import org.antlr.tool.ErrorManager;
+
+/** A state within an NFA. At most 2 transitions emanate from any NFA state. */
+public class NFAState extends State {
+ // I need to distinguish between NFA decision states for (...)* and (...)+
+ // during NFA interpretation.
+ public static final int LOOPBACK = 1;
+ public static final int BLOCK_START = 2;
+ public static final int OPTIONAL_BLOCK_START = 3;
+ public static final int BYPASS = 4;
+ public static final int RIGHT_EDGE_OF_BLOCK = 5;
+
+ public static final int MAX_TRANSITIONS = 2;
+
+ /** How many transitions; 0, 1, or 2 transitions */
+ int numTransitions = 0;
+ public Transition[] transition = new Transition[MAX_TRANSITIONS];
+
+ /** For o-A->o type NFA transitions, record the label that leads to this
+ * state. Useful for creating rich error messages when we find
+ * insufficiently (with preds) covered states.
+ */
+ public Label incidentEdgeLabel;
+
+ /** Which NFA are we in? */
+ public NFA nfa = null;
+
+ /** What's its decision number from 1..n? */
+ protected int decisionNumber = 0;
+
+ /** Subrules (...)* and (...)+ have more than one decision point in
+ * the NFA created for them. They both have a loop-exit-or-stay-in
+ * decision node (the loop back node). They both have a normal
+ * alternative block decision node at the left edge. The (...)* is
+ * worse as it even has a bypass decision (2 alts: stay in or bypass)
+ * node at the extreme left edge. This is not how they get generated
+ * in code, as a while-loop or whatever deals nicely with either. For
+ * error messages (where I need to print the nondeterministic alts)
+ * and for interpretation, I need to use the single DFA that is created
+ * (for efficiency) but interpret the results differently depending
+ * on which of the 2 or 3 decision states uses the DFA. For example,
+ * the DFA will always report alt n+1 as the exit branch for n real
+ * alts, so I need to translate that depending on the decision state.
+ *
+ * If decisionNumber>0 then this var tells you what kind of decision
+ * state it is.
+ */
+ public int decisionStateType;
+
+ /** What rule do we live in? */
+ public Rule enclosingRule;
+
+ /** During debugging and for nondeterminism warnings, it's useful
+ * to know what relationship this node has to the original grammar.
+ * For example, "start of alt 1 of rule a".
+ */
+ protected String description;
+
+ /** Associate this NFAState with the corresponding GrammarAST node
+ * from which this node was created. This is useful not only for
+ * associating the eventual lookahead DFA with the associated
+ * Grammar position, but also for providing users with
+ * nondeterminism warnings. Mainly used by decision states to
+ * report line:col info. Could also be used to track line:col
+ * for elements such as token refs.
+ */
+ public GrammarAST associatedASTNode;
+
+ /** Is this state the sole target of an EOT transition? */
+ protected boolean EOTTargetState = false;
+
+ /** Jean Bovet needs to know, in the GUI, which state pairs
+ * correspond to the start/stop of a block.
+ */
+ public int endOfBlockStateNumber = State.INVALID_STATE_NUMBER;
+
+ public NFAState(NFA nfa) {
+ this.nfa = nfa;
+ }
+
+ public int getNumberOfTransitions() {
+ return numTransitions;
+ }
+
+ public void addTransition(Transition e) {
+ if ( e==null ) {
+ throw new IllegalArgumentException("You can't add a null transition");
+ }
+ if ( numTransitions>=transition.length ) {
+ throw new IllegalArgumentException("You can only have "+transition.length+" transitions");
+ }
+ // e is known non-null here (checked above)
+ transition[numTransitions] = e;
+ numTransitions++;
+ // Set the "back pointer" of the target state so that it
+ // knows about the label of the incoming edge.
+ Label label = e.label;
+ if ( label.isAtom() || label.isSet() ) {
+ if ( ((NFAState)e.target).incidentEdgeLabel!=null ) {
+ ErrorManager.internalError("Clobbered incident edge");
+ }
+ ((NFAState)e.target).incidentEdgeLabel = e.label;
+ }
+ }
+
+ /** Used during optimization to reset a state to have the (single)
+ * transition another state has.
+ */
+ public void setTransition0(Transition e) {
+ if ( e==null ) {
+ throw new IllegalArgumentException("You can't use a solitary null transition");
+ }
+ transition[0] = e;
+ transition[1] = null;
+ numTransitions = 1;
+ }
+
+ public Transition transition(int i) {
+ return transition[i];
+ }
+
+ /** The DFA decision for this NFA decision state always has
+ * an exit path for loops as n+1 for n alts in the loop.
+ * That is really useful for displaying nondeterministic alts
+ * and so on, but for walking the NFA to get a sequence of edge
+ * labels or for actually parsing, we need to get the real alt
+ * number. The real alt number for exiting a loop is always 1
+ * as transition 0 points at the exit branch (we compute DFAs
+ * always for loops at the loopback state).
+ *
+ * For walking/parsing the loopback state:
+ * 1 2 3 display alt (for human consumption)
+ * 2 3 1 walk alt
+ *
+ * For walking the block start:
+ * 1 2 3 display alt
+ * 1 2 3
+ *
+ * For walking the bypass state of a (...)* loop:
+ * 1 2 3 display alt
+ * 1 1 2 all block alts map to entering loop exit means take bypass
+ *
+ * Non loop EBNF do not need to be translated; they are ignored by
+ * this method as decisionStateType==0.
+ *
+ * Return same alt if we can't translate.
+ */
+ public int translateDisplayAltToWalkAlt(int displayAlt) {
+ NFAState nfaStart = this;
+ if ( decisionNumber==0 || decisionStateType==0 ) {
+ return displayAlt;
+ }
+ int walkAlt = 0;
+ // find the NFA loopback state associated with this DFA
+ // and count number of alts (all alt numbers are computed
+ // based upon the loopback's NFA state).
+ /*
+ DFA dfa = nfa.grammar.getLookaheadDFA(decisionNumber);
+ if ( dfa==null ) {
+ ErrorManager.internalError("can't get DFA for decision "+decisionNumber);
+ }
+ */
+ int nAlts = nfa.grammar.getNumberOfAltsForDecisionNFA(nfaStart);
+ switch ( nfaStart.decisionStateType ) {
+ case LOOPBACK :
+ walkAlt = displayAlt % nAlts + 1; // rotate right mod 1..3
+ break;
+ case BLOCK_START :
+ case OPTIONAL_BLOCK_START :
+ walkAlt = displayAlt; // identity transformation
+ break;
+ case BYPASS :
+ if ( displayAlt == nAlts ) {
+ walkAlt = 2; // bypass
+ }
+ else {
+ walkAlt = 1; // any non exit branch alt predicts entering
+ }
+ break;
+ }
+ return walkAlt;
+ }
+
+ // Setter/Getters
+
+ /** What AST node is associated with this NFAState? When you
+ * set the AST node, I set the node to point back to this NFA state.
+ */
+ public void setDecisionASTNode(GrammarAST decisionASTNode) {
+ decisionASTNode.setNFAStartState(this);
+ this.associatedASTNode = decisionASTNode;
+ }
+
+ public String getDescription() {
+ return description;
+ }
+
+ public void setDescription(String description) {
+ this.description = description;
+ }
+
+ public int getDecisionNumber() {
+ return decisionNumber;
+ }
+
+ public void setDecisionNumber(int decisionNumber) {
+ this.decisionNumber = decisionNumber;
+ }
+
+ public boolean isEOTTargetState() {
+ return EOTTargetState;
+ }
+
+ public void setEOTTargetState(boolean eot) {
+ EOTTargetState = eot;
+ }
+
+ public boolean isDecisionState() {
+ return decisionStateType>0;
+ }
+
+ public String toString() {
+ return String.valueOf(stateNumber);
+ }
+
+}
+
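The display-alt to walk-alt tables documented in `translateDisplayAltToWalkAlt()` can be checked in isolation. A minimal sketch, assuming nAlts = 3: the class and method names here are invented for illustration, and only the arithmetic mirrors the method above.

```java
// Standalone check of the display-alt -> walk-alt mapping tables in the
// translateDisplayAltToWalkAlt() comment (for a 3-alt decision).
public class AltTranslationDemo {
    public static final int LOOPBACK = 1, BLOCK_START = 2, BYPASS = 4;

    public static int translate(int decisionStateType, int displayAlt, int nAlts) {
        switch (decisionStateType) {
            case LOOPBACK:    return displayAlt % nAlts + 1;      // rotate: exit alt n -> 1
            case BLOCK_START: return displayAlt;                  // identity transformation
            case BYPASS:      return displayAlt == nAlts ? 2 : 1; // last alt takes the bypass
            default:          return displayAlt;                  // not a translated decision
        }
    }

    public static void main(String[] args) {
        // loopback: display 1 2 3 -> walk 2 3 1
        if (translate(LOOPBACK, 1, 3) != 2) throw new AssertionError();
        if (translate(LOOPBACK, 2, 3) != 3) throw new AssertionError();
        if (translate(LOOPBACK, 3, 3) != 1) throw new AssertionError();
        // bypass: display 1 2 3 -> walk 1 1 2 (only the exit branch bypasses)
        if (translate(BYPASS, 2, 3) != 1) throw new AssertionError();
        if (translate(BYPASS, 3, 3) != 2) throw new AssertionError();
        System.out.println("alt translation tables verified");
    }
}
```

Running the checks confirms the loopback rotation (the DFA's exit alt n+1 maps to walk alt 1) and the bypass collapse (every real alt predicts entering the loop).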
diff --git a/antlr_3_1_source/analysis/NFAToDFAConverter.java b/antlr_3_1_source/analysis/NFAToDFAConverter.java
new file mode 100644
index 0000000..f5d3456
--- /dev/null
+++ b/antlr_3_1_source/analysis/NFAToDFAConverter.java
@@ -0,0 +1,1733 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+import org.antlr.misc.OrderedHashSet;
+import org.antlr.misc.Utils;
+import org.antlr.tool.ErrorManager;
+
+import java.util.*;
+
+import antlr.Token;
+
+/** Code that embodies the NFA conversion to DFA. A new object is needed
+ * per DFA (also required for thread safety if multiple conversions
+ * launched).
+ */
+public class NFAToDFAConverter {
+ /** A list of DFA states we still need to process during NFA conversion */
+ protected List work = new LinkedList();
+
+ /** While converting NFA, we must track states that
+ * reference other rules' NFAs so we know what to do
+ * at the end of a rule. We need to know what context invoked
+ * this rule so we can know where to continue looking for NFA
+ * states. I'm tracking a context tree (record of rule invocation
+ * stack trace) for each alternative that could be predicted.
+ */
+ protected NFAContext[] contextTrees;
+
+ /** We are converting which DFA? */
+ protected DFA dfa;
+
+ public static boolean debug = false;
+
+ /** Should ANTLR launch multiple threads to convert NFAs to DFAs?
+ * With a 2-CPU box, I note that it's about the same single- or
+ * multithreaded. Both CPU meters are going even when single-threaded
+ * so I assume the GC is killing us. Could be the compiler. When I
+ * run java -Xint mode, I get about 15% speed improvement with multiple
+ * threads.
+ */
+ public static boolean SINGLE_THREADED_NFA_CONVERSION = true;
+
+ protected boolean computingStartState = false;
+
+ public NFAToDFAConverter(DFA dfa) {
+ this.dfa = dfa;
+ int nAlts = dfa.getNumberOfAlts();
+ initContextTrees(nAlts);
+ }
+
+ public void convert() {
+ dfa.conversionStartTime = System.currentTimeMillis();
+
+ // create the DFA start state
+ dfa.startState = computeStartState();
+
+ // while more DFA states to check, process them
+ while ( work.size()>0 &&
+ !dfa.nfa.grammar.NFAToDFAConversionExternallyAborted() )
+ {
+ DFAState d = (DFAState) work.get(0);
+ if ( dfa.nfa.grammar.composite.watchNFAConversion ) {
+ System.out.println("convert DFA state "+d.stateNumber+
+ " ("+d.nfaConfigurations.size()+" nfa states)");
+ }
+ int k = dfa.getUserMaxLookahead();
+ if ( k>0 && k==d.getLookaheadDepth() ) {
+ // we've hit max lookahead, make this a stop state
+ //System.out.println("stop state @k="+k+" (terminated early)");
+ /*
+ List sampleInputLabels = d.dfa.probe.getSampleNonDeterministicInputSequence(d);
+ String input = d.dfa.probe.getInputSequenceDisplay(sampleInputLabels);
+ System.out.println("sample input: "+input);
+ */
+ resolveNonDeterminisms(d);
+ // Check to see if we need to add any semantic predicate transitions
+ if ( d.isResolvedWithPredicates() ) {
+ addPredicateTransitions(d);
+ }
+ else {
+ d.setAcceptState(true); // must convert to accept state at k
+ }
+ }
+ else {
+ findNewDFAStatesAndAddDFATransitions(d);
+ }
+ work.remove(0); // done with it; remove from work list
+ }
+
+ // Find all manual syn preds (gated). These are not discovered
+ // in tryToResolveWithSemanticPredicates because they are implicitly
+ // added to every edge by code gen, DOT generation etc...
+ dfa.findAllGatedSynPredsUsedInDFAAcceptStates();
+ }
+
+ /** From this first NFA state of a decision, create a DFA.
+ * Walk each alt in decision and compute closure from the start of that
+ * rule, making sure that the closure does not include other alts within
+ * that same decision. The idea is to associate a specific alt number
+ * with the starting closure so we can trace the alt number for all states
+ * derived from this. At a stop state in the DFA, we can return this alt
+ * number, indicating which alt is predicted.
+ *
+ * If this DFA is derived from a loopback NFA state, then the first
+ * transition is actually the exit branch of the loop. Rather than make
+ * this alternative one, let's make this alt n+1 where n is the number of
+ * alts in this block. This is nice to keep the alts of the block 1..n;
+ * helps with error messages.
+ *
+ * I handle nongreedy in findNewDFAStatesAndAddDFATransitions
+ * when nongreedy and EOT transition. Make state with EOT emanating
+ * from it the accept state.
+ */
+ protected DFAState computeStartState() {
+ NFAState alt = dfa.decisionNFAStartState;
+ DFAState startState = dfa.newState();
+ computingStartState = true;
+ int i = 0;
+ int altNum = 1;
+ while ( alt!=null ) {
+ // find the set of NFA states reachable without consuming
+ // any input symbols for each alt. Keep adding to same
+ // overall closure that will represent the DFA start state,
+ // but track the alt number
+ NFAContext initialContext = contextTrees[i];
+ // if first alt is derived from loopback/exit branch of loop,
+ // make alt=n+1 for n alts instead of 1
+ if ( i==0 &&
+ dfa.getNFADecisionStartState().decisionStateType==NFAState.LOOPBACK )
+ {
+ int numAltsIncludingExitBranch = dfa.nfa.grammar
+ .getNumberOfAltsForDecisionNFA(dfa.decisionNFAStartState);
+ altNum = numAltsIncludingExitBranch;
+ closure((NFAState)alt.transition[0].target,
+ altNum,
+ initialContext,
+ SemanticContext.EMPTY_SEMANTIC_CONTEXT,
+ startState,
+ true
+ );
+ altNum = 1; // make next alt the first
+ }
+ else {
+ closure((NFAState)alt.transition[0].target,
+ altNum,
+ initialContext,
+ SemanticContext.EMPTY_SEMANTIC_CONTEXT,
+ startState,
+ true
+ );
+ altNum++;
+ }
+ i++;
+
+ // move to next alternative
+ if ( alt.transition[1] ==null ) {
+ break;
+ }
+ alt = (NFAState)alt.transition[1].target;
+ }
+
+ // now DFA start state has the complete closure for the decision
+ // but we have tracked which alt is associated with which
+ // NFA states.
+ dfa.addState(startState); // make sure dfa knows about this state
+ work.add(startState);
+ computingStartState = false;
+ return startState;
+ }
+
+ /** From this node, add a d--a-->t transition for all
+ * labels 'a' where t is a DFA node created
+ * from the set of NFA states reachable from any NFA
+ * state in DFA state d.
+ */
+ protected void findNewDFAStatesAndAddDFATransitions(DFAState d) {
+ //System.out.println("work on DFA state "+d);
+ OrderedHashSet labels = d.getReachableLabels();
+ //System.out.println("reachable labels="+labels);
+
+ /*
+ System.out.println("|reachable|/|nfaconfigs|="+
+ labels.size()+"/"+d.getNFAConfigurations().size()+"="+
+ labels.size()/(float)d.getNFAConfigurations().size());
+ */
+
+ // normally EOT is the "default" clause and decisions just
+ // choose that last clause when nothing else matches. DFA conversion
+ // continues searching for a unique sequence that predicts the
+ // various alts or until it finds EOT. So this rule
+ //
+ // DUH : ('x'|'y')* "xy!";
+ //
+ // does not need a greedy indicator. The following rule works fine too
+ //
+ // A : ('x')+ ;
+ //
+ // When the follow branch could match what is in the loop, by default,
+ // the nondeterminism is resolved in favor of the loop. You don't
+ // get a warning because the only way to get this condition is if
+ // the DFA conversion hits the end of the token. In that case,
+ // we're not *sure* what will happen next, but it could be anything.
+ // Anyway, EOT is the default case which means it will never be matched
+ // as resolution goes to the lowest alt number. Exit branches are
+ // always alt n+1 for n alts in a block.
+ //
+ // When a loop is nongreedy and we find an EOT transition, the DFA
+ // state should become an accept state, predicting exit of loop. It's
+ // just reversing the resolution of ambiguity.
+ // TODO: should this be done in the resolveAmbig method?
+ Label EOTLabel = new Label(Label.EOT);
+ boolean containsEOT = labels!=null && labels.contains(EOTLabel);
+ if ( !dfa.isGreedy() && containsEOT ) {
+ convertToEOTAcceptState(d);
+ return; // no more work to do on this accept state
+ }
+
+ // if in filter mode for lexer, want to match shortest not longest
+ // string so if we see an EOT edge emanating from this state, then
+ // convert this state to an accept state. This only counts for
+ // The Tokens rule as all other decisions must continue to look for
+ // longest match.
+ // [Taking back out a few days later on Jan 17, 2006. This could
+ // be an option for the future, but this was the wrong solution for
+ // filtering.]
+ /*
+ if ( dfa.nfa.grammar.type==Grammar.LEXER && containsEOT ) {
+ String filterOption = (String)dfa.nfa.grammar.getOption("filter");
+ boolean filterMode = filterOption!=null && filterOption.equals("true");
+ if ( filterMode && d.dfa.isTokensRuleDecision() ) {
+ DFAState t = reach(d, EOTLabel);
+ if ( t.getNFAConfigurations().size()>0 ) {
+ convertToEOTAcceptState(d);
+ //System.out.println("state "+d+" has EOT target "+t.stateNumber);
+ return;
+ }
+ }
+ }
+ */
+
+ int numberOfEdgesEmanating = 0;
+ Map targetToLabelMap = new HashMap();
+ // for each label that could possibly emanate from NFAStates of d
+ int numLabels = 0;
+ if ( labels!=null ) {
+ numLabels = labels.size();
+ }
+ for (int i=0; i<numLabels; i++) {
+ Label label = (Label)labels.get(i);
+ DFAState t = reach(d, label);
+ if ( debug ) {
+ System.out.println("DFA state after reach "+d+"-"+
+ label.toString(dfa.nfa.grammar)+"->"+t);
+ }
+ if ( t==null ) {
+ // nothing was reached by label due to conflict resolution
+ // EOT also seems to be in here occasionally probably due
+ // to an end-of-rule state seeing it even though we'll pop
+ // an invoking state off the stack; don't bother to conflict
+ // as this label set is a covering approximation only.
+ continue;
+ }
+ //System.out.println("dfa.k="+dfa.getUserMaxLookahead());
+ if ( t.getUniqueAlt()==NFA.INVALID_ALT_NUMBER ) {
+ // Only compute closure if a unique alt number is not known.
+ // If a unique alternative is mentioned among all NFA
+ // configurations then there is no possibility of needing to look
+ // beyond this state; also no possibility of a nondeterminism.
+ // This optimization (May 22, 2006) just dropped -Xint time
+ // for analysis of Java grammar from 11.5s to 2s! Wow.
+ closure(t); // add any NFA states reachable via epsilon
+ }
+
+ /*
+ System.out.println("DFA state after closure "+d+"-"+
+ label.toString(dfa.nfa.grammar)+
+ "->"+t);
+ */
+
+ // add if not in DFA yet and then make d-label->t
+ DFAState targetState = addDFAStateToWorkList(t);
+
+ numberOfEdgesEmanating +=
+ addTransition(d, label, targetState, targetToLabelMap);
+
+ // lookahead of target must be one larger than d's k
+ // We are possibly setting the depth of a pre-existing state
+ // that is equal to one we just computed...not sure if that's
+ // ok.
+ targetState.setLookaheadDepth(d.getLookaheadDepth() + 1);
+ }
+
+ //System.out.println("DFA after reach / closures:\n"+dfa);
+
+ if ( !d.isResolvedWithPredicates() && numberOfEdgesEmanating==0 ) {
+ //System.out.println("dangling DFA state "+d+"\nAfter reach / closures:\n"+dfa);
+ // TODO: can fixed lookahead hit a dangling state case?
+ // TODO: yes, with left recursion
+ //System.err.println("dangling state alts: "+d.getAltSet());
+ dfa.probe.reportDanglingState(d);
+ // turn off all configurations except for those associated with
+ // min alt number; somebody has to win else some input will not
+ // predict any alt.
+ int minAlt = resolveByPickingMinAlt(d, null);
+ // force it to be an accept state
+ // don't call convertToAcceptState() which merges stop states.
+ // other states point at us; don't want them pointing to dead states
+ d.setAcceptState(true); // might be adding new accept state for alt
+ dfa.setAcceptState(minAlt, d);
+ //convertToAcceptState(d, minAlt); // force it to be an accept state
+ }
+
+ // Check to see if we need to add any semantic predicate transitions
+ if ( d.isResolvedWithPredicates() ) {
+ addPredicateTransitions(d);
+ }
+ }
+
+ /** Add a transition from state d to targetState with label in normal case.
+ * if COLLAPSE_ALL_INCIDENT_EDGES, however, try to merge all edges from
+ * d to targetState; this means merging their labels. Another optimization
+ * is to reduce to a single EOT edge any set of edges from d to targetState
+ * where there exists an EOT state. EOT is like the wildcard so don't
+ * bother to test any other edges. Example:
+ *
+ * NUM_INT
+ * : '1'..'9' ('0'..'9')* ('l'|'L')?
+ * | '0' ('x'|'X') ('0'..'9'|'a'..'f'|'A'..'F')+ ('l'|'L')?
+ * | '0' ('0'..'7')* ('l'|'L')?
+ * ;
+ *
+ * The normal decision to predict alts 1, 2, 3 is:
+ *
+ * if ( (input.LA(1)>='1' && input.LA(1)<='9') ) {
+ * alt7=1;
+ * }
+ * else if ( input.LA(1)=='0' ) {
+ * if ( input.LA(2)=='X'||input.LA(2)=='x' ) {
+ * alt7=2;
+ * }
+ * else if ( (input.LA(2)>='0' && input.LA(2)<='7') ) {
+ * alt7=3;
+ * }
+ * else if ( input.LA(2)=='L'||input.LA(2)=='l' ) {
+ * alt7=3;
+ * }
+ * else {
+ * alt7=3;
+ * }
+ * }
+ * else error
+ *
+ * Clearly, alt 3 is predicted with extra work since it tests 0..7
+ * and [lL] before finally realizing that any character is actually
+ * ok at k=2.
+ *
+ * A better decision is as follows:
+ *
+ * if ( (input.LA(1)>='1' && input.LA(1)<='9') ) {
+ * alt7=1;
+ * }
+ * else if ( input.LA(1)=='0' ) {
+ * if ( input.LA(2)=='X'||input.LA(2)=='x' ) {
+ * alt7=2;
+ * }
+ * else {
+ * alt7=3;
+ * }
+ * }
+ *
+ * The DFA originally has 3 edges going to the state the predicts alt 3,
+ * but upon seeing the EOT edge (the "else"-clause), this method
+ * replaces the old merged label (which would have (0..7|l|L)) with EOT.
+ * The code generator then leaves alt 3 predicted with a simple else-
+ * clause. :)
+ *
+ * The only time the EOT optimization makes no sense is in the Tokens
+ * rule. We want EOT to truly mean you have matched an entire token
+ * so don't bother actually rewinding to execute that rule unless there
+ * are actions in that rule. For now, since I am not preventing
+ * backtracking from Tokens rule, I will simply allow the optimization.
+ */
+ protected static int addTransition(DFAState d,
+ Label label,
+ DFAState targetState,
+ Map targetToLabelMap)
+ {
+ //System.out.println(d.stateNumber+"-"+label.toString(dfa.nfa.grammar)+"->"+targetState.stateNumber);
+ int n = 0;
+ if ( DFAOptimizer.COLLAPSE_ALL_PARALLEL_EDGES ) {
+ // track which targets we've hit
+ Integer tI = Utils.integer(targetState.stateNumber);
+ Transition oldTransition = (Transition)targetToLabelMap.get(tI);
+ if ( oldTransition!=null ) {
+ //System.out.println("extra transition to "+tI+" upon "+label.toString(dfa.nfa.grammar));
+ // already seen state d to target transition, just add label
+ // to old label unless EOT
+ if ( label.getAtom()==Label.EOT ) {
+ // merge with EOT means old edge can go away
+ oldTransition.label = new Label(Label.EOT);
+ }
+ else {
+ // don't add anything to EOT, it's essentially the wildcard
+ if ( oldTransition.label.getAtom()!=Label.EOT ) {
+ // ok, not EOT, add in this label to old label
+ oldTransition.label.add(label);
+ }
+ //System.out.println("label updated to be "+oldTransition.label.toString(dfa.nfa.grammar));
+ }
+ }
+ else {
+ // make a transition from d to t upon 'a'
+ n = 1;
+ label = (Label)label.clone(); // clone in case we alter later
+ int transitionIndex = d.addTransition(targetState, label);
+ Transition trans = d.getTransition(transitionIndex);
+ // track target/transition pairs
+ targetToLabelMap.put(tI, trans);
+ }
+ }
+ else {
+ n = 1;
+ d.addTransition(targetState, label);
+ }
+ return n;
+ }
+
+ /** For all NFA states (configurations) merged in d,
+ * compute the epsilon closure; that is, find all NFA states reachable
+ * from the NFA states in d via purely epsilon transitions.
+ */
+ public void closure(DFAState d) {
+ if ( debug ) {
+ System.out.println("closure("+d+")");
+ }
+
+ List configs = new ArrayList();
+ // Because we are adding to the configurations in closure
+ // must clone initial list so we know when to stop doing closure
+ configs.addAll(d.nfaConfigurations);
+ // for each NFA configuration in d (abort if we detect non-LL(*) state)
+ int numConfigs = configs.size();
+ for (int i = 0; i < numConfigs; i++) {
+ NFAConfiguration c = (NFAConfiguration)configs.get(i);
+ if ( c.singleAtomTransitionEmanating ) {
+ continue; // ignore NFA states w/o epsilon transitions
+ }
+ //System.out.println("go do reach for NFA state "+c.state);
+ // figure out reachable NFA states from each of d's nfa states
+ // via epsilon transitions.
+ // Fill configsInClosure rather than altering d configs inline
+ closure(dfa.nfa.getState(c.state),
+ c.alt,
+ c.context,
+ c.semanticContext,
+ d,
+ false);
+ }
+ //System.out.println("after closure d="+d);
+ d.closureBusy = null; // wack all that memory used during closure
+ }
+
+ /** Where can we get from NFA state p traversing only epsilon transitions?
+ * Add new NFA states + context to DFA state d. Also add semantic
+ * predicates to semantic context if collectPredicates is set. We only
+ * collect predicates at hoisting depth 0, meaning before any token/char
+ * have been recognized. This corresponds, during analysis, to the
+ * initial DFA start state construction closure() invocation.
+ *
+ * There are four cases of interest (the last being the usual transition):
+ *
+ * 1. Traverse an edge that takes us to the start state of another
+ * rule, r. We must push this state so that if the DFA
+ * conversion hits the end of rule r, then it knows to continue
+ * the conversion at state following state that "invoked" r. By
+ * construction, there is a single transition emanating from a rule
+ * ref node.
+ *
+ * 2. Reach an NFA state associated with the end of a rule, r, in the
+ * grammar from which it was built. We must add an implicit (i.e.,
+ * don't actually add an epsilon transition) epsilon transition
+ * from r's end state to the NFA state following the NFA state
+ * that transitioned to rule r's start state. Because there are
+ * many states that could reach r, the context for a rule invocation
+ * is part of a call tree not a simple stack. When we fall off end
+ * of rule, "pop" a state off the call tree and add that state's
+ * "following" node to d's NFA configuration list. The context
+ * for this new addition will be the new "stack top" in the call tree.
+ *
+ * 3. Like case 2, we reach an NFA state associated with the end of a
+ * rule, r, in the grammar from which NFA was built. In this case,
+ * however, we realize that during this NFA->DFA conversion, no state
+ * invoked the current rule's NFA. There is no choice but to add
+ * all NFA states that follow references to r's start state. This is
+ * analogous to computing the FOLLOW(r) in the LL(k) world. By
+ * construction, every rule stop state has a chain of nodes emanating
+ * from it that points to every possible following node. This case
+ * is conveniently handled then by the 4th case.
+ *
+ * 4. Normal case. If p can reach another NFA state q, then add
+ * q to d's configuration list, copying p's context for q's context.
+ * If there is a semantic predicate on the transition, then AND it
+ * with any existing semantic context.
+ *
+ * Current state p is always added to d's configuration list as it's part
+ * of the closure as well.
+ *
+ * When is a closure operation in a cycle condition? While it is
+ * very possible to have the same NFA state mentioned twice
+ * within the same DFA state, there are two situations that
+ * would lead to nontermination of closure operation:
+ *
+ * o Whenever closure reaches a configuration where the same state
+ * with the same or a suffix context already exists. This catches
+ * the IF-THEN-ELSE tail recursion cycle and things like
+ *
+ * a : A a | B ;
+ *
+ * the context will be $ (empty stack).
+ *
+ * We have to check
+ * larger context stacks because of (...)+ loops. For
+ * example, the context of a (...)+ can be nonempty if the
+ * surrounding rule is invoked by another rule:
+ *
+ * a : b A | X ;
+ * b : (B|)+ ; // nondeterministic by the way
+ *
+ * The context of the (B|)+ loop is "invoked from item
+ * a : . b A ;" and then the empty alt of the loop can reach back
+ * to itself. The context stack will have one "return
+ * address" element and so we must check for same state, same
+ * context for arbitrary context stacks.
+ *
+ * Idea: If we've seen this configuration before during closure, stop.
+ * We also need to avoid reaching same state with conflicting context.
+ * Ultimately analysis would stop and we'd find the conflict, but we
+ * should stop the computation. Previously I only checked for
+ * exact config. Need to check for same state, suffix context
+ * not just exact context.
+ *
+ * o Whenever closure reaches a configuration where state p
+ * is present in its own context stack. This means that
+ * p is a rule invocation state and the target rule has
+ * been called before. NFAContext.MAX_RECURSIVE_INVOCATIONS
+ * (See the comment there also) determines how many times
+ * it's possible to recurse; clearly we cannot recurse forever.
+ * Some grammars such as the following actually require at
+ * least one recursive call to correctly compute the lookahead:
+ *
+ * a : L ID R
+ * | b
+ * ;
+ * b : ID
+ * | L a R
+ * ;
+ *
+ * Input L ID R is ambiguous but to figure this out, ANTLR
+ * needs to go a->b->a->b to find the L ID sequence.
+ *
+ * Do not allow closure to add a configuration that would
+ * allow too much recursion.
+ *
+ * This case also catches infinite left recursion.
+ */
+ public void closure(NFAState p,
+ int alt,
+ NFAContext context,
+ SemanticContext semanticContext,
+ DFAState d,
+ boolean collectPredicates)
+ {
+ if ( debug ){
+ System.out.println("closure at "+p.enclosingRule.name+" state "+p.stateNumber+"|"+
+ alt+" filling DFA state "+d.stateNumber+" with context "+context
+ );
+ }
+
+ if ( DFA.MAX_TIME_PER_DFA_CREATION>0 &&
+ System.currentTimeMillis() - d.dfa.conversionStartTime >=
+ DFA.MAX_TIME_PER_DFA_CREATION )
+ {
+ // bail way out; we've blown up somehow
+ throw new AnalysisTimeoutException(d.dfa);
+ }
+
+ NFAConfiguration proposedNFAConfiguration =
+ new NFAConfiguration(p.stateNumber,
+ alt,
+ context,
+ semanticContext);
+
+ // Avoid infinite recursion
+ if ( closureIsBusy(d, proposedNFAConfiguration) ) {
+ if ( debug ) {
+ System.out.println("avoid visiting exact closure computation NFA config: "+
+ proposedNFAConfiguration+" in "+p.enclosingRule.name);
+ System.out.println("state is "+d.dfa.decisionNumber+"."+d.stateNumber);
+ }
+ return;
+ }
+
+ // set closure to be busy for this NFA configuration
+ d.closureBusy.add(proposedNFAConfiguration);
+
+ // p itself is always in closure
+ d.addNFAConfiguration(p, proposedNFAConfiguration);
+
+ // Case 1: are we a reference to another rule?
+ Transition transition0 = p.transition[0];
+ if ( transition0 instanceof RuleClosureTransition ) {
+ int depth = context.recursionDepthEmanatingFromState(p.stateNumber);
+ // Detect recursion by more than a single alt, which indicates
+ // that the decision's lookahead language is non-regular; terminate
+ if ( depth == 1 && d.dfa.getUserMaxLookahead()==0 ) { // k=* only
+ d.dfa.recursiveAltSet.add(alt); // indicate that this alt is recursive
+ if ( d.dfa.recursiveAltSet.size()>1 ) {
+ //System.out.println("recursive alts: "+d.dfa.recursiveAltSet.toString());
+ d.abortedDueToMultipleRecursiveAlts = true;
+ throw new NonLLStarDecisionException(d.dfa);
+ }
+ /*
+ System.out.println("alt "+alt+" in rule "+p.enclosingRule+" dec "+d.dfa.decisionNumber+
+ " ctx: "+context);
+ System.out.println("d="+d);
+ */
+ }
+ // Detect an attempt to recurse too high
+ // if this context has hit the max recursions for p.stateNumber,
+ // don't allow it to enter p.stateNumber again
+ if ( depth >= NFAContext.MAX_SAME_RULE_INVOCATIONS_PER_NFA_CONFIG_STACK ) {
+ /*
+ System.out.println("OVF state "+d);
+ System.out.println("proposed "+proposedNFAConfiguration);
+ */
+ d.abortedDueToRecursionOverflow = true;
+ d.dfa.probe.reportRecursionOverflow(d, proposedNFAConfiguration);
+ if ( debug ) {
+ System.out.println("analysis overflow in closure("+d.stateNumber+")");
+ }
+ return;
+ }
+
+ // otherwise, it's cool to (re)enter target of this rule ref
+ RuleClosureTransition ref = (RuleClosureTransition)transition0;
+ // first create a new context and push onto call tree,
+ // recording the fact that we are invoking a rule and
+ // from which state (case 2 below will get the following state
+ // via the RuleClosureTransition emanating from the invoking state
+ // pushed on the stack).
+ // Reset the context to reflect the fact we invoked rule
+ NFAContext newContext = new NFAContext(context, p);
+ //System.out.println("invoking rule "+ref.rule.name);
+ // System.out.println(" context="+context);
+ // traverse epsilon edge to new rule
+ NFAState ruleTarget = (NFAState)ref.target;
+ closure(ruleTarget, alt, newContext, semanticContext, d, collectPredicates);
+ }
+ // Case 2: end of rule state, context (i.e., an invoker) exists
+ else if ( p.isAcceptState() && context.parent!=null ) {
+ NFAState whichStateInvokedRule = context.invokingState;
+ RuleClosureTransition edgeToRule =
+ (RuleClosureTransition)whichStateInvokedRule.transition[0];
+ NFAState continueState = edgeToRule.followState;
+ NFAContext newContext = context.parent; // "pop" invoking state
+ closure(continueState, alt, newContext, semanticContext, d, collectPredicates);
+ }
+ // Case 3: end of rule state, nobody invoked this rule (no context)
+ // Fall thru to be handled by case 4 automagically.
+ // Case 4: ordinary NFA->DFA conversion case: simple epsilon transition
+ else {
+ // recurse down any epsilon transitions
+ if ( transition0!=null && transition0.isEpsilon() ) {
+ boolean collectPredicatesAfterAction = collectPredicates;
+ if ( transition0.isAction() && collectPredicates ) {
+ collectPredicatesAfterAction = false;
+ /*
+ if ( computingStartState ) {
+ System.out.println("found action during prediction closure "+((ActionLabel)transition0.label).actionAST.token);
+ }
+ */
+ }
+ closure((NFAState)transition0.target,
+ alt,
+ context,
+ semanticContext,
+ d,
+ collectPredicatesAfterAction
+ );
+ }
+ else if ( transition0!=null && transition0.isSemanticPredicate() ) {
+ if ( computingStartState ) {
+ if ( collectPredicates ) {
+ // only indicate we can see a predicate if we're collecting preds;
+ // we could be computing the start state and have seen an action before this.
+ dfa.predicateVisible = true;
+ }
+ else {
+ // this state has a pred, but we can't see it.
+ dfa.hasPredicateBlockedByAction = true;
+ // System.out.println("found pred during prediction but blocked by action found previously");
+ }
+ }
+ // continue closure here too, but add the sem pred to ctx
+ SemanticContext newSemanticContext = semanticContext;
+ if ( collectPredicates ) {
+ // AND the previous semantic context with new pred
+ SemanticContext labelContext =
+ transition0.label.getSemanticContext();
+ // do not hoist syn preds from other rules; only get if in
+ // starting state's rule (i.e., context is empty)
+ int walkAlt =
+ dfa.decisionNFAStartState.translateDisplayAltToWalkAlt(alt);
+ NFAState altLeftEdge =
+ dfa.nfa.grammar.getNFAStateForAltOfDecision(dfa.decisionNFAStartState,walkAlt);
+ /*
+ System.out.println("state "+p.stateNumber+" alt "+alt+" walkAlt "+walkAlt+" trans to "+transition0.target);
+ System.out.println("DFA start state "+dfa.decisionNFAStartState.stateNumber);
+ System.out.println("alt left edge "+altLeftEdge.stateNumber+
+ ", epsilon target "+
+ altLeftEdge.transition(0).target.stateNumber);
+ */
+ if ( !labelContext.isSyntacticPredicate() ||
+ p==altLeftEdge.transition[0].target )
+ {
+ //System.out.println("&"+labelContext+" enclosingRule="+p.enclosingRule);
+ newSemanticContext =
+ SemanticContext.and(semanticContext, labelContext);
+ }
+ }
+ closure((NFAState)transition0.target,
+ alt,
+ context,
+ newSemanticContext,
+ d,
+ collectPredicates);
+ }
+ Transition transition1 = p.transition[1];
+ if ( transition1!=null && transition1.isEpsilon() ) {
+ closure((NFAState)transition1.target,
+ alt,
+ context,
+ semanticContext,
+ d,
+ collectPredicates);
+ }
+ }
+
+ // don't remove "busy" flag as we want to prevent all
+ // references to same config of state|alt|ctx|semCtx even
+ // if resulting from another NFA state
+ }
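The `closureBusy` guard above is the reason closure terminates at all on cyclic epsilon paths. As a minimal standalone sketch (a toy integer-labeled graph, not ANTLR's `NFAState`/`NFAContext` machinery; all names here are hypothetical), the same pattern looks like this:

```java
import java.util.*;

// Simplified sketch: epsilon-closure with a "busy" set so a cyclic
// epsilon path cannot recurse forever, mirroring d.closureBusy above.
public class ClosureSketch {
    // epsilon edges of a toy NFA containing a cycle: 0->1->2->0, plus 2->3
    static Map<Integer, int[]> eps = Map.of(
            0, new int[]{1},
            1, new int[]{2},
            2, new int[]{0, 3},
            3, new int[]{});

    static void closure(int state, Set<Integer> busy, Set<Integer> result) {
        if (!busy.add(state)) return;   // already visited: stop the cycle
        result.add(state);              // the state itself is always in the closure
        for (int t : eps.get(state)) closure(t, busy, result);
    }

    public static void main(String[] args) {
        Set<Integer> result = new TreeSet<>();
        closure(0, new HashSet<>(), result);
        System.out.println(result);     // every state is collected, despite the cycle
    }
}
```

In the real converter the busy-set key is the whole configuration (state, alt, context, semantic context), not just the state number, which is why the surrounding comments spend so much effort on context comparison.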
+
+ /** A closure operation should abort if that computation has already
+ * been done or a computation with a conflicting context has already
+ * been done. If the proposed NFA config has the same state and alt as
+ * an existing configuration, there is potentially a problem. If the
+ * stack context is identical then clearly the exact same computation
+ * is proposed. If one context is a suffix of the other, then again the
+ * computation is in an identical context. ?$ and ??$ are considered
+ * the same stack. We could walk the configurations linearly, comparing
+ * contexts, instead of using a set for exact matches, but that is much
+ * slower because it forfeits the O(1) Set lookup. I use exact match
+ * since ANTLR always detects the conflict later when checking for
+ * context suffixes... I check for left-recursive stuff and terminate
+ * before analysis to avoid the need to do this more expensive computation.
+ *
+ * 12-31-2007: I had to use the loop again rather than simple
+ * closureBusy.contains(proposedNFAConfiguration) lookup. The
+ * semantic context should not be considered when determining if
+ * a closure operation is busy. I saw a FOLLOW closure operation
+ * spin until time out because the predicate context kept increasing
+ * in size even though it's same boolean value. This seems faster also
+ * because I'm not doing String.equals on the preds all the time.
+ *
+ * 05-05-2008: Hmm...well, I think it was a mistake to remove the sem
+ * ctx check below...adding back in. Coincides with report of ANTLR
+ * getting super slow: http://www.antlr.org:8888/browse/ANTLR-235
+ * This could be because it doesn't properly compute then resolve
+ * a predicate expression. Seems to fix unit test:
+ * TestSemanticPredicates.testSemanticContextPreventsEarlyTerminationOfClosure()
+ * Changing back to Set from List. Changed a large grammar from 8 minutes
+ * to 11 seconds. Cool. Closing ANTLR-235.
+ */
+ public static boolean closureIsBusy(DFAState d,
+ NFAConfiguration proposedNFAConfiguration)
+ {
+ return d.closureBusy.contains(proposedNFAConfiguration);
+/*
+ int numConfigs = d.closureBusy.size();
+ // Check epsilon cycle (same state, same alt, same context)
+ for (int i = 0; i < numConfigs; i++) {
+ NFAConfiguration c = (NFAConfiguration) d.closureBusy.get(i);
+ if ( proposedNFAConfiguration.state==c.state &&
+ proposedNFAConfiguration.alt==c.alt &&
+ proposedNFAConfiguration.semanticContext.equals(c.semanticContext) &&
+ proposedNFAConfiguration.context.suffix(c.context) )
+ {
+ return true;
+ }
+ }
+ return false;
+ */
+ }
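The suffix-context comparison that the javadoc (and the commented-out loop) refer to can be illustrated on its own. This is a hypothetical helper over plain lists of "return address" state numbers, not ANTLR's `NFAContext.suffix()`:

```java
import java.util.*;

// Sketch: two context stacks are treated as the same computation when one
// is a suffix of the other, e.g. [21] vs [12, 21] end the same way
// (and the empty stack $ is a suffix of everything).
public class SuffixSketch {
    static boolean isSuffix(List<Integer> shorter, List<Integer> longer) {
        if (shorter.size() > longer.size()) return isSuffix(longer, shorter);
        int offset = longer.size() - shorter.size();
        return shorter.equals(longer.subList(offset, longer.size()));
    }

    public static void main(String[] args) {
        System.out.println(isSuffix(List.of(21), List.of(12, 21)));  // suffix
        System.out.println(isSuffix(List.of(), List.of(12, 21)));    // $ is a suffix of anything
        System.out.println(isSuffix(List.of(12), List.of(12, 21)));  // not a suffix
    }
}
```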
+
+ /** Given the set of NFA states in DFA state d, find all NFA states
+ * reachable traversing label arcs. By definition, there can be
+ * only one DFA state reachable by an atom from DFA state d so we must
+ * find and merge all NFA states reachable via label. Return a new
+ * DFAState that has all of those NFA states with their context (i.e.,
+ * which alt do they predict and where to return to if they fall off
+ * end of a rule).
+ *
+ * Because we cannot jump to another rule nor fall off the end of a rule
+ * via a non-epsilon transition, NFA states reachable from d have the
+ * same configuration as the NFA state in d. So if NFA state 7 in d's
+ * configurations can reach NFA state 13 then 13 will be added to the
+ * new DFAState (labelDFATarget) with the same configuration as state
+ * 7 had.
+ *
+ * This method does not see EOT transitions off the end of token rule
+ * accept states if the rule was invoked by somebody.
+ */
+ public DFAState reach(DFAState d, Label label) {
+ //System.out.println("reach "+label.toString(dfa.nfa.grammar)+" from "+d.stateNumber);
+ DFAState labelDFATarget = dfa.newState();
+
+ // for each NFA state in d with a labeled edge,
+ // add in target states for label
+ //System.out.println("size(d.state="+d.stateNumber+")="+d.nfaConfigurations.size());
+ //System.out.println("size(labeled edge states)="+d.configurationsWithLabeledEdges.size());
+ List configs = d.configurationsWithLabeledEdges;
+ int numConfigs = configs.size();
+ for (int i = 0; i < numConfigs; i++) {
+ NFAConfiguration c = (NFAConfiguration)configs.get(i);
+ if ( c.resolved || c.resolveWithPredicate ) {
+ continue; // the conflict resolver indicates we must leave alone
+ }
+ NFAState p = dfa.nfa.getState(c.state);
+ // by design of the grammar->NFA conversion, only transition 0
+ // may have a non-epsilon edge.
+ Transition edge = p.transition[0];
+ if ( edge==null || !c.singleAtomTransitionEmanating ) {
+ continue;
+ }
+ Label edgeLabel = edge.label;
+
+ // SPECIAL CASE
+ // if it's an EOT transition on end of lexer rule, but context
+ // stack is not empty, then don't see the EOT; the closure
+ // will have added in the proper states following the reference
+ // to this rule in the invoking rule. In other words, if
+ // somebody called this rule, don't see the EOT emanating from
+ // this accept state.
+ if ( c.context.parent!=null && edgeLabel.label==Label.EOT ) {
+ continue;
+ }
+
+ // Labels not unique at this point (not until addReachableLabels)
+ // so try simple int label match before general set intersection
+ //System.out.println("comparing "+edgeLabel+" with "+label);
+ if ( Label.intersect(label, edgeLabel) ) {
+ // found a transition with label;
+ // add NFA target to (potentially) new DFA state
+ NFAConfiguration newC = labelDFATarget.addNFAConfiguration(
+ (NFAState)edge.target,
+ c.alt,
+ c.context,
+ c.semanticContext);
+ }
+ }
+ if ( labelDFATarget.nfaConfigurations.size()==0 ) {
+ // kill; it's empty
+ dfa.setState(labelDFATarget.stateNumber, null);
+ labelDFATarget = null;
+ }
+ return labelDFATarget;
+ }
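`reach` is the classic "move" step of subset construction. Stripped of contexts, predicates, and the EOT special case, it reduces to the following standalone sketch (toy data and names, not the ANTLR types):

```java
import java.util.*;

// Simplified "reach": from the set of NFA states in a DFA state, collect
// every NFA state reachable on a single labeled edge. An empty result is
// returned as null, matching the "kill; it's empty" handling above.
public class ReachSketch {
    record Edge(char label, int target) {}

    static Map<Integer, List<Edge>> edges = Map.of(
            0, List.of(new Edge('a', 1), new Edge('b', 2)),
            1, List.of(new Edge('a', 3)),
            2, List.<Edge>of(),
            3, List.<Edge>of());

    static Set<Integer> reach(Set<Integer> dfaState, char label) {
        Set<Integer> target = new TreeSet<>();
        for (int s : dfaState)
            for (Edge e : edges.get(s))
                if (e.label() == label) target.add(e.target());
        return target.isEmpty() ? null : target;
    }

    public static void main(String[] args) {
        System.out.println(reach(Set.of(0, 1), 'a'));  // both 0 and 1 move on 'a'
        System.out.println(reach(Set.of(2, 3), 'a'));  // no edges: null
    }
}
```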
+
+ /** Walk the configurations of this DFA state d looking for the
+ * configuration, c, that has a transition on EOT. State d should
+ * be converted to an accept state predicting the c.alt. Blast
+ * d's current configuration set and make it just have config c.
+ *
+ * TODO: can there be more than one config with EOT transition?
+ * That would mean that two NFA configurations could reach the
+ * end of the token with possibly different predicted alts.
+ * Seems like that would be rare or impossible. Perhaps convert
+ * this routine to find all such configs and give error if >1.
+ */
+ protected void convertToEOTAcceptState(DFAState d) {
+ Label eot = new Label(Label.EOT);
+ int numConfigs = d.nfaConfigurations.size();
+ for (int i = 0; i < numConfigs; i++) {
+ NFAConfiguration c = (NFAConfiguration)d.nfaConfigurations.get(i);
+ if ( c.resolved || c.resolveWithPredicate ) {
+ continue; // the conflict resolver indicates we must leave alone
+ }
+ NFAState p = dfa.nfa.getState(c.state);
+ Transition edge = p.transition[0];
+ Label edgeLabel = edge.label;
+ if ( edgeLabel.equals(eot) ) {
+ //System.out.println("config with EOT: "+c);
+ d.setAcceptState(true);
+ //System.out.println("d goes from "+d);
+ d.nfaConfigurations.clear();
+ d.addNFAConfiguration(p,c.alt,c.context,c.semanticContext);
+ //System.out.println("to "+d);
+ return; // assume only one EOT transition
+ }
+ }
+ }
+
+ /** Add a new DFA state to the DFA if not already present.
+ * If the DFA state uniquely predicts a single alternative, it
+ * becomes a stop state; don't add to work list. Further, if
+ * there exists an NFA state predicted by > 1 different alternatives
+ * and with the same syn and sem context, the DFA is nondeterministic for
+ * at least one input sequence reaching that NFA state.
+ */
+ protected DFAState addDFAStateToWorkList(DFAState d) {
+ DFAState existingState = dfa.addState(d);
+ if ( d != existingState ) {
+ // already there...use/return the existing DFA state.
+ // But also set the states[d.stateNumber] to the existing
+ // DFA state because the closureIsBusy must report
+ // infinite recursion on a state before it knows
+ // whether or not the state will already be
+ // found after closure on it finishes. It could be
+ // referring to a state that will ultimately not make it
+ // into the reachable state space and the error
+ // reporting must be able to compute the path from
+ // start to the error state with infinite recursion
+ dfa.setState(d.stateNumber, existingState);
+ return existingState;
+ }
+
+ // if not there, then examine new state.
+
+ // resolve syntactic conflicts by choosing a single alt or
+ // by using semantic predicates if present.
+ resolveNonDeterminisms(d);
+
+ // If deterministic, don't add this state; it's an accept state
+ // Just return as a valid DFA state
+ int alt = d.getUniquelyPredictedAlt();
+ if ( alt!=NFA.INVALID_ALT_NUMBER ) { // uniquely predicts an alt?
+ d = convertToAcceptState(d, alt);
+ /*
+ System.out.println("convert to accept; DFA "+d.dfa.decisionNumber+" state "+d.stateNumber+" uniquely predicts alt "+
+ d.getUniquelyPredictedAlt());
+ */
+ }
+ else {
+ // unresolved, add to work list to continue NFA conversion
+ work.add(d);
+ }
+ return d;
+ }
+
+ protected DFAState convertToAcceptState(DFAState d, int alt) {
+ // only merge stop states if they are deterministic and no
+ // recursion problems and only if they have the same gated pred
+ // context!
+ // Later, the error reporting may want to trace the path from
+ // the start state to the nondet state
+ if ( DFAOptimizer.MERGE_STOP_STATES &&
+ d.getNonDeterministicAlts()==null &&
+ !d.abortedDueToRecursionOverflow &&
+ !d.abortedDueToMultipleRecursiveAlts )
+ {
+ // check to see if we already have an accept state for this alt
+ // [must do this after we resolve nondeterminisms in general]
+ DFAState acceptStateForAlt = dfa.getAcceptState(alt);
+ if ( acceptStateForAlt!=null ) {
+ // we already have an accept state for alt;
+ // Are their gated sem pred contexts the same?
+ // For now we assume a braindead version: both must not
+ // have gated preds or share exactly same single gated pred.
+ // The equals() method is only defined on Predicate contexts not
+ // OR etc...
+ SemanticContext gatedPreds = d.getGatedPredicatesInNFAConfigurations();
+ SemanticContext existingStateGatedPreds =
+ acceptStateForAlt.getGatedPredicatesInNFAConfigurations();
+ if ( (gatedPreds==null && existingStateGatedPreds==null) ||
+ ((gatedPreds!=null && existingStateGatedPreds!=null) &&
+ gatedPreds.equals(existingStateGatedPreds)) )
+ {
+ // make this d.statenumber point at old DFA state
+ dfa.setState(d.stateNumber, acceptStateForAlt);
+ dfa.removeState(d); // remove this state from unique DFA state set
+ d = acceptStateForAlt; // use old accept state; throw this one out
+ return d;
+ }
+ // else consider it a new accept state; fall through.
+ }
+ }
+ d.setAcceptState(true); // new accept state for alt
+ dfa.setAcceptState(alt, d);
+ return d;
+ }
+
+ /** If > 1 NFA configurations within this DFA state have identical
+ * NFA state and context, but differ in their predicted
+ * alternative, then a single input sequence predicts multiple alts.
+ * TODO: update for new context suffix stuff 3-9-2005
+ * The NFA decision is therefore syntactically indistinguishable
+ * from the left edge upon at least one input sequence. We may
+ * terminate the NFA to DFA conversion for these paths since no
+ * paths emanating from those NFA states can possibly separate
+ * these conjoined twins once intertwined to make things
+ * deterministic (unless there are semantic predicates; see below).
+ *
+ * Upon a nondeterministic set of NFA configurations, we should
+ * report a problem to the grammar designer and resolve the issue
+ * by arbitrarily picking the first alternative (this usually
+ * ends up producing the most natural behavior). Pick the lowest
+ * alt number and just turn off all NFA configurations
+ * associated with the other alts. Rather than remove conflicting
+ * NFA configurations, I set the "resolved" bit so that future
+ * computations will ignore them. In this way, we maintain the
+ * complete DFA state with all its configurations, but prevent
+ * future DFA conversion operations from pursuing undesirable
+ * paths. Remember that we want to terminate DFA conversion as
+ * soon as we know the decision is deterministic *or*
+ * nondeterministic.
+ *
+ * [BTW, I have convinced myself that there can be at most one
+ * set of nondeterministic configurations in a DFA state. Only NFA
+ * configurations arising from the same input sequence can appear
+ * in a DFA state. There is no way to have another complete set
+ * of nondeterministic NFA configurations without another input
+ * sequence, which would reach a different DFA state. Therefore,
+ * the two nondeterministic NFA configuration sets cannot collide
+ * in the same DFA state.]
+ *
+ * Consider DFA state {(s|1),(s|2),(s|3),(t|3),(v|4)} where (s|a)
+ * is state 's' and alternative 'a'. Here, configuration set
+ * {(s|1),(s|2),(s|3)} predicts 3 different alts. Configurations
+ * (s|2) and (s|3) are "resolved", leaving {(s|1),(t|3),(v|4)} as
+ * items that must still be considered by the DFA conversion
+ * algorithm in DFA.findNewDFAStatesAndAddDFATransitions().
+ *
+ * Consider the following grammar where alts 1 and 2 are no
+ * problem because of the 2nd lookahead symbol. Alts 3 and 4 are
+ * identical and will therefore reach the rule end NFA state but
+ * predicting 2 different alts (no amount of future lookahead
+ * will render them deterministic/separable):
+ *
+ * a : A B
+ * | A C
+ * | A
+ * | A
+ * ;
+ *
+ * Here is a (slightly reduced) NFA of this grammar:
+ *
+ * (1)-A->(2)-B->(end)-EOF->(8)
+ * | ^
+ * (2)-A->(3)-C----|
+ * | ^
+ * (4)-A->(5)------|
+ * | ^
+ * (6)-A->(7)------|
+ *
+ * where (n) is NFA state n. To begin DFA conversion, the start
+ * state is created:
+ *
+ * {(1|1),(2|2),(4|3),(6|4)}
+ *
+ * Upon A, all NFA configurations lead to new NFA states yielding
+ * new DFA state:
+ *
+ * {(2|1),(3|2),(5|3),(7|4),(end|3),(end|4)}
+ *
+ * where the configurations with state end in them are added
+ * during the epsilon closure operation. State end predicts both
+ * alts 3 and 4. An error is reported, the latter configuration is
+ * flagged as resolved leaving the DFA state as:
+ *
+ * {(2|1),(3|2),(5|3),(7|4|resolved),(end|3),(end|4|resolved)}
+ *
+ * As NFA configurations are added to a DFA state during its
+ * construction, the reachable set of labels is computed. Here
+ * reachable is {B,C,EOF} because there is at least one NFA state
+ * in the DFA state that can transition upon those symbols.
+ *
+ * The final DFA looks like:
+ *
+ * {(1|1),(2|2),(4|3),(6|4)}
+ * |
+ * v
+ * {(2|1),(3|2),(5|3),(7|4),(end|3),(end|4)} -B-> (end|1)
+ * | |
+ * C ----EOF-> (8,3)
+ * |
+ * v
+ * (end|2)
+ *
+ * Upon AB, alt 1 is predicted. Upon AC, alt 2 is predicted.
+ * Upon A EOF, alt 3 is predicted. Alt 4 is not a viable
+ * alternative.
+ *
+ * The algorithm is essentially to walk all the configurations
+ * looking for a conflict of the form (s|i) and (s|j) for i!=j.
+ * Use a hash table to track state+context pairs for collisions
+ * so that we have O(n) to walk the n configurations looking for
+ * a conflict. Upon every conflict, track the alt number so
+ * we have a list of all nondeterministically predicted alts. Also
+ * track the minimum alt. Next go back over the configurations, setting
+ * the "resolved" bit for any that have an alt that is a member of
+ * the nondeterministic set. This will effectively remove any alts
+ * but the one we want from future consideration.
+ *
+ * See resolveWithSemanticPredicates()
+ *
+ * AMBIGUOUS TOKENS
+ *
+ * With keywords and ID tokens, there is an inherent ambiguity in that
+ * "int" can be matched by ID also. Each lexer rule has an EOT
+ * transition emanating from it which is used whenever the end of
+ * a rule is reached and another token rule did not invoke it. EOT
+ * is the only thing that can be seen next. If two rules are identical
+ * like "int" and "int" then the 2nd def is unreachable and you'll get
+ * a warning. We prevent a warning though for the keyword/ID issue as
+ * ID is still reachable. This can be a bit weird: a '+' rule followed
+ * by a '+'|'+=' rule will fail to match '+' for the 2nd rule.
+ *
+ * If all NFA states in this DFA state are targets of EOT transitions,
+ * (and there is more than one state plus no unique alt is predicted)
+ * then DFA conversion will leave this state as a dead state as nothing
+ * can be reached from this state. To resolve the ambiguity, just do
+ * what flex and friends do: pick the first rule (alt in this case) to
+ * win. This means you should put keywords before the ID rule.
+ * If the DFA state has only one NFA state then there is no issue:
+ * it uniquely predicts one alt. :) Problem
+ * states will look like this during conversion:
+ *
+ * DFA 1:{9|1, 19|2, 14|3, 20|2, 23|2, 24|2, ...}-->5:{41|3, 42|2}
+ *
+ * Worse, when you have two identical literal rules, you will see 3 alts
+ * in the EOT state (one for ID and one each for the identical rules).
+ */
+ public void resolveNonDeterminisms(DFAState d) {
+ if ( debug ) {
+ System.out.println("resolveNonDeterminisms "+d.toString());
+ }
+ boolean conflictingLexerRules = false;
+ Set nondeterministicAlts = d.getNonDeterministicAlts();
+ if ( debug && nondeterministicAlts!=null ) {
+ System.out.println("nondet alts="+nondeterministicAlts);
+ }
+
+ // CHECK FOR AMBIGUOUS EOT (if |allAlts|>1 and EOT state, resolve)
+ // grab any config to see if EOT state; any other configs must
+ // transition on EOT to get to this DFA state as well so all
+ // states in d must be targets of EOT. These are the end states
+ // created in NFAFactory.build_EOFState
+ NFAConfiguration anyConfig = d.nfaConfigurations.get(0);
+ NFAState anyState = dfa.nfa.getState(anyConfig.state);
+
+ // if d is target of EOT and more than one predicted alt
+ // indicate that d is nondeterministic on all alts otherwise
+ // it looks like state has no problem
+ if ( anyState.isEOTTargetState() ) {
+ Set allAlts = d.getAltSet();
+ // is more than 1 alt predicted?
+ if ( allAlts!=null && allAlts.size()>1 ) {
+ nondeterministicAlts = allAlts;
+ // track Tokens rule issues differently than other decisions
+ if ( d.dfa.isTokensRuleDecision() ) {
+ dfa.probe.reportLexerRuleNondeterminism(d,allAlts);
+ //System.out.println("Tokens rule DFA state "+d+" nondeterministic");
+ conflictingLexerRules = true;
+ }
+ }
+ }
+
+ // if no problems return unless we aborted work on d to avoid inf recursion
+ if ( !d.abortedDueToRecursionOverflow && nondeterministicAlts==null ) {
+ return; // no problems, return
+ }
+
+ // if we're not a conflicting lexer rule and we didn't abort, report ambig
+ // We should get a report for abort so don't give another
+ if ( !d.abortedDueToRecursionOverflow && !conflictingLexerRules ) {
+ // TODO: with k=x option set, this is called twice for same state
+ dfa.probe.reportNondeterminism(d, nondeterministicAlts);
+ // TODO: how to turn off when it's only the FOLLOW that is
+ // conflicting. This used to shut off even alts i,j < n
+ // conflict warnings. :(
+ }
+
+ // ATTEMPT TO RESOLVE WITH SEMANTIC PREDICATES
+ boolean resolved =
+ tryToResolveWithSemanticPredicates(d, nondeterministicAlts);
+ if ( resolved ) {
+ if ( debug ) {
+ System.out.println("resolved DFA state "+d.stateNumber+" with pred");
+ }
+ d.resolvedWithPredicates = true;
+ dfa.probe.reportNondeterminismResolvedWithSemanticPredicate(d);
+ return;
+ }
+
+ // RESOLVE SYNTACTIC CONFLICT BY REMOVING ALL BUT ONE ALT
+ resolveByChoosingFirstAlt(d, nondeterministicAlts);
+
+ //System.out.println("state "+d.stateNumber+" resolved to alt "+winningAlt);
+ }
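The conflict walk and min-alt resolution described in the javadoc (find (s|i) and (s|j) with i!=j, then turn off every conflicting alt except the minimum) can be sketched in isolation. `Config` and `resolveByPickingMinAlt` below are simplified stand-ins for ANTLR's `NFAConfiguration` and resolution code, not the real implementation:

```java
import java.util.*;

// Sketch: resolve a nondeterministic DFA state by keeping the minimum
// conflicting alt and marking the other conflicting configs "resolved".
// Configurations whose alt is not in the conflict set are left alone.
public class ResolveSketch {
    static class Config {
        final int state, alt;
        boolean resolved;
        Config(int state, int alt) { this.state = state; this.alt = alt; }
    }

    static void resolveByPickingMinAlt(List<Config> configs) {
        // same NFA state predicted by different alts => those alts conflict
        Map<Integer, Integer> firstAltForState = new HashMap<>();
        Set<Integer> nondetAlts = new TreeSet<>();
        for (Config c : configs) {
            Integer prev = firstAltForState.putIfAbsent(c.state, c.alt);
            if (prev != null && prev != c.alt) {
                nondetAlts.add(prev);
                nondetAlts.add(c.alt);
            }
        }
        if (nondetAlts.isEmpty()) return;          // deterministic: nothing to do
        int min = Collections.min(nondetAlts);
        for (Config c : configs)
            if (c.alt != min && nondetAlts.contains(c.alt)) c.resolved = true;
    }

    public static void main(String[] args) {
        // DFA state {(7|3),(7|4),(9|1)}: NFA state 7 predicts alts 3 and 4
        List<Config> configs = List.of(new Config(7, 3), new Config(7, 4), new Config(9, 1));
        resolveByPickingMinAlt(configs);
        for (Config c : configs)
            System.out.println(c.state + "|" + c.alt + (c.resolved ? "|resolved" : ""));
    }
}
```

Only (7|4) is turned off: alt 3 wins as the minimum conflicting alt, and (9|1) is untouched because alt 1 was never part of the conflict, just as `turnOffOtherAlts` below restricts itself to `nondeterministicAlts`.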
+
+ protected int resolveByChoosingFirstAlt(DFAState d, Set nondeterministicAlts) {
+ int winningAlt = 0;
+ if ( dfa.isGreedy() ) {
+ winningAlt = resolveByPickingMinAlt(d,nondeterministicAlts);
+ }
+ else {
+ // If nongreedy, the exit alt should win, but only if it's
+ // involved in the nondeterminism!
+ /*
+ System.out.println("resolving exit alt for decision="+
+ dfa.decisionNumber+" state="+d);
+ System.out.println("nondet="+nondeterministicAlts);
+ System.out.println("exit alt "+exitAlt);
+ */
+ int exitAlt = dfa.getNumberOfAlts();
+ if ( nondeterministicAlts.contains(Utils.integer(exitAlt)) ) {
+ // if nongreedy and exit alt is one of those nondeterministic alts
+ // predicted, resolve in favor of what follows block
+ winningAlt = resolveByPickingExitAlt(d,nondeterministicAlts);
+ }
+ else {
+ winningAlt = resolveByPickingMinAlt(d,nondeterministicAlts);
+ }
+ }
+ return winningAlt;
+ }
+
+ /** Turn off all configurations associated with the
+ * set of incoming nondeterministic alts except the min alt number.
+ * There may be many alts among the configurations but only turn off
+ * the ones with problems (other than the min alt of course).
+ *
+ * If nondeterministicAlts is null then turn off all configs except those
+ * associated with the minimum alt.
+ *
+ * Return the min alt found.
+ */
+ protected int resolveByPickingMinAlt(DFAState d, Set nondeterministicAlts) {
+ int min = Integer.MAX_VALUE;
+ if ( nondeterministicAlts!=null ) {
+ min = getMinAlt(nondeterministicAlts);
+ }
+ else {
+ min = d.minAltInConfigurations;
+ }
+
+ turnOffOtherAlts(d, min, nondeterministicAlts);
+
+ return min;
+ }
+
+ /** Resolve state d by choosing exit alt, which is same value as the
+ * number of alternatives. Return that exit alt.
+ */
+ protected int resolveByPickingExitAlt(DFAState d, Set nondeterministicAlts) {
+ int exitAlt = dfa.getNumberOfAlts();
+ turnOffOtherAlts(d, exitAlt, nondeterministicAlts);
+ return exitAlt;
+ }
+
+ /** turn off all states associated with alts other than the good one
+ * (as long as they are one of the nondeterministic ones)
+ */
+ protected static void turnOffOtherAlts(DFAState d, int min, Set nondeterministicAlts) {
+ int numConfigs = d.nfaConfigurations.size();
+ for (int i = 0; i < numConfigs; i++) {
+ NFAConfiguration configuration = (NFAConfiguration)d.nfaConfigurations.get(i);
+ if ( configuration.alt!=min ) {
+ if ( nondeterministicAlts==null ||
+ nondeterministicAlts.contains(Utils.integer(configuration.alt)) )
+ {
+ configuration.resolved = true;
+ }
+ }
+ }
+ }
+
+ protected static int getMinAlt(Set nondeterministicAlts) {
+ int min = Integer.MAX_VALUE;
+ for (Integer altI : nondeterministicAlts) {
+ int alt = altI.intValue();
+ if ( alt < min ) {
+ min = alt;
+ }
+ }
+ return min;
+ }
+
+ /** See if a set of nondeterministic alternatives can be disambiguated
+ * with the semantic predicate contexts of the alternatives.
+ *
+ * Without semantic predicates, syntactic conflicts are resolved
+ * by simply choosing the first viable alternative. In the
+ * presence of semantic predicates, you can resolve the issue by
+ * evaluating boolean expressions at run time. During analysis,
+ * this amounts to suppressing grammar error messages to the
+ * developer. NFA configurations are always marked as "to be
+ * resolved with predicates" so that
+ * DFA.findNewDFAStatesAndAddDFATransitions() will know to ignore
+ * these configurations and add predicate transitions to the DFA
+ * after adding token/char labels.
+ *
+ * During analysis, we can simply make sure that for n
+ * ambiguously predicted alternatives there are at least n-1
+ * unique predicate sets. The nth alternative can be predicted
+ * with "not" the "or" of all other predicates. NFA configurations without
+ * predicates are assumed to have the default predicate of
+ * "true" from a user point of view. When true is combined via || with
+ * another predicate, the predicate is a tautology and must be removed
+ * from consideration for disambiguation:
+ *
+ * a : b | B ; // hoisting p1||true out of rule b, yields no predicate
+ * b : {p1}? B | B ;
+ *
+ * This is done down in getPredicatesPerNonDeterministicAlt().
+ */
+ protected boolean tryToResolveWithSemanticPredicates(DFAState d,
+ Set nondeterministicAlts)
+ {
+ Map altToPredMap =
+ getPredicatesPerNonDeterministicAlt(d, nondeterministicAlts);
+
+ if ( altToPredMap.size()==0 ) {
+ return false;
+ }
+
+ //System.out.println("nondeterministic alts with predicates: "+altToPredMap);
+ dfa.probe.reportAltPredicateContext(d, altToPredMap);
+
+ if ( nondeterministicAlts.size()-altToPredMap.size()>1 ) {
+ // too few predicates to resolve; just return
+ // TODO: actually do we need to gen error here?
+ return false;
+ }
+
+ // Handle case where 1 predicate is missing
+ // Case 1. Semantic predicates
+ // If the missing pred is on the nth alt, !(union of other preds)==true
+ // so we can avoid that computation. If the naked alt is the ith, then
+ // we must test it with !(union) since semantically predicated alts are
+ // order independent
+ // Case 2: Syntactic predicates
+ // The naked alt is always assumed to be true as the order of
+ // alts is the order of precedence. The naked alt will be a tautology
+ // anyway as it's !(union of other preds). This implies
+ // that there is no such thing as noviable alt for synpred edges
+ // emanating from a DFA state.
+ if ( altToPredMap.size()==nondeterministicAlts.size()-1 ) {
+ // if there are n-1 predicates for n nondeterministic alts, can fix
+ org.antlr.misc.BitSet ndSet = org.antlr.misc.BitSet.of(nondeterministicAlts);
+ org.antlr.misc.BitSet predSet = org.antlr.misc.BitSet.of(altToPredMap);
+ int nakedAlt = ndSet.subtract(predSet).getSingleElement();
+ SemanticContext nakedAltPred = null;
+ if ( nakedAlt == max(nondeterministicAlts) ) {
+ // the naked alt is the last nondet alt and will be the default clause
+ nakedAltPred = new SemanticContext.TruePredicate();
+ }
+ else {
+ // pretend naked alternative is covered with !(union other preds)
+ // unless it's a synpred since those have precedence same
+ // as alt order
+ SemanticContext unionOfPredicatesFromAllAlts =
+ getUnionOfPredicates(altToPredMap);
+ //System.out.println("all predicates "+unionOfPredicatesFromAllAlts);
+ if ( unionOfPredicatesFromAllAlts.isSyntacticPredicate() ) {
+ nakedAltPred = new SemanticContext.TruePredicate();
+ }
+ else {
+ nakedAltPred =
+ SemanticContext.not(unionOfPredicatesFromAllAlts);
+ }
+ }
+
+ //System.out.println("covering naked alt="+nakedAlt+" with "+nakedAltPred);
+
+ altToPredMap.put(Utils.integer(nakedAlt), nakedAltPred);
+ // set all config with alt=nakedAlt to have the computed predicate
+ int numConfigs = d.nfaConfigurations.size();
+ for (int i = 0; i < numConfigs; i++) {
+ NFAConfiguration configuration = (NFAConfiguration)d.nfaConfigurations.get(i);
+ if ( configuration.alt == nakedAlt ) {
+ configuration.semanticContext = nakedAltPred;
+ }
+ }
+ }
+
+ if ( altToPredMap.size()==nondeterministicAlts.size() ) {
+ // RESOLVE CONFLICT by picking one NFA configuration for each alt
+ // and setting its resolvedWithPredicate flag
+ // First, prevent a recursion warning on this state due to
+ // pred resolution
+ if ( d.abortedDueToRecursionOverflow ) {
+ d.dfa.probe.removeRecursiveOverflowState(d);
+ }
+ int numConfigs = d.nfaConfigurations.size();
+ for (int i = 0; i < numConfigs; i++) {
+ NFAConfiguration configuration = (NFAConfiguration)d.nfaConfigurations.get(i);
+ SemanticContext semCtx = (SemanticContext)
+ altToPredMap.get(Utils.integer(configuration.alt));
+ if ( semCtx!=null ) {
+ // resolve (first found) with pred
+ // and remove alt from problem list
+ configuration.resolveWithPredicate = true;
+ configuration.semanticContext = semCtx; // reset to combined
+ altToPredMap.remove(Utils.integer(configuration.alt));
+
+ // notify grammar that we've used the preds contained in semCtx
+ if ( semCtx.isSyntacticPredicate() ) {
+ dfa.nfa.grammar.synPredUsedInDFA(dfa, semCtx);
+ }
+ }
+ else if ( nondeterministicAlts.contains(Utils.integer(configuration.alt)) ) {
+ // resolve all configurations for nondeterministic alts
+ // for which there is no predicate context by turning it off
+ configuration.resolved = true;
+ }
+ }
+ return true;
+ }
+
+ return false; // couldn't fix the problem with predicates
+ }
+
+ /** Return a mapping from nondeterministic alt to combined list of predicates.
+ * If both (s|i|semCtx1) and (t|i|semCtx2) exist, then the proper predicate
+ * for alt i is semCtx1||semCtx2 because you have arrived at this single
+ * DFA state via two NFA paths, both of which have semantic predicates.
+ * We ignore deterministic alts because syntax alone is sufficient
+ * to predict those. Do not include their predicates.
+ *
+ * Alts with no predicate are assumed to have {true}? pred.
+ *
+ * When combining via || with "true", all predicates are removed from
+ * consideration since the expression will always be true and hence
+ * not tell us how to resolve anything. So, if any NFA configuration
+ * in this DFA state does not have a semantic context, the alt cannot
+ * be resolved with a predicate.
+ *
+ * If nonnull, incidentEdgeLabel tells us what NFA transition label
+ * we did a reach on to compute state d. d may have insufficient
+ * preds, so we really want this for the error message.
+ */
+ protected Map getPredicatesPerNonDeterministicAlt(DFAState d,
+ Set nondeterministicAlts)
+ {
+ // map alt to combined SemanticContext
+ Map altToPredicateContextMap =
+ new HashMap();
+ // init the alt to predicate set map
+ Map<Integer, Set<SemanticContext>> altToSetOfContextsMap =
+ new HashMap<Integer, Set<SemanticContext>>();
+ for (Iterator it = nondeterministicAlts.iterator(); it.hasNext();) {
+ Integer altI = (Integer) it.next();
+ altToSetOfContextsMap.put(altI, new HashSet());
+ }
+
+ /*
+ List sampleInputLabels = d.dfa.probe.getSampleNonDeterministicInputSequence(d);
+ String input = d.dfa.probe.getInputSequenceDisplay(sampleInputLabels);
+ System.out.println("sample input: "+input);
+ */
+
+ // for each configuration, create a unique set of predicates
+ // Also, track the alts with at least one uncovered configuration
+ // (one w/o a predicate); tracks tautologies like p1||true
+ Map<Integer, Set> altToLocationsReachableWithoutPredicate = new HashMap<Integer, Set>();
+ Set nondetAltsWithUncoveredConfiguration = new HashSet();
+ //System.out.println("configs="+d.nfaConfigurations);
+ //System.out.println("configs with preds?"+d.atLeastOneConfigurationHasAPredicate);
+ //System.out.println("configs with preds="+d.configurationsWithPredicateEdges);
+ int numConfigs = d.nfaConfigurations.size();
+ for (int i = 0; i < numConfigs; i++) {
+ NFAConfiguration configuration = (NFAConfiguration)d.nfaConfigurations.get(i);
+ Integer altI = Utils.integer(configuration.alt);
+ // if alt is nondeterministic, combine its predicates
+ if ( nondeterministicAlts.contains(altI) ) {
+ // if there is a predicate for this NFA configuration, OR in
+ if ( configuration.semanticContext !=
+ SemanticContext.EMPTY_SEMANTIC_CONTEXT )
+ {
+ Set predSet = altToSetOfContextsMap.get(altI);
+ predSet.add(configuration.semanticContext);
+ }
+ else {
+ // if no predicate, but it's part of nondeterministic alt
+ // then at least one path exists not covered by a predicate.
+ // must remove predicate for this alt; track incomplete alts
+ nondetAltsWithUncoveredConfiguration.add(altI);
+ /*
+ NFAState s = dfa.nfa.getState(configuration.state);
+ System.out.println("###\ndec "+dfa.decisionNumber+" alt "+configuration.alt+
+ " enclosing rule for nfa state not covered "+
+ s.enclosingRule);
+ if ( s.associatedASTNode!=null ) {
+ System.out.println("token="+s.associatedASTNode.token);
+ }
+ System.out.println("nfa state="+s);
+
+ if ( s.incidentEdgeLabel!=null && Label.intersect(incidentEdgeLabel, s.incidentEdgeLabel) ) {
+ Set locations = altToLocationsReachableWithoutPredicate.get(altI);
+ if ( locations==null ) {
+ locations = new HashSet();
+ altToLocationsReachableWithoutPredicate.put(altI, locations);
+ }
+ locations.add(s.associatedASTNode.token);
+ }
+ */
+ }
+ }
+ }
+
+ // For each alt, OR together all unique predicates associated with
+ // all configurations
+ // Also, track the list of incompletely covered alts: those alts
+ // with at least 1 predicate and at least one configuration w/o a
+ // predicate. We want this in order to report to the decision probe.
+ List incompletelyCoveredAlts = new ArrayList();
+ for (Iterator it = nondeterministicAlts.iterator(); it.hasNext();) {
+ Integer altI = (Integer) it.next();
+ Set contextsForThisAlt = altToSetOfContextsMap.get(altI);
+ if ( nondetAltsWithUncoveredConfiguration.contains(altI) ) { // >= 1 config has no ctx
+ if ( contextsForThisAlt.size()>0 ) { // && at least one pred
+ incompletelyCoveredAlts.add(altI); // this alt is incompletely covered
+ }
+ continue; // don't include; at least 1 config has no ctx
+ }
+ SemanticContext combinedContext = null;
+ for (Iterator itrSet = contextsForThisAlt.iterator(); itrSet.hasNext();) {
+ SemanticContext ctx = (SemanticContext) itrSet.next();
+ combinedContext =
+ SemanticContext.or(combinedContext,ctx);
+ }
+ altToPredicateContextMap.put(altI, combinedContext);
+ }
+
+ if ( incompletelyCoveredAlts.size()>0 ) {
+ /*
+ System.out.println("prob in dec "+dfa.decisionNumber+" state="+d);
+ FASerializer serializer = new FASerializer(dfa.nfa.grammar);
+ String result = serializer.serialize(dfa.startState);
+ System.out.println("dfa: "+result);
+ System.out.println("incomplete alts: "+incompletelyCoveredAlts);
+ System.out.println("nondet="+nondeterministicAlts);
+ System.out.println("nondetAltsWithUncoveredConfiguration="+ nondetAltsWithUncoveredConfiguration);
+ System.out.println("altToCtxMap="+altToSetOfContextsMap);
+ System.out.println("altToPredicateContextMap="+altToPredicateContextMap);
+ */
+ for (int i = 0; i < numConfigs; i++) {
+ NFAConfiguration configuration = (NFAConfiguration)d.nfaConfigurations.get(i);
+ Integer altI = Utils.integer(configuration.alt);
+ if ( incompletelyCoveredAlts.contains(altI) &&
+ configuration.semanticContext == SemanticContext.EMPTY_SEMANTIC_CONTEXT )
+ {
+ NFAState s = dfa.nfa.getState(configuration.state);
+ /*
+ System.out.print("nondet config w/o context "+configuration+
+ " incident "+(s.incidentEdgeLabel!=null?s.incidentEdgeLabel.toString(dfa.nfa.grammar):null));
+ if ( s.associatedASTNode!=null ) {
+ System.out.print(" token="+s.associatedASTNode.token);
+ }
+ else System.out.println();
+ */
+ // We want to report getting to an NFA state with an
+ // incoming label, unless it's EOF, w/o a predicate.
+ if ( s.incidentEdgeLabel!=null && s.incidentEdgeLabel.label != Label.EOF ) {
+ if ( s.associatedASTNode==null || s.associatedASTNode.token==null ) {
+ ErrorManager.internalError("no AST/token for nonepsilon target w/o predicate");
+ }
+ else {
+ Set locations = altToLocationsReachableWithoutPredicate.get(altI);
+ if ( locations==null ) {
+ locations = new HashSet();
+ altToLocationsReachableWithoutPredicate.put(altI, locations);
+ }
+ locations.add(s.associatedASTNode.token);
+ }
+ }
+ }
+ }
+ dfa.probe.reportIncompletelyCoveredAlts(d,
+ altToLocationsReachableWithoutPredicate);
+ }
+
+ return altToPredicateContextMap;
+ }
+
+ /** OR together all predicates from the alts. Note that the predicate
+ * for an alt could itself be a combination of predicates.
+ */
+ protected static SemanticContext getUnionOfPredicates(Map altToPredMap) {
+ Iterator iter;
+ SemanticContext unionOfPredicatesFromAllAlts = null;
+ iter = altToPredMap.values().iterator();
+ while ( iter.hasNext() ) {
+ SemanticContext semCtx = (SemanticContext)iter.next();
+ if ( unionOfPredicatesFromAllAlts==null ) {
+ unionOfPredicatesFromAllAlts = semCtx;
+ }
+ else {
+ unionOfPredicatesFromAllAlts =
+ SemanticContext.or(unionOfPredicatesFromAllAlts,semCtx);
+ }
+ }
+ return unionOfPredicatesFromAllAlts;
+ }
+
+ /** For each NFA config in d, look for the "predicate required" flag set
+ * during nondeterminism resolution.
+ *
+ * Add the predicate edges sorted by the alternative number; I'm fairly
+ * sure that I could walk the configs backwards so they are added to
+ * the predDFATarget in the right order, but it's best to make sure.
+ * Predicates succeed in the order they are specified. Alt i wins
+ * over alt i+1 if both predicates are true.
+ */
+ protected void addPredicateTransitions(DFAState d) {
+ List configsWithPreds = new ArrayList();
+ // get a list of all configs with predicates
+ int numConfigs = d.nfaConfigurations.size();
+ for (int i = 0; i < numConfigs; i++) {
+ NFAConfiguration c = (NFAConfiguration)d.nfaConfigurations.get(i);
+ if ( c.resolveWithPredicate ) {
+ configsWithPreds.add(c);
+ }
+ }
+ // Sort ascending according to alt; alt i has higher precedence than i+1
+ Collections.sort(configsWithPreds,
+ new Comparator() {
+ public int compare(Object a, Object b) {
+ NFAConfiguration ca = (NFAConfiguration)a;
+ NFAConfiguration cb = (NFAConfiguration)b;
+ if ( ca.alt < cb.alt ) return -1;
+ else if ( ca.alt > cb.alt ) return 1;
+ return 0;
+ }
+ });
+ List predConfigsSortedByAlt = configsWithPreds;
+ // Now, we can add edges emanating from d for these preds in the right order
+ for (int i = 0; i < predConfigsSortedByAlt.size(); i++) {
+ NFAConfiguration c = (NFAConfiguration)predConfigsSortedByAlt.get(i);
+ DFAState predDFATarget = d.dfa.getAcceptState(c.alt);
+ if ( predDFATarget==null ) {
+ predDFATarget = dfa.newState(); // create if not there.
+ // create a new DFA state that is a target of the predicate from d
+ predDFATarget.addNFAConfiguration(dfa.nfa.getState(c.state),
+ c.alt,
+ c.context,
+ c.semanticContext);
+ predDFATarget.setAcceptState(true);
+ dfa.setAcceptState(c.alt, predDFATarget);
+ DFAState existingState = dfa.addState(predDFATarget);
+ if ( predDFATarget != existingState ) {
+ // already there...use/return the existing DFA state that
+ // is a target of this predicate. Make this state number
+ // point at the existing state
+ dfa.setState(predDFATarget.stateNumber, existingState);
+ predDFATarget = existingState;
+ }
+ }
+ // add a transition to pred target from d
+ d.addTransition(predDFATarget, new PredicateLabel(c.semanticContext));
+ }
+ }
+
+ protected void initContextTrees(int numberOfAlts) {
+ contextTrees = new NFAContext[numberOfAlts];
+ for (int i = 0; i < contextTrees.length; i++) {
+ int alt = i+1;
+ // add a dummy root node so that an NFA configuration can
+ // always point at an NFAContext. If a context refers to this
+ // node then it implies there is no call stack for
+ // that configuration
+ contextTrees[i] = new NFAContext(null, null);
+ }
+ }
+
+ public static int max(Set s) {
+ if ( s==null ) {
+ return Integer.MIN_VALUE;
+ }
+ int i = 0;
+ int m = 0;
+ for (Iterator it = s.iterator(); it.hasNext();) {
+ i++;
+ Integer I = (Integer) it.next();
+ if ( i==1 ) { // init m with first value
+ m = I.intValue();
+ continue;
+ }
+ if ( I.intValue()>m ) {
+ m = I.intValue();
+ }
+ }
+ return m;
+ }
+}
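The naked-alt handling in tryToResolveWithSemanticPredicates() above can be pictured outside ANTLR. A hedged sketch (class and method names are hypothetical; plain strings stand in for SemanticContext): given predicates covering n-1 of n nondeterministic alts, the remaining alt is covered by the negated union of the others.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of covering the "naked" (predicate-less) alternative with
// !(union of the other alts' predicates); strings play the role of
// SemanticContext and the class name is made up for this sketch.
public class NakedAltSketch {
    static String coverNakedAlt(Map<Integer, String> altToPred) {
        StringBuilder union = new StringBuilder();
        for (String p : altToPred.values()) {
            if (union.length() > 0) union.append("||");
            union.append(p);
        }
        return "!(" + union + ")"; // plays the role of SemanticContext.not(union)
    }

    public static void main(String[] args) {
        Map<Integer, String> preds = new LinkedHashMap<Integer, String>();
        preds.put(1, "p1"); // alt 1 covered by p1
        preds.put(2, "p2"); // alt 2 covered by p2
        // alt 3 has no predicate; pretend it is covered by !(p1||p2)
        System.out.println(coverNakedAlt(preds)); // prints !(p1||p2)
    }
}
```

Note this only applies when the naked alt is not the maximum alt number; as the code above shows, the last alt becomes the default clause and gets a TruePredicate instead.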
diff --git a/antlr_3_1_source/analysis/NonLLStarDecisionException.java b/antlr_3_1_source/analysis/NonLLStarDecisionException.java
new file mode 100644
index 0000000..885bdd9
--- /dev/null
+++ b/antlr_3_1_source/analysis/NonLLStarDecisionException.java
@@ -0,0 +1,38 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2008 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+/** Used to abort DFA construction when we find a non-LL(*) decision; i.e.,
+ * a decision that has recursion in more than a single alt.
+ */
+public class NonLLStarDecisionException extends RuntimeException {
+ public DFA abortedDFA;
+ public NonLLStarDecisionException(DFA abortedDFA) {
+ this.abortedDFA = abortedDFA;
+ }
+}
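Because NonLLStarDecisionException extends RuntimeException, a single throw can unwind the deeply recursive closure computation and abort the whole DFA build. A hedged sketch of that control-flow pattern (names hypothetical, not ANTLR's API):

```java
// Toy illustration of aborting a recursive build with an unchecked
// exception that carries its context; all names are made up.
public class AbortSketch {
    static class BuildAborted extends RuntimeException {
        final int decision;
        BuildAborted(int decision) { this.decision = decision; }
    }

    // Pretend closure() detects recursion in more than one alt deep
    // inside the recursion and bails out of the entire build at once.
    static void closure(int depth) {
        if (depth == 0) throw new BuildAborted(3);
        closure(depth - 1);
    }

    public static void main(String[] args) {
        try {
            closure(5);
        } catch (BuildAborted e) {
            // caller falls back (e.g., to backtracking) for this decision
            System.out.println("aborted decision " + e.decision);
        }
    }
}
```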
diff --git a/antlr_3_1_source/analysis/PredicateLabel.java b/antlr_3_1_source/analysis/PredicateLabel.java
new file mode 100644
index 0000000..47595ed
--- /dev/null
+++ b/antlr_3_1_source/analysis/PredicateLabel.java
@@ -0,0 +1,85 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2008 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+import org.antlr.tool.GrammarAST;
+import org.antlr.tool.Grammar;
+
+public class PredicateLabel extends Label {
+ /** A tree of semantic predicates from the grammar AST if label==SEMPRED.
+ * In the NFA, labels will always be exactly one predicate, but the DFA
+ * may have to combine a bunch of them as it collects predicates from
+ * multiple NFA configurations into a single DFA state.
+ */
+ protected SemanticContext semanticContext;
+
+ /** Make a semantic predicate label */
+ public PredicateLabel(GrammarAST predicateASTNode) {
+ super(SEMPRED);
+ this.semanticContext = new SemanticContext.Predicate(predicateASTNode);
+ }
+
+ /** Make a semantic predicate label from an existing semantic context */
+ public PredicateLabel(SemanticContext semCtx) {
+ super(SEMPRED);
+ this.semanticContext = semCtx;
+ }
+
+ public int hashCode() {
+ return semanticContext.hashCode();
+ }
+
+ public boolean equals(Object o) {
+ if ( o==null ) {
+ return false;
+ }
+ if ( this == o ) {
+ return true; // equals if same object
+ }
+ if ( !(o instanceof PredicateLabel) ) {
+ return false;
+ }
+ return semanticContext.equals(((PredicateLabel)o).semanticContext);
+ }
+
+ public boolean isSemanticPredicate() {
+ return true;
+ }
+
+ public SemanticContext getSemanticContext() {
+ return semanticContext;
+ }
+
+ public String toString() {
+ return "{"+semanticContext+"}?";
+ }
+
+ public String toString(Grammar g) {
+ return toString();
+ }
+}
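PredicateLabel above delegates equals()/hashCode() to its SemanticContext, and SemanticContext.Predicate compares by predicate text rather than AST node identity. A hedged sketch of why that matters (toy classes, not ANTLR's): labels that are equal by text collapse in hash-based collections, so identical predicates reached via different NFA paths share one entry.

```java
import java.util.HashSet;
import java.util.Set;

// Toy version of text-based label equality; class names are made up.
public class PredLabelSketch {
    static final class Label {
        final String predText;
        Label(String predText) { this.predText = predText; }
        @Override public boolean equals(Object o) {
            return o instanceof Label && predText.equals(((Label) o).predText);
        }
        @Override public int hashCode() { return predText.hashCode(); }
    }

    public static void main(String[] args) {
        Set<Label> labels = new HashSet<Label>();
        labels.add(new Label("input.LA(1)=='a'"));
        labels.add(new Label("input.LA(1)=='a'")); // same text, different origin
        System.out.println(labels.size()); // prints 1: duplicates collapse
    }
}
```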
diff --git a/antlr_3_1_source/analysis/RuleClosureTransition.java b/antlr_3_1_source/analysis/RuleClosureTransition.java
new file mode 100644
index 0000000..16fd26b
--- /dev/null
+++ b/antlr_3_1_source/analysis/RuleClosureTransition.java
@@ -0,0 +1,55 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+import org.antlr.tool.Grammar;
+import org.antlr.tool.Rule;
+
+/** A transition used to reference another rule. It tracks two targets
+ * really: the actual transition target and the state following the
+ * state that refers to the other rule. Conversion of an NFA that
+ * falls off the end of a rule will be able to figure out who invoked
+ * that rule because of these special transitions.
+ */
+public class RuleClosureTransition extends Transition {
+ /** Ptr to the rule definition object for this rule ref */
+ public Rule rule;
+
+ /** What node to begin computations following ref to rule */
+ public NFAState followState;
+
+ public RuleClosureTransition(Rule rule,
+ NFAState ruleStart,
+ NFAState followState)
+ {
+ super(Label.EPSILON, ruleStart);
+ this.rule = rule;
+ this.followState = followState;
+ }
+}
+
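The two targets RuleClosureTransition tracks can be pictured with a toy model (hypothetical names, not ANTLR's classes): the transition enters the called rule at ruleStart, while followState records where the calling rule resumes once closure "falls off the end" of the called rule.

```java
// Toy model of a rule-reference transition tracking both the entry
// point of the called rule and the caller's resume state.
public class RuleRefSketch {
    static final class State {
        final String name;
        State(String name) { this.name = name; }
    }

    static final class RuleRef {
        final State ruleStart;   // where closure enters the called rule
        final State followState; // where the caller resumes afterwards
        RuleRef(State ruleStart, State followState) {
            this.ruleStart = ruleStart;
            this.followState = followState;
        }
    }

    public static void main(String[] args) {
        // a : X b Y ;   b : Z ;   -- referencing rule b from rule a
        State bStart = new State("b.start");
        State beforeY = new State("a.beforeY");
        RuleRef call = new RuleRef(bStart, beforeY);
        // closure follows call.ruleStart; falling off rule b resumes at followState
        System.out.println(call.ruleStart.name + " ... returns to " + call.followState.name);
    }
}
```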
diff --git a/antlr_3_1_source/analysis/SemanticContext.java b/antlr_3_1_source/analysis/SemanticContext.java
new file mode 100644
index 0000000..010c562
--- /dev/null
+++ b/antlr_3_1_source/analysis/SemanticContext.java
@@ -0,0 +1,486 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+import org.antlr.stringtemplate.StringTemplate;
+import org.antlr.stringtemplate.StringTemplateGroup;
+import org.antlr.codegen.CodeGenerator;
+import org.antlr.tool.ANTLRParser;
+import org.antlr.tool.GrammarAST;
+import org.antlr.tool.Grammar;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.Iterator;
+
+/** A binary tree structure used to record the semantic context in which
+ * an NFA configuration is valid. It's either a single predicate or
+ * a tree representing an operation tree such as: p1&&p2 or p1||p2.
+ *
+ * For NFA o-p1->o-p2->o, create tree AND(p1,p2).
+ * For NFA (1)-p1->(2)
+ * | ^
+ * | |
+ * (3)-p2----
+ * we will have to combine p1 and p2 into DFA state as we will be
+ * adding NFA configurations for state 2 with two predicates p1,p2.
+ * So, set context for combined NFA config for state 2: OR(p1,p2).
+ *
+ * I have scoped the AND, NOT, OR, and Predicate subclasses of
+ * SemanticContext within the scope of this outer class.
+ *
+ * July 7, 2006: TJP altered OR to be a set of operands. The binary tree
+ * made it really hard to reduce complicated || sequences to their minimum.
+ * Got huge repeated || conditions.
+ */
+public abstract class SemanticContext {
+ /** Create a default value for the semantic context shared among all
+ * NFAConfigurations that do not have an actual semantic context.
+ * This prevents lots of if!=null type checks all over; it represents
+ * just an empty set of predicates.
+ */
+ public static final SemanticContext EMPTY_SEMANTIC_CONTEXT = new Predicate();
+
+ /** Given a semantic context expression tree, return a tree with all
+ * nongated predicates set to true and then reduced. So p&&(q||r) would
+ * return p&&r if q is nongated but p and r are gated.
+ */
+ public abstract SemanticContext getGatedPredicateContext();
+
+ /** Generate an expression that will evaluate the semantic context,
+ * given a set of output templates.
+ */
+ public abstract StringTemplate genExpr(CodeGenerator generator,
+ StringTemplateGroup templates,
+ DFA dfa);
+
+ public abstract boolean isSyntacticPredicate();
+
+ /** Notify the indicated grammar of any syn preds used within this context */
+ public void trackUseOfSyntacticPredicates(Grammar g) {
+ }
+
+ public static class Predicate extends SemanticContext {
+ /** The AST node in tree created from the grammar holding the predicate */
+ public GrammarAST predicateAST;
+
+ /** Is this a {...}?=> gating predicate or a normal disambiguating {...}?
+ * If any predicate in the expression is gated, then the whole expression
+ * is considered gated.
+ *
+ * The simple Predicate object's predicate AST's type is used to set
+ * gated to true if type==GATED_SEMPRED.
+ */
+ protected boolean gated = false;
+
+ /** syntactic predicates are converted to semantic predicates
+ * but synpreds are generated slightly differently.
+ */
+ protected boolean synpred = false;
+
+ public static final int INVALID_PRED_VALUE = -1;
+ public static final int FALSE_PRED = 0;
+ public static final int TRUE_PRED = 1;
+
+ /** sometimes predicates are known to be true or false; we need
+ * a way to represent this without resorting to a target language
+ * value like true or TRUE.
+ */
+ protected int constantValue = INVALID_PRED_VALUE;
+
+ public Predicate() {
+ predicateAST = new GrammarAST();
+ this.gated=false;
+ }
+
+ public Predicate(GrammarAST predicate) {
+ this.predicateAST = predicate;
+ this.gated =
+ predicate.getType()==ANTLRParser.GATED_SEMPRED ||
+ predicate.getType()==ANTLRParser.SYN_SEMPRED ;
+ this.synpred =
+ predicate.getType()==ANTLRParser.SYN_SEMPRED ||
+ predicate.getType()==ANTLRParser.BACKTRACK_SEMPRED;
+ }
+
+ public Predicate(Predicate p) {
+ this.predicateAST = p.predicateAST;
+ this.gated = p.gated;
+ this.synpred = p.synpred;
+ this.constantValue = p.constantValue;
+ }
+
+ /** Two predicates are the same if they are literally the same
+ * text rather than same node in the grammar's AST.
+ * Or, if they have the same constant value, return equal.
+ * As of July 2006 I'm not sure these are needed.
+ */
+ public boolean equals(Object o) {
+ if ( !(o instanceof Predicate) ) {
+ return false;
+ }
+ return predicateAST.getText().equals(((Predicate)o).predicateAST.getText());
+ }
+
+ public int hashCode() {
+ if ( predicateAST ==null ) {
+ return 0;
+ }
+ return predicateAST.getText().hashCode();
+ }
+
+ public StringTemplate genExpr(CodeGenerator generator,
+ StringTemplateGroup templates,
+ DFA dfa)
+ {
+ StringTemplate eST = null;
+ if ( templates!=null ) {
+ if ( synpred ) {
+ eST = templates.getInstanceOf("evalSynPredicate");
+ }
+ else {
+ eST = templates.getInstanceOf("evalPredicate");
+ generator.grammar.decisionsWhoseDFAsUsesSemPreds.add(dfa);
+ }
+ String predEnclosingRuleName = predicateAST.enclosingRuleName;
+ /*
+ String decisionEnclosingRuleName =
+ dfa.getNFADecisionStartState().getEnclosingRule();
+ // if these rulenames are diff, then pred was hoisted out of rule
+ // Currently I don't warn you about this as it could be annoying.
+ // I do the translation anyway.
+ */
+ //eST.setAttribute("pred", this.toString());
+ if ( generator!=null ) {
+ eST.setAttribute("pred",
+ generator.translateAction(predEnclosingRuleName,predicateAST));
+ }
+ }
+ else {
+ eST = new StringTemplate("$pred$");
+ eST.setAttribute("pred", this.toString());
+ return eST;
+ }
+ if ( generator!=null ) {
+ String description =
+ generator.target.getTargetStringLiteralFromString(this.toString());
+ eST.setAttribute("description", description);
+ }
+ return eST;
+ }
+
+ public SemanticContext getGatedPredicateContext() {
+ if ( gated ) {
+ return this;
+ }
+ return null;
+ }
+
+ public boolean isSyntacticPredicate() {
+ return predicateAST !=null &&
+ ( predicateAST.getType()==ANTLRParser.SYN_SEMPRED ||
+ predicateAST.getType()==ANTLRParser.BACKTRACK_SEMPRED );
+ }
+
+ public void trackUseOfSyntacticPredicates(Grammar g) {
+ if ( synpred ) {
+ g.synPredNamesUsedInDFA.add(predicateAST.getText());
+ }
+ }
+
+ public String toString() {
+ if ( predicateAST ==null ) {
+ return "";
+ }
+ return predicateAST.getText();
+ }
+ }
+
+ public static class TruePredicate extends Predicate {
+ public TruePredicate() {
+ super();
+ this.constantValue = TRUE_PRED;
+ }
+
+ public StringTemplate genExpr(CodeGenerator generator,
+ StringTemplateGroup templates,
+ DFA dfa)
+ {
+ if ( templates!=null ) {
+ return templates.getInstanceOf("true");
+ }
+ return new StringTemplate("true");
+ }
+
+ public String toString() {
+ return "true"; // not used for code gen, just DOT and print outs
+ }
+ }
+
+ /*
+ public static class FalsePredicate extends Predicate {
+ public FalsePredicate() {
+ super();
+ this.constantValue = FALSE_PRED;
+ }
+ public StringTemplate genExpr(CodeGenerator generator,
+ StringTemplateGroup templates,
+ DFA dfa)
+ {
+ if ( templates!=null ) {
+ return templates.getInstanceOf("false");
+ }
+ return new StringTemplate("false");
+ }
+ public String toString() {
+ return "false"; // not used for code gen, just DOT and print outs
+ }
+ }
+ */
+
+ public static class AND extends SemanticContext {
+ protected SemanticContext left,right;
+ public AND(SemanticContext a, SemanticContext b) {
+ this.left = a;
+ this.right = b;
+ }
+ public StringTemplate genExpr(CodeGenerator generator,
+ StringTemplateGroup templates,
+ DFA dfa)
+ {
+ StringTemplate eST = null;
+ if ( templates!=null ) {
+ eST = templates.getInstanceOf("andPredicates");
+ }
+ else {
+ eST = new StringTemplate("($left$&&$right$)");
+ }
+ eST.setAttribute("left", left.genExpr(generator,templates,dfa));
+ eST.setAttribute("right", right.genExpr(generator,templates,dfa));
+ return eST;
+ }
+ public SemanticContext getGatedPredicateContext() {
+ SemanticContext gatedLeft = left.getGatedPredicateContext();
+ SemanticContext gatedRight = right.getGatedPredicateContext();
+ if ( gatedLeft==null ) {
+ return gatedRight;
+ }
+ if ( gatedRight==null ) {
+ return gatedLeft;
+ }
+ return new AND(gatedLeft, gatedRight);
+ }
+ public boolean isSyntacticPredicate() {
+ return left.isSyntacticPredicate()||right.isSyntacticPredicate();
+ }
+ public void trackUseOfSyntacticPredicates(Grammar g) {
+ left.trackUseOfSyntacticPredicates(g);
+ right.trackUseOfSyntacticPredicates(g);
+ }
+ public String toString() {
+ return "("+left+"&&"+right+")";
+ }
+ }
+
+ public static class OR extends SemanticContext {
+ protected Set operands;
+ public OR(SemanticContext a, SemanticContext b) {
+ operands = new HashSet();
+ if ( a instanceof OR ) {
+ operands.addAll(((OR)a).operands);
+ }
+ else if ( a!=null ) {
+ operands.add(a);
+ }
+ if ( b instanceof OR ) {
+ operands.addAll(((OR)b).operands);
+ }
+ else if ( b!=null ) {
+ operands.add(b);
+ }
+ }
+ public StringTemplate genExpr(CodeGenerator generator,
+ StringTemplateGroup templates,
+ DFA dfa)
+ {
+ StringTemplate eST = null;
+ if ( templates!=null ) {
+ eST = templates.getInstanceOf("orPredicates");
+ }
+ else {
+ eST = new StringTemplate("($first(operands)$$rest(operands):{o | ||$o$}$)");
+ }
+ for (Iterator it = operands.iterator(); it.hasNext();) {
+ SemanticContext semctx = (SemanticContext) it.next();
+ eST.setAttribute("operands", semctx.genExpr(generator,templates,dfa));
+ }
+ return eST;
+ }
+ public SemanticContext getGatedPredicateContext() {
+ SemanticContext result = null;
+ for (Iterator it = operands.iterator(); it.hasNext();) {
+ SemanticContext semctx = (SemanticContext) it.next();
+ SemanticContext gatedPred = semctx.getGatedPredicateContext();
+ if ( gatedPred!=null ) {
+ result = or(result, gatedPred);
+ // result = new OR(result, gatedPred);
+ }
+ }
+ return result;
+ }
+ public boolean isSyntacticPredicate() {
+ for (Iterator it = operands.iterator(); it.hasNext();) {
+ SemanticContext semctx = (SemanticContext) it.next();
+ if ( semctx.isSyntacticPredicate() ) {
+ return true;
+ }
+ }
+ return false;
+ }
+ public void trackUseOfSyntacticPredicates(Grammar g) {
+ for (Iterator it = operands.iterator(); it.hasNext();) {
+ SemanticContext semctx = (SemanticContext) it.next();
+ semctx.trackUseOfSyntacticPredicates(g);
+ }
+ }
+ public String toString() {
+ StringBuffer buf = new StringBuffer();
+ buf.append("(");
+ int i = 0;
+ for (Iterator it = operands.iterator(); it.hasNext();) {
+ SemanticContext semctx = (SemanticContext) it.next();
+ if ( i>0 ) {
+ buf.append("||");
+ }
+ buf.append(semctx.toString());
+ i++;
+ }
+ buf.append(")");
+ return buf.toString();
+ }
+ }
+
+ public static class NOT extends SemanticContext {
+ protected SemanticContext ctx;
+ public NOT(SemanticContext ctx) {
+ this.ctx = ctx;
+ }
+ public StringTemplate genExpr(CodeGenerator generator,
+ StringTemplateGroup templates,
+ DFA dfa)
+ {
+ StringTemplate eST = null;
+ if ( templates!=null ) {
+ eST = templates.getInstanceOf("notPredicate");
+ }
+ else {
+ eST = new StringTemplate("?!($pred$)");
+ }
+ eST.setAttribute("pred", ctx.genExpr(generator,templates,dfa));
+ return eST;
+ }
+ public SemanticContext getGatedPredicateContext() {
+ SemanticContext p = ctx.getGatedPredicateContext();
+ if ( p==null ) {
+ return null;
+ }
+ return new NOT(p);
+ }
+ public boolean isSyntacticPredicate() {
+ return ctx.isSyntacticPredicate();
+ }
+ public void trackUseOfSyntacticPredicates(Grammar g) {
+ ctx.trackUseOfSyntacticPredicates(g);
+ }
+
+ public boolean equals(Object object) {
+ if ( !(object instanceof NOT) ) {
+ return false;
+ }
+ return this.ctx.equals(((NOT)object).ctx);
+ }
+
+ public String toString() {
+ return "!("+ctx+")";
+ }
+ }
+
+ public static SemanticContext and(SemanticContext a, SemanticContext b) {
+ //System.out.println("AND: "+a+"&&"+b);
+ if ( a==EMPTY_SEMANTIC_CONTEXT || a==null ) {
+ return b;
+ }
+ if ( b==EMPTY_SEMANTIC_CONTEXT || b==null ) {
+ return a;
+ }
+ if ( a.equals(b) ) {
+ return a; // if same, just return left one
+ }
+ //System.out.println("## have to AND");
+ return new AND(a,b);
+ }
+
+ public static SemanticContext or(SemanticContext a, SemanticContext b) {
+ //System.out.println("OR: "+a+"||"+b);
+ if ( a==EMPTY_SEMANTIC_CONTEXT || a==null ) {
+ return b;
+ }
+ if ( b==EMPTY_SEMANTIC_CONTEXT || b==null ) {
+ return a;
+ }
+ if ( a instanceof TruePredicate ) {
+ return a;
+ }
+ if ( b instanceof TruePredicate ) {
+ return b;
+ }
+ if ( a instanceof NOT && b instanceof Predicate ) {
+ NOT n = (NOT)a;
+ // check for !p||p
+ if ( n.ctx.equals(b) ) {
+ return new TruePredicate();
+ }
+ }
+ else if ( b instanceof NOT && a instanceof Predicate ) {
+ NOT n = (NOT)b;
+ // check for p||!p
+ if ( n.ctx.equals(a) ) {
+ return new TruePredicate();
+ }
+ }
+ else if ( a.equals(b) ) {
+ return a;
+ }
+ //System.out.println("## have to OR");
+ return new OR(a,b);
+ }
+
+ public static SemanticContext not(SemanticContext a) {
+ return new NOT(a);
+ }
+
+}
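The `and`/`or` factories above simplify before allocating nodes: `EMPTY_SEMANTIC_CONTEXT` (or null) is the identity element, `a op a` collapses by idempotence, and `or` detects the complement law `p||!p`. A minimal standalone sketch of the same rules over plain strings (`PredSimplify` and its methods are hypothetical stand-ins, not ANTLR classes):

```java
// Minimal sketch of the simplification rules used by SemanticContext.and/or.
// Predicates are modelled as plain strings; null stands for EMPTY_SEMANTIC_CONTEXT.
public class PredSimplify {
    public static String not(String p) { return "!(" + p + ")"; }

    public static String and(String a, String b) {
        if (a == null) return b;          // EMPTY && b == b
        if (b == null) return a;          // a && EMPTY == a
        if (a.equals(b)) return a;        // a && a == a (idempotence)
        return "(" + a + "&&" + b + ")";
    }

    public static String or(String a, String b) {
        if (a == null) return b;          // EMPTY || b == b
        if (b == null) return a;          // a || EMPTY == a
        if (a.equals(not(b)) || b.equals(not(a))) return "true"; // p||!p == true
        if (a.equals(b)) return a;        // a || a == a
        return "(" + a + "||" + b + ")";
    }

    public static void main(String[] args) {
        System.out.println(or("p", not("p"))); // complement law collapses to "true"
        System.out.println(and("p", "p"));     // idempotence collapses to "p"
    }
}
```

The real classes do the same folding so later DFA predicate expressions stay small.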
diff --git a/antlr_3_1_source/analysis/State.java b/antlr_3_1_source/analysis/State.java
new file mode 100644
index 0000000..9c56124
--- /dev/null
+++ b/antlr_3_1_source/analysis/State.java
@@ -0,0 +1,54 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+/** A generic state machine state. */
+public abstract class State {
+ public static final int INVALID_STATE_NUMBER = -1;
+
+ public int stateNumber = INVALID_STATE_NUMBER;
+
+    /** An accept state is an end-of-rule state for lexer and
+     *  parser grammar rules.
+     */
+ protected boolean acceptState = false;
+
+ public abstract int getNumberOfTransitions();
+
+ public abstract void addTransition(Transition e);
+
+ public abstract Transition transition(int i);
+
+ public boolean isAcceptState() {
+ return acceptState;
+ }
+
+ public void setAcceptState(boolean acceptState) {
+ this.acceptState = acceptState;
+ }
+}
diff --git a/antlr_3_1_source/analysis/StateCluster.java b/antlr_3_1_source/analysis/StateCluster.java
new file mode 100644
index 0000000..c31e9e2
--- /dev/null
+++ b/antlr_3_1_source/analysis/StateCluster.java
@@ -0,0 +1,41 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+/** A StateCluster points to the left/right (start and end) states of a
+ *  state machine.  Used to build NFAs.
+ */
+public class StateCluster {
+ public NFAState left;
+ public NFAState right;
+
+ public StateCluster(NFAState left, NFAState right) {
+ this.left = left;
+ this.right = right;
+ }
+}
diff --git a/antlr_3_1_source/analysis/Transition.java b/antlr_3_1_source/analysis/Transition.java
new file mode 100644
index 0000000..bc74ecf
--- /dev/null
+++ b/antlr_3_1_source/analysis/Transition.java
@@ -0,0 +1,84 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.analysis;
+
+/** A generic transition between any two state machine states. It defines
+ * some special labels that indicate things like epsilon transitions and
+ * that the label is actually a set of labels or a semantic predicate.
+ * This is a one-way link.  It emanates from a state (usually via a list of
+ * transitions) and has a label/target pair. I have abstracted the notion
+ * of a Label to handle the various kinds of things it can be.
+ */
+public class Transition implements Comparable {
+ /** What label must be consumed to transition to target */
+ public Label label;
+
+ /** The target of this transition */
+ public State target;
+
+ public Transition(Label label, State target) {
+ this.label = label;
+ this.target = target;
+ }
+
+ public Transition(int label, State target) {
+ this.label = new Label(label);
+ this.target = target;
+ }
+
+ public boolean isEpsilon() {
+ return label.isEpsilon();
+ }
+
+ public boolean isAction() {
+ return label.isAction();
+ }
+
+ public boolean isSemanticPredicate() {
+ return label.isSemanticPredicate();
+ }
+
+ public int hashCode() {
+ return label.hashCode() + target.stateNumber;
+ }
+
+ public boolean equals(Object o) {
+ Transition other = (Transition)o;
+ return this.label.equals(other.label) &&
+ this.target.equals(other.target);
+ }
+
+ public int compareTo(Object o) {
+ Transition other = (Transition)o;
+ return this.label.compareTo(other.label);
+ }
+
+ public String toString() {
+ return label+"->"+target.stateNumber;
+ }
+}
diff --git a/antlr_3_1_source/codegen/ACyclicDFACodeGenerator.java b/antlr_3_1_source/codegen/ACyclicDFACodeGenerator.java
new file mode 100644
index 0000000..2a198a4
--- /dev/null
+++ b/antlr_3_1_source/codegen/ACyclicDFACodeGenerator.java
@@ -0,0 +1,190 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.codegen;
+
+import org.antlr.analysis.*;
+import org.antlr.misc.Utils;
+import org.antlr.stringtemplate.StringTemplate;
+import org.antlr.stringtemplate.StringTemplateGroup;
+
+import java.util.List;
+
+public class ACyclicDFACodeGenerator {
+ protected CodeGenerator parentGenerator;
+
+ public ACyclicDFACodeGenerator(CodeGenerator parent) {
+ this.parentGenerator = parent;
+ }
+
+ public StringTemplate genFixedLookaheadDecision(StringTemplateGroup templates,
+ DFA dfa)
+ {
+ return walkFixedDFAGeneratingStateMachine(templates, dfa, dfa.startState, 1);
+ }
+
+ protected StringTemplate walkFixedDFAGeneratingStateMachine(
+ StringTemplateGroup templates,
+ DFA dfa,
+ DFAState s,
+ int k)
+ {
+ //System.out.println("walk "+s.stateNumber+" in dfa for decision "+dfa.decisionNumber);
+ if ( s.isAcceptState() ) {
+ StringTemplate dfaST = templates.getInstanceOf("dfaAcceptState");
+ dfaST.setAttribute("alt", Utils.integer(s.getUniquelyPredictedAlt()));
+ return dfaST;
+ }
+
+ // the default templates for generating a state and its edges
+ // can be an if-then-else structure or a switch
+ String dfaStateName = "dfaState";
+ String dfaLoopbackStateName = "dfaLoopbackState";
+ String dfaOptionalBlockStateName = "dfaOptionalBlockState";
+ String dfaEdgeName = "dfaEdge";
+ if ( parentGenerator.canGenerateSwitch(s) ) {
+ dfaStateName = "dfaStateSwitch";
+ dfaLoopbackStateName = "dfaLoopbackStateSwitch";
+ dfaOptionalBlockStateName = "dfaOptionalBlockStateSwitch";
+ dfaEdgeName = "dfaEdgeSwitch";
+ }
+
+ StringTemplate dfaST = templates.getInstanceOf(dfaStateName);
+ if ( dfa.getNFADecisionStartState().decisionStateType==NFAState.LOOPBACK ) {
+ dfaST = templates.getInstanceOf(dfaLoopbackStateName);
+ }
+ else if ( dfa.getNFADecisionStartState().decisionStateType==NFAState.OPTIONAL_BLOCK_START ) {
+ dfaST = templates.getInstanceOf(dfaOptionalBlockStateName);
+ }
+ dfaST.setAttribute("k", Utils.integer(k));
+ dfaST.setAttribute("stateNumber", Utils.integer(s.stateNumber));
+ dfaST.setAttribute("semPredState",
+ Boolean.valueOf(s.isResolvedWithPredicates()));
+ /*
+ String description = dfa.getNFADecisionStartState().getDescription();
+ description = parentGenerator.target.getTargetStringLiteralFromString(description);
+ //System.out.println("DFA: "+description+" associated with AST "+dfa.getNFADecisionStartState());
+ if ( description!=null ) {
+ dfaST.setAttribute("description", description);
+ }
+ */
+ int EOTPredicts = NFA.INVALID_ALT_NUMBER;
+ DFAState EOTTarget = null;
+ //System.out.println("DFA state "+s.stateNumber);
+ for (int i = 0; i < s.getNumberOfTransitions(); i++) {
+ Transition edge = (Transition) s.transition(i);
+ //System.out.println("edge "+s.stateNumber+"-"+edge.label.toString()+"->"+edge.target.stateNumber);
+ if ( edge.label.getAtom()==Label.EOT ) {
+ // don't generate a real edge for EOT; track alt EOT predicts
+ // generate that prediction in the else clause as default case
+ EOTTarget = (DFAState)edge.target;
+ EOTPredicts = EOTTarget.getUniquelyPredictedAlt();
+ /*
+ System.out.println("DFA s"+s.stateNumber+" EOT goes to s"+
+ edge.target.stateNumber+" predicates alt "+
+ EOTPredicts);
+ */
+ continue;
+ }
+ StringTemplate edgeST = templates.getInstanceOf(dfaEdgeName);
+ // If the template wants all the label values delineated, do that
+ if ( edgeST.getFormalArgument("labels")!=null ) {
+ List labels = edge.label.getSet().toList();
+ for (int j = 0; j < labels.size(); j++) {
+ Integer vI = (Integer) labels.get(j);
+ String label =
+ parentGenerator.getTokenTypeAsTargetLabel(vI.intValue());
+ labels.set(j, label); // rewrite List element to be name
+ }
+ edgeST.setAttribute("labels", labels);
+ }
+ else { // else create an expression to evaluate (the general case)
+ edgeST.setAttribute("labelExpr",
+ parentGenerator.genLabelExpr(templates,edge,k));
+ }
+
+ // stick in any gated predicates for any edge if not already a pred
+ if ( !edge.label.isSemanticPredicate() ) {
+ DFAState target = (DFAState)edge.target;
+ SemanticContext preds =
+ target.getGatedPredicatesInNFAConfigurations();
+ if ( preds!=null ) {
+ //System.out.println("preds="+target.getGatedPredicatesInNFAConfigurations());
+ StringTemplate predST = preds.genExpr(parentGenerator,
+ parentGenerator.getTemplates(),
+ dfa);
+ edgeST.setAttribute("predicates", predST);
+ }
+ }
+
+ StringTemplate targetST =
+ walkFixedDFAGeneratingStateMachine(templates,
+ dfa,
+ (DFAState)edge.target,
+ k+1);
+ edgeST.setAttribute("targetState", targetST);
+ dfaST.setAttribute("edges", edgeST);
+ /*
+ System.out.println("back to DFA "+
+ dfa.decisionNumber+"."+s.stateNumber);
+ */
+ }
+
+ // HANDLE EOT EDGE
+ if ( EOTPredicts!=NFA.INVALID_ALT_NUMBER ) {
+ // EOT unique predicts an alt
+ dfaST.setAttribute("eotPredictsAlt", Utils.integer(EOTPredicts));
+ }
+ else if ( EOTTarget!=null && EOTTarget.getNumberOfTransitions()>0 ) {
+ // EOT state has transitions so must split on predicates.
+ // Generate predicate else-if clauses and then generate
+ // NoViableAlt exception as else clause.
+ // Note: these predicates emanate from the EOT target state
+ // rather than the current DFAState s so the error message
+ // might be slightly misleading if you are looking at the
+ // state number. Predicates emanating from EOT targets are
+ // hoisted up to the state that has the EOT edge.
+ for (int i = 0; i < EOTTarget.getNumberOfTransitions(); i++) {
+ Transition predEdge = (Transition)EOTTarget.transition(i);
+ StringTemplate edgeST = templates.getInstanceOf(dfaEdgeName);
+ edgeST.setAttribute("labelExpr",
+ parentGenerator.genSemanticPredicateExpr(templates,predEdge));
+ // the target must be an accept state
+ //System.out.println("EOT edge");
+ StringTemplate targetST =
+ walkFixedDFAGeneratingStateMachine(templates,
+ dfa,
+ (DFAState)predEdge.target,
+ k+1);
+ edgeST.setAttribute("targetState", targetST);
+ dfaST.setAttribute("edges", edgeST);
+ }
+ }
+ return dfaST;
+ }
+}
+
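Since the DFA here is acyclic, `walkFixedDFAGeneratingStateMachine` can recurse edge-by-edge, incrementing the lookahead depth `k` at each level and nesting the per-edge templates inside the per-state template. A stripped-down sketch of that recursion that emits plain strings instead of StringTemplates (class and field names are illustrative only, not ANTLR's):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: recursively flatten an acyclic DFA into nested if/else text,
// one lookahead depth k per recursion level, mirroring the walk above.
public class FixedDFAWalk {
    public static class S {               // a DFA state
        Integer acceptAlt;                // non-null => accept state predicting this alt
        List<String> labels = new ArrayList<>();
        List<S> targets = new ArrayList<>();
        public S(Integer alt) { acceptAlt = alt; }
        public void edge(String label, S target) { labels.add(label); targets.add(target); }
    }

    public static String walk(S s, int k) {
        if (s.acceptAlt != null) return "alt=" + s.acceptAlt + ";";  // accept state: predict
        StringBuilder buf = new StringBuilder();
        for (int i = 0; i < s.labels.size(); i++) {
            buf.append(i == 0 ? "if" : " else if");
            buf.append(" (LA(" + k + ")==" + s.labels.get(i) + ") {");
            buf.append(walk(s.targets.get(i), k + 1));   // descend one lookahead depth
            buf.append("}");
        }
        return buf.toString();
    }

    // Tiny 2-alt decision: A predicts alt 1; B then C predicts alt 2.
    public static String demo() {
        S a1 = new S(1), a2 = new S(2);
        S mid = new S(null), start = new S(null);
        mid.edge("C", a2);
        start.edge("A", a1);
        start.edge("B", mid);
        return walk(start, 1);
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

The real generator layers switch-vs-if template selection, EOT handling, and gated predicates on top of this same recursive shape.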
diff --git a/antlr_3_1_source/codegen/ANTLRTokenTypes.txt b/antlr_3_1_source/codegen/ANTLRTokenTypes.txt
new file mode 100644
index 0000000..214a287
--- /dev/null
+++ b/antlr_3_1_source/codegen/ANTLRTokenTypes.txt
@@ -0,0 +1,100 @@
+// $ANTLR 2.7.7 (2006-01-29): antlr.g -> ANTLRTokenTypes.txt$
+ANTLR // output token vocab name
+OPTIONS="options"=4
+TOKENS="tokens"=5
+PARSER="parser"=6
+LEXER=7
+RULE=8
+BLOCK=9
+OPTIONAL=10
+CLOSURE=11
+POSITIVE_CLOSURE=12
+SYNPRED=13
+RANGE=14
+CHAR_RANGE=15
+EPSILON=16
+ALT=17
+EOR=18
+EOB=19
+EOA=20
+ID=21
+ARG=22
+ARGLIST=23
+RET=24
+LEXER_GRAMMAR=25
+PARSER_GRAMMAR=26
+TREE_GRAMMAR=27
+COMBINED_GRAMMAR=28
+INITACTION=29
+FORCED_ACTION=30
+LABEL=31
+TEMPLATE=32
+SCOPE="scope"=33
+IMPORT="import"=34
+GATED_SEMPRED=35
+SYN_SEMPRED=36
+BACKTRACK_SEMPRED=37
+FRAGMENT="fragment"=38
+DOT=39
+ACTION=40
+DOC_COMMENT=41
+SEMI=42
+LITERAL_lexer="lexer"=43
+LITERAL_tree="tree"=44
+LITERAL_grammar="grammar"=45
+AMPERSAND=46
+COLON=47
+RCURLY=48
+ASSIGN=49
+STRING_LITERAL=50
+CHAR_LITERAL=51
+INT=52
+STAR=53
+COMMA=54
+TOKEN_REF=55
+LITERAL_protected="protected"=56
+LITERAL_public="public"=57
+LITERAL_private="private"=58
+BANG=59
+ARG_ACTION=60
+LITERAL_returns="returns"=61
+LITERAL_throws="throws"=62
+LPAREN=63
+OR=64
+RPAREN=65
+LITERAL_catch="catch"=66
+LITERAL_finally="finally"=67
+PLUS_ASSIGN=68
+SEMPRED=69
+IMPLIES=70
+ROOT=71
+WILDCARD=72
+RULE_REF=73
+NOT=74
+TREE_BEGIN=75
+QUESTION=76
+PLUS=77
+OPEN_ELEMENT_OPTION=78
+CLOSE_ELEMENT_OPTION=79
+REWRITE=80
+ETC=81
+DOLLAR=82
+DOUBLE_QUOTE_STRING_LITERAL=83
+DOUBLE_ANGLE_STRING_LITERAL=84
+WS=85
+COMMENT=86
+SL_COMMENT=87
+ML_COMMENT=88
+STRAY_BRACKET=89
+ESC=90
+DIGIT=91
+XDIGIT=92
+NESTED_ARG_ACTION=93
+NESTED_ACTION=94
+ACTION_CHAR_LITERAL=95
+ACTION_STRING_LITERAL=96
+ACTION_ESC=97
+WS_LOOP=98
+INTERNAL_RULE_REF=99
+WS_OPT=100
+SRC=101
diff --git a/antlr_3_1_source/codegen/ActionScriptTarget.java b/antlr_3_1_source/codegen/ActionScriptTarget.java
new file mode 100644
index 0000000..f521e5f
--- /dev/null
+++ b/antlr_3_1_source/codegen/ActionScriptTarget.java
@@ -0,0 +1,134 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.codegen;
+
+import org.antlr.Tool;
+import org.antlr.stringtemplate.StringTemplate;
+import org.antlr.tool.Grammar;
+
+public class ActionScriptTarget extends Target {
+
+ public String getTargetCharLiteralFromANTLRCharLiteral(
+ CodeGenerator generator,
+ String literal) {
+
+ int c = Grammar.getCharValueFromGrammarCharLiteral(literal);
+ return String.valueOf(c);
+ }
+
+ public String getTokenTypeAsTargetLabel(CodeGenerator generator,
+ int ttype) {
+        // use ints for predefined types
+ if (ttype >= 0 && ttype <= 3) {
+ return String.valueOf(ttype);
+ }
+
+ String name = generator.grammar.getTokenDisplayName(ttype);
+
+ // If name is a literal, return the token type instead
+ if (name.charAt(0) == '\'') {
+ return String.valueOf(ttype);
+ }
+
+ return name;
+ }
+
+    /**
+     * ActionScript doesn't support Unicode string literals that are considered "illegal"
+     * or are in the surrogate-pair ranges.  For example, U+FFFF will not encode properly,
+     * nor will U+D800.  To keep things as compact as possible we use the following encoding:
+     * if the int is 255 or below, we encode it as a hex escape (\xhh);
+     * if the int is between 256 and 0x7fff, we use a single Unicode escape with the value;
+     * if the int is above 0x7fff, we use a Unicode escape of the form 0x80hh, where hh holds
+     * the high-order bits, followed by \xll, where ll holds the low-order bits of the
+     * 16-bit number.
+     *
+     * Ideally this should be improved at a future date.  The most compact way to encode this
+     * may be a compressed AMF encoding that is embedded using an Embed tag in ActionScript.
+     *
+     * @param v the character value to encode
+     * @return the escaped string for the target
+     */
+ public String encodeIntAsCharEscape(int v) {
+ // encode as hex
+ if ( v<=255 ) {
+ return "\\x"+ Integer.toHexString(v|0x100).substring(1,3);
+ }
+ if (v <= 0x7fff) {
+ String hex = Integer.toHexString(v|0x10000).substring(1,5);
+ return "\\u"+hex;
+ }
+ if (v > 0xffff) {
+ System.err.println("Warning: character literal out of range for ActionScript target " + v);
+ return "";
+ }
+ StringBuffer buf = new StringBuffer("\\u80");
+        buf.append(Integer.toHexString((v >> 8) | 0x100).substring(1, 3)); // high-order bits
+        buf.append("\\x");
+        buf.append(Integer.toHexString((v & 0xff) | 0x100).substring(1, 3)); // low-order bits
+ return buf.toString();
+ }
+
+    /** Convert a long to two 32-bit numbers separated by a comma.
+     *  ActionScript does not support 64-bit numbers, so we need to break
+     *  the number into two 32-bit literals to give to the BitSet.  A number like
+     *  0xHHHHHHHHLLLLLLLL is broken into the following string:
+     *  "0xLLLLLLLL, 0xHHHHHHHH"
+     *  Note that the low-order bits come first, followed by the high-order bits,
+     *  to match how the BitSet constructor works: the bits are passed in
+     *  32-bit chunks with low-order bits coming first.
+     */
+ public String getTarget64BitStringFromValue(long word) {
+ StringBuffer buf = new StringBuffer(22); // enough for the two "0x", "," and " "
+ buf.append("0x");
+ writeHexWithPadding(buf, Integer.toHexString((int)(word & 0x00000000ffffffffL)));
+ buf.append(", 0x");
+ writeHexWithPadding(buf, Integer.toHexString((int)(word >> 32)));
+
+ return buf.toString();
+ }
+
+ private void writeHexWithPadding(StringBuffer buf, String digits) {
+ digits = digits.toUpperCase();
+ int padding = 8 - digits.length();
+ // pad left with zeros
+ for (int i=1; i<=padding; i++) {
+ buf.append('0');
+ }
+ buf.append(digits);
+ }
+
+ protected StringTemplate chooseWhereCyclicDFAsGo(Tool tool,
+ CodeGenerator generator,
+ Grammar grammar,
+ StringTemplate recognizerST,
+ StringTemplate cyclicDFAST) {
+ return recognizerST;
+ }
+}
+
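The zero-padding idiom used in `encodeIntAsCharEscape` above, `(v | 0x100)` followed by `substring(1, 3)`, forces a fixed-width hex string. A standalone sketch reproducing the method's three ranges (the class name `CharEscape` is hypothetical):

```java
// Standalone sketch of ActionScriptTarget.encodeIntAsCharEscape's three ranges.
// The (v | 0x100) then substring(1, 3) idiom zero-pads the hex string to 2 digits.
public class CharEscape {
    public static String encode(int v) {
        if (v <= 255) {
            return "\\x" + Integer.toHexString(v | 0x100).substring(1, 3);
        }
        if (v <= 0x7fff) {
            return "\\u" + Integer.toHexString(v | 0x10000).substring(1, 5);
        }
        if (v > 0xffff) {
            return "";                     // out of range for this target
        }
        // split a 16-bit value: a "u80" escape carries the high byte, an "x" escape the low byte
        return "\\u80" + Integer.toHexString((v >> 8) | 0x100).substring(1, 3)
             + "\\x"   + Integer.toHexString((v & 0xff) | 0x100).substring(1, 3);
    }

    public static void main(String[] args) {
        System.out.println(encode(0x0a));    // prints \x0a
        System.out.println(encode(0x1234));  // prints a 4-digit Unicode escape
        System.out.println(encode(0x8001));  // prints the split two-byte form
    }
}
```

The padding trick avoids manual zero-fill logic for values whose hex form is shorter than the target width.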
diff --git a/antlr_3_1_source/codegen/ActionTranslator.g b/antlr_3_1_source/codegen/ActionTranslator.g
new file mode 100644
index 0000000..e044f27
--- /dev/null
+++ b/antlr_3_1_source/codegen/ActionTranslator.g
@@ -0,0 +1,801 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2008 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+lexer grammar ActionTranslator;
+options {
+ filter=true; // try all non-fragment rules in order specified
+ // output=template; TODO: can we make tokens return templates somehow?
+}
+
+@header {
+package org.antlr.codegen;
+import org.antlr.stringtemplate.StringTemplate;
+import org.antlr.runtime.*;
+import org.antlr.tool.*;
+}
+
+@members {
+public List chunks = new ArrayList();
+Rule enclosingRule;
+int outerAltNum;
+Grammar grammar;
+CodeGenerator generator;
+antlr.Token actionToken;
+
+ public ActionTranslator(CodeGenerator generator,
+ String ruleName,
+ GrammarAST actionAST)
+ {
+ this(new ANTLRStringStream(actionAST.token.getText()));
+ this.generator = generator;
+ this.grammar = generator.grammar;
+ this.enclosingRule = grammar.getLocallyDefinedRule(ruleName);
+ this.actionToken = actionAST.token;
+ this.outerAltNum = actionAST.outerAltNum;
+ }
+
+ public ActionTranslator(CodeGenerator generator,
+ String ruleName,
+ antlr.Token actionToken,
+ int outerAltNum)
+ {
+ this(new ANTLRStringStream(actionToken.getText()));
+ this.generator = generator;
+ grammar = generator.grammar;
+ this.enclosingRule = grammar.getRule(ruleName);
+ this.actionToken = actionToken;
+ this.outerAltNum = outerAltNum;
+ }
+
+/** Return a list of strings and StringTemplate objects that
+ * represent the translated action.
+ */
+public List translateToChunks() {
+ // System.out.println("###\naction="+action);
+ Token t;
+ do {
+ t = nextToken();
+ } while ( t.getType()!= Token.EOF );
+ return chunks;
+}
+
+public String translate() {
+ List theChunks = translateToChunks();
+ //System.out.println("chunks="+a.chunks);
+ StringBuffer buf = new StringBuffer();
+ for (int i = 0; i < theChunks.size(); i++) {
+ Object o = (Object) theChunks.get(i);
+ buf.append(o);
+ }
+ //System.out.println("translated: "+buf.toString());
+ return buf.toString();
+}
+
+public List translateAction(String action) {
+ String rname = null;
+ if ( enclosingRule!=null ) {
+ rname = enclosingRule.name;
+ }
+ ActionTranslator translator =
+ new ActionTranslator(generator,
+ rname,
+ new antlr.CommonToken(ANTLRParser.ACTION,action),outerAltNum);
+ return translator.translateToChunks();
+}
+
+public boolean isTokenRefInAlt(String id) {
+ return enclosingRule.getTokenRefsInAlt(id, outerAltNum)!=null;
+}
+public boolean isRuleRefInAlt(String id) {
+ return enclosingRule.getRuleRefsInAlt(id, outerAltNum)!=null;
+}
+public Grammar.LabelElementPair getElementLabel(String id) {
+ return enclosingRule.getLabel(id);
+}
+
+public void checkElementRefUniqueness(String ref, boolean isToken) {
+ List refs = null;
+ if ( isToken ) {
+ refs = enclosingRule.getTokenRefsInAlt(ref, outerAltNum);
+ }
+ else {
+ refs = enclosingRule.getRuleRefsInAlt(ref, outerAltNum);
+ }
+ if ( refs!=null && refs.size()>1 ) {
+ ErrorManager.grammarError(ErrorManager.MSG_NONUNIQUE_REF,
+ grammar,
+ actionToken,
+ ref);
+ }
+}
+
+/** For \$rulelabel.name, return the Attribute found for name. It
+ * will be a predefined property or a return value.
+ */
+public Attribute getRuleLabelAttribute(String ruleName, String attrName) {
+ Rule r = grammar.getRule(ruleName);
+ AttributeScope scope = r.getLocalAttributeScope(attrName);
+ if ( scope!=null && !scope.isParameterScope ) {
+ return scope.getAttribute(attrName);
+ }
+ return null;
+}
+
+AttributeScope resolveDynamicScope(String scopeName) {
+ if ( grammar.getGlobalScope(scopeName)!=null ) {
+ return grammar.getGlobalScope(scopeName);
+ }
+ Rule scopeRule = grammar.getRule(scopeName);
+ if ( scopeRule!=null ) {
+ return scopeRule.ruleScope;
+ }
+ return null; // not a valid dynamic scope
+}
+
+protected StringTemplate template(String name) {
+ StringTemplate st = generator.getTemplates().getInstanceOf(name);
+ chunks.add(st);
+ return st;
+}
+
+
+}
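`translateToChunks` above drives the filter lexer over the action text; matched rules append plain strings or templates to `chunks`, and `translate` concatenates them. A toy version of the same chunking idea, with a regex standing in for the filter lexer and a made-up `$label.attr` translation (all names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy chunker: split an action like "x = $a.text;" into plain-text chunks
// and translated attribute-reference chunks, then concatenate -- the same
// shape as translateToChunks()/translate() above.
public class ActionChunker {
    static final Pattern ATTR = Pattern.compile("\\$(\\w+)\\.(\\w+)");

    public static List<String> toChunks(String action) {
        List<String> chunks = new ArrayList<>();
        Matcher m = ATTR.matcher(action);
        int last = 0;
        while (m.find()) {
            if (m.start() > last) chunks.add(action.substring(last, m.start()));
            // stand-in translation: $label.attr -> label.getAttr()
            chunks.add(m.group(1) + ".get" + capitalize(m.group(2)) + "()");
            last = m.end();
        }
        if (last < action.length()) chunks.add(action.substring(last));
        return chunks;
    }

    static String capitalize(String s) {
        return Character.toUpperCase(s.charAt(0)) + s.substring(1);
    }

    public static String translate(String action) {
        StringBuilder buf = new StringBuilder();
        for (String c : toChunks(action)) buf.append(c);  // concatenate like translate()
        return buf.toString();
    }

    public static void main(String[] args) {
        System.out.println(translate("int n = $id.text.length();"));
    }
}
```

The real translator keeps StringTemplate objects (not strings) in `chunks` so target-specific templates can render each attribute reference.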
+
+/** $x.y x is enclosing rule, y is a return value, parameter, or
+ * predefined property.
+ *
+ * r[int i] returns [int j]
+ * : {$r.i, $r.j, $r.start, $r.stop, $r.st, $r.tree}
+ * ;
+ */
+SET_ENCLOSING_RULE_SCOPE_ATTR
+ : '$' x=ID '.' y=ID WS? '=' expr=ATTR_VALUE_EXPR ';'
+ {enclosingRule!=null &&
+ $x.text.equals(enclosingRule.name) &&
+ enclosingRule.getLocalAttributeScope($y.text)!=null}?
+ //{System.out.println("found \$rule.attr");}
+ {
+ StringTemplate st = null;
+ AttributeScope scope = enclosingRule.getLocalAttributeScope($y.text);
+ if ( scope.isPredefinedRuleScope ) {
+ if ( $y.text.equals("st") || $y.text.equals("tree") ) {
+ st = template("ruleSetPropertyRef_"+$y.text);
+ grammar.referenceRuleLabelPredefinedAttribute($x.text);
+ st.setAttribute("scope", $x.text);
+ st.setAttribute("attr", $y.text);
+ st.setAttribute("expr", translateAction($expr.text));
+ } else {
+ ErrorManager.grammarError(ErrorManager.MSG_WRITE_TO_READONLY_ATTR,
+ grammar,
+ actionToken,
+ $x.text,
+ $y.text);
+ }
+ }
+ else if ( scope.isPredefinedLexerRuleScope ) {
+ // this is a better message to emit than the previous one...
+ ErrorManager.grammarError(ErrorManager.MSG_WRITE_TO_READONLY_ATTR,
+ grammar,
+ actionToken,
+ $x.text,
+ $y.text);
+ }
+ else if ( scope.isParameterScope ) {
+ st = template("parameterSetAttributeRef");
+ st.setAttribute("attr", scope.getAttribute($y.text));
+ st.setAttribute("expr", translateAction($expr.text));
+ }
+ else { // must be return value
+ st = template("returnSetAttributeRef");
+ st.setAttribute("ruleDescriptor", enclosingRule);
+ st.setAttribute("attr", scope.getAttribute($y.text));
+ st.setAttribute("expr", translateAction($expr.text));
+ }
+ }
+ ;
+ENCLOSING_RULE_SCOPE_ATTR
+ : '$' x=ID '.' y=ID {enclosingRule!=null &&
+ $x.text.equals(enclosingRule.name) &&
+ enclosingRule.getLocalAttributeScope($y.text)!=null}?
+ //{System.out.println("found \$rule.attr");}
+ {
+ if ( isRuleRefInAlt($x.text) ) {
+ ErrorManager.grammarError(ErrorManager.MSG_RULE_REF_AMBIG_WITH_RULE_IN_ALT,
+ grammar,
+ actionToken,
+ $x.text);
+ }
+ StringTemplate st = null;
+ AttributeScope scope = enclosingRule.getLocalAttributeScope($y.text);
+ if ( scope.isPredefinedRuleScope ) {
+ st = template("rulePropertyRef_"+$y.text);
+ grammar.referenceRuleLabelPredefinedAttribute($x.text);
+ st.setAttribute("scope", $x.text);
+ st.setAttribute("attr", $y.text);
+ }
+ else if ( scope.isPredefinedLexerRuleScope ) {
+ // perhaps not the most precise error message to use, but...
+ ErrorManager.grammarError(ErrorManager.MSG_RULE_HAS_NO_ARGS,
+ grammar,
+ actionToken,
+ $x.text);
+ }
+ else if ( scope.isParameterScope ) {
+ st = template("parameterAttributeRef");
+ st.setAttribute("attr", scope.getAttribute($y.text));
+ }
+ else { // must be return value
+ st = template("returnAttributeRef");
+ st.setAttribute("ruleDescriptor", enclosingRule);
+ st.setAttribute("attr", scope.getAttribute($y.text));
+ }
+ }
+ ;
+
+/** Setting $tokenlabel.attr or $tokenref.attr where attr is a predefined property of a token is an error. */
+SET_TOKEN_SCOPE_ATTR
+ : '$' x=ID '.' y=ID WS? '='
+ {enclosingRule!=null && input.LA(1)!='=' &&
+ (enclosingRule.getTokenLabel($x.text)!=null||
+ isTokenRefInAlt($x.text)) &&
+ AttributeScope.tokenScope.getAttribute($y.text)!=null}?
+ //{System.out.println("found \$tokenlabel.attr or \$tokenref.attr");}
+ {
+ ErrorManager.grammarError(ErrorManager.MSG_WRITE_TO_READONLY_ATTR,
+ grammar,
+ actionToken,
+ $x.text,
+ $y.text);
+ }
+ ;
+
+/** $tokenlabel.attr or $tokenref.attr where attr is a predefined property of a token.
+ * In a lexer grammar, only translate for string literals and token (rule) references.
+ */
+TOKEN_SCOPE_ATTR
+ : '$' x=ID '.' y=ID {enclosingRule!=null &&
+ (enclosingRule.getTokenLabel($x.text)!=null||
+ isTokenRefInAlt($x.text)) &&
+ AttributeScope.tokenScope.getAttribute($y.text)!=null &&
+ (grammar.type!=Grammar.LEXER ||
+ getElementLabel($x.text).elementRef.token.getType()==ANTLRParser.TOKEN_REF ||
+ getElementLabel($x.text).elementRef.token.getType()==ANTLRParser.STRING_LITERAL)}?
+ // {System.out.println("found \$tokenlabel.attr or \$tokenref.attr");}
+ {
+ String label = $x.text;
+ if ( enclosingRule.getTokenLabel($x.text)==null ) {
+ // \$tokenref.attr gotta get old label or compute new one
+ checkElementRefUniqueness($x.text, true);
+ label = enclosingRule.getElementLabel($x.text, outerAltNum, generator);
+ if ( label==null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_FORWARD_ELEMENT_REF,
+ grammar,
+ actionToken,
+ "\$"+$x.text+"."+$y.text);
+ label = $x.text;
+ }
+ }
+ StringTemplate st = template("tokenLabelPropertyRef_"+$y.text);
+ st.setAttribute("scope", label);
+ st.setAttribute("attr", AttributeScope.tokenScope.getAttribute($y.text));
+ }
+ ;
+
+/** Setting $rulelabel.attr or $ruleref.attr where attr is a predefined property is an error.
+ * This must also fail if we try to set a field of a local attribute, as in $tree.scope = localObject;
+ * that case must be handled by LOCAL_ATTR below. ANTLR only concerns itself with the top-level
+ * attributes declared in scope {...}, parameters, return values, and the like.
+ */
+SET_RULE_SCOPE_ATTR
+@init {
+Grammar.LabelElementPair pair=null;
+String refdRuleName=null;
+}
+ : '$' x=ID '.' y=ID WS? '=' {enclosingRule!=null && input.LA(1)!='='}?
+ {
+ pair = enclosingRule.getRuleLabel($x.text);
+ refdRuleName = $x.text;
+ if ( pair!=null ) {
+ refdRuleName = pair.referencedRuleName;
+ }
+ }
+ // Supercomplicated because the above action cannot be executed from within the predicate.
+ // The predicate asserts that we proceed only if x is a label or a reference to a rule,
+ // and only if the attribute is valid for that rule's scope.
+ {(enclosingRule.getRuleLabel($x.text)!=null || isRuleRefInAlt($x.text)) &&
+ getRuleLabelAttribute(enclosingRule.getRuleLabel($x.text)!=null?enclosingRule.getRuleLabel($x.text).referencedRuleName:$x.text,$y.text)!=null}?
+ //{System.out.println("found set \$rulelabel.attr or \$ruleref.attr: "+$x.text+"."+$y.text);}
+ {
+ ErrorManager.grammarError(ErrorManager.MSG_WRITE_TO_READONLY_ATTR,
+ grammar,
+ actionToken,
+ $x.text,
+ $y.text);
+ }
+ ;
+
+/** $rulelabel.attr or $ruleref.attr where attr is a predefined property*/
+RULE_SCOPE_ATTR
+@init {
+Grammar.LabelElementPair pair=null;
+String refdRuleName=null;
+}
+ : '$' x=ID '.' y=ID {enclosingRule!=null}?
+ {
+ pair = enclosingRule.getRuleLabel($x.text);
+ refdRuleName = $x.text;
+ if ( pair!=null ) {
+ refdRuleName = pair.referencedRuleName;
+ }
+ }
+ // Supercomplicated because the above action cannot be executed from within the predicate.
+ // The predicate asserts that we proceed only if x is a label or a reference to a rule,
+ // and only if the attribute is valid for that rule's scope.
+ {(enclosingRule.getRuleLabel($x.text)!=null || isRuleRefInAlt($x.text)) &&
+ getRuleLabelAttribute(enclosingRule.getRuleLabel($x.text)!=null?enclosingRule.getRuleLabel($x.text).referencedRuleName:$x.text,$y.text)!=null}?
+ //{System.out.println("found \$rulelabel.attr or \$ruleref.attr: "+$x.text+"."+$y.text);}
+ {
+ String label = $x.text;
+ if ( pair==null ) {
+ // \$ruleref.attr gotta get old label or compute new one
+ checkElementRefUniqueness($x.text, false);
+ label = enclosingRule.getElementLabel($x.text, outerAltNum, generator);
+ if ( label==null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_FORWARD_ELEMENT_REF,
+ grammar,
+ actionToken,
+ "\$"+$x.text+"."+$y.text);
+ label = $x.text;
+ }
+ }
+ StringTemplate st;
+ Rule refdRule = grammar.getRule(refdRuleName);
+ AttributeScope scope = refdRule.getLocalAttributeScope($y.text);
+ if ( scope.isPredefinedRuleScope ) {
+ st = template("ruleLabelPropertyRef_"+$y.text);
+ grammar.referenceRuleLabelPredefinedAttribute(refdRuleName);
+ st.setAttribute("scope", label);
+ st.setAttribute("attr", $y.text);
+ }
+ else if ( scope.isPredefinedLexerRuleScope ) {
+ st = template("lexerRuleLabelPropertyRef_"+$y.text);
+ grammar.referenceRuleLabelPredefinedAttribute(refdRuleName);
+ st.setAttribute("scope", label);
+ st.setAttribute("attr", $y.text);
+ }
+ else if ( scope.isParameterScope ) {
+ // TODO: error!
+ }
+ else {
+ st = template("ruleLabelRef");
+ st.setAttribute("referencedRule", refdRule);
+ st.setAttribute("scope", label);
+ st.setAttribute("attr", scope.getAttribute($y.text));
+ }
+ }
+ ;
+
+
+/** $label either a token label or token/rule list label like label+=expr */
+LABEL_REF
+ : '$' ID {enclosingRule!=null &&
+ getElementLabel($ID.text)!=null &&
+ enclosingRule.getRuleLabel($ID.text)==null}?
+ // {System.out.println("found \$label");}
+ {
+ StringTemplate st;
+ Grammar.LabelElementPair pair = getElementLabel($ID.text);
+ if ( pair.type==Grammar.TOKEN_LABEL ||
+ pair.type==Grammar.CHAR_LABEL )
+ {
+ st = template("tokenLabelRef");
+ }
+ else {
+ st = template("listLabelRef");
+ }
+ st.setAttribute("label", $ID.text);
+ }
+ ;
+
+/** $tokenref in a non-lexer grammar */
+ISOLATED_TOKEN_REF
+ : '$' ID {grammar.type!=Grammar.LEXER && enclosingRule!=null && isTokenRefInAlt($ID.text)}?
+ //{System.out.println("found \$tokenref");}
+ {
+ String label = enclosingRule.getElementLabel($ID.text, outerAltNum, generator);
+ checkElementRefUniqueness($ID.text, true);
+ if ( label==null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_FORWARD_ELEMENT_REF,
+ grammar,
+ actionToken,
+ $ID.text);
+ }
+ else {
+ StringTemplate st = template("tokenLabelRef");
+ st.setAttribute("label", label);
+ }
+ }
+ ;
+
+/** $lexerruleref from within the lexer */
+ISOLATED_LEXER_RULE_REF
+ : '$' ID {grammar.type==Grammar.LEXER &&
+ enclosingRule!=null &&
+ isRuleRefInAlt($ID.text)}?
+ //{System.out.println("found \$lexerruleref");}
+ {
+ String label = enclosingRule.getElementLabel($ID.text, outerAltNum, generator);
+ checkElementRefUniqueness($ID.text, false);
+ if ( label==null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_FORWARD_ELEMENT_REF,
+ grammar,
+ actionToken,
+ $ID.text);
+ }
+ else {
+ StringTemplate st = template("lexerRuleLabel");
+ st.setAttribute("label", label);
+ }
+ }
+ ;
+
+/** $y return value, parameter, predefined rule property, or token/rule
+ * reference within enclosing rule's outermost alt.
+ * y must be a "local" reference; i.e., it must be referring to
+ * something defined within the enclosing rule.
+ *
+ * r[int i] returns [int j]
+ * : {$i, $j, $start, $stop, $st, $tree}
+ * ;
+ *
+ * TODO: this might get the dynamic scope's elements too!
+ */
+SET_LOCAL_ATTR
+ : '$' ID WS? '=' expr=ATTR_VALUE_EXPR ';' {enclosingRule!=null
+ && enclosingRule.getLocalAttributeScope($ID.text)!=null
+ && !enclosingRule.getLocalAttributeScope($ID.text).isPredefinedLexerRuleScope}?
+ //{System.out.println("found set \$localattr");}
+ {
+ StringTemplate st;
+ AttributeScope scope = enclosingRule.getLocalAttributeScope($ID.text);
+ if ( scope.isPredefinedRuleScope ) {
+ if ($ID.text.equals("tree") || $ID.text.equals("st")) {
+ st = template("ruleSetPropertyRef_"+$ID.text);
+ grammar.referenceRuleLabelPredefinedAttribute(enclosingRule.name);
+ st.setAttribute("scope", enclosingRule.name);
+ st.setAttribute("attr", $ID.text);
+ st.setAttribute("expr", translateAction($expr.text));
+ } else {
+ ErrorManager.grammarError(ErrorManager.MSG_WRITE_TO_READONLY_ATTR,
+ grammar,
+ actionToken,
+ $ID.text,
+ "");
+ }
+ }
+ else if ( scope.isParameterScope ) {
+ st = template("parameterSetAttributeRef");
+ st.setAttribute("attr", scope.getAttribute($ID.text));
+ st.setAttribute("expr", translateAction($expr.text));
+ }
+ else {
+ st = template("returnSetAttributeRef");
+ st.setAttribute("ruleDescriptor", enclosingRule);
+ st.setAttribute("attr", scope.getAttribute($ID.text));
+ st.setAttribute("expr", translateAction($expr.text));
+ }
+ }
+ ;
+LOCAL_ATTR
+ : '$' ID {enclosingRule!=null && enclosingRule.getLocalAttributeScope($ID.text)!=null}?
+ //{System.out.println("found \$localattr");}
+ {
+ StringTemplate st;
+ AttributeScope scope = enclosingRule.getLocalAttributeScope($ID.text);
+ if ( scope.isPredefinedRuleScope ) {
+ st = template("rulePropertyRef_"+$ID.text);
+ grammar.referenceRuleLabelPredefinedAttribute(enclosingRule.name);
+ st.setAttribute("scope", enclosingRule.name);
+ st.setAttribute("attr", $ID.text);
+ }
+ else if ( scope.isPredefinedLexerRuleScope ) {
+ st = template("lexerRulePropertyRef_"+$ID.text);
+ st.setAttribute("scope", enclosingRule.name);
+ st.setAttribute("attr", $ID.text);
+ }
+ else if ( scope.isParameterScope ) {
+ st = template("parameterAttributeRef");
+ st.setAttribute("attr", scope.getAttribute($ID.text));
+ }
+ else {
+ st = template("returnAttributeRef");
+ st.setAttribute("ruleDescriptor", enclosingRule);
+ st.setAttribute("attr", scope.getAttribute($ID.text));
+ }
+ }
+ ;
+
+/** $x::y the only way to access the attributes within a dynamic scope
+ * regardless of whether or not you are in the defining rule.
+ *
+ * scope Symbols { List names; }
+ * r
+ * scope {int i;}
+ * scope Symbols;
+ * : {$r::i=3;} s {$Symbols::names;}
+ * ;
+ * s : {$r::i; $Symbols::names;}
+ * ;
+ */
+SET_DYNAMIC_SCOPE_ATTR
+ : '$' x=ID '::' y=ID WS? '=' expr=ATTR_VALUE_EXPR ';'
+ {resolveDynamicScope($x.text)!=null &&
+ resolveDynamicScope($x.text).getAttribute($y.text)!=null}?
+ //{System.out.println("found set \$scope::attr "+ $x.text + "::" + $y.text + " to " + $expr.text);}
+ {
+ AttributeScope scope = resolveDynamicScope($x.text);
+ if ( scope!=null ) {
+ StringTemplate st = template("scopeSetAttributeRef");
+ st.setAttribute("scope", $x.text);
+ st.setAttribute("attr", scope.getAttribute($y.text));
+ st.setAttribute("expr", translateAction($expr.text));
+ }
+ else {
+ // error: invalid dynamic attribute
+ }
+ }
+ ;
+
+DYNAMIC_SCOPE_ATTR
+ : '$' x=ID '::' y=ID
+ {resolveDynamicScope($x.text)!=null &&
+ resolveDynamicScope($x.text).getAttribute($y.text)!=null}?
+ //{System.out.println("found \$scope::attr "+ $x.text + "::" + $y.text);}
+ {
+ AttributeScope scope = resolveDynamicScope($x.text);
+ if ( scope!=null ) {
+ StringTemplate st = template("scopeAttributeRef");
+ st.setAttribute("scope", $x.text);
+ st.setAttribute("attr", scope.getAttribute($y.text));
+ }
+ else {
+ // error: invalid dynamic attribute
+ }
+ }
+ ;
+
+
+ERROR_SCOPED_XY
+ : '$' x=ID '::' y=ID
+ {
+ chunks.add(getText());
+ generator.issueInvalidScopeError($x.text,$y.text,
+ enclosingRule,actionToken,
+ outerAltNum);
+ }
+ ;
+
+/** To access deeper (than top of stack) scopes, use the notation:
+ *
+ * $x[-1]::y previous (just under top of stack)
+ * $x[-i]::y top of stack - i where the '-' MUST BE PRESENT;
+ * i.e., i cannot simply be negative without the '-' sign!
+ * $x[i]::y absolute index i (0..size-1)
+ * $x[0]::y is the absolute 0 indexed element (bottom of the stack)
+ */
+DYNAMIC_NEGATIVE_INDEXED_SCOPE_ATTR
+ : '$' x=ID '[' '-' expr=SCOPE_INDEX_EXPR ']' '::' y=ID
+ // {System.out.println("found \$scope[-...]::attr");}
+ {
+ StringTemplate st = template("scopeAttributeRef");
+ st.setAttribute("scope", $x.text);
+ st.setAttribute("attr", resolveDynamicScope($x.text).getAttribute($y.text));
+ st.setAttribute("negIndex", $expr.text);
+ }
+ ;
+
+DYNAMIC_ABSOLUTE_INDEXED_SCOPE_ATTR
+ : '$' x=ID '[' expr=SCOPE_INDEX_EXPR ']' '::' y=ID
+ // {System.out.println("found \$scope[...]::attr");}
+ {
+ StringTemplate st = template("scopeAttributeRef");
+ st.setAttribute("scope", $x.text);
+ st.setAttribute("attr", resolveDynamicScope($x.text).getAttribute($y.text));
+ st.setAttribute("index", $expr.text);
+ }
+ ;
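The indexing rules in the comment above can be modeled with a tiny stand-alone sketch (the `ScopeStack` class here is hypothetical, not part of the ANTLR runtime): a negative index counts back from the top of the scope stack, while a non-negative index is absolute from the bottom.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the $x[i]::y and $x[-i]::y indexing described above:
// a dynamic scope is a stack where non-negative index i is absolute
// (0 = bottom) and -i counts back from the top (-1 = just under the top).
public class ScopeStack {
    private final List<String> frames = new ArrayList<>();

    public void push(String frame) { frames.add(frame); }

    /** Resolve a scope index: -i means "top of stack minus i";
     *  a non-negative i is absolute, 0 being the bottom of the stack. */
    public String resolve(int index) {
        int absolute = index < 0 ? frames.size() - 1 + index : index;
        return frames.get(absolute);
    }
}
```

With frames pushed bottom-to-top, `resolve(-1)` yields the frame just under the top of the stack, matching the `$x[-1]::y` semantics stated in the comment.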
+
+fragment
+SCOPE_INDEX_EXPR
+ : (~']')+
+ ;
+
+/** $x where x names a rule's dynamic scope or a global shared scope.
+ * An isolated $rulename is not allowed unless that rule has a dynamic scope *and*
+ * there is no reference to rulename in the enclosing alternative,
+ * which would be ambiguous. See TestAttributes.testAmbiguousRuleRef().
+ */
+ISOLATED_DYNAMIC_SCOPE
+ : '$' ID {resolveDynamicScope($ID.text)!=null}?
+ // {System.out.println("found isolated \$scope where scope is a dynamic scope");}
+ {
+ StringTemplate st = template("isolatedDynamicScopeRef");
+ st.setAttribute("scope", $ID.text);
+ }
+ ;
+
+// antlr.g then codegen.g does these first two currently.
+// don't want to duplicate that code.
+
+/** %foo(a={},b={},...) ctor */
+TEMPLATE_INSTANCE
+ : '%' ID '(' ( WS? ARG (',' WS? ARG)* WS? )? ')'
+ // {System.out.println("found \%foo(args)");}
+ {
+ String action = getText().substring(1,getText().length());
+ String ruleName = "";
+ if ( enclosingRule!=null ) {
+ ruleName = enclosingRule.name;
+ }
+ StringTemplate st =
+ generator.translateTemplateConstructor(ruleName,
+ outerAltNum,
+ actionToken,
+ action);
+ if ( st!=null ) {
+ chunks.add(st);
+ }
+ }
+ ;
+
+/** %({name-expr})(a={},...) indirect template ctor reference */
+INDIRECT_TEMPLATE_INSTANCE
+ : '%' '(' ACTION ')' '(' ( WS? ARG (',' WS? ARG)* WS? )? ')'
+ // {System.out.println("found \%({...})(args)");}
+ {
+ String action = getText().substring(1,getText().length());
+ StringTemplate st =
+ generator.translateTemplateConstructor(enclosingRule.name,
+ outerAltNum,
+ actionToken,
+ action);
+ chunks.add(st);
+ }
+ ;
+
+fragment
+ARG : ID '=' ACTION
+ ;
+
+/** %{expr}.y = z; template attribute y of StringTemplate-typed expr to z */
+SET_EXPR_ATTRIBUTE
+ : '%' a=ACTION '.' ID WS? '=' expr=ATTR_VALUE_EXPR ';'
+ // {System.out.println("found \%{expr}.y = z;");}
+ {
+ StringTemplate st = template("actionSetAttribute");
+ String action = $a.text;
+ action = action.substring(1,action.length()-1); // stuff inside {...}
+ st.setAttribute("st", translateAction(action));
+ st.setAttribute("attrName", $ID.text);
+ st.setAttribute("expr", translateAction($expr.text));
+ }
+ ;
+
+/** %x.y = z; set template attribute y of x (always a set, never a get) to z.
+ * [Languages like Python that don't use ';' must still supply it here; the
+ * code generator is free to remove it during code generation.]
+ */
+SET_ATTRIBUTE
+ : '%' x=ID '.' y=ID WS? '=' expr=ATTR_VALUE_EXPR ';'
+ // {System.out.println("found \%x.y = z;");}
+ {
+ StringTemplate st = template("actionSetAttribute");
+ st.setAttribute("st", $x.text);
+ st.setAttribute("attrName", $y.text);
+ st.setAttribute("expr", translateAction($expr.text));
+ }
+ ;
+
+/** Don't allow an = as first char to prevent $x == 3; kind of stuff. */
+fragment
+ATTR_VALUE_EXPR
+ : ~'=' (~';')*
+ ;
+
+/** %{string-expr} anonymous template from string expr */
+TEMPLATE_EXPR
+ : '%' a=ACTION
+ // {System.out.println("found \%{expr}");}
+ {
+ StringTemplate st = template("actionStringConstructor");
+ String action = $a.text;
+ action = action.substring(1,action.length()-1); // stuff inside {...}
+ st.setAttribute("stringExpr", translateAction(action));
+ }
+ ;
+
+fragment
+ACTION
+ : '{' (options {greedy=false;}:.)* '}'
+ ;
+
+ESC : '\\' '$' {chunks.add("\$");}
+ | '\\' '%' {chunks.add("\%");}
+ | '\\' ~('$'|'%') {chunks.add(getText());}
+ ;
+
+ERROR_XY
+ : '$' x=ID '.' y=ID
+ {
+ chunks.add(getText());
+ generator.issueInvalidAttributeError($x.text,$y.text,
+ enclosingRule,actionToken,
+ outerAltNum);
+ }
+ ;
+
+ERROR_X
+ : '$' x=ID
+ {
+ chunks.add(getText());
+ generator.issueInvalidAttributeError($x.text,
+ enclosingRule,actionToken,
+ outerAltNum);
+ }
+ ;
+
+UNKNOWN_SYNTAX
+ : '$'
+ {
+ chunks.add(getText());
+ // shouldn't need an error here. Just accept \$ if it doesn't look like anything
+ }
+ | '%' (ID|'.'|'('|')'|','|'{'|'}'|'"')*
+ {
+ chunks.add(getText());
+ ErrorManager.grammarError(ErrorManager.MSG_INVALID_TEMPLATE_ACTION,
+ grammar,
+ actionToken,
+ getText());
+ }
+ ;
+
+TEXT: ~('$'|'%'|'\\')+ {chunks.add(getText());}
+ ;
+
+fragment
+ID : ('a'..'z'|'A'..'Z'|'_') ('a'..'z'|'A'..'Z'|'_'|'0'..'9')*
+ ;
+
+fragment
+INT : '0'..'9'+
+ ;
+
+fragment
+WS : (' '|'\t'|'\n'|'\r')+
+ ;
diff --git a/antlr_3_1_source/codegen/ActionTranslator.java b/antlr_3_1_source/codegen/ActionTranslator.java
new file mode 100644
index 0000000..3d2420c
--- /dev/null
+++ b/antlr_3_1_source/codegen/ActionTranslator.java
@@ -0,0 +1,3538 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2008 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+// $ANTLR 3.1b1 ActionTranslator.g 2008-05-01 15:02:49
+
+package org.antlr.codegen;
+import org.antlr.stringtemplate.StringTemplate;
+import org.antlr.runtime.*;
+import org.antlr.tool.*;
+
+
+import java.util.Stack;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+public class ActionTranslator extends Lexer {
+ public static final int LOCAL_ATTR=17;
+ public static final int SET_DYNAMIC_SCOPE_ATTR=18;
+ public static final int ISOLATED_DYNAMIC_SCOPE=24;
+ public static final int WS=5;
+ public static final int UNKNOWN_SYNTAX=35;
+ public static final int DYNAMIC_ABSOLUTE_INDEXED_SCOPE_ATTR=23;
+ public static final int SCOPE_INDEX_EXPR=21;
+ public static final int DYNAMIC_SCOPE_ATTR=19;
+ public static final int ISOLATED_TOKEN_REF=14;
+ public static final int SET_ATTRIBUTE=30;
+ public static final int SET_EXPR_ATTRIBUTE=29;
+ public static final int ACTION=27;
+ public static final int ERROR_X=34;
+ public static final int TEMPLATE_INSTANCE=26;
+ public static final int TOKEN_SCOPE_ATTR=10;
+ public static final int ISOLATED_LEXER_RULE_REF=15;
+ public static final int ESC=32;
+ public static final int SET_ENCLOSING_RULE_SCOPE_ATTR=7;
+ public static final int ATTR_VALUE_EXPR=6;
+ public static final int RULE_SCOPE_ATTR=12;
+ public static final int LABEL_REF=13;
+ public static final int INT=37;
+ public static final int ARG=25;
+ public static final int EOF=-1;
+ public static final int SET_LOCAL_ATTR=16;
+ public static final int TEXT=36;
+ public static final int DYNAMIC_NEGATIVE_INDEXED_SCOPE_ATTR=22;
+ public static final int SET_TOKEN_SCOPE_ATTR=9;
+ public static final int ERROR_SCOPED_XY=20;
+ public static final int SET_RULE_SCOPE_ATTR=11;
+ public static final int ENCLOSING_RULE_SCOPE_ATTR=8;
+ public static final int ERROR_XY=33;
+ public static final int TEMPLATE_EXPR=31;
+ public static final int INDIRECT_TEMPLATE_INSTANCE=28;
+ public static final int ID=4;
+
+ public List chunks = new ArrayList();
+ Rule enclosingRule;
+ int outerAltNum;
+ Grammar grammar;
+ CodeGenerator generator;
+ antlr.Token actionToken;
+
+ public ActionTranslator(CodeGenerator generator,
+ String ruleName,
+ GrammarAST actionAST)
+ {
+ this(new ANTLRStringStream(actionAST.token.getText()));
+ this.generator = generator;
+ this.grammar = generator.grammar;
+ this.enclosingRule = grammar.getLocallyDefinedRule(ruleName);
+ this.actionToken = actionAST.token;
+ this.outerAltNum = actionAST.outerAltNum;
+ }
+
+ public ActionTranslator(CodeGenerator generator,
+ String ruleName,
+ antlr.Token actionToken,
+ int outerAltNum)
+ {
+ this(new ANTLRStringStream(actionToken.getText()));
+ this.generator = generator;
+ grammar = generator.grammar;
+ this.enclosingRule = grammar.getRule(ruleName);
+ this.actionToken = actionToken;
+ this.outerAltNum = outerAltNum;
+ }
+
+ /** Return a list of strings and StringTemplate objects that
+ * represent the translated action.
+ */
+ public List translateToChunks() {
+ // System.out.println("###\naction="+action);
+ Token t;
+ do {
+ t = nextToken();
+ } while ( t.getType()!= Token.EOF );
+ return chunks;
+ }
+
+ public String translate() {
+ List theChunks = translateToChunks();
+ //System.out.println("chunks="+a.chunks);
+ StringBuffer buf = new StringBuffer();
+ for (int i = 0; i < theChunks.size(); i++) {
+ Object o = theChunks.get(i);
+ buf.append(o);
+ }
+ //System.out.println("translated: "+buf.toString());
+ return buf.toString();
+ }
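The chunk model that `translateToChunks()` and `translate()` rely on can be sketched independently of the generated lexer (the `ChunkJoin` helper below is hypothetical): the translator emits a mixed list of plain `String`s and template objects, and rendering is just in-order `toString()` concatenation.

```java
import java.util.List;

// Hypothetical sketch of the chunk model behind translateToChunks() and
// translate(): chunks are either plain Strings (untranslated action text)
// or template objects; the translated action is their concatenation.
public class ChunkJoin {
    /** Mirrors translate(): append each chunk's string form in order. */
    public static String join(List<?> chunks) {
        StringBuilder buf = new StringBuilder();
        for (Object chunk : chunks) {
            buf.append(chunk); // StringTemplate chunks would render here via toString()
        }
        return buf.toString();
    }
}
```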
+
+ public List translateAction(String action) {
+ String rname = null;
+ if ( enclosingRule!=null ) {
+ rname = enclosingRule.name;
+ }
+ ActionTranslator translator =
+ new ActionTranslator(generator,
+ rname,
+ new antlr.CommonToken(ANTLRParser.ACTION,action),outerAltNum);
+ return translator.translateToChunks();
+ }
+
+ public boolean isTokenRefInAlt(String id) {
+ return enclosingRule.getTokenRefsInAlt(id, outerAltNum)!=null;
+ }
+ public boolean isRuleRefInAlt(String id) {
+ return enclosingRule.getRuleRefsInAlt(id, outerAltNum)!=null;
+ }
+ public Grammar.LabelElementPair getElementLabel(String id) {
+ return enclosingRule.getLabel(id);
+ }
+
+ public void checkElementRefUniqueness(String ref, boolean isToken) {
+ List refs = null;
+ if ( isToken ) {
+ refs = enclosingRule.getTokenRefsInAlt(ref, outerAltNum);
+ }
+ else {
+ refs = enclosingRule.getRuleRefsInAlt(ref, outerAltNum);
+ }
+ if ( refs!=null && refs.size()>1 ) {
+ ErrorManager.grammarError(ErrorManager.MSG_NONUNIQUE_REF,
+ grammar,
+ actionToken,
+ ref);
+ }
+ }
+
+ /** For $rulelabel.name, return the Attribute found for name. It
+ * will be a predefined property or a return value.
+ */
+ public Attribute getRuleLabelAttribute(String ruleName, String attrName) {
+ Rule r = grammar.getRule(ruleName);
+ AttributeScope scope = r.getLocalAttributeScope(attrName);
+ if ( scope!=null && !scope.isParameterScope ) {
+ return scope.getAttribute(attrName);
+ }
+ return null;
+ }
+
+ AttributeScope resolveDynamicScope(String scopeName) {
+ if ( grammar.getGlobalScope(scopeName)!=null ) {
+ return grammar.getGlobalScope(scopeName);
+ }
+ Rule scopeRule = grammar.getRule(scopeName);
+ if ( scopeRule!=null ) {
+ return scopeRule.ruleScope;
+ }
+ return null; // not a valid dynamic scope
+ }
+
+ protected StringTemplate template(String name) {
+ StringTemplate st = generator.getTemplates().getInstanceOf(name);
+ chunks.add(st);
+ return st;
+ }
+
+
+
+
+ // delegates
+ // delegators
+
+ public ActionTranslator() {;}
+ public ActionTranslator(CharStream input) {
+ this(input, new RecognizerSharedState());
+ }
+ public ActionTranslator(CharStream input, RecognizerSharedState state) {
+ super(input,state);
+
+ }
+ public String getGrammarFileName() { return "ActionTranslator.g"; }
+
+ public Token nextToken() {
+ while (true) {
+ if ( input.LA(1)==CharStream.EOF ) {
+ return Token.EOF_TOKEN;
+ }
+ state.token = null;
+ state.channel = Token.DEFAULT_CHANNEL;
+ state.tokenStartCharIndex = input.index();
+ state.tokenStartCharPositionInLine = input.getCharPositionInLine();
+ state.tokenStartLine = input.getLine();
+ state.text = null;
+ try {
+ int m = input.mark();
+ state.backtracking=1;
+ state.failed=false;
+ mTokens();
+ state.backtracking=0;
+
+ if ( state.failed ) {
+ input.rewind(m);
+ input.consume();
+ }
+ else {
+ emit();
+ return state.token;
+ }
+ }
+ catch (RecognitionException re) {
+ // shouldn't happen in backtracking mode, but...
+ reportError(re);
+ recover(re);
+ }
+ }
+ }
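The mark/try/rewind strategy in `nextToken()` — speculatively attempt a token rule, and on failure rewind to the mark and consume a single character — can be illustrated with a self-contained sketch (the `SpeculativeScanner` class is hypothetical and uses a plain `String` instead of ANTLR's `CharStream`):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the mark/try/rewind loop in nextToken():
// speculatively attempt a match; on failure, rewind to the mark and
// consume one character so unmatched text simply falls through.
public class SpeculativeScanner {
    private final String input;
    private int pos;

    public SpeculativeScanner(String input) { this.input = input; }

    /** Try to match a '$'-prefixed identifier; rewind on failure. */
    private String tryDollarId() {
        int mark = pos; // like input.mark()
        if (pos < input.length() && input.charAt(pos) == '$') {
            int start = ++pos;
            while (pos < input.length() && Character.isLetter(input.charAt(pos))) pos++;
            if (pos > start) return input.substring(mark, pos);
        }
        pos = mark; // failed speculation: like input.rewind(m)
        return null;
    }

    /** Collect every $id, consuming a single char after each failed attempt. */
    public List<String> scan() {
        List<String> refs = new ArrayList<>();
        while (pos < input.length()) {
            String tok = tryDollarId();
            if (tok != null) refs.add(tok);
            else pos++; // like input.consume() after a failed match
        }
        return refs;
    }
}
```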
+
+ public void memoize(IntStream input,
+ int ruleIndex,
+ int ruleStartIndex)
+ {
+ if ( state.backtracking>1 ) super.memoize(input, ruleIndex, ruleStartIndex);
+ }
+
+ public boolean alreadyParsedRule(IntStream input, int ruleIndex) {
+ if ( state.backtracking>1 ) return super.alreadyParsedRule(input, ruleIndex);
+ return false;
+ }
+ // $ANTLR start SET_ENCLOSING_RULE_SCOPE_ATTR
+ public final void mSET_ENCLOSING_RULE_SCOPE_ATTR() throws RecognitionException {
+ try {
+ int _type = SET_ENCLOSING_RULE_SCOPE_ATTR;
+ Token x=null;
+ Token y=null;
+ Token expr=null;
+
+ // ActionTranslator.g:177:2: ( '$' x= ID '.' y= ID ( WS )? '=' expr= ATTR_VALUE_EXPR ';' {...}?)
+ // ActionTranslator.g:177:4: '$' x= ID '.' y= ID ( WS )? '=' expr= ATTR_VALUE_EXPR ';' {...}?
+ {
+ match('$'); if (state.failed) return ;
+ int xStart50 = getCharIndex();
+ mID(); if (state.failed) return ;
+ x = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, xStart50, getCharIndex()-1);
+ match('.'); if (state.failed) return ;
+ int yStart56 = getCharIndex();
+ mID(); if (state.failed) return ;
+ y = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, yStart56, getCharIndex()-1);
+ // ActionTranslator.g:177:22: ( WS )?
+ int alt1=2;
+ int LA1_0 = input.LA(1);
+
+ if ( ((LA1_0>='\t' && LA1_0<='\n')||LA1_0=='\r'||LA1_0==' ') ) {
+ alt1=1;
+ }
+ switch (alt1) {
+ case 1 :
+ // ActionTranslator.g:177:22: WS
+ {
+ mWS(); if (state.failed) return ;
+
+
+ }
+ break;
+
+ }
+
+ match('='); if (state.failed) return ;
+ int exprStart65 = getCharIndex();
+ mATTR_VALUE_EXPR(); if (state.failed) return ;
+ expr = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, exprStart65, getCharIndex()-1);
+ match(';'); if (state.failed) return ;
+ if ( !(enclosingRule!=null &&
+ (x!=null?x.getText():null).equals(enclosingRule.name) &&
+ enclosingRule.getLocalAttributeScope((y!=null?y.getText():null))!=null) ) {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ throw new FailedPredicateException(input, "SET_ENCLOSING_RULE_SCOPE_ATTR", "enclosingRule!=null &&\n\t $x.text.equals(enclosingRule.name) &&\n\t enclosingRule.getLocalAttributeScope($y.text)!=null");
+ }
+ if ( state.backtracking==1 ) {
+
+ StringTemplate st = null;
+ AttributeScope scope = enclosingRule.getLocalAttributeScope((y!=null?y.getText():null));
+ if ( scope.isPredefinedRuleScope ) {
+ if ( (y!=null?y.getText():null).equals("st") || (y!=null?y.getText():null).equals("tree") ) {
+ st = template("ruleSetPropertyRef_"+(y!=null?y.getText():null));
+ grammar.referenceRuleLabelPredefinedAttribute((x!=null?x.getText():null));
+ st.setAttribute("scope", (x!=null?x.getText():null));
+ st.setAttribute("attr", (y!=null?y.getText():null));
+ st.setAttribute("expr", translateAction((expr!=null?expr.getText():null)));
+ } else {
+ ErrorManager.grammarError(ErrorManager.MSG_WRITE_TO_READONLY_ATTR,
+ grammar,
+ actionToken,
+ (x!=null?x.getText():null),
+ (y!=null?y.getText():null));
+ }
+ }
+ else if ( scope.isPredefinedLexerRuleScope ) {
+ // this is a better message to emit than the previous one...
+ ErrorManager.grammarError(ErrorManager.MSG_WRITE_TO_READONLY_ATTR,
+ grammar,
+ actionToken,
+ (x!=null?x.getText():null),
+ (y!=null?y.getText():null));
+ }
+ else if ( scope.isParameterScope ) {
+ st = template("parameterSetAttributeRef");
+ st.setAttribute("attr", scope.getAttribute((y!=null?y.getText():null)));
+ st.setAttribute("expr", translateAction((expr!=null?expr.getText():null)));
+ }
+ else { // must be return value
+ st = template("returnSetAttributeRef");
+ st.setAttribute("ruleDescriptor", enclosingRule);
+ st.setAttribute("attr", scope.getAttribute((y!=null?y.getText():null)));
+ st.setAttribute("expr", translateAction((expr!=null?expr.getText():null)));
+ }
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end SET_ENCLOSING_RULE_SCOPE_ATTR
+
+ // $ANTLR start ENCLOSING_RULE_SCOPE_ATTR
+ public final void mENCLOSING_RULE_SCOPE_ATTR() throws RecognitionException {
+ try {
+ int _type = ENCLOSING_RULE_SCOPE_ATTR;
+ Token x=null;
+ Token y=null;
+
+ // ActionTranslator.g:222:2: ( '$' x= ID '.' y= ID {...}?)
+ // ActionTranslator.g:222:4: '$' x= ID '.' y= ID {...}?
+ {
+ match('$'); if (state.failed) return ;
+ int xStart97 = getCharIndex();
+ mID(); if (state.failed) return ;
+ x = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, xStart97, getCharIndex()-1);
+ match('.'); if (state.failed) return ;
+ int yStart103 = getCharIndex();
+ mID(); if (state.failed) return ;
+ y = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, yStart103, getCharIndex()-1);
+ if ( !(enclosingRule!=null &&
+ (x!=null?x.getText():null).equals(enclosingRule.name) &&
+ enclosingRule.getLocalAttributeScope((y!=null?y.getText():null))!=null) ) {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ throw new FailedPredicateException(input, "ENCLOSING_RULE_SCOPE_ATTR", "enclosingRule!=null &&\n\t $x.text.equals(enclosingRule.name) &&\n\t enclosingRule.getLocalAttributeScope($y.text)!=null");
+ }
+ if ( state.backtracking==1 ) {
+
+ if ( isRuleRefInAlt((x!=null?x.getText():null)) ) {
+ ErrorManager.grammarError(ErrorManager.MSG_RULE_REF_AMBIG_WITH_RULE_IN_ALT,
+ grammar,
+ actionToken,
+ (x!=null?x.getText():null));
+ }
+ StringTemplate st = null;
+ AttributeScope scope = enclosingRule.getLocalAttributeScope((y!=null?y.getText():null));
+ if ( scope.isPredefinedRuleScope ) {
+ st = template("rulePropertyRef_"+(y!=null?y.getText():null));
+ grammar.referenceRuleLabelPredefinedAttribute((x!=null?x.getText():null));
+ st.setAttribute("scope", (x!=null?x.getText():null));
+ st.setAttribute("attr", (y!=null?y.getText():null));
+ }
+ else if ( scope.isPredefinedLexerRuleScope ) {
+ // perhaps not the most precise error message to use, but...
+ ErrorManager.grammarError(ErrorManager.MSG_RULE_HAS_NO_ARGS,
+ grammar,
+ actionToken,
+ (x!=null?x.getText():null));
+ }
+ else if ( scope.isParameterScope ) {
+ st = template("parameterAttributeRef");
+ st.setAttribute("attr", scope.getAttribute((y!=null?y.getText():null)));
+ }
+ else { // must be return value
+ st = template("returnAttributeRef");
+ st.setAttribute("ruleDescriptor", enclosingRule);
+ st.setAttribute("attr", scope.getAttribute((y!=null?y.getText():null)));
+ }
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end ENCLOSING_RULE_SCOPE_ATTR
+
+ // $ANTLR start SET_TOKEN_SCOPE_ATTR
+ public final void mSET_TOKEN_SCOPE_ATTR() throws RecognitionException {
+ try {
+ int _type = SET_TOKEN_SCOPE_ATTR;
+ Token x=null;
+ Token y=null;
+
+ // ActionTranslator.g:262:2: ( '$' x= ID '.' y= ID ( WS )? '=' {...}?)
+ // ActionTranslator.g:262:4: '$' x= ID '.' y= ID ( WS )? '=' {...}?
+ {
+ match('$'); if (state.failed) return ;
+ int xStart129 = getCharIndex();
+ mID(); if (state.failed) return ;
+ x = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, xStart129, getCharIndex()-1);
+ match('.'); if (state.failed) return ;
+ int yStart135 = getCharIndex();
+ mID(); if (state.failed) return ;
+ y = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, yStart135, getCharIndex()-1);
+ // ActionTranslator.g:262:22: ( WS )?
+ int alt2=2;
+ int LA2_0 = input.LA(1);
+
+ if ( ((LA2_0>='\t' && LA2_0<='\n')||LA2_0=='\r'||LA2_0==' ') ) {
+ alt2=1;
+ }
+ switch (alt2) {
+ case 1 :
+ // ActionTranslator.g:262:22: WS
+ {
+ mWS(); if (state.failed) return ;
+
+
+ }
+ break;
+
+ }
+
+ match('='); if (state.failed) return ;
+ if ( !(enclosingRule!=null && input.LA(1)!='=' &&
+ (enclosingRule.getTokenLabel((x!=null?x.getText():null))!=null||
+ isTokenRefInAlt((x!=null?x.getText():null))) &&
+ AttributeScope.tokenScope.getAttribute((y!=null?y.getText():null))!=null) ) {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ throw new FailedPredicateException(input, "SET_TOKEN_SCOPE_ATTR", "enclosingRule!=null && input.LA(1)!='=' &&\n\t (enclosingRule.getTokenLabel($x.text)!=null||\n\t isTokenRefInAlt($x.text)) &&\n\t AttributeScope.tokenScope.getAttribute($y.text)!=null");
+ }
+ if ( state.backtracking==1 ) {
+
+ ErrorManager.grammarError(ErrorManager.MSG_WRITE_TO_READONLY_ATTR,
+ grammar,
+ actionToken,
+ (x!=null?x.getText():null),
+ (y!=null?y.getText():null));
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end SET_TOKEN_SCOPE_ATTR
+
+ // $ANTLR start TOKEN_SCOPE_ATTR
+ public final void mTOKEN_SCOPE_ATTR() throws RecognitionException {
+ try {
+ int _type = TOKEN_SCOPE_ATTR;
+ Token x=null;
+ Token y=null;
+
+ // ActionTranslator.g:281:2: ( '$' x= ID '.' y= ID {...}?)
+ // ActionTranslator.g:281:4: '$' x= ID '.' y= ID {...}?
+ {
+ match('$'); if (state.failed) return ;
+ int xStart174 = getCharIndex();
+ mID(); if (state.failed) return ;
+ x = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, xStart174, getCharIndex()-1);
+ match('.'); if (state.failed) return ;
+ int yStart180 = getCharIndex();
+ mID(); if (state.failed) return ;
+ y = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, yStart180, getCharIndex()-1);
+ if ( !(enclosingRule!=null &&
+ (enclosingRule.getTokenLabel((x!=null?x.getText():null))!=null||
+ isTokenRefInAlt((x!=null?x.getText():null))) &&
+ AttributeScope.tokenScope.getAttribute((y!=null?y.getText():null))!=null &&
+ (grammar.type!=Grammar.LEXER ||
+ getElementLabel((x!=null?x.getText():null)).elementRef.token.getType()==ANTLRParser.TOKEN_REF ||
+ getElementLabel((x!=null?x.getText():null)).elementRef.token.getType()==ANTLRParser.STRING_LITERAL)) ) {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ throw new FailedPredicateException(input, "TOKEN_SCOPE_ATTR", "enclosingRule!=null &&\n\t (enclosingRule.getTokenLabel($x.text)!=null||\n\t isTokenRefInAlt($x.text)) &&\n\t AttributeScope.tokenScope.getAttribute($y.text)!=null &&\n\t (grammar.type!=Grammar.LEXER ||\n\t getElementLabel($x.text).elementRef.token.getType()==ANTLRParser.TOKEN_REF ||\n\t getElementLabel($x.text).elementRef.token.getType()==ANTLRParser.STRING_LITERAL)");
+ }
+ if ( state.backtracking==1 ) {
+
+ String label = (x!=null?x.getText():null);
+ if ( enclosingRule.getTokenLabel((x!=null?x.getText():null))==null ) {
+ 					// $tokenref.attr: reuse the existing label or compute a new one
+ checkElementRefUniqueness((x!=null?x.getText():null), true);
+ label = enclosingRule.getElementLabel((x!=null?x.getText():null), outerAltNum, generator);
+ if ( label==null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_FORWARD_ELEMENT_REF,
+ grammar,
+ actionToken,
+ "$"+(x!=null?x.getText():null)+"."+(y!=null?y.getText():null));
+ label = (x!=null?x.getText():null);
+ }
+ }
+ StringTemplate st = template("tokenLabelPropertyRef_"+(y!=null?y.getText():null));
+ st.setAttribute("scope", label);
+ st.setAttribute("attr", AttributeScope.tokenScope.getAttribute((y!=null?y.getText():null)));
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end TOKEN_SCOPE_ATTR
+
+ // $ANTLR start SET_RULE_SCOPE_ATTR
+ public final void mSET_RULE_SCOPE_ATTR() throws RecognitionException {
+ try {
+ int _type = SET_RULE_SCOPE_ATTR;
+ Token x=null;
+ Token y=null;
+
+
+ Grammar.LabelElementPair pair=null;
+ String refdRuleName=null;
+
+ // ActionTranslator.g:319:2: ( '$' x= ID '.' y= ID ( WS )? '=' {...}?{...}?)
+ // ActionTranslator.g:319:4: '$' x= ID '.' y= ID ( WS )? '=' {...}?{...}?
+ {
+ match('$'); if (state.failed) return ;
+ int xStart211 = getCharIndex();
+ mID(); if (state.failed) return ;
+ x = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, xStart211, getCharIndex()-1);
+ match('.'); if (state.failed) return ;
+ int yStart217 = getCharIndex();
+ mID(); if (state.failed) return ;
+ y = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, yStart217, getCharIndex()-1);
+ // ActionTranslator.g:319:22: ( WS )?
+ int alt3=2;
+ int LA3_0 = input.LA(1);
+
+ if ( ((LA3_0>='\t' && LA3_0<='\n')||LA3_0=='\r'||LA3_0==' ') ) {
+ alt3=1;
+ }
+ switch (alt3) {
+ case 1 :
+ // ActionTranslator.g:319:22: WS
+ {
+ mWS(); if (state.failed) return ;
+
+
+ }
+ break;
+
+ }
+
+ match('='); if (state.failed) return ;
+ if ( !(enclosingRule!=null && input.LA(1)!='=') ) {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ throw new FailedPredicateException(input, "SET_RULE_SCOPE_ATTR", "enclosingRule!=null && input.LA(1)!='='");
+ }
+ if ( state.backtracking==1 ) {
+
+ pair = enclosingRule.getRuleLabel((x!=null?x.getText():null));
+ refdRuleName = (x!=null?x.getText():null);
+ if ( pair!=null ) {
+ refdRuleName = pair.referencedRuleName;
+ }
+
+ }
+ if ( !((enclosingRule.getRuleLabel((x!=null?x.getText():null))!=null || isRuleRefInAlt((x!=null?x.getText():null))) &&
+ getRuleLabelAttribute(enclosingRule.getRuleLabel((x!=null?x.getText():null))!=null?enclosingRule.getRuleLabel((x!=null?x.getText():null)).referencedRuleName:(x!=null?x.getText():null),(y!=null?y.getText():null))!=null) ) {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ throw new FailedPredicateException(input, "SET_RULE_SCOPE_ATTR", "(enclosingRule.getRuleLabel($x.text)!=null || isRuleRefInAlt($x.text)) &&\n\t getRuleLabelAttribute(enclosingRule.getRuleLabel($x.text)!=null?enclosingRule.getRuleLabel($x.text).referencedRuleName:$x.text,$y.text)!=null");
+ }
+ if ( state.backtracking==1 ) {
+
+ ErrorManager.grammarError(ErrorManager.MSG_WRITE_TO_READONLY_ATTR,
+ grammar,
+ actionToken,
+ (x!=null?x.getText():null),
+ (y!=null?y.getText():null));
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end SET_RULE_SCOPE_ATTR
+
+ // $ANTLR start RULE_SCOPE_ATTR
+ public final void mRULE_SCOPE_ATTR() throws RecognitionException {
+ try {
+ int _type = RULE_SCOPE_ATTR;
+ Token x=null;
+ Token y=null;
+
+
+ Grammar.LabelElementPair pair=null;
+ String refdRuleName=null;
+
+ // ActionTranslator.g:348:2: ( '$' x= ID '.' y= ID {...}?{...}?)
+ // ActionTranslator.g:348:4: '$' x= ID '.' y= ID {...}?{...}?
+ {
+ match('$'); if (state.failed) return ;
+ int xStart270 = getCharIndex();
+ mID(); if (state.failed) return ;
+ x = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, xStart270, getCharIndex()-1);
+ match('.'); if (state.failed) return ;
+ int yStart276 = getCharIndex();
+ mID(); if (state.failed) return ;
+ y = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, yStart276, getCharIndex()-1);
+ if ( !(enclosingRule!=null) ) {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ throw new FailedPredicateException(input, "RULE_SCOPE_ATTR", "enclosingRule!=null");
+ }
+ if ( state.backtracking==1 ) {
+
+ pair = enclosingRule.getRuleLabel((x!=null?x.getText():null));
+ refdRuleName = (x!=null?x.getText():null);
+ if ( pair!=null ) {
+ refdRuleName = pair.referencedRuleName;
+ }
+
+ }
+ if ( !((enclosingRule.getRuleLabel((x!=null?x.getText():null))!=null || isRuleRefInAlt((x!=null?x.getText():null))) &&
+ getRuleLabelAttribute(enclosingRule.getRuleLabel((x!=null?x.getText():null))!=null?enclosingRule.getRuleLabel((x!=null?x.getText():null)).referencedRuleName:(x!=null?x.getText():null),(y!=null?y.getText():null))!=null) ) {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ throw new FailedPredicateException(input, "RULE_SCOPE_ATTR", "(enclosingRule.getRuleLabel($x.text)!=null || isRuleRefInAlt($x.text)) &&\n\t getRuleLabelAttribute(enclosingRule.getRuleLabel($x.text)!=null?enclosingRule.getRuleLabel($x.text).referencedRuleName:$x.text,$y.text)!=null");
+ }
+ if ( state.backtracking==1 ) {
+
+ String label = (x!=null?x.getText():null);
+ if ( pair==null ) {
+ 					// $ruleref.attr: reuse the existing label or compute a new one
+ checkElementRefUniqueness((x!=null?x.getText():null), false);
+ label = enclosingRule.getElementLabel((x!=null?x.getText():null), outerAltNum, generator);
+ if ( label==null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_FORWARD_ELEMENT_REF,
+ grammar,
+ actionToken,
+ "$"+(x!=null?x.getText():null)+"."+(y!=null?y.getText():null));
+ label = (x!=null?x.getText():null);
+ }
+ }
+ StringTemplate st;
+ Rule refdRule = grammar.getRule(refdRuleName);
+ AttributeScope scope = refdRule.getLocalAttributeScope((y!=null?y.getText():null));
+ if ( scope.isPredefinedRuleScope ) {
+ st = template("ruleLabelPropertyRef_"+(y!=null?y.getText():null));
+ grammar.referenceRuleLabelPredefinedAttribute(refdRuleName);
+ st.setAttribute("scope", label);
+ st.setAttribute("attr", (y!=null?y.getText():null));
+ }
+ else if ( scope.isPredefinedLexerRuleScope ) {
+ st = template("lexerRuleLabelPropertyRef_"+(y!=null?y.getText():null));
+ grammar.referenceRuleLabelPredefinedAttribute(refdRuleName);
+ st.setAttribute("scope", label);
+ st.setAttribute("attr", (y!=null?y.getText():null));
+ }
+ else if ( scope.isParameterScope ) {
+ 					// TODO: report an error; a referenced rule's parameters are not accessible here
+ }
+ else {
+ st = template("ruleLabelRef");
+ st.setAttribute("referencedRule", refdRule);
+ st.setAttribute("scope", label);
+ st.setAttribute("attr", scope.getAttribute((y!=null?y.getText():null)));
+ }
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end RULE_SCOPE_ATTR
+
+ // $ANTLR start LABEL_REF
+ public final void mLABEL_REF() throws RecognitionException {
+ try {
+ int _type = LABEL_REF;
+ Token ID1=null;
+
+ // ActionTranslator.g:406:2: ( '$' ID {...}?)
+ // ActionTranslator.g:406:4: '$' ID {...}?
+ {
+ match('$'); if (state.failed) return ;
+ int ID1Start318 = getCharIndex();
+ mID(); if (state.failed) return ;
+ ID1 = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, ID1Start318, getCharIndex()-1);
+ if ( !(enclosingRule!=null &&
+ getElementLabel((ID1!=null?ID1.getText():null))!=null &&
+ enclosingRule.getRuleLabel((ID1!=null?ID1.getText():null))==null) ) {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ throw new FailedPredicateException(input, "LABEL_REF", "enclosingRule!=null &&\n\t getElementLabel($ID.text)!=null &&\n\t\t enclosingRule.getRuleLabel($ID.text)==null");
+ }
+ if ( state.backtracking==1 ) {
+
+ StringTemplate st;
+ Grammar.LabelElementPair pair = getElementLabel((ID1!=null?ID1.getText():null));
+ if ( pair.type==Grammar.TOKEN_LABEL ||
+ pair.type==Grammar.CHAR_LABEL )
+ {
+ st = template("tokenLabelRef");
+ }
+ else {
+ st = template("listLabelRef");
+ }
+ st.setAttribute("label", (ID1!=null?ID1.getText():null));
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end LABEL_REF
+
+ // $ANTLR start ISOLATED_TOKEN_REF
+ public final void mISOLATED_TOKEN_REF() throws RecognitionException {
+ try {
+ int _type = ISOLATED_TOKEN_REF;
+ Token ID2=null;
+
+ // ActionTranslator.g:427:2: ( '$' ID {...}?)
+ // ActionTranslator.g:427:4: '$' ID {...}?
+ {
+ match('$'); if (state.failed) return ;
+ int ID2Start342 = getCharIndex();
+ mID(); if (state.failed) return ;
+ ID2 = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, ID2Start342, getCharIndex()-1);
+ if ( !(grammar.type!=Grammar.LEXER && enclosingRule!=null && isTokenRefInAlt((ID2!=null?ID2.getText():null))) ) {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ throw new FailedPredicateException(input, "ISOLATED_TOKEN_REF", "grammar.type!=Grammar.LEXER && enclosingRule!=null && isTokenRefInAlt($ID.text)");
+ }
+ if ( state.backtracking==1 ) {
+
+ String label = enclosingRule.getElementLabel((ID2!=null?ID2.getText():null), outerAltNum, generator);
+ checkElementRefUniqueness((ID2!=null?ID2.getText():null), true);
+ if ( label==null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_FORWARD_ELEMENT_REF,
+ grammar,
+ actionToken,
+ (ID2!=null?ID2.getText():null));
+ }
+ else {
+ StringTemplate st = template("tokenLabelRef");
+ st.setAttribute("label", label);
+ }
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end ISOLATED_TOKEN_REF
+
+ // $ANTLR start ISOLATED_LEXER_RULE_REF
+ public final void mISOLATED_LEXER_RULE_REF() throws RecognitionException {
+ try {
+ int _type = ISOLATED_LEXER_RULE_REF;
+ Token ID3=null;
+
+ // ActionTranslator.g:447:2: ( '$' ID {...}?)
+ // ActionTranslator.g:447:4: '$' ID {...}?
+ {
+ match('$'); if (state.failed) return ;
+ int ID3Start366 = getCharIndex();
+ mID(); if (state.failed) return ;
+ ID3 = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, ID3Start366, getCharIndex()-1);
+ if ( !(grammar.type==Grammar.LEXER &&
+ enclosingRule!=null &&
+ isRuleRefInAlt((ID3!=null?ID3.getText():null))) ) {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ throw new FailedPredicateException(input, "ISOLATED_LEXER_RULE_REF", "grammar.type==Grammar.LEXER &&\n\t enclosingRule!=null &&\n\t isRuleRefInAlt($ID.text)");
+ }
+ if ( state.backtracking==1 ) {
+
+ String label = enclosingRule.getElementLabel((ID3!=null?ID3.getText():null), outerAltNum, generator);
+ checkElementRefUniqueness((ID3!=null?ID3.getText():null), false);
+ if ( label==null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_FORWARD_ELEMENT_REF,
+ grammar,
+ actionToken,
+ (ID3!=null?ID3.getText():null));
+ }
+ else {
+ StringTemplate st = template("lexerRuleLabel");
+ st.setAttribute("label", label);
+ }
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end ISOLATED_LEXER_RULE_REF
+
+ // $ANTLR start SET_LOCAL_ATTR
+ public final void mSET_LOCAL_ATTR() throws RecognitionException {
+ try {
+ int _type = SET_LOCAL_ATTR;
+ Token expr=null;
+ Token ID4=null;
+
+ // ActionTranslator.g:479:2: ( '$' ID ( WS )? '=' expr= ATTR_VALUE_EXPR ';' {...}?)
+ // ActionTranslator.g:479:4: '$' ID ( WS )? '=' expr= ATTR_VALUE_EXPR ';' {...}?
+ {
+ match('$'); if (state.failed) return ;
+ int ID4Start390 = getCharIndex();
+ mID(); if (state.failed) return ;
+ ID4 = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, ID4Start390, getCharIndex()-1);
+ // ActionTranslator.g:479:11: ( WS )?
+ int alt4=2;
+ int LA4_0 = input.LA(1);
+
+ if ( ((LA4_0>='\t' && LA4_0<='\n')||LA4_0=='\r'||LA4_0==' ') ) {
+ alt4=1;
+ }
+ switch (alt4) {
+ case 1 :
+ // ActionTranslator.g:479:11: WS
+ {
+ mWS(); if (state.failed) return ;
+
+
+ }
+ break;
+
+ }
+
+ match('='); if (state.failed) return ;
+ int exprStart399 = getCharIndex();
+ mATTR_VALUE_EXPR(); if (state.failed) return ;
+ expr = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, exprStart399, getCharIndex()-1);
+ match(';'); if (state.failed) return ;
+ if ( !(enclosingRule!=null
+ && enclosingRule.getLocalAttributeScope((ID4!=null?ID4.getText():null))!=null
+ && !enclosingRule.getLocalAttributeScope((ID4!=null?ID4.getText():null)).isPredefinedLexerRuleScope) ) {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ throw new FailedPredicateException(input, "SET_LOCAL_ATTR", "enclosingRule!=null\n\t\t\t\t\t\t\t\t\t\t\t\t\t&& enclosingRule.getLocalAttributeScope($ID.text)!=null\n\t\t\t\t\t\t\t\t\t\t\t\t\t&& !enclosingRule.getLocalAttributeScope($ID.text).isPredefinedLexerRuleScope");
+ }
+ if ( state.backtracking==1 ) {
+
+ StringTemplate st;
+ AttributeScope scope = enclosingRule.getLocalAttributeScope((ID4!=null?ID4.getText():null));
+ if ( scope.isPredefinedRuleScope ) {
+ if ((ID4!=null?ID4.getText():null).equals("tree") || (ID4!=null?ID4.getText():null).equals("st")) {
+ st = template("ruleSetPropertyRef_"+(ID4!=null?ID4.getText():null));
+ grammar.referenceRuleLabelPredefinedAttribute(enclosingRule.name);
+ st.setAttribute("scope", enclosingRule.name);
+ st.setAttribute("attr", (ID4!=null?ID4.getText():null));
+ st.setAttribute("expr", translateAction((expr!=null?expr.getText():null)));
+ } else {
+ ErrorManager.grammarError(ErrorManager.MSG_WRITE_TO_READONLY_ATTR,
+ grammar,
+ actionToken,
+ (ID4!=null?ID4.getText():null),
+ "");
+ }
+ }
+ else if ( scope.isParameterScope ) {
+ st = template("parameterSetAttributeRef");
+ st.setAttribute("attr", scope.getAttribute((ID4!=null?ID4.getText():null)));
+ st.setAttribute("expr", translateAction((expr!=null?expr.getText():null)));
+ }
+ else {
+ st = template("returnSetAttributeRef");
+ st.setAttribute("ruleDescriptor", enclosingRule);
+ st.setAttribute("attr", scope.getAttribute((ID4!=null?ID4.getText():null)));
+ st.setAttribute("expr", translateAction((expr!=null?expr.getText():null)));
+ }
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end SET_LOCAL_ATTR
+
+ // $ANTLR start LOCAL_ATTR
+ public final void mLOCAL_ATTR() throws RecognitionException {
+ try {
+ int _type = LOCAL_ATTR;
+ Token ID5=null;
+
+ // ActionTranslator.g:515:2: ( '$' ID {...}?)
+ // ActionTranslator.g:515:4: '$' ID {...}?
+ {
+ match('$'); if (state.failed) return ;
+ int ID5Start422 = getCharIndex();
+ mID(); if (state.failed) return ;
+ ID5 = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, ID5Start422, getCharIndex()-1);
+ if ( !(enclosingRule!=null && enclosingRule.getLocalAttributeScope((ID5!=null?ID5.getText():null))!=null) ) {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ throw new FailedPredicateException(input, "LOCAL_ATTR", "enclosingRule!=null && enclosingRule.getLocalAttributeScope($ID.text)!=null");
+ }
+ if ( state.backtracking==1 ) {
+
+ StringTemplate st;
+ AttributeScope scope = enclosingRule.getLocalAttributeScope((ID5!=null?ID5.getText():null));
+ if ( scope.isPredefinedRuleScope ) {
+ st = template("rulePropertyRef_"+(ID5!=null?ID5.getText():null));
+ grammar.referenceRuleLabelPredefinedAttribute(enclosingRule.name);
+ st.setAttribute("scope", enclosingRule.name);
+ st.setAttribute("attr", (ID5!=null?ID5.getText():null));
+ }
+ else if ( scope.isPredefinedLexerRuleScope ) {
+ st = template("lexerRulePropertyRef_"+(ID5!=null?ID5.getText():null));
+ st.setAttribute("scope", enclosingRule.name);
+ st.setAttribute("attr", (ID5!=null?ID5.getText():null));
+ }
+ else if ( scope.isParameterScope ) {
+ st = template("parameterAttributeRef");
+ st.setAttribute("attr", scope.getAttribute((ID5!=null?ID5.getText():null)));
+ }
+ else {
+ st = template("returnAttributeRef");
+ st.setAttribute("ruleDescriptor", enclosingRule);
+ st.setAttribute("attr", scope.getAttribute((ID5!=null?ID5.getText():null)));
+ }
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end LOCAL_ATTR
+
+ // $ANTLR start SET_DYNAMIC_SCOPE_ATTR
+ public final void mSET_DYNAMIC_SCOPE_ATTR() throws RecognitionException {
+ try {
+ int _type = SET_DYNAMIC_SCOPE_ATTR;
+ Token x=null;
+ Token y=null;
+ Token expr=null;
+
+ // ActionTranslator.g:556:2: ( '$' x= ID '::' y= ID ( WS )? '=' expr= ATTR_VALUE_EXPR ';' {...}?)
+ // ActionTranslator.g:556:4: '$' x= ID '::' y= ID ( WS )? '=' expr= ATTR_VALUE_EXPR ';' {...}?
+ {
+ match('$'); if (state.failed) return ;
+ int xStart448 = getCharIndex();
+ mID(); if (state.failed) return ;
+ x = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, xStart448, getCharIndex()-1);
+ match("::"); if (state.failed) return ;
+
+ int yStart454 = getCharIndex();
+ mID(); if (state.failed) return ;
+ y = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, yStart454, getCharIndex()-1);
+ // ActionTranslator.g:556:23: ( WS )?
+ int alt5=2;
+ int LA5_0 = input.LA(1);
+
+ if ( ((LA5_0>='\t' && LA5_0<='\n')||LA5_0=='\r'||LA5_0==' ') ) {
+ alt5=1;
+ }
+ switch (alt5) {
+ case 1 :
+ // ActionTranslator.g:556:23: WS
+ {
+ mWS(); if (state.failed) return ;
+
+
+ }
+ break;
+
+ }
+
+ match('='); if (state.failed) return ;
+ int exprStart463 = getCharIndex();
+ mATTR_VALUE_EXPR(); if (state.failed) return ;
+ expr = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, exprStart463, getCharIndex()-1);
+ match(';'); if (state.failed) return ;
+ if ( !(resolveDynamicScope((x!=null?x.getText():null))!=null &&
+ resolveDynamicScope((x!=null?x.getText():null)).getAttribute((y!=null?y.getText():null))!=null) ) {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ throw new FailedPredicateException(input, "SET_DYNAMIC_SCOPE_ATTR", "resolveDynamicScope($x.text)!=null &&\n\t\t\t\t\t\t resolveDynamicScope($x.text).getAttribute($y.text)!=null");
+ }
+ if ( state.backtracking==1 ) {
+
+ AttributeScope scope = resolveDynamicScope((x!=null?x.getText():null));
+ if ( scope!=null ) {
+ StringTemplate st = template("scopeSetAttributeRef");
+ st.setAttribute("scope", (x!=null?x.getText():null));
+ st.setAttribute("attr", scope.getAttribute((y!=null?y.getText():null)));
+ st.setAttribute("expr", translateAction((expr!=null?expr.getText():null)));
+ }
+ else {
+ // error: invalid dynamic attribute
+ }
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end SET_DYNAMIC_SCOPE_ATTR
+
+ // $ANTLR start DYNAMIC_SCOPE_ATTR
+ public final void mDYNAMIC_SCOPE_ATTR() throws RecognitionException {
+ try {
+ int _type = DYNAMIC_SCOPE_ATTR;
+ Token x=null;
+ Token y=null;
+
+ // ActionTranslator.g:575:2: ( '$' x= ID '::' y= ID {...}?)
+ // ActionTranslator.g:575:4: '$' x= ID '::' y= ID {...}?
+ {
+ match('$'); if (state.failed) return ;
+ int xStart498 = getCharIndex();
+ mID(); if (state.failed) return ;
+ x = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, xStart498, getCharIndex()-1);
+ match("::"); if (state.failed) return ;
+
+ int yStart504 = getCharIndex();
+ mID(); if (state.failed) return ;
+ y = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, yStart504, getCharIndex()-1);
+ if ( !(resolveDynamicScope((x!=null?x.getText():null))!=null &&
+ resolveDynamicScope((x!=null?x.getText():null)).getAttribute((y!=null?y.getText():null))!=null) ) {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ throw new FailedPredicateException(input, "DYNAMIC_SCOPE_ATTR", "resolveDynamicScope($x.text)!=null &&\n\t\t\t\t\t\t resolveDynamicScope($x.text).getAttribute($y.text)!=null");
+ }
+ if ( state.backtracking==1 ) {
+
+ AttributeScope scope = resolveDynamicScope((x!=null?x.getText():null));
+ if ( scope!=null ) {
+ StringTemplate st = template("scopeAttributeRef");
+ st.setAttribute("scope", (x!=null?x.getText():null));
+ st.setAttribute("attr", scope.getAttribute((y!=null?y.getText():null)));
+ }
+ else {
+ // error: invalid dynamic attribute
+ }
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end DYNAMIC_SCOPE_ATTR
+
+ // $ANTLR start ERROR_SCOPED_XY
+ public final void mERROR_SCOPED_XY() throws RecognitionException {
+ try {
+ int _type = ERROR_SCOPED_XY;
+ Token x=null;
+ Token y=null;
+
+ // ActionTranslator.g:594:2: ( '$' x= ID '::' y= ID )
+ // ActionTranslator.g:594:4: '$' x= ID '::' y= ID
+ {
+ match('$'); if (state.failed) return ;
+ int xStart538 = getCharIndex();
+ mID(); if (state.failed) return ;
+ x = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, xStart538, getCharIndex()-1);
+ match("::"); if (state.failed) return ;
+
+ int yStart544 = getCharIndex();
+ mID(); if (state.failed) return ;
+ y = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, yStart544, getCharIndex()-1);
+ if ( state.backtracking==1 ) {
+
+ chunks.add(getText());
+ generator.issueInvalidScopeError((x!=null?x.getText():null),(y!=null?y.getText():null),
+ enclosingRule,actionToken,
+ outerAltNum);
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end ERROR_SCOPED_XY
+
+ // $ANTLR start DYNAMIC_NEGATIVE_INDEXED_SCOPE_ATTR
+ public final void mDYNAMIC_NEGATIVE_INDEXED_SCOPE_ATTR() throws RecognitionException {
+ try {
+ int _type = DYNAMIC_NEGATIVE_INDEXED_SCOPE_ATTR;
+ Token x=null;
+ Token expr=null;
+ Token y=null;
+
+ // ActionTranslator.g:612:2: ( '$' x= ID '[' '-' expr= SCOPE_INDEX_EXPR ']' '::' y= ID )
+ // ActionTranslator.g:612:4: '$' x= ID '[' '-' expr= SCOPE_INDEX_EXPR ']' '::' y= ID
+ {
+ match('$'); if (state.failed) return ;
+ int xStart566 = getCharIndex();
+ mID(); if (state.failed) return ;
+ x = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, xStart566, getCharIndex()-1);
+ match('['); if (state.failed) return ;
+ match('-'); if (state.failed) return ;
+ int exprStart574 = getCharIndex();
+ mSCOPE_INDEX_EXPR(); if (state.failed) return ;
+ expr = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, exprStart574, getCharIndex()-1);
+ match(']'); if (state.failed) return ;
+ match("::"); if (state.failed) return ;
+
+ int yStart582 = getCharIndex();
+ mID(); if (state.failed) return ;
+ y = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, yStart582, getCharIndex()-1);
+ if ( state.backtracking==1 ) {
+
+ StringTemplate st = template("scopeAttributeRef");
+ st.setAttribute("scope", (x!=null?x.getText():null));
+ st.setAttribute("attr", resolveDynamicScope((x!=null?x.getText():null)).getAttribute((y!=null?y.getText():null)));
+ st.setAttribute("negIndex", (expr!=null?expr.getText():null));
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end DYNAMIC_NEGATIVE_INDEXED_SCOPE_ATTR
+
+ // $ANTLR start DYNAMIC_ABSOLUTE_INDEXED_SCOPE_ATTR
+ public final void mDYNAMIC_ABSOLUTE_INDEXED_SCOPE_ATTR() throws RecognitionException {
+ try {
+ int _type = DYNAMIC_ABSOLUTE_INDEXED_SCOPE_ATTR;
+ Token x=null;
+ Token expr=null;
+ Token y=null;
+
+ // ActionTranslator.g:623:2: ( '$' x= ID '[' expr= SCOPE_INDEX_EXPR ']' '::' y= ID )
+ // ActionTranslator.g:623:4: '$' x= ID '[' expr= SCOPE_INDEX_EXPR ']' '::' y= ID
+ {
+ match('$'); if (state.failed) return ;
+ int xStart606 = getCharIndex();
+ mID(); if (state.failed) return ;
+ x = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, xStart606, getCharIndex()-1);
+ match('['); if (state.failed) return ;
+ int exprStart612 = getCharIndex();
+ mSCOPE_INDEX_EXPR(); if (state.failed) return ;
+ expr = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, exprStart612, getCharIndex()-1);
+ match(']'); if (state.failed) return ;
+ match("::"); if (state.failed) return ;
+
+ int yStart620 = getCharIndex();
+ mID(); if (state.failed) return ;
+ y = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, yStart620, getCharIndex()-1);
+ if ( state.backtracking==1 ) {
+
+ StringTemplate st = template("scopeAttributeRef");
+ st.setAttribute("scope", (x!=null?x.getText():null));
+ st.setAttribute("attr", resolveDynamicScope((x!=null?x.getText():null)).getAttribute((y!=null?y.getText():null)));
+ st.setAttribute("index", (expr!=null?expr.getText():null));
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end DYNAMIC_ABSOLUTE_INDEXED_SCOPE_ATTR
+
+ // $ANTLR start SCOPE_INDEX_EXPR
+ public final void mSCOPE_INDEX_EXPR() throws RecognitionException {
+ try {
+ // ActionTranslator.g:635:2: ( (~ ']' )+ )
+ // ActionTranslator.g:635:4: (~ ']' )+
+ {
+ // ActionTranslator.g:635:4: (~ ']' )+
+ int cnt6=0;
+ loop6:
+ do {
+ int alt6=2;
+ int LA6_0 = input.LA(1);
+
+ if ( ((LA6_0>='\u0000' && LA6_0<='\\')||(LA6_0>='^' && LA6_0<='\uFFFE')) ) {
+ alt6=1;
+ }
+
+
+ switch (alt6) {
+ case 1 :
+ // ActionTranslator.g:635:5: ~ ']'
+ {
+ if ( (input.LA(1)>='\u0000' && input.LA(1)<='\\')||(input.LA(1)>='^' && input.LA(1)<='\uFFFE') ) {
+ input.consume();
+ state.failed=false;
+ }
+ else {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ MismatchedSetException mse = new MismatchedSetException(null,input);
+ recover(mse);
+ throw mse;}
+
+
+
+ }
+ break;
+
+ default :
+ if ( cnt6 >= 1 ) break loop6;
+ if (state.backtracking>0) {state.failed=true; return ;}
+ EarlyExitException eee =
+ new EarlyExitException(6, input);
+ throw eee;
+ }
+ cnt6++;
+ } while (true);
+
+
+
+ }
+
+ }
+ finally {
+ }
+ }
+ // $ANTLR end SCOPE_INDEX_EXPR
+
+ // $ANTLR start ISOLATED_DYNAMIC_SCOPE
+ public final void mISOLATED_DYNAMIC_SCOPE() throws RecognitionException {
+ try {
+ int _type = ISOLATED_DYNAMIC_SCOPE;
+ Token ID6=null;
+
+ // ActionTranslator.g:644:2: ( '$' ID {...}?)
+ // ActionTranslator.g:644:4: '$' ID {...}?
+ {
+ match('$'); if (state.failed) return ;
+ int ID6Start663 = getCharIndex();
+ mID(); if (state.failed) return ;
+ ID6 = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, ID6Start663, getCharIndex()-1);
+ if ( !(resolveDynamicScope((ID6!=null?ID6.getText():null))!=null) ) {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ throw new FailedPredicateException(input, "ISOLATED_DYNAMIC_SCOPE", "resolveDynamicScope($ID.text)!=null");
+ }
+ if ( state.backtracking==1 ) {
+
+ StringTemplate st = template("isolatedDynamicScopeRef");
+ st.setAttribute("scope", (ID6!=null?ID6.getText():null));
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end ISOLATED_DYNAMIC_SCOPE
+
+ // $ANTLR start TEMPLATE_INSTANCE
+ public final void mTEMPLATE_INSTANCE() throws RecognitionException {
+ try {
+ int _type = TEMPLATE_INSTANCE;
+ // ActionTranslator.g:657:2: ( '%' ID '(' ( ( WS )? ARG ( ',' ( WS )? ARG )* ( WS )? )? ')' )
+ // ActionTranslator.g:657:4: '%' ID '(' ( ( WS )? ARG ( ',' ( WS )? ARG )* ( WS )? )? ')'
+ {
+ match('%'); if (state.failed) return ;
+ mID(); if (state.failed) return ;
+ match('('); if (state.failed) return ;
+ // ActionTranslator.g:657:15: ( ( WS )? ARG ( ',' ( WS )? ARG )* ( WS )? )?
+ int alt11=2;
+ int LA11_0 = input.LA(1);
+
+ if ( ((LA11_0>='\t' && LA11_0<='\n')||LA11_0=='\r'||LA11_0==' '||(LA11_0>='A' && LA11_0<='Z')||LA11_0=='_'||(LA11_0>='a' && LA11_0<='z')) ) {
+ alt11=1;
+ }
+ switch (alt11) {
+ case 1 :
+ // ActionTranslator.g:657:17: ( WS )? ARG ( ',' ( WS )? ARG )* ( WS )?
+ {
+ // ActionTranslator.g:657:17: ( WS )?
+ int alt7=2;
+ int LA7_0 = input.LA(1);
+
+ if ( ((LA7_0>='\t' && LA7_0<='\n')||LA7_0=='\r'||LA7_0==' ') ) {
+ alt7=1;
+ }
+ switch (alt7) {
+ case 1 :
+ // ActionTranslator.g:657:17: WS
+ {
+ mWS(); if (state.failed) return ;
+
+
+ }
+ break;
+
+ }
+
+ mARG(); if (state.failed) return ;
+ // ActionTranslator.g:657:25: ( ',' ( WS )? ARG )*
+ loop9:
+ do {
+ int alt9=2;
+ int LA9_0 = input.LA(1);
+
+ if ( (LA9_0==',') ) {
+ alt9=1;
+ }
+
+
+ switch (alt9) {
+ case 1 :
+ // ActionTranslator.g:657:26: ',' ( WS )? ARG
+ {
+ match(','); if (state.failed) return ;
+ // ActionTranslator.g:657:30: ( WS )?
+ int alt8=2;
+ int LA8_0 = input.LA(1);
+
+ if ( ((LA8_0>='\t' && LA8_0<='\n')||LA8_0=='\r'||LA8_0==' ') ) {
+ alt8=1;
+ }
+ switch (alt8) {
+ case 1 :
+ // ActionTranslator.g:657:30: WS
+ {
+ mWS(); if (state.failed) return ;
+
+
+ }
+ break;
+
+ }
+
+ mARG(); if (state.failed) return ;
+
+
+ }
+ break;
+
+ default :
+ break loop9;
+ }
+ } while (true);
+
+ // ActionTranslator.g:657:40: ( WS )?
+ int alt10=2;
+ int LA10_0 = input.LA(1);
+
+ if ( ((LA10_0>='\t' && LA10_0<='\n')||LA10_0=='\r'||LA10_0==' ') ) {
+ alt10=1;
+ }
+ switch (alt10) {
+ case 1 :
+ // ActionTranslator.g:657:40: WS
+ {
+ mWS(); if (state.failed) return ;
+
+
+ }
+ break;
+
+ }
+
+
+
+ }
+ break;
+
+ }
+
+ match(')'); if (state.failed) return ;
+ if ( state.backtracking==1 ) {
+
+ String action = getText().substring(1,getText().length());
+ String ruleName = "";
+ if ( enclosingRule!=null ) {
+ ruleName = enclosingRule.name;
+ }
+ StringTemplate st =
+ generator.translateTemplateConstructor(ruleName,
+ outerAltNum,
+ actionToken,
+ action);
+ if ( st!=null ) {
+ chunks.add(st);
+ }
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end TEMPLATE_INSTANCE
+
+ // $ANTLR start INDIRECT_TEMPLATE_INSTANCE
+ public final void mINDIRECT_TEMPLATE_INSTANCE() throws RecognitionException {
+ try {
+ int _type = INDIRECT_TEMPLATE_INSTANCE;
+ // ActionTranslator.g:678:2: ( '%' '(' ACTION ')' '(' ( ( WS )? ARG ( ',' ( WS )? ARG )* ( WS )? )? ')' )
+ // ActionTranslator.g:678:4: '%' '(' ACTION ')' '(' ( ( WS )? ARG ( ',' ( WS )? ARG )* ( WS )? )? ')'
+ {
+ match('%'); if (state.failed) return ;
+ match('('); if (state.failed) return ;
+ mACTION(); if (state.failed) return ;
+ match(')'); if (state.failed) return ;
+ match('('); if (state.failed) return ;
+ // ActionTranslator.g:678:27: ( ( WS )? ARG ( ',' ( WS )? ARG )* ( WS )? )?
+ int alt16=2;
+ int LA16_0 = input.LA(1);
+
+ if ( ((LA16_0>='\t' && LA16_0<='\n')||LA16_0=='\r'||LA16_0==' '||(LA16_0>='A' && LA16_0<='Z')||LA16_0=='_'||(LA16_0>='a' && LA16_0<='z')) ) {
+ alt16=1;
+ }
+ switch (alt16) {
+ case 1 :
+ // ActionTranslator.g:678:29: ( WS )? ARG ( ',' ( WS )? ARG )* ( WS )?
+ {
+ // ActionTranslator.g:678:29: ( WS )?
+ int alt12=2;
+ int LA12_0 = input.LA(1);
+
+ if ( ((LA12_0>='\t' && LA12_0<='\n')||LA12_0=='\r'||LA12_0==' ') ) {
+ alt12=1;
+ }
+ switch (alt12) {
+ case 1 :
+ // ActionTranslator.g:678:29: WS
+ {
+ mWS(); if (state.failed) return ;
+
+
+ }
+ break;
+
+ }
+
+ mARG(); if (state.failed) return ;
+ // ActionTranslator.g:678:37: ( ',' ( WS )? ARG )*
+ loop14:
+ do {
+ int alt14=2;
+ int LA14_0 = input.LA(1);
+
+ if ( (LA14_0==',') ) {
+ alt14=1;
+ }
+
+
+ switch (alt14) {
+ case 1 :
+ // ActionTranslator.g:678:38: ',' ( WS )? ARG
+ {
+ match(','); if (state.failed) return ;
+ // ActionTranslator.g:678:42: ( WS )?
+ int alt13=2;
+ int LA13_0 = input.LA(1);
+
+ if ( ((LA13_0>='\t' && LA13_0<='\n')||LA13_0=='\r'||LA13_0==' ') ) {
+ alt13=1;
+ }
+ switch (alt13) {
+ case 1 :
+ // ActionTranslator.g:678:42: WS
+ {
+ mWS(); if (state.failed) return ;
+
+
+ }
+ break;
+
+ }
+
+ mARG(); if (state.failed) return ;
+
+
+ }
+ break;
+
+ default :
+ break loop14;
+ }
+ } while (true);
+
+ // ActionTranslator.g:678:52: ( WS )?
+ int alt15=2;
+ int LA15_0 = input.LA(1);
+
+ if ( ((LA15_0>='\t' && LA15_0<='\n')||LA15_0=='\r'||LA15_0==' ') ) {
+ alt15=1;
+ }
+ switch (alt15) {
+ case 1 :
+ // ActionTranslator.g:678:52: WS
+ {
+ mWS(); if (state.failed) return ;
+
+
+ }
+ break;
+
+ }
+
+
+
+ }
+ break;
+
+ }
+
+ match(')'); if (state.failed) return ;
+ if ( state.backtracking==1 ) {
+
+ String action = getText().substring(1,getText().length());
+ StringTemplate st =
+ generator.translateTemplateConstructor(enclosingRule.name,
+ outerAltNum,
+ actionToken,
+ action);
+ chunks.add(st);
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end INDIRECT_TEMPLATE_INSTANCE
+
+ // $ANTLR start ARG
+ public final void mARG() throws RecognitionException {
+ try {
+ // ActionTranslator.g:692:5: ( ID '=' ACTION )
+ // ActionTranslator.g:692:7: ID '=' ACTION
+ {
+ mID(); if (state.failed) return ;
+ match('='); if (state.failed) return ;
+ mACTION(); if (state.failed) return ;
+
+
+ }
+
+ }
+ finally {
+ }
+ }
+ // $ANTLR end ARG
+
+ // $ANTLR start SET_EXPR_ATTRIBUTE
+ public final void mSET_EXPR_ATTRIBUTE() throws RecognitionException {
+ try {
+ int _type = SET_EXPR_ATTRIBUTE;
+ Token a=null;
+ Token expr=null;
+ Token ID7=null;
+
+ // ActionTranslator.g:697:2: ( '%' a= ACTION '.' ID ( WS )? '=' expr= ATTR_VALUE_EXPR ';' )
+ // ActionTranslator.g:697:4: '%' a= ACTION '.' ID ( WS )? '=' expr= ATTR_VALUE_EXPR ';'
+ {
+ match('%'); if (state.failed) return ;
+ int aStart813 = getCharIndex();
+ mACTION(); if (state.failed) return ;
+ a = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, aStart813, getCharIndex()-1);
+ match('.'); if (state.failed) return ;
+ int ID7Start817 = getCharIndex();
+ mID(); if (state.failed) return ;
+ ID7 = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, ID7Start817, getCharIndex()-1);
+ // ActionTranslator.g:697:24: ( WS )?
+ int alt17=2;
+ int LA17_0 = input.LA(1);
+
+ if ( ((LA17_0>='\t' && LA17_0<='\n')||LA17_0=='\r'||LA17_0==' ') ) {
+ alt17=1;
+ }
+ switch (alt17) {
+ case 1 :
+ // ActionTranslator.g:697:24: WS
+ {
+ mWS(); if (state.failed) return ;
+
+
+ }
+ break;
+
+ }
+
+ match('='); if (state.failed) return ;
+ int exprStart826 = getCharIndex();
+ mATTR_VALUE_EXPR(); if (state.failed) return ;
+ expr = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, exprStart826, getCharIndex()-1);
+ match(';'); if (state.failed) return ;
+ if ( state.backtracking==1 ) {
+
+ StringTemplate st = template("actionSetAttribute");
+ String action = (a!=null?a.getText():null);
+ action = action.substring(1,action.length()-1); // stuff inside {...}
+ st.setAttribute("st", translateAction(action));
+ st.setAttribute("attrName", (ID7!=null?ID7.getText():null));
+ st.setAttribute("expr", translateAction((expr!=null?expr.getText():null)));
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end SET_EXPR_ATTRIBUTE
+
+ // $ANTLR start SET_ATTRIBUTE
+ public final void mSET_ATTRIBUTE() throws RecognitionException {
+ try {
+ int _type = SET_ATTRIBUTE;
+ Token x=null;
+ Token y=null;
+ Token expr=null;
+
+ // ActionTranslator.g:714:2: ( '%' x= ID '.' y= ID ( WS )? '=' expr= ATTR_VALUE_EXPR ';' )
+ // ActionTranslator.g:714:4: '%' x= ID '.' y= ID ( WS )? '=' expr= ATTR_VALUE_EXPR ';'
+ {
+ match('%'); if (state.failed) return ;
+ int xStart853 = getCharIndex();
+ mID(); if (state.failed) return ;
+ x = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, xStart853, getCharIndex()-1);
+ match('.'); if (state.failed) return ;
+ int yStart859 = getCharIndex();
+ mID(); if (state.failed) return ;
+ y = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, yStart859, getCharIndex()-1);
+ // ActionTranslator.g:714:22: ( WS )?
+ int alt18=2;
+ int LA18_0 = input.LA(1);
+
+ if ( ((LA18_0>='\t' && LA18_0<='\n')||LA18_0=='\r'||LA18_0==' ') ) {
+ alt18=1;
+ }
+ switch (alt18) {
+ case 1 :
+ // ActionTranslator.g:714:22: WS
+ {
+ mWS(); if (state.failed) return ;
+
+
+ }
+ break;
+
+ }
+
+ match('='); if (state.failed) return ;
+ int exprStart868 = getCharIndex();
+ mATTR_VALUE_EXPR(); if (state.failed) return ;
+ expr = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, exprStart868, getCharIndex()-1);
+ match(';'); if (state.failed) return ;
+ if ( state.backtracking==1 ) {
+
+ StringTemplate st = template("actionSetAttribute");
+ st.setAttribute("st", (x!=null?x.getText():null));
+ st.setAttribute("attrName", (y!=null?y.getText():null));
+ st.setAttribute("expr", translateAction((expr!=null?expr.getText():null)));
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end SET_ATTRIBUTE
+
+ // $ANTLR start ATTR_VALUE_EXPR
+ public final void mATTR_VALUE_EXPR() throws RecognitionException {
+ try {
+ // ActionTranslator.g:727:2: (~ '=' (~ ';' )* )
+ // ActionTranslator.g:727:4: ~ '=' (~ ';' )*
+ {
+ if ( (input.LA(1)>='\u0000' && input.LA(1)<='<')||(input.LA(1)>='>' && input.LA(1)<='\uFFFE') ) {
+ input.consume();
+ state.failed=false;
+ }
+ else {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ MismatchedSetException mse = new MismatchedSetException(null,input);
+ recover(mse);
+ throw mse;}
+
+ // ActionTranslator.g:727:9: (~ ';' )*
+ loop19:
+ do {
+ int alt19=2;
+ int LA19_0 = input.LA(1);
+
+ if ( ((LA19_0>='\u0000' && LA19_0<=':')||(LA19_0>='<' && LA19_0<='\uFFFE')) ) {
+ alt19=1;
+ }
+
+
+ switch (alt19) {
+ case 1 :
+ // ActionTranslator.g:727:10: ~ ';'
+ {
+ if ( (input.LA(1)>='\u0000' && input.LA(1)<=':')||(input.LA(1)>='<' && input.LA(1)<='\uFFFE') ) {
+ input.consume();
+ state.failed=false;
+ }
+ else {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ MismatchedSetException mse = new MismatchedSetException(null,input);
+ recover(mse);
+ throw mse;}
+
+
+
+ }
+ break;
+
+ default :
+ break loop19;
+ }
+ } while (true);
+
+
+
+ }
+
+ }
+ finally {
+ }
+ }
+ // $ANTLR end ATTR_VALUE_EXPR
+
+ // $ANTLR start TEMPLATE_EXPR
+ public final void mTEMPLATE_EXPR() throws RecognitionException {
+ try {
+ int _type = TEMPLATE_EXPR;
+ Token a=null;
+
+ // ActionTranslator.g:732:2: ( '%' a= ACTION )
+ // ActionTranslator.g:732:4: '%' a= ACTION
+ {
+ match('%'); if (state.failed) return ;
+ int aStart917 = getCharIndex();
+ mACTION(); if (state.failed) return ;
+ a = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, aStart917, getCharIndex()-1);
+ if ( state.backtracking==1 ) {
+
+ StringTemplate st = template("actionStringConstructor");
+ String action = (a!=null?a.getText():null);
+ action = action.substring(1,action.length()-1); // stuff inside {...}
+ st.setAttribute("stringExpr", translateAction(action));
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end TEMPLATE_EXPR
+
+ // $ANTLR start ACTION
+ public final void mACTION() throws RecognitionException {
+ try {
+ // ActionTranslator.g:744:2: ( '{' ( options {greedy=false; } : . )* '}' )
+ // ActionTranslator.g:744:4: '{' ( options {greedy=false; } : . )* '}'
+ {
+ match('{'); if (state.failed) return ;
+ // ActionTranslator.g:744:8: ( options {greedy=false; } : . )*
+ loop20:
+ do {
+ int alt20=2;
+ int LA20_0 = input.LA(1);
+
+ if ( (LA20_0=='}') ) {
+ alt20=2;
+ }
+ else if ( ((LA20_0>='\u0000' && LA20_0<='|')||(LA20_0>='~' && LA20_0<='\uFFFE')) ) {
+ alt20=1;
+ }
+
+
+ switch (alt20) {
+ case 1 :
+ // ActionTranslator.g:744:33: .
+ {
+ matchAny(); if (state.failed) return ;
+
+
+ }
+ break;
+
+ default :
+ break loop20;
+ }
+ } while (true);
+
+ match('}'); if (state.failed) return ;
+
+
+ }
+
+ }
+ finally {
+ }
+ }
+ // $ANTLR end ACTION
+
+ // $ANTLR start ESC
+ public final void mESC() throws RecognitionException {
+ try {
+ int _type = ESC;
+ // ActionTranslator.g:747:5: ( '\\\\' '$' | '\\\\' '%' | '\\\\' ~ ( '$' | '%' ) )
+ int alt21=3;
+ int LA21_0 = input.LA(1);
+
+ if ( (LA21_0=='\\') ) {
+ int LA21_1 = input.LA(2);
+
+ if ( (LA21_1=='$') ) {
+ alt21=1;
+ }
+ else if ( (LA21_1=='%') ) {
+ alt21=2;
+ }
+ else if ( ((LA21_1>='\u0000' && LA21_1<='#')||(LA21_1>='&' && LA21_1<='\uFFFE')) ) {
+ alt21=3;
+ }
+ else {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ NoViableAltException nvae =
+ new NoViableAltException("", 21, 1, input);
+
+ throw nvae;
+ }
+ }
+ else {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ NoViableAltException nvae =
+ new NoViableAltException("", 21, 0, input);
+
+ throw nvae;
+ }
+ switch (alt21) {
+ case 1 :
+ // ActionTranslator.g:747:9: '\\\\' '$'
+ {
+ match('\\'); if (state.failed) return ;
+ match('$'); if (state.failed) return ;
+ if ( state.backtracking==1 ) {
+ chunks.add("$");
+ }
+
+
+ }
+ break;
+ case 2 :
+ // ActionTranslator.g:748:4: '\\\\' '%'
+ {
+ match('\\'); if (state.failed) return ;
+ match('%'); if (state.failed) return ;
+ if ( state.backtracking==1 ) {
+ chunks.add("%");
+ }
+
+
+ }
+ break;
+ case 3 :
+ // ActionTranslator.g:749:4: '\\\\' ~ ( '$' | '%' )
+ {
+ match('\\'); if (state.failed) return ;
+ if ( (input.LA(1)>='\u0000' && input.LA(1)<='#')||(input.LA(1)>='&' && input.LA(1)<='\uFFFE') ) {
+ input.consume();
+ state.failed=false;
+ }
+ else {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ MismatchedSetException mse = new MismatchedSetException(null,input);
+ recover(mse);
+ throw mse;}
+
+ if ( state.backtracking==1 ) {
+ chunks.add(getText());
+ }
+
+
+ }
+ break;
+
+ }
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end ESC
+
+ // $ANTLR start ERROR_XY
+ public final void mERROR_XY() throws RecognitionException {
+ try {
+ int _type = ERROR_XY;
+ Token x=null;
+ Token y=null;
+
+ // ActionTranslator.g:753:2: ( '$' x= ID '.' y= ID )
+ // ActionTranslator.g:753:4: '$' x= ID '.' y= ID
+ {
+ match('$'); if (state.failed) return ;
+ int xStart1017 = getCharIndex();
+ mID(); if (state.failed) return ;
+ x = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, xStart1017, getCharIndex()-1);
+ match('.'); if (state.failed) return ;
+ int yStart1023 = getCharIndex();
+ mID(); if (state.failed) return ;
+ y = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, yStart1023, getCharIndex()-1);
+ if ( state.backtracking==1 ) {
+
+ chunks.add(getText());
+ generator.issueInvalidAttributeError((x!=null?x.getText():null),(y!=null?y.getText():null),
+ enclosingRule,actionToken,
+ outerAltNum);
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end ERROR_XY
+
+ // $ANTLR start ERROR_X
+ public final void mERROR_X() throws RecognitionException {
+ try {
+ int _type = ERROR_X;
+ Token x=null;
+
+ // ActionTranslator.g:763:2: ( '$' x= ID )
+ // ActionTranslator.g:763:4: '$' x= ID
+ {
+ match('$'); if (state.failed) return ;
+ int xStart1043 = getCharIndex();
+ mID(); if (state.failed) return ;
+ x = new CommonToken(input, Token.INVALID_TOKEN_TYPE, Token.DEFAULT_CHANNEL, xStart1043, getCharIndex()-1);
+ if ( state.backtracking==1 ) {
+
+ chunks.add(getText());
+ generator.issueInvalidAttributeError((x!=null?x.getText():null),
+ enclosingRule,actionToken,
+ outerAltNum);
+
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end ERROR_X
+
+ // $ANTLR start UNKNOWN_SYNTAX
+ public final void mUNKNOWN_SYNTAX() throws RecognitionException {
+ try {
+ int _type = UNKNOWN_SYNTAX;
+ // ActionTranslator.g:773:2: ( '$' | '%' ( ID | '.' | '(' | ')' | ',' | '{' | '}' | '\"' )* )
+ int alt23=2;
+ int LA23_0 = input.LA(1);
+
+ if ( (LA23_0=='$') ) {
+ alt23=1;
+ }
+ else if ( (LA23_0=='%') ) {
+ alt23=2;
+ }
+ else {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ NoViableAltException nvae =
+ new NoViableAltException("", 23, 0, input);
+
+ throw nvae;
+ }
+ switch (alt23) {
+ case 1 :
+ // ActionTranslator.g:773:4: '$'
+ {
+ match('$'); if (state.failed) return ;
+ if ( state.backtracking==1 ) {
+
+ chunks.add(getText());
+ // shouldn't need an error here. Just accept $ if it doesn't look like anything
+
+ }
+
+
+ }
+ break;
+ case 2 :
+ // ActionTranslator.g:778:4: '%' ( ID | '.' | '(' | ')' | ',' | '{' | '}' | '\"' )*
+ {
+ match('%'); if (state.failed) return ;
+ // ActionTranslator.g:778:8: ( ID | '.' | '(' | ')' | ',' | '{' | '}' | '\"' )*
+ loop22:
+ do {
+ int alt22=9;
+ alt22 = dfa22.predict(input);
+ switch (alt22) {
+ case 1 :
+ // ActionTranslator.g:778:9: ID
+ {
+ mID(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 2 :
+ // ActionTranslator.g:778:12: '.'
+ {
+ match('.'); if (state.failed) return ;
+
+
+ }
+ break;
+ case 3 :
+ // ActionTranslator.g:778:16: '('
+ {
+ match('('); if (state.failed) return ;
+
+
+ }
+ break;
+ case 4 :
+ // ActionTranslator.g:778:20: ')'
+ {
+ match(')'); if (state.failed) return ;
+
+
+ }
+ break;
+ case 5 :
+ // ActionTranslator.g:778:24: ','
+ {
+ match(','); if (state.failed) return ;
+
+
+ }
+ break;
+ case 6 :
+ // ActionTranslator.g:778:28: '{'
+ {
+ match('{'); if (state.failed) return ;
+
+
+ }
+ break;
+ case 7 :
+ // ActionTranslator.g:778:32: '}'
+ {
+ match('}'); if (state.failed) return ;
+
+
+ }
+ break;
+ case 8 :
+ // ActionTranslator.g:778:36: '\"'
+ {
+ match('\"'); if (state.failed) return ;
+
+
+ }
+ break;
+
+ default :
+ break loop22;
+ }
+ } while (true);
+
+ if ( state.backtracking==1 ) {
+
+ chunks.add(getText());
+ ErrorManager.grammarError(ErrorManager.MSG_INVALID_TEMPLATE_ACTION,
+ grammar,
+ actionToken,
+ getText());
+
+ }
+
+
+ }
+ break;
+
+ }
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end UNKNOWN_SYNTAX
+
+ // $ANTLR start TEXT
+ public final void mTEXT() throws RecognitionException {
+ try {
+ int _type = TEXT;
+ // ActionTranslator.g:788:5: ( (~ ( '$' | '%' | '\\\\' ) )+ )
+ // ActionTranslator.g:788:7: (~ ( '$' | '%' | '\\\\' ) )+
+ {
+ // ActionTranslator.g:788:7: (~ ( '$' | '%' | '\\\\' ) )+
+ int cnt24=0;
+ loop24:
+ do {
+ int alt24=2;
+ int LA24_0 = input.LA(1);
+
+ if ( ((LA24_0>='\u0000' && LA24_0<='#')||(LA24_0>='&' && LA24_0<='[')||(LA24_0>=']' && LA24_0<='\uFFFE')) ) {
+ alt24=1;
+ }
+
+
+ switch (alt24) {
+ case 1 :
+ // ActionTranslator.g:788:7: ~ ( '$' | '%' | '\\\\' )
+ {
+ if ( (input.LA(1)>='\u0000' && input.LA(1)<='#')||(input.LA(1)>='&' && input.LA(1)<='[')||(input.LA(1)>=']' && input.LA(1)<='\uFFFE') ) {
+ input.consume();
+ state.failed=false;
+ }
+ else {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ MismatchedSetException mse = new MismatchedSetException(null,input);
+ recover(mse);
+ throw mse;}
+
+
+
+ }
+ break;
+
+ default :
+ if ( cnt24 >= 1 ) break loop24;
+ if (state.backtracking>0) {state.failed=true; return ;}
+ EarlyExitException eee =
+ new EarlyExitException(24, input);
+ throw eee;
+ }
+ cnt24++;
+ } while (true);
+
+ if ( state.backtracking==1 ) {
+ chunks.add(getText());
+ }
+
+
+ }
+
+ state.type = _type;
+ }
+ finally {
+ }
+ }
+ // $ANTLR end TEXT
+
+ // $ANTLR start ID
+ public final void mID() throws RecognitionException {
+ try {
+ // ActionTranslator.g:792:5: ( ( 'a' .. 'z' | 'A' .. 'Z' | '_' ) ( 'a' .. 'z' | 'A' .. 'Z' | '_' | '0' .. '9' )* )
+ // ActionTranslator.g:792:9: ( 'a' .. 'z' | 'A' .. 'Z' | '_' ) ( 'a' .. 'z' | 'A' .. 'Z' | '_' | '0' .. '9' )*
+ {
+ if ( (input.LA(1)>='A' && input.LA(1)<='Z')||input.LA(1)=='_'||(input.LA(1)>='a' && input.LA(1)<='z') ) {
+ input.consume();
+ state.failed=false;
+ }
+ else {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ MismatchedSetException mse = new MismatchedSetException(null,input);
+ recover(mse);
+ throw mse;}
+
+ // ActionTranslator.g:792:33: ( 'a' .. 'z' | 'A' .. 'Z' | '_' | '0' .. '9' )*
+ loop25:
+ do {
+ int alt25=2;
+ int LA25_0 = input.LA(1);
+
+ if ( ((LA25_0>='0' && LA25_0<='9')||(LA25_0>='A' && LA25_0<='Z')||LA25_0=='_'||(LA25_0>='a' && LA25_0<='z')) ) {
+ alt25=1;
+ }
+
+
+ switch (alt25) {
+ case 1 :
+ // ActionTranslator.g:
+ {
+ if ( (input.LA(1)>='0' && input.LA(1)<='9')||(input.LA(1)>='A' && input.LA(1)<='Z')||input.LA(1)=='_'||(input.LA(1)>='a' && input.LA(1)<='z') ) {
+ input.consume();
+ state.failed=false;
+ }
+ else {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ MismatchedSetException mse = new MismatchedSetException(null,input);
+ recover(mse);
+ throw mse;}
+
+
+ }
+ break;
+
+ default :
+ break loop25;
+ }
+ } while (true);
+
+
+
+ }
+
+ }
+ finally {
+ }
+ }
+ // $ANTLR end ID
+
+ // $ANTLR start INT
+ public final void mINT() throws RecognitionException {
+ try {
+ // ActionTranslator.g:796:5: ( ( '0' .. '9' )+ )
+ // ActionTranslator.g:796:7: ( '0' .. '9' )+
+ {
+ // ActionTranslator.g:796:7: ( '0' .. '9' )+
+ int cnt26=0;
+ loop26:
+ do {
+ int alt26=2;
+ int LA26_0 = input.LA(1);
+
+ if ( ((LA26_0>='0' && LA26_0<='9')) ) {
+ alt26=1;
+ }
+
+
+ switch (alt26) {
+ case 1 :
+ // ActionTranslator.g:796:7: '0' .. '9'
+ {
+ matchRange('0','9'); if (state.failed) return ;
+
+
+ }
+ break;
+
+ default :
+ if ( cnt26 >= 1 ) break loop26;
+ if (state.backtracking>0) {state.failed=true; return ;}
+ EarlyExitException eee =
+ new EarlyExitException(26, input);
+ throw eee;
+ }
+ cnt26++;
+ } while (true);
+
+
+
+ }
+
+ }
+ finally {
+ }
+ }
+ // $ANTLR end INT
+
+ // $ANTLR start WS
+ public final void mWS() throws RecognitionException {
+ try {
+ // ActionTranslator.g:800:4: ( ( ' ' | '\\t' | '\\n' | '\\r' )+ )
+ // ActionTranslator.g:800:6: ( ' ' | '\\t' | '\\n' | '\\r' )+
+ {
+ // ActionTranslator.g:800:6: ( ' ' | '\\t' | '\\n' | '\\r' )+
+ int cnt27=0;
+ loop27:
+ do {
+ int alt27=2;
+ int LA27_0 = input.LA(1);
+
+ if ( ((LA27_0>='\t' && LA27_0<='\n')||LA27_0=='\r'||LA27_0==' ') ) {
+ alt27=1;
+ }
+
+
+ switch (alt27) {
+ case 1 :
+ // ActionTranslator.g:
+ {
+ if ( (input.LA(1)>='\t' && input.LA(1)<='\n')||input.LA(1)=='\r'||input.LA(1)==' ' ) {
+ input.consume();
+ state.failed=false;
+ }
+ else {
+ if (state.backtracking>0) {state.failed=true; return ;}
+ MismatchedSetException mse = new MismatchedSetException(null,input);
+ recover(mse);
+ throw mse;}
+
+
+ }
+ break;
+
+ default :
+ if ( cnt27 >= 1 ) break loop27;
+ if (state.backtracking>0) {state.failed=true; return ;}
+ EarlyExitException eee =
+ new EarlyExitException(27, input);
+ throw eee;
+ }
+ cnt27++;
+ } while (true);
+
+
+
+ }
+
+ }
+ finally {
+ }
+ }
+ // $ANTLR end WS
+
+ public void mTokens() throws RecognitionException {
+ // ActionTranslator.g:1:39: ( SET_ENCLOSING_RULE_SCOPE_ATTR | ENCLOSING_RULE_SCOPE_ATTR | SET_TOKEN_SCOPE_ATTR | TOKEN_SCOPE_ATTR | SET_RULE_SCOPE_ATTR | RULE_SCOPE_ATTR | LABEL_REF | ISOLATED_TOKEN_REF | ISOLATED_LEXER_RULE_REF | SET_LOCAL_ATTR | LOCAL_ATTR | SET_DYNAMIC_SCOPE_ATTR | DYNAMIC_SCOPE_ATTR | ERROR_SCOPED_XY | DYNAMIC_NEGATIVE_INDEXED_SCOPE_ATTR | DYNAMIC_ABSOLUTE_INDEXED_SCOPE_ATTR | ISOLATED_DYNAMIC_SCOPE | TEMPLATE_INSTANCE | INDIRECT_TEMPLATE_INSTANCE | SET_EXPR_ATTRIBUTE | SET_ATTRIBUTE | TEMPLATE_EXPR | ESC | ERROR_XY | ERROR_X | UNKNOWN_SYNTAX | TEXT )
+ int alt28=27;
+ alt28 = dfa28.predict(input);
+ switch (alt28) {
+ case 1 :
+ // ActionTranslator.g:1:41: SET_ENCLOSING_RULE_SCOPE_ATTR
+ {
+ mSET_ENCLOSING_RULE_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 2 :
+ // ActionTranslator.g:1:71: ENCLOSING_RULE_SCOPE_ATTR
+ {
+ mENCLOSING_RULE_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 3 :
+ // ActionTranslator.g:1:97: SET_TOKEN_SCOPE_ATTR
+ {
+ mSET_TOKEN_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 4 :
+ // ActionTranslator.g:1:118: TOKEN_SCOPE_ATTR
+ {
+ mTOKEN_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 5 :
+ // ActionTranslator.g:1:135: SET_RULE_SCOPE_ATTR
+ {
+ mSET_RULE_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 6 :
+ // ActionTranslator.g:1:155: RULE_SCOPE_ATTR
+ {
+ mRULE_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 7 :
+ // ActionTranslator.g:1:171: LABEL_REF
+ {
+ mLABEL_REF(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 8 :
+ // ActionTranslator.g:1:181: ISOLATED_TOKEN_REF
+ {
+ mISOLATED_TOKEN_REF(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 9 :
+ // ActionTranslator.g:1:200: ISOLATED_LEXER_RULE_REF
+ {
+ mISOLATED_LEXER_RULE_REF(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 10 :
+ // ActionTranslator.g:1:224: SET_LOCAL_ATTR
+ {
+ mSET_LOCAL_ATTR(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 11 :
+ // ActionTranslator.g:1:239: LOCAL_ATTR
+ {
+ mLOCAL_ATTR(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 12 :
+ // ActionTranslator.g:1:250: SET_DYNAMIC_SCOPE_ATTR
+ {
+ mSET_DYNAMIC_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 13 :
+ // ActionTranslator.g:1:273: DYNAMIC_SCOPE_ATTR
+ {
+ mDYNAMIC_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 14 :
+ // ActionTranslator.g:1:292: ERROR_SCOPED_XY
+ {
+ mERROR_SCOPED_XY(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 15 :
+ // ActionTranslator.g:1:308: DYNAMIC_NEGATIVE_INDEXED_SCOPE_ATTR
+ {
+ mDYNAMIC_NEGATIVE_INDEXED_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 16 :
+ // ActionTranslator.g:1:344: DYNAMIC_ABSOLUTE_INDEXED_SCOPE_ATTR
+ {
+ mDYNAMIC_ABSOLUTE_INDEXED_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 17 :
+ // ActionTranslator.g:1:380: ISOLATED_DYNAMIC_SCOPE
+ {
+ mISOLATED_DYNAMIC_SCOPE(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 18 :
+ // ActionTranslator.g:1:403: TEMPLATE_INSTANCE
+ {
+ mTEMPLATE_INSTANCE(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 19 :
+ // ActionTranslator.g:1:421: INDIRECT_TEMPLATE_INSTANCE
+ {
+ mINDIRECT_TEMPLATE_INSTANCE(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 20 :
+ // ActionTranslator.g:1:448: SET_EXPR_ATTRIBUTE
+ {
+ mSET_EXPR_ATTRIBUTE(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 21 :
+ // ActionTranslator.g:1:467: SET_ATTRIBUTE
+ {
+ mSET_ATTRIBUTE(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 22 :
+ // ActionTranslator.g:1:481: TEMPLATE_EXPR
+ {
+ mTEMPLATE_EXPR(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 23 :
+ // ActionTranslator.g:1:495: ESC
+ {
+ mESC(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 24 :
+ // ActionTranslator.g:1:499: ERROR_XY
+ {
+ mERROR_XY(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 25 :
+ // ActionTranslator.g:1:508: ERROR_X
+ {
+ mERROR_X(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 26 :
+ // ActionTranslator.g:1:516: UNKNOWN_SYNTAX
+ {
+ mUNKNOWN_SYNTAX(); if (state.failed) return ;
+
+
+ }
+ break;
+ case 27 :
+ // ActionTranslator.g:1:531: TEXT
+ {
+ mTEXT(); if (state.failed) return ;
+
+
+ }
+ break;
+
+ }
+
+ }
+
+ // $ANTLR start synpred1_ActionTranslator
+ public final void synpred1_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:41: ( SET_ENCLOSING_RULE_SCOPE_ATTR )
+ // ActionTranslator.g:1:41: SET_ENCLOSING_RULE_SCOPE_ATTR
+ {
+ mSET_ENCLOSING_RULE_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred1_ActionTranslator
+
+ // $ANTLR start synpred2_ActionTranslator
+ public final void synpred2_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:71: ( ENCLOSING_RULE_SCOPE_ATTR )
+ // ActionTranslator.g:1:71: ENCLOSING_RULE_SCOPE_ATTR
+ {
+ mENCLOSING_RULE_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred2_ActionTranslator
+
+ // $ANTLR start synpred3_ActionTranslator
+ public final void synpred3_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:97: ( SET_TOKEN_SCOPE_ATTR )
+ // ActionTranslator.g:1:97: SET_TOKEN_SCOPE_ATTR
+ {
+ mSET_TOKEN_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred3_ActionTranslator
+
+ // $ANTLR start synpred4_ActionTranslator
+ public final void synpred4_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:118: ( TOKEN_SCOPE_ATTR )
+ // ActionTranslator.g:1:118: TOKEN_SCOPE_ATTR
+ {
+ mTOKEN_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred4_ActionTranslator
+
+ // $ANTLR start synpred5_ActionTranslator
+ public final void synpred5_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:135: ( SET_RULE_SCOPE_ATTR )
+ // ActionTranslator.g:1:135: SET_RULE_SCOPE_ATTR
+ {
+ mSET_RULE_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred5_ActionTranslator
+
+ // $ANTLR start synpred6_ActionTranslator
+ public final void synpred6_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:155: ( RULE_SCOPE_ATTR )
+ // ActionTranslator.g:1:155: RULE_SCOPE_ATTR
+ {
+ mRULE_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred6_ActionTranslator
+
+ // $ANTLR start synpred7_ActionTranslator
+ public final void synpred7_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:171: ( LABEL_REF )
+ // ActionTranslator.g:1:171: LABEL_REF
+ {
+ mLABEL_REF(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred7_ActionTranslator
+
+ // $ANTLR start synpred8_ActionTranslator
+ public final void synpred8_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:181: ( ISOLATED_TOKEN_REF )
+ // ActionTranslator.g:1:181: ISOLATED_TOKEN_REF
+ {
+ mISOLATED_TOKEN_REF(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred8_ActionTranslator
+
+ // $ANTLR start synpred9_ActionTranslator
+ public final void synpred9_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:200: ( ISOLATED_LEXER_RULE_REF )
+ // ActionTranslator.g:1:200: ISOLATED_LEXER_RULE_REF
+ {
+ mISOLATED_LEXER_RULE_REF(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred9_ActionTranslator
+
+ // $ANTLR start synpred10_ActionTranslator
+ public final void synpred10_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:224: ( SET_LOCAL_ATTR )
+ // ActionTranslator.g:1:224: SET_LOCAL_ATTR
+ {
+ mSET_LOCAL_ATTR(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred10_ActionTranslator
+
+ // $ANTLR start synpred11_ActionTranslator
+ public final void synpred11_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:239: ( LOCAL_ATTR )
+ // ActionTranslator.g:1:239: LOCAL_ATTR
+ {
+ mLOCAL_ATTR(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred11_ActionTranslator
+
+ // $ANTLR start synpred12_ActionTranslator
+ public final void synpred12_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:250: ( SET_DYNAMIC_SCOPE_ATTR )
+ // ActionTranslator.g:1:250: SET_DYNAMIC_SCOPE_ATTR
+ {
+ mSET_DYNAMIC_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred12_ActionTranslator
+
+ // $ANTLR start synpred13_ActionTranslator
+ public final void synpred13_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:273: ( DYNAMIC_SCOPE_ATTR )
+ // ActionTranslator.g:1:273: DYNAMIC_SCOPE_ATTR
+ {
+ mDYNAMIC_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred13_ActionTranslator
+
+ // $ANTLR start synpred14_ActionTranslator
+ public final void synpred14_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:292: ( ERROR_SCOPED_XY )
+ // ActionTranslator.g:1:292: ERROR_SCOPED_XY
+ {
+ mERROR_SCOPED_XY(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred14_ActionTranslator
+
+ // $ANTLR start synpred15_ActionTranslator
+ public final void synpred15_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:308: ( DYNAMIC_NEGATIVE_INDEXED_SCOPE_ATTR )
+ // ActionTranslator.g:1:308: DYNAMIC_NEGATIVE_INDEXED_SCOPE_ATTR
+ {
+ mDYNAMIC_NEGATIVE_INDEXED_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred15_ActionTranslator
+
+ // $ANTLR start synpred16_ActionTranslator
+ public final void synpred16_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:344: ( DYNAMIC_ABSOLUTE_INDEXED_SCOPE_ATTR )
+ // ActionTranslator.g:1:344: DYNAMIC_ABSOLUTE_INDEXED_SCOPE_ATTR
+ {
+ mDYNAMIC_ABSOLUTE_INDEXED_SCOPE_ATTR(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred16_ActionTranslator
+
+ // $ANTLR start synpred17_ActionTranslator
+ public final void synpred17_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:380: ( ISOLATED_DYNAMIC_SCOPE )
+ // ActionTranslator.g:1:380: ISOLATED_DYNAMIC_SCOPE
+ {
+ mISOLATED_DYNAMIC_SCOPE(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred17_ActionTranslator
+
+ // $ANTLR start synpred18_ActionTranslator
+ public final void synpred18_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:403: ( TEMPLATE_INSTANCE )
+ // ActionTranslator.g:1:403: TEMPLATE_INSTANCE
+ {
+ mTEMPLATE_INSTANCE(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred18_ActionTranslator
+
+ // $ANTLR start synpred19_ActionTranslator
+ public final void synpred19_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:421: ( INDIRECT_TEMPLATE_INSTANCE )
+ // ActionTranslator.g:1:421: INDIRECT_TEMPLATE_INSTANCE
+ {
+ mINDIRECT_TEMPLATE_INSTANCE(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred19_ActionTranslator
+
+ // $ANTLR start synpred20_ActionTranslator
+ public final void synpred20_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:448: ( SET_EXPR_ATTRIBUTE )
+ // ActionTranslator.g:1:448: SET_EXPR_ATTRIBUTE
+ {
+ mSET_EXPR_ATTRIBUTE(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred20_ActionTranslator
+
+ // $ANTLR start synpred21_ActionTranslator
+ public final void synpred21_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:467: ( SET_ATTRIBUTE )
+ // ActionTranslator.g:1:467: SET_ATTRIBUTE
+ {
+ mSET_ATTRIBUTE(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred21_ActionTranslator
+
+ // $ANTLR start synpred22_ActionTranslator
+ public final void synpred22_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:481: ( TEMPLATE_EXPR )
+ // ActionTranslator.g:1:481: TEMPLATE_EXPR
+ {
+ mTEMPLATE_EXPR(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred22_ActionTranslator
+
+ // $ANTLR start synpred24_ActionTranslator
+ public final void synpred24_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:499: ( ERROR_XY )
+ // ActionTranslator.g:1:499: ERROR_XY
+ {
+ mERROR_XY(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred24_ActionTranslator
+
+ // $ANTLR start synpred25_ActionTranslator
+ public final void synpred25_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:508: ( ERROR_X )
+ // ActionTranslator.g:1:508: ERROR_X
+ {
+ mERROR_X(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred25_ActionTranslator
+
+ // $ANTLR start synpred26_ActionTranslator
+ public final void synpred26_ActionTranslator_fragment() throws RecognitionException {
+ // ActionTranslator.g:1:516: ( UNKNOWN_SYNTAX )
+ // ActionTranslator.g:1:516: UNKNOWN_SYNTAX
+ {
+ mUNKNOWN_SYNTAX(); if (state.failed) return ;
+
+
+ }
+ }
+ // $ANTLR end synpred26_ActionTranslator
+
+ public final boolean synpred19_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred19_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred16_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred16_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred25_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred25_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred17_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred17_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred1_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred1_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred10_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred10_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred24_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred24_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred15_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred15_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred11_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred11_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred18_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred18_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred21_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred21_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred3_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred3_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred26_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred26_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred9_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred9_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred2_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred2_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred4_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred4_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred22_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred22_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred5_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred5_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred6_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred6_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred7_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred7_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred12_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred12_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred8_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred8_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred13_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred13_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred20_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred20_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+ public final boolean synpred14_ActionTranslator() {
+ state.backtracking++;
+ int start = input.mark();
+ try {
+ synpred14_ActionTranslator_fragment(); // can never throw exception
+ } catch (RecognitionException re) {
+ System.err.println("impossible: "+re);
+ }
+ boolean success = !state.failed;
+ input.rewind(start);
+ state.backtracking--;
+ state.failed=false;
+ return success;
+ }
+
+
+ protected DFA22 dfa22 = new DFA22(this);
+ protected DFA28 dfa28 = new DFA28(this);
+ static final String DFA22_eotS =
+ "\1\1\11\uffff";
+ static final String DFA22_eofS =
+ "\12\uffff";
+ static final String DFA22_minS =
+ "\1\42\11\uffff";
+ static final String DFA22_maxS =
+ "\1\175\11\uffff";
+ static final String DFA22_acceptS =
+ "\1\uffff\1\11\1\1\1\2\1\3\1\4\1\5\1\6\1\7\1\10";
+ static final String DFA22_specialS =
+ "\12\uffff}>";
+ static final String[] DFA22_transitionS = {
+ "\1\11\5\uffff\1\4\1\5\2\uffff\1\6\1\uffff\1\3\22\uffff\32\2"+
+ "\4\uffff\1\2\1\uffff\32\2\1\7\1\uffff\1\10",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ ""
+ };
+
+ static final short[] DFA22_eot = DFA.unpackEncodedString(DFA22_eotS);
+ static final short[] DFA22_eof = DFA.unpackEncodedString(DFA22_eofS);
+ static final char[] DFA22_min = DFA.unpackEncodedStringToUnsignedChars(DFA22_minS);
+ static final char[] DFA22_max = DFA.unpackEncodedStringToUnsignedChars(DFA22_maxS);
+ static final short[] DFA22_accept = DFA.unpackEncodedString(DFA22_acceptS);
+ static final short[] DFA22_special = DFA.unpackEncodedString(DFA22_specialS);
+ static final short[][] DFA22_transition;
+
+ static {
+ int numStates = DFA22_transitionS.length;
+ DFA22_transition = new short[numStates][];
+ for (int i=0; i";
+ static final String[] DFA28_transitionS = {
+ "\44\10\1\12\1\1\66\10\1\11\uffa2\10",
+ "\1\uffff",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "\1\uffff",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ "",
+ ""
+ };
+
+ static final short[] DFA28_eot = DFA.unpackEncodedString(DFA28_eotS);
+ static final short[] DFA28_eof = DFA.unpackEncodedString(DFA28_eofS);
+ static final char[] DFA28_min = DFA.unpackEncodedStringToUnsignedChars(DFA28_minS);
+ static final char[] DFA28_max = DFA.unpackEncodedStringToUnsignedChars(DFA28_maxS);
+ static final short[] DFA28_accept = DFA.unpackEncodedString(DFA28_acceptS);
+ static final short[] DFA28_special = DFA.unpackEncodedString(DFA28_specialS);
+ static final short[][] DFA28_transition;
+
+ static {
+ int numStates = DFA28_transitionS.length;
+ DFA28_transition = new short[numStates][];
+ for (int i=0; i=0 ) return s;
+ break;
+ case 1 :
+ int LA28_10 = input.LA(1);
+
+
+ int index28_10 = input.index();
+ input.rewind();
+ s = -1;
+ if ( (synpred1_ActionTranslator()) ) {s = 11;}
+
+ else if ( (synpred2_ActionTranslator()) ) {s = 12;}
+
+ else if ( (synpred3_ActionTranslator()) ) {s = 13;}
+
+ else if ( (synpred4_ActionTranslator()) ) {s = 14;}
+
+ else if ( (synpred5_ActionTranslator()) ) {s = 15;}
+
+ else if ( (synpred6_ActionTranslator()) ) {s = 16;}
+
+ else if ( (synpred7_ActionTranslator()) ) {s = 17;}
+
+ else if ( (synpred8_ActionTranslator()) ) {s = 18;}
+
+ else if ( (synpred9_ActionTranslator()) ) {s = 19;}
+
+ else if ( (synpred10_ActionTranslator()) ) {s = 20;}
+
+ else if ( (synpred11_ActionTranslator()) ) {s = 21;}
+
+ else if ( (synpred12_ActionTranslator()) ) {s = 22;}
+
+ else if ( (synpred13_ActionTranslator()) ) {s = 23;}
+
+ else if ( (synpred14_ActionTranslator()) ) {s = 24;}
+
+ else if ( (synpred15_ActionTranslator()) ) {s = 25;}
+
+ else if ( (synpred16_ActionTranslator()) ) {s = 26;}
+
+ else if ( (synpred17_ActionTranslator()) ) {s = 27;}
+
+ else if ( (synpred24_ActionTranslator()) ) {s = 28;}
+
+ else if ( (synpred25_ActionTranslator()) ) {s = 29;}
+
+ else if ( (synpred26_ActionTranslator()) ) {s = 7;}
+
+
+ input.seek(index28_10);
+ if ( s>=0 ) return s;
+ break;
+ }
+ if (state.backtracking>0) {state.failed=true; return -1;}
+ NoViableAltException nvae =
+ new NoViableAltException(getDescription(), 28, _s, input);
+ error(nvae);
+ throw nvae;
+ }
+ }
+
+
+}
\ No newline at end of file
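The `synpredN_ActionTranslator` pairs above implement ANTLR 3's backtracking predicates: the boolean wrapper marks the input, runs the `_fragment` method (which signals failure through `state.failed` instead of throwing), then rewinds the stream and clears the flag so the speculation consumes nothing. A minimal standalone sketch of that mark/rewind pattern — all names here are illustrative, not the generated API:

```java
// Minimal sketch of the mark/rewind backtracking pattern used by the
// generated synpredN methods. Names are illustrative only.
public class BacktrackDemo {
    private final String input;
    private int pos = 0;           // current lookahead position
    private boolean failed = false;

    public BacktrackDemo(String input) { this.input = input; }

    // "Fragment": try to match a literal; set failed instead of throwing.
    private void matchLiteral(String lit) {
        if (input.startsWith(lit, pos)) { pos += lit.length(); }
        else { failed = true; }
    }

    // "Predicate": speculate, then restore both the input position and
    // the failed flag, exactly like synpredN_ActionTranslator does with
    // input.mark()/input.rewind() and state.failed.
    public boolean canMatch(String lit) {
        int mark = pos;            // input.mark()
        matchLiteral(lit);
        boolean success = !failed;
        pos = mark;                // input.rewind(mark)
        failed = false;
        return success;
    }
}
```

Because the position is always restored, a failed speculation leaves the stream untouched and the next alternative can be tried from the same spot.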
diff --git a/antlr_3_1_source/codegen/ActionTranslator.tokens b/antlr_3_1_source/codegen/ActionTranslator.tokens
new file mode 100644
index 0000000..6f4cb83
--- /dev/null
+++ b/antlr_3_1_source/codegen/ActionTranslator.tokens
@@ -0,0 +1,34 @@
+LOCAL_ATTR=17
+SET_DYNAMIC_SCOPE_ATTR=18
+ISOLATED_DYNAMIC_SCOPE=24
+WS=5
+UNKNOWN_SYNTAX=35
+DYNAMIC_ABSOLUTE_INDEXED_SCOPE_ATTR=23
+SCOPE_INDEX_EXPR=21
+DYNAMIC_SCOPE_ATTR=19
+ISOLATED_TOKEN_REF=14
+SET_ATTRIBUTE=30
+SET_EXPR_ATTRIBUTE=29
+ACTION=27
+ERROR_X=34
+TEMPLATE_INSTANCE=26
+TOKEN_SCOPE_ATTR=10
+ISOLATED_LEXER_RULE_REF=15
+ESC=32
+SET_ENCLOSING_RULE_SCOPE_ATTR=7
+ATTR_VALUE_EXPR=6
+RULE_SCOPE_ATTR=12
+LABEL_REF=13
+INT=37
+ARG=25
+SET_LOCAL_ATTR=16
+TEXT=36
+DYNAMIC_NEGATIVE_INDEXED_SCOPE_ATTR=22
+SET_TOKEN_SCOPE_ATTR=9
+ERROR_SCOPED_XY=20
+SET_RULE_SCOPE_ATTR=11
+ENCLOSING_RULE_SCOPE_ATTR=8
+ERROR_XY=33
+TEMPLATE_EXPR=31
+INDIRECT_TEMPLATE_INSTANCE=28
+ID=4
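The `.tokens` file above is a flat `NAME=type` listing that other grammars import to keep token types in sync. A hypothetical reader for this body format (ANTLR's own loader also handles quoted-literal entries, which this sketch ignores):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative parser for the NAME=number lines of a .tokens file.
// Not ANTLR's own loader; quoted-literal entries ('...'=N) are skipped here.
public class TokensFileDemo {
    public static Map<String, Integer> parse(String body) {
        Map<String, Integer> types = new HashMap<>();
        for (String line : body.split("\n")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("'")) continue;
            int eq = line.lastIndexOf('=');   // token names contain no '='
            types.put(line.substring(0, eq),
                      Integer.parseInt(line.substring(eq + 1)));
        }
        return types;
    }
}
```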
diff --git a/antlr_3_1_source/codegen/CPPTarget.java b/antlr_3_1_source/codegen/CPPTarget.java
new file mode 100644
index 0000000..2bfafbd
--- /dev/null
+++ b/antlr_3_1_source/codegen/CPPTarget.java
@@ -0,0 +1,140 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.codegen;
+
+import org.antlr.stringtemplate.StringTemplate;
+import org.antlr.stringtemplate.StringTemplateGroup;
+import org.antlr.tool.Grammar;
+import org.antlr.Tool;
+
+import java.io.IOException;
+
+public class CPPTarget extends Target {
+
+ public String escapeChar( int c ) {
+ // System.out.println("CPPTarget.escapeChar("+c+")");
+ switch (c) {
+ case '\n' : return "\\n";
+ case '\t' : return "\\t";
+ case '\r' : return "\\r";
+ case '\\' : return "\\\\";
+ case '\'' : return "\\'";
+ case '"' : return "\\\"";
+ default :
+ if ( c < ' ' || c > 126 )
+ {
+ if (c > 255)
+ {
+ String s = Integer.toString(c,16);
+ // put leading zeroes in front of the thing..
+ while( s.length() < 4 )
+ s = '0' + s;
+ return "\\u" + s;
+ }
+ else {
+ return "\\" + Integer.toString(c,8);
+ }
+ }
+ else {
+ return String.valueOf((char)c);
+ }
+ }
+ }
+
+	/** Converts a String into a representation that can be used as a literal
+ * when surrounded by double-quotes.
+ *
+ * Used for escaping semantic predicate strings for exceptions.
+ *
+ * @param s The String to be changed into a literal
+ */
+ public String escapeString(String s)
+ {
+ StringBuffer retval = new StringBuffer();
+ for (int i = 0; i < s.length(); i++) {
+ retval.append(escapeChar(s.charAt(i)));
+ }
+
+ return retval.toString();
+ }
+
+ protected void genRecognizerHeaderFile(Tool tool,
+ CodeGenerator generator,
+ Grammar grammar,
+ StringTemplate headerFileST,
+ String extName)
+ throws IOException
+ {
+ StringTemplateGroup templates = generator.getTemplates();
+ generator.write(headerFileST, grammar.name+extName);
+ }
+
+ /** Convert from an ANTLR char literal found in a grammar file to
+ * an equivalent char literal in the target language. For Java, this
+	 * is the identity translation; i.e., '\n' -> '\n'. Most languages
+ * will be able to use this 1-to-1 mapping. Expect single quotes
+ * around the incoming literal.
+	 * Depending on the char vocabulary, the char literal should be prefixed with an 'L'.
+ */
+ public String getTargetCharLiteralFromANTLRCharLiteral( CodeGenerator codegen, String literal) {
+ int c = Grammar.getCharValueFromGrammarCharLiteral(literal);
+ String prefix = "'";
+ if( codegen.grammar.getMaxCharValue() > 255 )
+ prefix = "L'";
+ else if( (c & 0x80) != 0 ) // if in char mode prevent sign extensions
+ return ""+c;
+ return prefix+escapeChar(c)+"'";
+ }
+
+ /** Convert from an ANTLR string literal found in a grammar file to
+ * an equivalent string literal in the target language. For Java, this
+	 * is the identity translation; i.e., "\"\n" -> "\"\n". Most languages
+ * will be able to use this 1-to-1 mapping. Expect double quotes
+ * around the incoming literal.
+	 * Depending on the char vocabulary, the string should be prefixed with an 'L'.
+ */
+ public String getTargetStringLiteralFromANTLRStringLiteral( CodeGenerator codegen, String literal) {
+ StringBuffer buf = Grammar.getUnescapedStringFromGrammarStringLiteral(literal);
+ String prefix = "\"";
+ if( codegen.grammar.getMaxCharValue() > 255 )
+ prefix = "L\"";
+ return prefix+escapeString(buf.toString())+"\"";
+ }
+ /** Character constants get truncated to this value.
+ * TODO: This should be derived from the charVocabulary. Depending on it
+ * being 255 or 0xFFFF the templates should generate normal character
+ * constants or multibyte ones.
+ */
+ public int getMaxCharValue( CodeGenerator codegen ) {
+ int maxval = 255; // codegen.grammar.get????();
+ if ( maxval <= 255 )
+ return 255;
+ else
+ return maxval;
+ }
+}
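The escape logic in `CPPTarget.escapeChar` above has three tiers: named escapes for the usual control/quote characters, pass-through for printable ASCII, zero-padded `\uXXXX` for code points above 255, and octal for the remaining control and high-byte values. A self-contained restatement of that decision tree, for illustration only:

```java
// Standalone restatement of the escape tiers in CPPTarget.escapeChar,
// written here purely to illustrate the decision order.
public class EscapeDemo {
    public static String escapeChar(int c) {
        switch (c) {
            case '\n': return "\\n";
            case '\t': return "\\t";
            case '\r': return "\\r";
            case '\\': return "\\\\";
            case '\'': return "\\'";
            case '"':  return "\\\"";
        }
        if (c >= ' ' && c <= 126) return String.valueOf((char) c); // printable ASCII
        if (c > 255) {
            String s = Integer.toString(c, 16);
            while (s.length() < 4) s = '0' + s;  // pad to \uXXXX
            return "\\u" + s;
        }
        return "\\" + Integer.toString(c, 8);    // control/high-byte: octal
    }
}
```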
diff --git a/antlr_3_1_source/codegen/CSharp2Target.java b/antlr_3_1_source/codegen/CSharp2Target.java
new file mode 100644
index 0000000..05e4fd8
--- /dev/null
+++ b/antlr_3_1_source/codegen/CSharp2Target.java
@@ -0,0 +1,57 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2006 Kunle Odutola
+ Copyright (c) 2005 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.codegen;
+
+import org.antlr.Tool;
+import org.antlr.stringtemplate.StringTemplate;
+import org.antlr.tool.Grammar;
+
+public class CSharp2Target extends Target
+{
+ protected StringTemplate chooseWhereCyclicDFAsGo(Tool tool,
+ CodeGenerator generator,
+ Grammar grammar,
+ StringTemplate recognizerST,
+ StringTemplate cyclicDFAST)
+ {
+ return recognizerST;
+ }
+
+ public String encodeIntAsCharEscape(int v)
+ {
+ if (v <= 127)
+ {
+ String hex1 = Integer.toHexString(v | 0x10000).substring(3, 5);
+ return "\\x" + hex1;
+ }
+ String hex = Integer.toHexString(v | 0x10000).substring(1, 5);
+ return "\\u" + hex;
+ }
+}
+
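The `v | 0x10000` expression in `encodeIntAsCharEscape` above is a padding trick: OR-ing with `0x10000` forces `Integer.toHexString` to emit exactly five digits (`1xxxx`), so a fixed `substring` yields a zero-padded two- or four-digit escape with no explicit padding loop. A small demonstration of the same arithmetic:

```java
// Demonstrates the `v | 0x10000` zero-padding trick used by
// CSharp2Target.encodeIntAsCharEscape: toHexString(v | 0x10000) is
// always five digits, so substring gives a padded \xNN or \uNNNN.
public class CharEscapeDemo {
    public static String encode(int v) {
        if (v <= 127) {
            return "\\x" + Integer.toHexString(v | 0x10000).substring(3, 5);
        }
        return "\\u" + Integer.toHexString(v | 0x10000).substring(1, 5);
    }
}
```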
diff --git a/antlr_3_1_source/codegen/CSharpTarget.java b/antlr_3_1_source/codegen/CSharpTarget.java
new file mode 100644
index 0000000..ffcf2d9
--- /dev/null
+++ b/antlr_3_1_source/codegen/CSharpTarget.java
@@ -0,0 +1,57 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2006 Kunle Odutola
+ Copyright (c) 2005 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.codegen;
+
+import org.antlr.Tool;
+import org.antlr.stringtemplate.StringTemplate;
+import org.antlr.tool.Grammar;
+
+public class CSharpTarget extends Target
+{
+ protected StringTemplate chooseWhereCyclicDFAsGo(Tool tool,
+ CodeGenerator generator,
+ Grammar grammar,
+ StringTemplate recognizerST,
+ StringTemplate cyclicDFAST)
+ {
+ return recognizerST;
+ }
+
+ public String encodeIntAsCharEscape(int v)
+ {
+ if (v <= 127)
+ {
+ String hex1 = Integer.toHexString(v | 0x10000).substring(3, 5);
+ return "\\x" + hex1;
+ }
+ String hex = Integer.toHexString(v | 0x10000).substring(1, 5);
+ return "\\u" + hex;
+ }
+}
+
diff --git a/antlr_3_1_source/codegen/CTarget.java b/antlr_3_1_source/codegen/CTarget.java
new file mode 100644
index 0000000..c10b3da
--- /dev/null
+++ b/antlr_3_1_source/codegen/CTarget.java
@@ -0,0 +1,247 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.codegen;
+
+import org.antlr.Tool;
+import org.antlr.stringtemplate.StringTemplate;
+import org.antlr.tool.Grammar;
+
+import java.io.IOException;
+import java.util.ArrayList;
+
+public class CTarget extends Target {
+
+ ArrayList strings = new ArrayList();
+
+ protected void genRecognizerFile(Tool tool,
+ CodeGenerator generator,
+ Grammar grammar,
+ StringTemplate outputFileST)
+ throws IOException
+ {
+
+ // Before we write this, and cause it to generate its string,
+ // we need to add all the string literals that we are going to match
+ //
+ outputFileST.setAttribute("literals", strings);
+ String fileName = generator.getRecognizerFileName(grammar.name, grammar.type);
+ System.out.println("Generating " + fileName);
+ generator.write(outputFileST, fileName);
+ }
+
+ protected void genRecognizerHeaderFile(Tool tool,
+ CodeGenerator generator,
+ Grammar grammar,
+ StringTemplate headerFileST,
+ String extName)
+ throws IOException
+ {
+        // Pick up the file name we are generating. This method will return
+        // a file suffixed with .c, so we must substring and add the extName
+ // to it as we cannot assign into strings in Java.
+ ///
+ String fileName = generator.getRecognizerFileName(grammar.name, grammar.type);
+ fileName = fileName.substring(0, fileName.length()-2) + extName;
+
+ System.out.println("Generating " + fileName);
+ generator.write(headerFileST, fileName);
+ }
+
+ protected StringTemplate chooseWhereCyclicDFAsGo(Tool tool,
+ CodeGenerator generator,
+ Grammar grammar,
+ StringTemplate recognizerST,
+ StringTemplate cyclicDFAST)
+ {
+ return recognizerST;
+ }
+
+ /** Is scope in @scope::name {action} valid for this kind of grammar?
+ * Targets like C++ may want to allow new scopes like headerfile or
+ * some such. The action names themselves are not policed at the
+ * moment so targets can add template actions w/o having to recompile
+ * ANTLR.
+ */
+ public boolean isValidActionScope(int grammarType, String scope) {
+ switch (grammarType) {
+ case Grammar.LEXER :
+ if ( scope.equals("lexer") ) {return true;}
+ if ( scope.equals("header") ) {return true;}
+ if ( scope.equals("includes") ) {return true;}
+ if ( scope.equals("preincludes") ) {return true;}
+ if ( scope.equals("overrides") ) {return true;}
+ break;
+ case Grammar.PARSER :
+ if ( scope.equals("parser") ) {return true;}
+ if ( scope.equals("header") ) {return true;}
+ if ( scope.equals("includes") ) {return true;}
+ if ( scope.equals("preincludes") ) {return true;}
+ if ( scope.equals("overrides") ) {return true;}
+ break;
+ case Grammar.COMBINED :
+ if ( scope.equals("parser") ) {return true;}
+ if ( scope.equals("lexer") ) {return true;}
+ if ( scope.equals("header") ) {return true;}
+ if ( scope.equals("includes") ) {return true;}
+ if ( scope.equals("preincludes") ) {return true;}
+ if ( scope.equals("overrides") ) {return true;}
+ break;
+ case Grammar.TREE_PARSER :
+ if ( scope.equals("treeparser") ) {return true;}
+ if ( scope.equals("header") ) {return true;}
+ if ( scope.equals("includes") ) {return true;}
+ if ( scope.equals("preincludes") ) {return true;}
+ if ( scope.equals("overrides") ) {return true;}
+ break;
+ }
+ return false;
+ }
+
+ public String getTargetCharLiteralFromANTLRCharLiteral(
+ CodeGenerator generator,
+ String literal)
+ {
+
+ if (literal.startsWith("'\\u") )
+ {
+ literal = "0x" +literal.substring(3, 7);
+ }
+ else
+ {
+ int c = literal.charAt(1);
+
+ if (c < 32 || c > 127) {
+ literal = "0x" + Integer.toHexString(c);
+ }
+ }
+
+ return literal;
+ }
+
+ /** Convert from an ANTLR string literal found in a grammar file to
+ * an equivalent string literal in the C target.
+ * Because we must support Unicode character sets and have chosen
+ * to have the lexer match UTF32 characters, we must encode
+ * string matches as 32-bit character arrays. Here then we
+ * must produce the C array and cater for the case where the
+ * lexer has been encoded with a string such as "xyz\n", which looks
+ * slightly incongruous to me but is not incorrect.
+ */
+ public String getTargetStringLiteralFromANTLRStringLiteral(
+ CodeGenerator generator,
+ String literal)
+ {
+ int index;
+ int outc;
+ String bytes;
+ StringBuffer buf = new StringBuffer();
+
+ buf.append("{ ");
+
+ // We need to lose any escaped characters of the form \x and just
+ // replace them with their actual values as well as lose the surrounding
+ // quote marks.
+ //
+ for (int i = 1; i< literal.length()-1; i++)
+ {
+ buf.append("0x");
+
+ if (literal.charAt(i) == '\\')
+ {
+ i++; // Assume that there is a next character; if not, this just
+ // yields an invalid string, which is what the input was - invalid
+ switch (literal.charAt(i))
+ {
+ case 'u':
+ case 'U':
+ buf.append(literal.substring(i+1, i+5)); // Already a hex string
+ i = i + 4; // Skip the four hex digits; the loop increment moves past them
+ break;
+
+ case 'n':
+ case 'N':
+
+ buf.append("0A");
+ break;
+
+ case 'r':
+ case 'R':
+
+ buf.append("0D");
+ break;
+
+ case 't':
+ case 'T':
+
+ buf.append("09");
+ break;
+
+ case 'b':
+ case 'B':
+
+ buf.append("08");
+ break;
+
+ case 'f':
+ case 'F':
+
+ buf.append("0C");
+ break;
+
+ default:
+
+ // Anything else is what it is!
+ //
+ buf.append(Integer.toHexString((int)literal.charAt(i)).toUpperCase());
+ break;
+ }
+ }
+ else
+ {
+ buf.append(Integer.toHexString((int)literal.charAt(i)).toUpperCase());
+ }
+ buf.append(", ");
+ }
+ buf.append(" ANTLR3_STRING_TERMINATOR}");
+
+ bytes = buf.toString();
+ index = strings.indexOf(bytes);
+
+ if (index == -1)
+ {
+ strings.add(bytes);
+ index = strings.indexOf(bytes);
+ }
+
+ String strref = "lit_" + String.valueOf(index+1);
+
+ return strref;
+ }
+
+}
+
diff --git a/antlr_3_1_source/codegen/CodeGenTreeWalker.java b/antlr_3_1_source/codegen/CodeGenTreeWalker.java
new file mode 100644
index 0000000..efc0103
--- /dev/null
+++ b/antlr_3_1_source/codegen/CodeGenTreeWalker.java
@@ -0,0 +1,3316 @@
+// $ANTLR 2.7.7 (2006-01-29): "codegen.g" -> "CodeGenTreeWalker.java"$
+
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2008 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+ package org.antlr.codegen;
+ import org.antlr.tool.*;
+ import org.antlr.analysis.*;
+ import org.antlr.misc.*;
+ import java.util.*;
+ import org.antlr.stringtemplate.*;
+ import antlr.TokenWithIndex;
+ import antlr.CommonToken;
+
+import antlr.TreeParser;
+import antlr.Token;
+import antlr.collections.AST;
+import antlr.RecognitionException;
+import antlr.ANTLRException;
+import antlr.NoViableAltException;
+import antlr.MismatchedTokenException;
+import antlr.SemanticException;
+import antlr.collections.impl.BitSet;
+import antlr.ASTPair;
+import antlr.collections.impl.ASTArray;
+
+
+/** Walk a grammar and generate code by gradually building up
+ * a bigger and bigger StringTemplate.
+ *
+ * Terence Parr
+ * University of San Francisco
+ * June 15, 2004
+ */
+public class CodeGenTreeWalker extends antlr.TreeParser implements CodeGenTreeWalkerTokenTypes
+ {
+
+ protected static final int RULE_BLOCK_NESTING_LEVEL = 0;
+ protected static final int OUTER_REWRITE_NESTING_LEVEL = 0;
+
+ protected String currentRuleName = null;
+ protected int blockNestingLevel = 0;
+ protected int rewriteBlockNestingLevel = 0;
+ protected int outerAltNum = 0;
+ protected StringTemplate currentBlockST = null;
+ protected boolean currentAltHasASTRewrite = false;
+ protected int rewriteTreeNestingLevel = 0;
+ protected Set rewriteRuleRefs = null;
+
+ public void reportError(RecognitionException ex) {
+ Token token = null;
+ if ( ex instanceof MismatchedTokenException ) {
+ token = ((MismatchedTokenException)ex).token;
+ }
+ else if ( ex instanceof NoViableAltException ) {
+ token = ((NoViableAltException)ex).token;
+ }
+ ErrorManager.syntaxError(
+ ErrorManager.MSG_SYNTAX_ERROR,
+ grammar,
+ token,
+ "codegen: "+ex.toString(),
+ ex);
+ }
+
+ public void reportError(String s) {
+ System.out.println("codegen: error: " + s);
+ }
+
+ protected CodeGenerator generator;
+ protected Grammar grammar;
+ protected StringTemplateGroup templates;
+
+ /** The overall lexer/parser template; simulate dynamically scoped
+ * attributes by making this an instance var of the walker.
+ */
+ protected StringTemplate recognizerST;
+
+ protected StringTemplate outputFileST;
+ protected StringTemplate headerFileST;
+
+ protected String outputOption = "";
+
+ protected StringTemplate getWildcardST(GrammarAST elementAST, GrammarAST ast_suffix, String label) {
+ String name = "wildcard";
+ if ( grammar.type==Grammar.LEXER ) {
+ name = "wildcardChar";
+ }
+ return getTokenElementST(name, name, elementAST, ast_suffix, label);
+ }
+
+ protected StringTemplate getRuleElementST(String name,
+ String ruleTargetName,
+ GrammarAST elementAST,
+ GrammarAST ast_suffix,
+ String label)
+ {
+ String suffix = getSTSuffix(ast_suffix,label);
+ name += suffix;
+ // if we're building trees and there is no label, gen a label
+ // unless we're in a synpred rule.
+ Rule r = grammar.getRule(currentRuleName);
+ if ( (grammar.buildAST()||suffix.length()>0) && label==null &&
+ (r==null || !r.isSynPred) )
+ {
+ // we will need a label to do the AST or tracking, make one
+ label = generator.createUniqueLabel(ruleTargetName);
+ CommonToken labelTok = new CommonToken(ANTLRParser.ID, label);
+ grammar.defineRuleRefLabel(currentRuleName, labelTok, elementAST);
+ }
+ StringTemplate elementST = templates.getInstanceOf(name);
+ if ( label!=null ) {
+ elementST.setAttribute("label", label);
+ }
+ return elementST;
+ }
+
+ protected StringTemplate getTokenElementST(String name,
+ String elementName,
+ GrammarAST elementAST,
+ GrammarAST ast_suffix,
+ String label)
+ {
+ String suffix = getSTSuffix(ast_suffix,label);
+ name += suffix;
+ // if we're building trees and there is no label, gen a label
+ // unless we're in a synpred rule.
+ Rule r = grammar.getRule(currentRuleName);
+ if ( (grammar.buildAST()||suffix.length()>0) && label==null &&
+ (r==null || !r.isSynPred) )
+ {
+ label = generator.createUniqueLabel(elementName);
+ CommonToken labelTok = new CommonToken(ANTLRParser.ID, label);
+ grammar.defineTokenRefLabel(currentRuleName, labelTok, elementAST);
+ }
+ StringTemplate elementST = templates.getInstanceOf(name);
+ if ( label!=null ) {
+ elementST.setAttribute("label", label);
+ }
+ return elementST;
+ }
+
+ public boolean isListLabel(String label) {
+ boolean hasListLabel=false;
+ if ( label!=null ) {
+ Rule r = grammar.getRule(currentRuleName);
+ String stName = null;
+ if ( r!=null ) {
+ Grammar.LabelElementPair pair = r.getLabel(label);
+ if ( pair!=null &&
+ (pair.type==Grammar.TOKEN_LIST_LABEL||
+ pair.type==Grammar.RULE_LIST_LABEL) )
+ {
+ hasListLabel=true;
+ }
+ }
+ }
+ return hasListLabel;
+ }
+
+ /** Return a non-empty template name suffix if the token is to be
+ * tracked, added to a tree, or both.
+ */
+ protected String getSTSuffix(GrammarAST ast_suffix, String label) {
+ if ( grammar.type==Grammar.LEXER ) {
+ return "";
+ }
+ // handle list label stuff; make element use "Track"
+
+ String operatorPart = "";
+ String rewritePart = "";
+ String listLabelPart = "";
+ Rule ruleDescr = grammar.getRule(currentRuleName);
+ if ( ast_suffix!=null && !ruleDescr.isSynPred ) {
+ if ( ast_suffix.getType()==ANTLRParser.ROOT ) {
+ operatorPart = "RuleRoot";
+ }
+ else if ( ast_suffix.getType()==ANTLRParser.BANG ) {
+ operatorPart = "Bang";
+ }
+ }
+ if ( currentAltHasASTRewrite ) {
+ rewritePart = "Track";
+ }
+ if ( isListLabel(label) ) {
+ listLabelPart = "AndListLabel";
+ }
+ String STsuffix = operatorPart+rewritePart+listLabelPart;
+ //System.out.println("suffix = "+STsuffix);
+
+ return STsuffix;
+ }
+
+ /** Convert rewrite AST lists to target labels list */
+ protected List getTokenTypesAsTargetLabels(Set refs) {
+ if ( refs==null || refs.size()==0 ) {
+ return null;
+ }
+ List labels = new ArrayList(refs.size());
+ for (GrammarAST t : refs) {
+ String label;
+ if ( t.getType()==ANTLRParser.RULE_REF ) {
+ label = t.getText();
+ }
+ else if ( t.getType()==ANTLRParser.LABEL ) {
+ label = t.getText();
+ }
+ else {
+ // must be char or string literal
+ label = generator.getTokenTypeAsTargetLabel(
+ grammar.getTokenType(t.getText()));
+ }
+ labels.add(label);
+ }
+ return labels;
+ }
+
+ public void init(Grammar g) {
+ this.grammar = g;
+ this.generator = grammar.getCodeGenerator();
+ this.templates = generator.getTemplates();
+ }
+public CodeGenTreeWalker() {
+ tokenNames = _tokenNames;
+}
+
+ public final void grammar(AST _t,
+ Grammar g,
+ StringTemplate recognizerST,
+ StringTemplate outputFileST,
+ StringTemplate headerFileST
+ ) throws RecognitionException {
+
+ GrammarAST grammar_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+
+ init(g);
+ this.recognizerST = recognizerST;
+ this.outputFileST = outputFileST;
+ this.headerFileST = headerFileST;
+ String superClass = (String)g.getOption("superClass");
+ outputOption = (String)g.getOption("output");
+ recognizerST.setAttribute("superClass", superClass);
+ if ( g.type!=Grammar.LEXER ) {
+ recognizerST.setAttribute("ASTLabelType", g.getOption("ASTLabelType"));
+ }
+ if ( g.type==Grammar.TREE_PARSER && g.getOption("ASTLabelType")==null ) {
+ ErrorManager.grammarWarning(ErrorManager.MSG_MISSING_AST_TYPE_IN_TREE_GRAMMAR,
+ g,
+ null,
+ g.name);
+ }
+ if ( g.type!=Grammar.TREE_PARSER ) {
+ recognizerST.setAttribute("labelType", g.getOption("TokenLabelType"));
+ }
+ recognizerST.setAttribute("numRules", grammar.getRules().size());
+ outputFileST.setAttribute("numRules", grammar.getRules().size());
+ headerFileST.setAttribute("numRules", grammar.getRules().size());
+
+
+ try { // for error handling
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case LEXER_GRAMMAR:
+ {
+ AST __t3 = _t;
+ GrammarAST tmp1_AST_in = (GrammarAST)_t;
+ match(_t,LEXER_GRAMMAR);
+ _t = _t.getFirstChild();
+ grammarSpec(_t);
+ _t = _retTree;
+ _t = __t3;
+ _t = _t.getNextSibling();
+ break;
+ }
+ case PARSER_GRAMMAR:
+ {
+ AST __t4 = _t;
+ GrammarAST tmp2_AST_in = (GrammarAST)_t;
+ match(_t,PARSER_GRAMMAR);
+ _t = _t.getFirstChild();
+ grammarSpec(_t);
+ _t = _retTree;
+ _t = __t4;
+ _t = _t.getNextSibling();
+ break;
+ }
+ case TREE_GRAMMAR:
+ {
+ AST __t5 = _t;
+ GrammarAST tmp3_AST_in = (GrammarAST)_t;
+ match(_t,TREE_GRAMMAR);
+ _t = _t.getFirstChild();
+ grammarSpec(_t);
+ _t = _retTree;
+ _t = __t5;
+ _t = _t.getNextSibling();
+ break;
+ }
+ case COMBINED_GRAMMAR:
+ {
+ AST __t6 = _t;
+ GrammarAST tmp4_AST_in = (GrammarAST)_t;
+ match(_t,COMBINED_GRAMMAR);
+ _t = _t.getFirstChild();
+ grammarSpec(_t);
+ _t = _retTree;
+ _t = __t6;
+ _t = _t.getNextSibling();
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ }
+
+ public final void grammarSpec(AST _t) throws RecognitionException {
+
+ GrammarAST grammarSpec_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+ GrammarAST name = null;
+ GrammarAST cmt = null;
+
+ try { // for error handling
+ name = (GrammarAST)_t;
+ match(_t,ID);
+ _t = _t.getNextSibling();
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case DOC_COMMENT:
+ {
+ cmt = (GrammarAST)_t;
+ match(_t,DOC_COMMENT);
+ _t = _t.getNextSibling();
+
+ outputFileST.setAttribute("docComment", cmt.getText());
+ headerFileST.setAttribute("docComment", cmt.getText());
+
+ break;
+ }
+ case OPTIONS:
+ case TOKENS:
+ case RULE:
+ case SCOPE:
+ case IMPORT:
+ case AMPERSAND:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+
+ recognizerST.setAttribute("name", grammar.getRecognizerName());
+ outputFileST.setAttribute("name", grammar.getRecognizerName());
+ headerFileST.setAttribute("name", grammar.getRecognizerName());
+ recognizerST.setAttribute("scopes", grammar.getGlobalScopes());
+ headerFileST.setAttribute("scopes", grammar.getGlobalScopes());
+
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case OPTIONS:
+ {
+ AST __t12 = _t;
+ GrammarAST tmp5_AST_in = (GrammarAST)_t;
+ match(_t,OPTIONS);
+ _t = _t.getFirstChild();
+ GrammarAST tmp6_AST_in = (GrammarAST)_t;
+ if ( _t==null ) throw new MismatchedTokenException();
+ _t = _t.getNextSibling();
+ _t = __t12;
+ _t = _t.getNextSibling();
+ break;
+ }
+ case TOKENS:
+ case RULE:
+ case SCOPE:
+ case IMPORT:
+ case AMPERSAND:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case IMPORT:
+ {
+ AST __t14 = _t;
+ GrammarAST tmp7_AST_in = (GrammarAST)_t;
+ match(_t,IMPORT);
+ _t = _t.getFirstChild();
+ GrammarAST tmp8_AST_in = (GrammarAST)_t;
+ if ( _t==null ) throw new MismatchedTokenException();
+ _t = _t.getNextSibling();
+ _t = __t14;
+ _t = _t.getNextSibling();
+ break;
+ }
+ case TOKENS:
+ case RULE:
+ case SCOPE:
+ case AMPERSAND:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case TOKENS:
+ {
+ AST __t16 = _t;
+ GrammarAST tmp9_AST_in = (GrammarAST)_t;
+ match(_t,TOKENS);
+ _t = _t.getFirstChild();
+ GrammarAST tmp10_AST_in = (GrammarAST)_t;
+ if ( _t==null ) throw new MismatchedTokenException();
+ _t = _t.getNextSibling();
+ _t = __t16;
+ _t = _t.getNextSibling();
+ break;
+ }
+ case RULE:
+ case SCOPE:
+ case AMPERSAND:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ {
+ _loop18:
+ do {
+ if (_t==null) _t=ASTNULL;
+ if ((_t.getType()==SCOPE)) {
+ attrScope(_t);
+ _t = _retTree;
+ }
+ else {
+ break _loop18;
+ }
+
+ } while (true);
+ }
+ {
+ _loop20:
+ do {
+ if (_t==null) _t=ASTNULL;
+ if ((_t.getType()==AMPERSAND)) {
+ GrammarAST tmp11_AST_in = (GrammarAST)_t;
+ match(_t,AMPERSAND);
+ _t = _t.getNextSibling();
+ }
+ else {
+ break _loop20;
+ }
+
+ } while (true);
+ }
+ rules(_t,recognizerST);
+ _t = _retTree;
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ }
+
+ public final void attrScope(AST _t) throws RecognitionException {
+
+ GrammarAST attrScope_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+
+ try { // for error handling
+ AST __t8 = _t;
+ GrammarAST tmp12_AST_in = (GrammarAST)_t;
+ match(_t,SCOPE);
+ _t = _t.getFirstChild();
+ GrammarAST tmp13_AST_in = (GrammarAST)_t;
+ match(_t,ID);
+ _t = _t.getNextSibling();
+ GrammarAST tmp14_AST_in = (GrammarAST)_t;
+ match(_t,ACTION);
+ _t = _t.getNextSibling();
+ _t = __t8;
+ _t = _t.getNextSibling();
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ }
+
+ public final void rules(AST _t,
+ StringTemplate recognizerST
+ ) throws RecognitionException {
+
+ GrammarAST rules_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+
+ StringTemplate rST;
+
+
+ try { // for error handling
+ {
+ int _cnt24=0;
+ _loop24:
+ do {
+ if (_t==null) _t=ASTNULL;
+ if ((_t.getType()==RULE)) {
+ {
+
+ String ruleName = _t.getFirstChild().getText();
+ Rule r = grammar.getRule(ruleName);
+
+ if (_t==null) _t=ASTNULL;
+ if (((_t.getType()==RULE))&&(grammar.generateMethodForRule(ruleName))) {
+ rST=rule(_t);
+ _t = _retTree;
+
+ if ( rST!=null ) {
+ recognizerST.setAttribute("rules", rST);
+ outputFileST.setAttribute("rules", rST);
+ headerFileST.setAttribute("rules", rST);
+ }
+
+ }
+ else if ((_t.getType()==RULE)) {
+ GrammarAST tmp15_AST_in = (GrammarAST)_t;
+ match(_t,RULE);
+ _t = _t.getNextSibling();
+ }
+ else {
+ throw new NoViableAltException(_t);
+ }
+
+ }
+ }
+ else {
+ if ( _cnt24>=1 ) { break _loop24; } else {throw new NoViableAltException(_t);}
+ }
+
+ _cnt24++;
+ } while (true);
+ }
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ }
+
+ public final StringTemplate rule(AST _t) throws RecognitionException {
+ StringTemplate code=null;
+
+ GrammarAST rule_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+ GrammarAST id = null;
+ GrammarAST mod = null;
+
+ String r;
+ String initAction = null;
+ StringTemplate b;
+ // get the dfa for the BLOCK
+ GrammarAST block=rule_AST_in.getFirstChildWithType(BLOCK);
+ DFA dfa=block.getLookaheadDFA();
+ // init blockNestingLevel so it's block level RULE_BLOCK_NESTING_LEVEL
+ // for alts of rule
+ blockNestingLevel = RULE_BLOCK_NESTING_LEVEL-1;
+ Rule ruleDescr = grammar.getRule(rule_AST_in.getFirstChild().getText());
+
+ // For syn preds, we don't want any AST code etc... in there.
+ // Save old templates ptr and restore later. Base templates include Dbg.
+ StringTemplateGroup saveGroup = templates;
+ if ( ruleDescr.isSynPred ) {
+ templates = generator.getBaseTemplates();
+ }
+
+
+ try { // for error handling
+ AST __t26 = _t;
+ GrammarAST tmp16_AST_in = (GrammarAST)_t;
+ match(_t,RULE);
+ _t = _t.getFirstChild();
+ id = (GrammarAST)_t;
+ match(_t,ID);
+ _t = _t.getNextSibling();
+ r=id.getText(); currentRuleName = r;
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case FRAGMENT:
+ case LITERAL_protected:
+ case LITERAL_public:
+ case LITERAL_private:
+ {
+ mod = _t==ASTNULL ? null : (GrammarAST)_t;
+ modifier(_t);
+ _t = _retTree;
+ break;
+ }
+ case ARG:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ AST __t28 = _t;
+ GrammarAST tmp17_AST_in = (GrammarAST)_t;
+ match(_t,ARG);
+ _t = _t.getFirstChild();
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case ARG_ACTION:
+ {
+ GrammarAST tmp18_AST_in = (GrammarAST)_t;
+ match(_t,ARG_ACTION);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case 3:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ _t = __t28;
+ _t = _t.getNextSibling();
+ AST __t30 = _t;
+ GrammarAST tmp19_AST_in = (GrammarAST)_t;
+ match(_t,RET);
+ _t = _t.getFirstChild();
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case ARG_ACTION:
+ {
+ GrammarAST tmp20_AST_in = (GrammarAST)_t;
+ match(_t,ARG_ACTION);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case 3:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ _t = __t30;
+ _t = _t.getNextSibling();
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case OPTIONS:
+ {
+ AST __t33 = _t;
+ GrammarAST tmp21_AST_in = (GrammarAST)_t;
+ match(_t,OPTIONS);
+ _t = _t.getFirstChild();
+ GrammarAST tmp22_AST_in = (GrammarAST)_t;
+ if ( _t==null ) throw new MismatchedTokenException();
+ _t = _t.getNextSibling();
+ _t = __t33;
+ _t = _t.getNextSibling();
+ break;
+ }
+ case BLOCK:
+ case SCOPE:
+ case AMPERSAND:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case SCOPE:
+ {
+ ruleScopeSpec(_t);
+ _t = _retTree;
+ break;
+ }
+ case BLOCK:
+ case AMPERSAND:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ {
+ _loop36:
+ do {
+ if (_t==null) _t=ASTNULL;
+ if ((_t.getType()==AMPERSAND)) {
+ GrammarAST tmp23_AST_in = (GrammarAST)_t;
+ match(_t,AMPERSAND);
+ _t = _t.getNextSibling();
+ }
+ else {
+ break _loop36;
+ }
+
+ } while (true);
+ }
+ b=block(_t,"ruleBlock", dfa);
+ _t = _retTree;
+
+ String description =
+ grammar.grammarTreeToString(rule_AST_in.getFirstChildWithType(BLOCK),
+ false);
+ description =
+ generator.target.getTargetStringLiteralFromString(description);
+ b.setAttribute("description", description);
+ // do not generate lexer rules in combined grammar
+ String stName = null;
+ if ( ruleDescr.isSynPred ) {
+ stName = "synpredRule";
+ }
+ else if ( grammar.type==Grammar.LEXER ) {
+ if ( r.equals(Grammar.ARTIFICIAL_TOKENS_RULENAME) ) {
+ stName = "tokensRule";
+ }
+ else {
+ stName = "lexerRule";
+ }
+ }
+ else {
+ if ( !(grammar.type==Grammar.COMBINED &&
+ Character.isUpperCase(r.charAt(0))) )
+ {
+ stName = "rule";
+ }
+ }
+ code = templates.getInstanceOf(stName);
+ if ( code.getName().equals("rule") ) {
+ code.setAttribute("emptyRule",
+ Boolean.valueOf(grammar.isEmptyRule(block)));
+ }
+ code.setAttribute("ruleDescriptor", ruleDescr);
+ String memo = (String)grammar.getBlockOption(rule_AST_in,"memoize");
+ if ( memo==null ) {
+ memo = (String)grammar.getOption("memoize");
+ }
+ if ( memo!=null && memo.equals("true") &&
+ (stName.equals("rule")||stName.equals("lexerRule")) )
+ {
+ code.setAttribute("memoize",
+ Boolean.valueOf(memo!=null && memo.equals("true")));
+ }
+
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case LITERAL_catch:
+ case LITERAL_finally:
+ {
+ exceptionGroup(_t,code);
+ _t = _retTree;
+ break;
+ }
+ case EOR:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ GrammarAST tmp24_AST_in = (GrammarAST)_t;
+ match(_t,EOR);
+ _t = _t.getNextSibling();
+ _t = __t26;
+ _t = _t.getNextSibling();
+
+ if ( code!=null ) {
+ if ( grammar.type==Grammar.LEXER ) {
+ boolean naked =
+ r.equals(Grammar.ARTIFICIAL_TOKENS_RULENAME) ||
+ (mod!=null&&mod.getText().equals(Grammar.FRAGMENT_RULE_MODIFIER));
+ code.setAttribute("nakedBlock", Boolean.valueOf(naked));
+ }
+ else {
+ description =
+ grammar.grammarTreeToString(rule_AST_in,false);
+ description =
+ generator.target.getTargetStringLiteralFromString(description);
+ code.setAttribute("description", description);
+ }
+ Rule theRule = grammar.getRule(r);
+ generator.translateActionAttributeReferencesForSingleScope(
+ theRule,
+ theRule.getActions()
+ );
+ code.setAttribute("ruleName", r);
+ code.setAttribute("block", b);
+ if ( initAction!=null ) {
+ code.setAttribute("initAction", initAction);
+ }
+ }
+ templates = saveGroup;
+
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final void modifier(AST _t) throws RecognitionException {
+
+ GrammarAST modifier_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+
+ try { // for error handling
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case LITERAL_protected:
+ {
+ GrammarAST tmp25_AST_in = (GrammarAST)_t;
+ match(_t,LITERAL_protected);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case LITERAL_public:
+ {
+ GrammarAST tmp26_AST_in = (GrammarAST)_t;
+ match(_t,LITERAL_public);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case LITERAL_private:
+ {
+ GrammarAST tmp27_AST_in = (GrammarAST)_t;
+ match(_t,LITERAL_private);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case FRAGMENT:
+ {
+ GrammarAST tmp28_AST_in = (GrammarAST)_t;
+ match(_t,FRAGMENT);
+ _t = _t.getNextSibling();
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ }
+
+ public final void ruleScopeSpec(AST _t) throws RecognitionException {
+
+ GrammarAST ruleScopeSpec_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+
+ try { // for error handling
+ AST __t40 = _t;
+ GrammarAST tmp29_AST_in = (GrammarAST)_t;
+ match(_t,SCOPE);
+ _t = _t.getFirstChild();
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case ACTION:
+ {
+ GrammarAST tmp30_AST_in = (GrammarAST)_t;
+ match(_t,ACTION);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case 3:
+ case ID:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ {
+ _loop43:
+ do {
+ if (_t==null) _t=ASTNULL;
+ if ((_t.getType()==ID)) {
+ GrammarAST tmp31_AST_in = (GrammarAST)_t;
+ match(_t,ID);
+ _t = _t.getNextSibling();
+ }
+ else {
+ break _loop43;
+ }
+
+ } while (true);
+ }
+ _t = __t40;
+ _t = _t.getNextSibling();
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ }
+
+ public final StringTemplate block(AST _t,
+ String blockTemplateName, DFA dfa
+ ) throws RecognitionException {
+ StringTemplate code=null;
+
+ GrammarAST block_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+
+ StringTemplate decision = null;
+ if ( dfa!=null ) {
+ code = templates.getInstanceOf(blockTemplateName);
+ decision = generator.genLookaheadDecision(recognizerST,dfa);
+ code.setAttribute("decision", decision);
+ code.setAttribute("decisionNumber", dfa.getDecisionNumber());
+ code.setAttribute("maxK",dfa.getMaxLookaheadDepth());
+ code.setAttribute("maxAlt",dfa.getNumberOfAlts());
+ }
+ else {
+ code = templates.getInstanceOf(blockTemplateName+"SingleAlt");
+ }
+ blockNestingLevel++;
+ code.setAttribute("blockLevel", blockNestingLevel);
+ code.setAttribute("enclosingBlockLevel", blockNestingLevel-1);
+ StringTemplate alt = null;
+ StringTemplate rew = null;
+ StringTemplate sb = null;
+ GrammarAST r = null;
+ int altNum = 1;
+ if ( this.blockNestingLevel==RULE_BLOCK_NESTING_LEVEL ) {
+ this.outerAltNum=1;
+ }
+
+
+ try { // for error handling
+ if (_t==null) _t=ASTNULL;
+ if (((_t.getType()==BLOCK))&&(block_AST_in.getSetValue()!=null)) {
+ sb=setBlock(_t);
+ _t = _retTree;
+
+ code.setAttribute("alts",sb);
+ blockNestingLevel--;
+
+ }
+ else if ((_t.getType()==BLOCK)) {
+ AST __t45 = _t;
+ GrammarAST tmp32_AST_in = (GrammarAST)_t;
+ match(_t,BLOCK);
+ _t = _t.getFirstChild();
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case OPTIONS:
+ {
+ GrammarAST tmp33_AST_in = (GrammarAST)_t;
+ match(_t,OPTIONS);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case ALT:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ {
+ int _cnt48=0;
+ _loop48:
+ do {
+ if (_t==null) _t=ASTNULL;
+ if ((_t.getType()==ALT)) {
+ alt=alternative(_t);
+ _t = _retTree;
+ r=(GrammarAST)_t;
+ rew=rewrite(_t);
+ _t = _retTree;
+
+ if ( this.blockNestingLevel==RULE_BLOCK_NESTING_LEVEL ) {
+ this.outerAltNum++;
+ }
+ // add the rewrite code as just another element in the alt :)
+ // (unless it's a " -> ..." rewrite
+ // ( -> ... )
+ boolean etc =
+ r.getType()==REWRITE &&
+ r.getFirstChild()!=null &&
+ r.getFirstChild().getType()==ETC;
+ if ( rew!=null && !etc ) { alt.setAttribute("rew", rew); }
+ // add this alt to the list of alts for this block
+ code.setAttribute("alts",alt);
+ alt.setAttribute("altNum", Utils.integer(altNum));
+ alt.setAttribute("outerAlt",
+ Boolean.valueOf(blockNestingLevel==RULE_BLOCK_NESTING_LEVEL));
+ altNum++;
+
+ }
+ else {
+ if ( _cnt48>=1 ) { break _loop48; } else {throw new NoViableAltException(_t);}
+ }
+
+ _cnt48++;
+ } while (true);
+ }
+ GrammarAST tmp34_AST_in = (GrammarAST)_t;
+ match(_t,EOB);
+ _t = _t.getNextSibling();
+ _t = __t45;
+ _t = _t.getNextSibling();
+ blockNestingLevel--;
+ }
+ else {
+ throw new NoViableAltException(_t);
+ }
+
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final void exceptionGroup(AST _t,
+ StringTemplate ruleST
+ ) throws RecognitionException {
+
+ GrammarAST exceptionGroup_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+
+ try { // for error handling
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case LITERAL_catch:
+ {
+ {
+ int _cnt52=0;
+ _loop52:
+ do {
+ if (_t==null) _t=ASTNULL;
+ if ((_t.getType()==LITERAL_catch)) {
+ exceptionHandler(_t,ruleST);
+ _t = _retTree;
+ }
+ else {
+ if ( _cnt52>=1 ) { break _loop52; } else {throw new NoViableAltException(_t);}
+ }
+
+ _cnt52++;
+ } while (true);
+ }
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case LITERAL_finally:
+ {
+ finallyClause(_t,ruleST);
+ _t = _retTree;
+ break;
+ }
+ case EOR:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ break;
+ }
+ case LITERAL_finally:
+ {
+ finallyClause(_t,ruleST);
+ _t = _retTree;
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ }
+
+ public final StringTemplate setBlock(AST _t) throws RecognitionException {
+ StringTemplate code=null;
+
+ GrammarAST setBlock_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+ GrammarAST s = null;
+
+ StringTemplate setcode = null;
+ if ( blockNestingLevel==RULE_BLOCK_NESTING_LEVEL && grammar.buildAST() ) {
+ Rule r = grammar.getRule(currentRuleName);
+ currentAltHasASTRewrite = r.hasRewrite(outerAltNum);
+ if ( currentAltHasASTRewrite ) {
+ r.trackTokenReferenceInAlt(setBlock_AST_in, outerAltNum);
+ }
+ }
+
+
+ try { // for error handling
+ s = (GrammarAST)_t;
+ match(_t,BLOCK);
+ _t = _t.getNextSibling();
+
+ int i = ((TokenWithIndex)s.getToken()).getIndex();
+ if ( blockNestingLevel==RULE_BLOCK_NESTING_LEVEL ) {
+ setcode = getTokenElementST("matchRuleBlockSet", "set", s, null, null);
+ }
+ else {
+ setcode = getTokenElementST("matchSet", "set", s, null, null);
+ }
+ setcode.setAttribute("elementIndex", i);
+ if ( grammar.type!=Grammar.LEXER ) {
+ generator.generateLocalFOLLOW(s,"set",currentRuleName,i);
+ }
+ setcode.setAttribute("s",
+ generator.genSetExpr(templates,s.getSetValue(),1,false));
+ StringTemplate altcode=templates.getInstanceOf("alt");
+ altcode.setAttribute("elements.{el,line,pos}",
+ setcode,
+ Utils.integer(s.getLine()),
+ Utils.integer(s.getColumn())
+ );
+ altcode.setAttribute("altNum", Utils.integer(1));
+ altcode.setAttribute("outerAlt",
+ Boolean.valueOf(blockNestingLevel==RULE_BLOCK_NESTING_LEVEL));
+ if ( !currentAltHasASTRewrite && grammar.buildAST() ) {
+ altcode.setAttribute("autoAST", Boolean.valueOf(true));
+ }
+ altcode.setAttribute("treeLevel", rewriteTreeNestingLevel);
+ code = altcode;
+
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final StringTemplate alternative(AST _t) throws RecognitionException {
+ StringTemplate code=templates.getInstanceOf("alt");
+
+ GrammarAST alternative_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+ GrammarAST a = null;
+
+ /*
+ // TODO: can we use Rule.altsWithRewrites???
+ if ( blockNestingLevel==RULE_BLOCK_NESTING_LEVEL ) {
+ GrammarAST aRewriteNode = #alternative.findFirstType(REWRITE);
+ if ( grammar.buildAST() &&
+ (aRewriteNode!=null||
+ (#alternative.getNextSibling()!=null &&
+ #alternative.getNextSibling().getType()==REWRITE)) )
+ {
+ currentAltHasASTRewrite = true;
+ }
+ else {
+ currentAltHasASTRewrite = false;
+ }
+ }
+ */
+ if ( blockNestingLevel==RULE_BLOCK_NESTING_LEVEL && grammar.buildAST() ) {
+ Rule r = grammar.getRule(currentRuleName);
+ currentAltHasASTRewrite = r.hasRewrite(outerAltNum);
+ }
+ String description = grammar.grammarTreeToString(alternative_AST_in, false);
+ description = generator.target.getTargetStringLiteralFromString(description);
+ code.setAttribute("description", description);
+ code.setAttribute("treeLevel", rewriteTreeNestingLevel);
+ if ( !currentAltHasASTRewrite && grammar.buildAST() ) {
+ code.setAttribute("autoAST", Boolean.valueOf(true));
+ }
+ StringTemplate e;
+
+
+ try { // for error handling
+ AST __t59 = _t;
+ a = _t==ASTNULL ? null :(GrammarAST)_t;
+ match(_t,ALT);
+ _t = _t.getFirstChild();
+ {
+ int _cnt61=0;
+ _loop61:
+ do {
+ if (_t==null) _t=ASTNULL;
+ if ((_t.getType()==BLOCK||_t.getType()==OPTIONAL||_t.getType()==CLOSURE||_t.getType()==POSITIVE_CLOSURE||_t.getType()==CHAR_RANGE||_t.getType()==EPSILON||_t.getType()==FORCED_ACTION||_t.getType()==GATED_SEMPRED||_t.getType()==SYN_SEMPRED||_t.getType()==BACKTRACK_SEMPRED||_t.getType()==DOT||_t.getType()==ACTION||_t.getType()==ASSIGN||_t.getType()==STRING_LITERAL||_t.getType()==CHAR_LITERAL||_t.getType()==TOKEN_REF||_t.getType()==BANG||_t.getType()==PLUS_ASSIGN||_t.getType()==SEMPRED||_t.getType()==ROOT||_t.getType()==WILDCARD||_t.getType()==RULE_REF||_t.getType()==NOT||_t.getType()==TREE_BEGIN)) {
+ GrammarAST elAST=(GrammarAST)_t;
+ e=element(_t,null,null);
+ _t = _retTree;
+
+ if ( e!=null ) {
+ code.setAttribute("elements.{el,line,pos}",
+ e,
+ Utils.integer(elAST.getLine()),
+ Utils.integer(elAST.getColumn())
+ );
+ }
+
+ }
+ else {
+ if ( _cnt61>=1 ) { break _loop61; } else {throw new NoViableAltException(_t);}
+ }
+
+ _cnt61++;
+ } while (true);
+ }
+ GrammarAST tmp35_AST_in = (GrammarAST)_t;
+ match(_t,EOA);
+ _t = _t.getNextSibling();
+ _t = __t59;
+ _t = _t.getNextSibling();
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final StringTemplate rewrite(AST _t) throws RecognitionException {
+ StringTemplate code=null;
+
+ GrammarAST rewrite_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+ GrammarAST r = null;
+ GrammarAST pred = null;
+
+ StringTemplate alt;
+ if ( rewrite_AST_in.getType()==REWRITE ) {
+ if ( generator.grammar.buildTemplate() ) {
+ code = templates.getInstanceOf("rewriteTemplate");
+ }
+ else {
+ code = templates.getInstanceOf("rewriteCode");
+ code.setAttribute("treeLevel", Utils.integer(OUTER_REWRITE_NESTING_LEVEL));
+ code.setAttribute("rewriteBlockLevel", Utils.integer(OUTER_REWRITE_NESTING_LEVEL));
+ code.setAttribute("referencedElementsDeep",
+ getTokenTypesAsTargetLabels(rewrite_AST_in.rewriteRefsDeep));
+ Set tokenLabels =
+ grammar.getLabels(rewrite_AST_in.rewriteRefsDeep, Grammar.TOKEN_LABEL);
+ Set tokenListLabels =
+ grammar.getLabels(rewrite_AST_in.rewriteRefsDeep, Grammar.TOKEN_LIST_LABEL);
+ Set ruleLabels =
+ grammar.getLabels(rewrite_AST_in.rewriteRefsDeep, Grammar.RULE_LABEL);
+ Set ruleListLabels =
+ grammar.getLabels(rewrite_AST_in.rewriteRefsDeep, Grammar.RULE_LIST_LABEL);
+ // just in case they ref $r for "previous value", make a stream
+ // from retval.tree
+ StringTemplate retvalST = templates.getInstanceOf("prevRuleRootRef");
+ ruleLabels.add(retvalST.toString());
+ code.setAttribute("referencedTokenLabels", tokenLabels);
+ code.setAttribute("referencedTokenListLabels", tokenListLabels);
+ code.setAttribute("referencedRuleLabels", ruleLabels);
+ code.setAttribute("referencedRuleListLabels", ruleListLabels);
+ }
+ }
+ else {
+ code = templates.getInstanceOf("noRewrite");
+ code.setAttribute("treeLevel", Utils.integer(OUTER_REWRITE_NESTING_LEVEL));
+ code.setAttribute("rewriteBlockLevel", Utils.integer(OUTER_REWRITE_NESTING_LEVEL));
+ }
+
+
+ try { // for error handling
+ {
+ _loop98:
+ do {
+ if (_t==null) _t=ASTNULL;
+ if ((_t.getType()==REWRITE)) {
+ rewriteRuleRefs = new HashSet();
+ AST __t96 = _t;
+ r = _t==ASTNULL ? null :(GrammarAST)_t;
+ match(_t,REWRITE);
+ _t = _t.getFirstChild();
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case SEMPRED:
+ {
+ pred = (GrammarAST)_t;
+ match(_t,SEMPRED);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case ALT:
+ case TEMPLATE:
+ case ACTION:
+ case ETC:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ alt=rewrite_alternative(_t);
+ _t = _retTree;
+ _t = __t96;
+ _t = _t.getNextSibling();
+
+ rewriteBlockNestingLevel = OUTER_REWRITE_NESTING_LEVEL;
+ List predChunks = null;
+ if ( pred!=null ) {
+ //predText = #pred.getText();
+ predChunks = generator.translateAction(currentRuleName,pred);
+ }
+ String description =
+ grammar.grammarTreeToString(r,false);
+ description = generator.target.getTargetStringLiteralFromString(description);
+ code.setAttribute("alts.{pred,alt,description}",
+ predChunks,
+ alt,
+ description);
+ pred=null;
+
+ }
+ else {
+ break _loop98;
+ }
+
+ } while (true);
+ }
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final void exceptionHandler(AST _t,
+ StringTemplate ruleST
+ ) throws RecognitionException {
+
+ GrammarAST exceptionHandler_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+
+ try { // for error handling
+ AST __t55 = _t;
+ GrammarAST tmp36_AST_in = (GrammarAST)_t;
+ match(_t,LITERAL_catch);
+ _t = _t.getFirstChild();
+ GrammarAST tmp37_AST_in = (GrammarAST)_t;
+ match(_t,ARG_ACTION);
+ _t = _t.getNextSibling();
+ GrammarAST tmp38_AST_in = (GrammarAST)_t;
+ match(_t,ACTION);
+ _t = _t.getNextSibling();
+ _t = __t55;
+ _t = _t.getNextSibling();
+
+ List chunks = generator.translateAction(currentRuleName,tmp38_AST_in);
+ ruleST.setAttribute("exceptions.{decl,action}",tmp37_AST_in.getText(),chunks);
+
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ }
+
+ public final void finallyClause(AST _t,
+ StringTemplate ruleST
+ ) throws RecognitionException {
+
+ GrammarAST finallyClause_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+
+ try { // for error handling
+ AST __t57 = _t;
+ GrammarAST tmp39_AST_in = (GrammarAST)_t;
+ match(_t,LITERAL_finally);
+ _t = _t.getFirstChild();
+ GrammarAST tmp40_AST_in = (GrammarAST)_t;
+ match(_t,ACTION);
+ _t = _t.getNextSibling();
+ _t = __t57;
+ _t = _t.getNextSibling();
+
+ List chunks = generator.translateAction(currentRuleName,tmp40_AST_in);
+ ruleST.setAttribute("finally",chunks);
+
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ }
+
+ public final StringTemplate element(AST _t,
+ GrammarAST label, GrammarAST astSuffix
+ ) throws RecognitionException {
+ StringTemplate code=null;
+
+ GrammarAST element_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+ GrammarAST n = null;
+ GrammarAST alabel = null;
+ GrammarAST label2 = null;
+ GrammarAST a = null;
+ GrammarAST b = null;
+ GrammarAST sp = null;
+ GrammarAST gsp = null;
+
+ IntSet elements=null;
+ GrammarAST ast = null;
+
+
+ try { // for error handling
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case ROOT:
+ {
+ AST __t63 = _t;
+ GrammarAST tmp41_AST_in = (GrammarAST)_t;
+ match(_t,ROOT);
+ _t = _t.getFirstChild();
+ code=element(_t,label,tmp41_AST_in);
+ _t = _retTree;
+ _t = __t63;
+ _t = _t.getNextSibling();
+ break;
+ }
+ case BANG:
+ {
+ AST __t64 = _t;
+ GrammarAST tmp42_AST_in = (GrammarAST)_t;
+ match(_t,BANG);
+ _t = _t.getFirstChild();
+ code=element(_t,label,tmp42_AST_in);
+ _t = _retTree;
+ _t = __t64;
+ _t = _t.getNextSibling();
+ break;
+ }
+ case NOT:
+ {
+ AST __t65 = _t;
+ n = _t==ASTNULL ? null :(GrammarAST)_t;
+ match(_t,NOT);
+ _t = _t.getFirstChild();
+ code=notElement(_t,n, label, astSuffix);
+ _t = _retTree;
+ _t = __t65;
+ _t = _t.getNextSibling();
+ break;
+ }
+ case ASSIGN:
+ {
+ AST __t66 = _t;
+ GrammarAST tmp43_AST_in = (GrammarAST)_t;
+ match(_t,ASSIGN);
+ _t = _t.getFirstChild();
+ alabel = (GrammarAST)_t;
+ match(_t,ID);
+ _t = _t.getNextSibling();
+ code=element(_t,alabel,astSuffix);
+ _t = _retTree;
+ _t = __t66;
+ _t = _t.getNextSibling();
+ break;
+ }
+ case PLUS_ASSIGN:
+ {
+ AST __t67 = _t;
+ GrammarAST tmp44_AST_in = (GrammarAST)_t;
+ match(_t,PLUS_ASSIGN);
+ _t = _t.getFirstChild();
+ label2 = (GrammarAST)_t;
+ match(_t,ID);
+ _t = _t.getNextSibling();
+ code=element(_t,label2,astSuffix);
+ _t = _retTree;
+ _t = __t67;
+ _t = _t.getNextSibling();
+ break;
+ }
+ case CHAR_RANGE:
+ {
+ AST __t68 = _t;
+ GrammarAST tmp45_AST_in = (GrammarAST)_t;
+ match(_t,CHAR_RANGE);
+ _t = _t.getFirstChild();
+ a = (GrammarAST)_t;
+ match(_t,CHAR_LITERAL);
+ _t = _t.getNextSibling();
+ b = (GrammarAST)_t;
+ match(_t,CHAR_LITERAL);
+ _t = _t.getNextSibling();
+ _t = __t68;
+ _t = _t.getNextSibling();
+ code = templates.getInstanceOf("charRangeRef");
+ String low =
+ generator.target.getTargetCharLiteralFromANTLRCharLiteral(generator,a.getText());
+ String high =
+ generator.target.getTargetCharLiteralFromANTLRCharLiteral(generator,b.getText());
+ code.setAttribute("a", low);
+ code.setAttribute("b", high);
+ if ( label!=null ) {
+ code.setAttribute("label", label.getText());
+ }
+
+ break;
+ }
+ case TREE_BEGIN:
+ {
+ code=tree(_t);
+ _t = _retTree;
+ break;
+ }
+ case FORCED_ACTION:
+ case ACTION:
+ {
+ code=element_action(_t);
+ _t = _retTree;
+ break;
+ }
+ case GATED_SEMPRED:
+ case SEMPRED:
+ {
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case SEMPRED:
+ {
+ sp = (GrammarAST)_t;
+ match(_t,SEMPRED);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case GATED_SEMPRED:
+ {
+ gsp = (GrammarAST)_t;
+ match(_t,GATED_SEMPRED);
+ _t = _t.getNextSibling();
+ sp=gsp;
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+
+ code = templates.getInstanceOf("validateSemanticPredicate");
+ code.setAttribute("pred", generator.translateAction(currentRuleName,sp));
+ String description =
+ generator.target.getTargetStringLiteralFromString(sp.getText());
+ code.setAttribute("description", description);
+
+ break;
+ }
+ case SYN_SEMPRED:
+ {
+ GrammarAST tmp46_AST_in = (GrammarAST)_t;
+ match(_t,SYN_SEMPRED);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case BACKTRACK_SEMPRED:
+ {
+ GrammarAST tmp47_AST_in = (GrammarAST)_t;
+ match(_t,BACKTRACK_SEMPRED);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case EPSILON:
+ {
+ GrammarAST tmp48_AST_in = (GrammarAST)_t;
+ match(_t,EPSILON);
+ _t = _t.getNextSibling();
+ break;
+ }
+ default:
+ if (_t==null) _t=ASTNULL;
+ if ((((_t.getType() >= BLOCK && _t.getType() <= POSITIVE_CLOSURE)))&&(element_AST_in.getSetValue()==null)) {
+ code=ebnf(_t);
+ _t = _retTree;
+ }
+ else if ((_t.getType()==BLOCK||_t.getType()==DOT||_t.getType()==STRING_LITERAL||_t.getType()==CHAR_LITERAL||_t.getType()==TOKEN_REF||_t.getType()==WILDCARD||_t.getType()==RULE_REF)) {
+ code=atom(_t,null, label, astSuffix);
+ _t = _retTree;
+ }
+ else {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final StringTemplate notElement(AST _t,
+ GrammarAST n, GrammarAST label, GrammarAST astSuffix
+ ) throws RecognitionException {
+ StringTemplate code=null;
+
+ GrammarAST notElement_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+ GrammarAST assign_c = null;
+ GrammarAST assign_s = null;
+ GrammarAST assign_t = null;
+ GrammarAST assign_st = null;
+
+ IntSet elements=null;
+ String labelText = null;
+ if ( label!=null ) {
+ labelText = label.getText();
+ }
+
+
+ try { // for error handling
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case CHAR_LITERAL:
+ {
+ assign_c = (GrammarAST)_t;
+ match(_t,CHAR_LITERAL);
+ _t = _t.getNextSibling();
+
+ int ttype=0;
+ if ( grammar.type==Grammar.LEXER ) {
+ ttype = Grammar.getCharValueFromGrammarCharLiteral(assign_c.getText());
+ }
+ else {
+ ttype = grammar.getTokenType(assign_c.getText());
+ }
+ elements = grammar.complement(ttype);
+
+ break;
+ }
+ case STRING_LITERAL:
+ {
+ assign_s = (GrammarAST)_t;
+ match(_t,STRING_LITERAL);
+ _t = _t.getNextSibling();
+
+ int ttype=0;
+ if ( grammar.type==Grammar.LEXER ) {
+ // TODO: error!
+ }
+ else {
+ ttype = grammar.getTokenType(assign_s.getText());
+ }
+ elements = grammar.complement(ttype);
+
+ break;
+ }
+ case TOKEN_REF:
+ {
+ assign_t = (GrammarAST)_t;
+ match(_t,TOKEN_REF);
+ _t = _t.getNextSibling();
+
+ int ttype = grammar.getTokenType(assign_t.getText());
+ elements = grammar.complement(ttype);
+
+ break;
+ }
+ case BLOCK:
+ {
+ assign_st = (GrammarAST)_t;
+ match(_t,BLOCK);
+ _t = _t.getNextSibling();
+
+ elements = assign_st.getSetValue();
+ elements = grammar.complement(elements);
+
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+
+ code = getTokenElementST("matchSet",
+ "set",
+ (GrammarAST)n.getFirstChild(),
+ astSuffix,
+ labelText);
+ code.setAttribute("s",generator.genSetExpr(templates,elements,1,false));
+ int i = ((TokenWithIndex)n.getToken()).getIndex();
+ code.setAttribute("elementIndex", i);
+ if ( grammar.type!=Grammar.LEXER ) {
+ generator.generateLocalFOLLOW(n,"set",currentRuleName,i);
+ }
+
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final StringTemplate ebnf(AST _t) throws RecognitionException {
+ StringTemplate code=null;
+
+ GrammarAST ebnf_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+
+ DFA dfa=null;
+ GrammarAST b = (GrammarAST)ebnf_AST_in.getFirstChild();
+ GrammarAST eob = (GrammarAST)b.getLastChild(); // loops will use EOB DFA
+
+
+ try { // for error handling
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case BLOCK:
+ {
+ dfa = ebnf_AST_in.getLookaheadDFA();
+ code=block(_t,"block", dfa);
+ _t = _retTree;
+ break;
+ }
+ case OPTIONAL:
+ {
+ dfa = ebnf_AST_in.getLookaheadDFA();
+ AST __t75 = _t;
+ GrammarAST tmp49_AST_in = (GrammarAST)_t;
+ match(_t,OPTIONAL);
+ _t = _t.getFirstChild();
+ code=block(_t,"optionalBlock", dfa);
+ _t = _retTree;
+ _t = __t75;
+ _t = _t.getNextSibling();
+ break;
+ }
+ case CLOSURE:
+ {
+ dfa = eob.getLookaheadDFA();
+ AST __t76 = _t;
+ GrammarAST tmp50_AST_in = (GrammarAST)_t;
+ match(_t,CLOSURE);
+ _t = _t.getFirstChild();
+ code=block(_t,"closureBlock", dfa);
+ _t = _retTree;
+ _t = __t76;
+ _t = _t.getNextSibling();
+ break;
+ }
+ case POSITIVE_CLOSURE:
+ {
+ dfa = eob.getLookaheadDFA();
+ AST __t77 = _t;
+ GrammarAST tmp51_AST_in = (GrammarAST)_t;
+ match(_t,POSITIVE_CLOSURE);
+ _t = _t.getFirstChild();
+ code=block(_t,"positiveClosureBlock", dfa);
+ _t = _retTree;
+ _t = __t77;
+ _t = _t.getNextSibling();
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+
+ String description = grammar.grammarTreeToString(ebnf_AST_in, false);
+ description = generator.target.getTargetStringLiteralFromString(description);
+ code.setAttribute("description", description);
+
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final StringTemplate atom(AST _t,
+ GrammarAST scope, GrammarAST label, GrammarAST astSuffix
+ ) throws RecognitionException {
+ StringTemplate code=null;
+
+ GrammarAST atom_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+ GrammarAST r = null;
+ GrammarAST rarg = null;
+ GrammarAST t = null;
+ GrammarAST targ = null;
+ GrammarAST c = null;
+ GrammarAST s = null;
+ GrammarAST w = null;
+
+ String labelText=null;
+ if ( label!=null ) {
+ labelText = label.getText();
+ }
+ if ( grammar.type!=Grammar.LEXER &&
+ (atom_AST_in.getType()==RULE_REF||atom_AST_in.getType()==TOKEN_REF||
+ atom_AST_in.getType()==CHAR_LITERAL||atom_AST_in.getType()==STRING_LITERAL) )
+ {
+ Rule encRule = grammar.getRule(((GrammarAST)atom_AST_in).enclosingRuleName);
+ if ( encRule!=null && encRule.hasRewrite(outerAltNum) && astSuffix!=null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_AST_OP_IN_ALT_WITH_REWRITE,
+ grammar,
+ ((GrammarAST)atom_AST_in).getToken(),
+ ((GrammarAST)atom_AST_in).enclosingRuleName,
+ new Integer(outerAltNum));
+ astSuffix = null;
+ }
+ }
+
+
+ try { // for error handling
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case RULE_REF:
+ {
+ AST __t85 = _t;
+ r = _t==ASTNULL ? null :(GrammarAST)_t;
+ match(_t,RULE_REF);
+ _t = _t.getFirstChild();
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case ARG_ACTION:
+ {
+ rarg = (GrammarAST)_t;
+ match(_t,ARG_ACTION);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case 3:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ _t = __t85;
+ _t = _t.getNextSibling();
+
+ grammar.checkRuleReference(scope, r, rarg, currentRuleName);
+ String scopeName = null;
+ if ( scope!=null ) {
+ scopeName = scope.getText();
+ }
+ Rule rdef = grammar.getRule(scopeName, r.getText());
+ // don't insert label=r() if $label.attr not used, no ret value, ...
+ if ( !rdef.getHasReturnValue() ) {
+ labelText = null;
+ }
+ code = getRuleElementST("ruleRef", r.getText(), r, astSuffix, labelText);
+ code.setAttribute("rule", rdef);
+ if ( scope!=null ) { // scoped rule ref
+ Grammar scopeG = grammar.composite.getGrammar(scope.getText());
+ code.setAttribute("scope", scopeG);
+ }
+ else if ( rdef.grammar != this.grammar ) { // nonlocal
+ // if rule definition is not in this grammar, it's nonlocal
+ List rdefDelegates = rdef.grammar.getDelegates();
+ if ( rdefDelegates.contains(this.grammar) ) {
+ code.setAttribute("scope", rdef.grammar);
+ }
+ else {
+ // defining grammar is not a delegate; scope all the way
+ // back to the root, which has delegate methods for all
+ // rules. Don't use scope if we are the root.
+ if ( this.grammar != rdef.grammar.composite.delegateGrammarTreeRoot.grammar ) {
+ code.setAttribute("scope",
+ rdef.grammar.composite.delegateGrammarTreeRoot.grammar);
+ }
+ }
+ }
+
+ if ( rarg!=null ) {
+ List args = generator.translateAction(currentRuleName,rarg);
+ code.setAttribute("args", args);
+ }
+ int i = ((TokenWithIndex)r.getToken()).getIndex();
+ code.setAttribute("elementIndex", i);
+ generator.generateLocalFOLLOW(r,r.getText(),currentRuleName,i);
+ r.code = code;
+
+ break;
+ }
+ case TOKEN_REF:
+ {
+ AST __t87 = _t;
+ t = _t==ASTNULL ? null :(GrammarAST)_t;
+ match(_t,TOKEN_REF);
+ _t = _t.getFirstChild();
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case ARG_ACTION:
+ {
+ targ = (GrammarAST)_t;
+ match(_t,ARG_ACTION);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case 3:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ _t = __t87;
+ _t = _t.getNextSibling();
+
+ if ( currentAltHasASTRewrite && t.terminalOptions!=null &&
+ t.terminalOptions.get(Grammar.defaultTokenOption)!=null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_HETERO_ILLEGAL_IN_REWRITE_ALT,
+ grammar,
+ ((GrammarAST)(t)).getToken(),
+ t.getText());
+ }
+ grammar.checkRuleReference(scope, t, targ, currentRuleName);
+ if ( grammar.type==Grammar.LEXER ) {
+ if ( grammar.getTokenType(t.getText())==Label.EOF ) {
+ code = templates.getInstanceOf("lexerMatchEOF");
+ }
+ else {
+ code = templates.getInstanceOf("lexerRuleRef");
+ if ( isListLabel(labelText) ) {
+ code = templates.getInstanceOf("lexerRuleRefAndListLabel");
+ }
+ String scopeName = null;
+ if ( scope!=null ) {
+ scopeName = scope.getText();
+ }
+ Rule rdef2 = grammar.getRule(scopeName, t.getText());
+ code.setAttribute("rule", rdef2);
+ if ( scope!=null ) { // scoped rule ref
+ Grammar scopeG = grammar.composite.getGrammar(scope.getText());
+ code.setAttribute("scope", scopeG);
+ }
+ else if ( rdef2.grammar != this.grammar ) { // nonlocal
+ // if rule definition is not in this grammar, it's nonlocal
+ code.setAttribute("scope", rdef2.grammar);
+ }
+ if ( targ!=null ) {
+ List args = generator.translateAction(currentRuleName,targ);
+ code.setAttribute("args", args);
+ }
+ }
+ int i = ((TokenWithIndex)t.getToken()).getIndex();
+ code.setAttribute("elementIndex", i);
+ if ( label!=null ) code.setAttribute("label", labelText);
+ }
+ else {
+ code = getTokenElementST("tokenRef", t.getText(), t, astSuffix, labelText);
+ String tokenLabel =
+ generator.getTokenTypeAsTargetLabel(grammar.getTokenType(t.getText()));
+ code.setAttribute("token",tokenLabel);
+ if ( !currentAltHasASTRewrite && t.terminalOptions!=null ) {
+ code.setAttribute("hetero",t.terminalOptions.get(Grammar.defaultTokenOption));
+ }
+ int i = ((TokenWithIndex)t.getToken()).getIndex();
+ code.setAttribute("elementIndex", i);
+ generator.generateLocalFOLLOW(t,tokenLabel,currentRuleName,i);
+ }
+ t.code = code;
+
+ break;
+ }
+ case CHAR_LITERAL:
+ {
+ c = (GrammarAST)_t;
+ match(_t,CHAR_LITERAL);
+ _t = _t.getNextSibling();
+
+ if ( grammar.type==Grammar.LEXER ) {
+ code = templates.getInstanceOf("charRef");
+ code.setAttribute("char",
+ generator.target.getTargetCharLiteralFromANTLRCharLiteral(generator,c.getText()));
+ if ( label!=null ) {
+ code.setAttribute("label", labelText);
+ }
+ }
+ else { // else it's a token type reference
+ code = getTokenElementST("tokenRef", "char_literal", c, astSuffix, labelText);
+ String tokenLabel = generator.getTokenTypeAsTargetLabel(grammar.getTokenType(c.getText()));
+ code.setAttribute("token",tokenLabel);
+ if ( c.terminalOptions!=null ) {
+ code.setAttribute("hetero",c.terminalOptions.get(Grammar.defaultTokenOption));
+ }
+ int i = ((TokenWithIndex)c.getToken()).getIndex();
+ code.setAttribute("elementIndex", i);
+ generator.generateLocalFOLLOW(c,tokenLabel,currentRuleName,i);
+ }
+
+ break;
+ }
+ case STRING_LITERAL:
+ {
+ s = (GrammarAST)_t;
+ match(_t,STRING_LITERAL);
+ _t = _t.getNextSibling();
+
+ if ( grammar.type==Grammar.LEXER ) {
+ code = templates.getInstanceOf("lexerStringRef");
+ code.setAttribute("string",
+ generator.target.getTargetStringLiteralFromANTLRStringLiteral(generator,s.getText()));
+ if ( label!=null ) {
+ code.setAttribute("label", labelText);
+ }
+ }
+ else { // else it's a token type reference
+ code = getTokenElementST("tokenRef", "string_literal", s, astSuffix, labelText);
+ String tokenLabel =
+ generator.getTokenTypeAsTargetLabel(grammar.getTokenType(s.getText()));
+ code.setAttribute("token",tokenLabel);
+ if ( s.terminalOptions!=null ) {
+ code.setAttribute("hetero",s.terminalOptions.get(Grammar.defaultTokenOption));
+ }
+ int i = ((TokenWithIndex)s.getToken()).getIndex();
+ code.setAttribute("elementIndex", i);
+ generator.generateLocalFOLLOW(s,tokenLabel,currentRuleName,i);
+ }
+
+ break;
+ }
+ case WILDCARD:
+ {
+ w = (GrammarAST)_t;
+ match(_t,WILDCARD);
+ _t = _t.getNextSibling();
+
+ code = getWildcardST(w,astSuffix,labelText);
+ code.setAttribute("elementIndex", ((TokenWithIndex)w.getToken()).getIndex());
+
+ break;
+ }
+ case DOT:
+ {
+ AST __t89 = _t;
+ GrammarAST tmp52_AST_in = (GrammarAST)_t;
+ match(_t,DOT);
+ _t = _t.getFirstChild();
+ GrammarAST tmp53_AST_in = (GrammarAST)_t;
+ match(_t,ID);
+ _t = _t.getNextSibling();
+ code=atom(_t,tmp53_AST_in, label, astSuffix);
+ _t = _retTree;
+ _t = __t89;
+ _t = _t.getNextSibling();
+ break;
+ }
+ case BLOCK:
+ {
+ code=set(_t,label,astSuffix);
+ _t = _retTree;
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final StringTemplate tree(AST _t) throws RecognitionException {
+ StringTemplate code=templates.getInstanceOf("tree");
+
+ GrammarAST tree_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+
+ StringTemplate el=null, act=null;
+ GrammarAST elAST=null, actAST=null;
+ NFAState afterDOWN = (NFAState)tree_AST_in.NFATreeDownState.transition(0).target;
+ LookaheadSet s = grammar.LOOK(afterDOWN);
+ if ( s.member(Label.UP) ) {
+ // nullable child list if we can see the UP as the next token
+ // we need an "if ( input.LA(1)==Token.DOWN )" gate around
+ // the child list.
+ code.setAttribute("nullableChildList", "true");
+ }
+ rewriteTreeNestingLevel++;
+ code.setAttribute("enclosingTreeLevel", rewriteTreeNestingLevel-1);
+ code.setAttribute("treeLevel", rewriteTreeNestingLevel);
+ Rule r = grammar.getRule(currentRuleName);
+ GrammarAST rootSuffix = null;
+ if ( grammar.buildAST() && !r.hasRewrite(outerAltNum) ) {
+ rootSuffix = new GrammarAST(ROOT,"ROOT");
+ }
+
+
+ try { // for error handling
+ AST __t79 = _t;
+ GrammarAST tmp54_AST_in = (GrammarAST)_t;
+ match(_t,TREE_BEGIN);
+ _t = _t.getFirstChild();
+ elAST=(GrammarAST)_t;
+ el=element(_t,null,rootSuffix);
+ _t = _retTree;
+
+ code.setAttribute("root.{el,line,pos}",
+ el,
+ Utils.integer(elAST.getLine()),
+ Utils.integer(elAST.getColumn())
+ );
+
+ {
+ _loop81:
+ do {
+ if (_t==null) _t=ASTNULL;
+ if ((_t.getType()==FORCED_ACTION||_t.getType()==ACTION)) {
+ actAST=(GrammarAST)_t;
+ act=element_action(_t);
+ _t = _retTree;
+
+ code.setAttribute("actionsAfterRoot.{el,line,pos}",
+ act,
+ Utils.integer(actAST.getLine()),
+ Utils.integer(actAST.getColumn())
+ );
+
+ }
+ else {
+ break _loop81;
+ }
+
+ } while (true);
+ }
+ {
+ _loop83:
+ do {
+ if (_t==null) _t=ASTNULL;
+ if ((_t.getType()==BLOCK||_t.getType()==OPTIONAL||_t.getType()==CLOSURE||_t.getType()==POSITIVE_CLOSURE||_t.getType()==CHAR_RANGE||_t.getType()==EPSILON||_t.getType()==FORCED_ACTION||_t.getType()==GATED_SEMPRED||_t.getType()==SYN_SEMPRED||_t.getType()==BACKTRACK_SEMPRED||_t.getType()==DOT||_t.getType()==ACTION||_t.getType()==ASSIGN||_t.getType()==STRING_LITERAL||_t.getType()==CHAR_LITERAL||_t.getType()==TOKEN_REF||_t.getType()==BANG||_t.getType()==PLUS_ASSIGN||_t.getType()==SEMPRED||_t.getType()==ROOT||_t.getType()==WILDCARD||_t.getType()==RULE_REF||_t.getType()==NOT||_t.getType()==TREE_BEGIN)) {
+ elAST=(GrammarAST)_t;
+ el=element(_t,null,null);
+ _t = _retTree;
+
+ code.setAttribute("children.{el,line,pos}",
+ el,
+ Utils.integer(elAST.getLine()),
+ Utils.integer(elAST.getColumn())
+ );
+
+ }
+ else {
+ break _loop83;
+ }
+
+ } while (true);
+ }
+ _t = __t79;
+ _t = _t.getNextSibling();
+ rewriteTreeNestingLevel--;
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final StringTemplate element_action(AST _t) throws RecognitionException {
+ StringTemplate code=null;
+
+ GrammarAST element_action_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+ GrammarAST act = null;
+ GrammarAST act2 = null;
+
+ try { // for error handling
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case ACTION:
+ {
+ act = (GrammarAST)_t;
+ match(_t,ACTION);
+ _t = _t.getNextSibling();
+
+ code = templates.getInstanceOf("execAction");
+ code.setAttribute("action", generator.translateAction(currentRuleName,act));
+
+ break;
+ }
+ case FORCED_ACTION:
+ {
+ act2 = (GrammarAST)_t;
+ match(_t,FORCED_ACTION);
+ _t = _t.getNextSibling();
+
+ code = templates.getInstanceOf("execForcedAction");
+ code.setAttribute("action", generator.translateAction(currentRuleName,act2));
+
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final StringTemplate set(AST _t,
+ GrammarAST label, GrammarAST astSuffix
+ ) throws RecognitionException {
+ StringTemplate code=null;
+
+ GrammarAST set_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+ GrammarAST s = null;
+
+ String labelText=null;
+ if ( label!=null ) {
+ labelText = label.getText();
+ }
+
+
+ try { // for error handling
+ s = (GrammarAST)_t;
+ match(_t,BLOCK);
+ _t = _t.getNextSibling();
+
+ code = getTokenElementST("matchSet", "set", s, astSuffix, labelText);
+ int i = ((TokenWithIndex)s.getToken()).getIndex();
+ code.setAttribute("elementIndex", i);
+ if ( grammar.type!=Grammar.LEXER ) {
+ generator.generateLocalFOLLOW(s,"set",currentRuleName,i);
+ }
+ code.setAttribute("s", generator.genSetExpr(templates,s.getSetValue(),1,false));
+
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final void ast_suffix(AST _t) throws RecognitionException {
+
+ GrammarAST ast_suffix_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+
+ try { // for error handling
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case ROOT:
+ {
+ GrammarAST tmp55_AST_in = (GrammarAST)_t;
+ match(_t,ROOT);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case BANG:
+ {
+ GrammarAST tmp56_AST_in = (GrammarAST)_t;
+ match(_t,BANG);
+ _t = _t.getNextSibling();
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ }
+
+ public final void setElement(AST _t) throws RecognitionException {
+
+ GrammarAST setElement_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+ GrammarAST c = null;
+ GrammarAST t = null;
+ GrammarAST s = null;
+ GrammarAST c1 = null;
+ GrammarAST c2 = null;
+
+ try { // for error handling
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case CHAR_LITERAL:
+ {
+ c = (GrammarAST)_t;
+ match(_t,CHAR_LITERAL);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case TOKEN_REF:
+ {
+ t = (GrammarAST)_t;
+ match(_t,TOKEN_REF);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case STRING_LITERAL:
+ {
+ s = (GrammarAST)_t;
+ match(_t,STRING_LITERAL);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case CHAR_RANGE:
+ {
+ AST __t93 = _t;
+ GrammarAST tmp57_AST_in = (GrammarAST)_t;
+ match(_t,CHAR_RANGE);
+ _t = _t.getFirstChild();
+ c1 = (GrammarAST)_t;
+ match(_t,CHAR_LITERAL);
+ _t = _t.getNextSibling();
+ c2 = (GrammarAST)_t;
+ match(_t,CHAR_LITERAL);
+ _t = _t.getNextSibling();
+ _t = __t93;
+ _t = _t.getNextSibling();
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ }
+
+ public final StringTemplate rewrite_alternative(AST _t) throws RecognitionException {
+ StringTemplate code=null;
+
+ GrammarAST rewrite_alternative_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+ GrammarAST a = null;
+
+ StringTemplate el,st;
+
+
+ try { // for error handling
+ if (_t==null) _t=ASTNULL;
+ if (((_t.getType()==ALT))&&(generator.grammar.buildAST())) {
+ AST __t102 = _t;
+ a = _t==ASTNULL ? null :(GrammarAST)_t;
+ match(_t,ALT);
+ _t = _t.getFirstChild();
+ code=templates.getInstanceOf("rewriteElementList");
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case OPTIONAL:
+ case CLOSURE:
+ case POSITIVE_CLOSURE:
+ case LABEL:
+ case ACTION:
+ case STRING_LITERAL:
+ case CHAR_LITERAL:
+ case TOKEN_REF:
+ case RULE_REF:
+ case TREE_BEGIN:
+ {
+ {
+ int _cnt105=0;
+ _loop105:
+ do {
+ if (_t==null) _t=ASTNULL;
+ if ((_t.getType()==OPTIONAL||_t.getType()==CLOSURE||_t.getType()==POSITIVE_CLOSURE||_t.getType()==LABEL||_t.getType()==ACTION||_t.getType()==STRING_LITERAL||_t.getType()==CHAR_LITERAL||_t.getType()==TOKEN_REF||_t.getType()==RULE_REF||_t.getType()==TREE_BEGIN)) {
+ GrammarAST elAST=(GrammarAST)_t;
+ el=rewrite_element(_t);
+ _t = _retTree;
+ code.setAttribute("elements.{el,line,pos}",
+ el,
+ Utils.integer(elAST.getLine()),
+ Utils.integer(elAST.getColumn())
+ );
+
+ }
+ else {
+ if ( _cnt105>=1 ) { break _loop105; } else {throw new NoViableAltException(_t);}
+ }
+
+ _cnt105++;
+ } while (true);
+ }
+ break;
+ }
+ case EPSILON:
+ {
+ GrammarAST tmp58_AST_in = (GrammarAST)_t;
+ match(_t,EPSILON);
+ _t = _t.getNextSibling();
+ code.setAttribute("elements.{el,line,pos}",
+ templates.getInstanceOf("rewriteEmptyAlt"),
+ Utils.integer(a.getLine()),
+ Utils.integer(a.getColumn())
+ );
+
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ GrammarAST tmp59_AST_in = (GrammarAST)_t;
+ match(_t,EOA);
+ _t = _t.getNextSibling();
+ _t = __t102;
+ _t = _t.getNextSibling();
+ }
+ else if (((_t.getType()==ALT||_t.getType()==TEMPLATE||_t.getType()==ACTION))&&(generator.grammar.buildTemplate())) {
+ code=rewrite_template(_t);
+ _t = _retTree;
+ }
+ else if ((_t.getType()==ETC)) {
+ GrammarAST tmp60_AST_in = (GrammarAST)_t;
+ match(_t,ETC);
+ _t = _t.getNextSibling();
+ }
+ else {
+ throw new NoViableAltException(_t);
+ }
+
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final StringTemplate rewrite_block(AST _t,
+ String blockTemplateName
+ ) throws RecognitionException {
+ StringTemplate code=null;
+
+ GrammarAST rewrite_block_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+
+ rewriteBlockNestingLevel++;
+ code = templates.getInstanceOf(blockTemplateName);
+ StringTemplate save_currentBlockST = currentBlockST;
+ currentBlockST = code;
+ code.setAttribute("rewriteBlockLevel", rewriteBlockNestingLevel);
+ StringTemplate alt=null;
+
+
+ try { // for error handling
+ AST __t100 = _t;
+ GrammarAST tmp61_AST_in = (GrammarAST)_t;
+ match(_t,BLOCK);
+ _t = _t.getFirstChild();
+
+ currentBlockST.setAttribute("referencedElementsDeep",
+ getTokenTypesAsTargetLabels(tmp61_AST_in.rewriteRefsDeep));
+ currentBlockST.setAttribute("referencedElements",
+ getTokenTypesAsTargetLabels(tmp61_AST_in.rewriteRefsShallow));
+
+ alt=rewrite_alternative(_t);
+ _t = _retTree;
+ GrammarAST tmp62_AST_in = (GrammarAST)_t;
+ match(_t,EOB);
+ _t = _t.getNextSibling();
+ _t = __t100;
+ _t = _t.getNextSibling();
+
+ code.setAttribute("alt", alt);
+ rewriteBlockNestingLevel--;
+ currentBlockST = save_currentBlockST;
+
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final StringTemplate rewrite_element(AST _t) throws RecognitionException {
+ StringTemplate code=null;
+
+ GrammarAST rewrite_element_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+
+ IntSet elements=null;
+ GrammarAST ast = null;
+
+
+ try { // for error handling
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case LABEL:
+ case ACTION:
+ case STRING_LITERAL:
+ case CHAR_LITERAL:
+ case TOKEN_REF:
+ case RULE_REF:
+ {
+ code=rewrite_atom(_t,false);
+ _t = _retTree;
+ break;
+ }
+ case OPTIONAL:
+ case CLOSURE:
+ case POSITIVE_CLOSURE:
+ {
+ code=rewrite_ebnf(_t);
+ _t = _retTree;
+ break;
+ }
+ case TREE_BEGIN:
+ {
+ code=rewrite_tree(_t);
+ _t = _retTree;
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final StringTemplate rewrite_template(AST _t) throws RecognitionException {
+ StringTemplate code=null;
+
+ GrammarAST rewrite_template_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+ GrammarAST id = null;
+ GrammarAST ind = null;
+ GrammarAST arg = null;
+ GrammarAST a = null;
+ GrammarAST act = null;
+
+ try { // for error handling
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case ALT:
+ {
+ AST __t120 = _t;
+ GrammarAST tmp63_AST_in = (GrammarAST)_t;
+ match(_t,ALT);
+ _t = _t.getFirstChild();
+ GrammarAST tmp64_AST_in = (GrammarAST)_t;
+ match(_t,EPSILON);
+ _t = _t.getNextSibling();
+ GrammarAST tmp65_AST_in = (GrammarAST)_t;
+ match(_t,EOA);
+ _t = _t.getNextSibling();
+ _t = __t120;
+ _t = _t.getNextSibling();
+ code=templates.getInstanceOf("rewriteEmptyTemplate");
+ break;
+ }
+ case TEMPLATE:
+ {
+ AST __t121 = _t;
+ GrammarAST tmp66_AST_in = (GrammarAST)_t;
+ match(_t,TEMPLATE);
+ _t = _t.getFirstChild();
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case ID:
+ {
+ id = (GrammarAST)_t;
+ match(_t,ID);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case ACTION:
+ {
+ ind = (GrammarAST)_t;
+ match(_t,ACTION);
+ _t = _t.getNextSibling();
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+
+ if ( id!=null && id.getText().equals("template") ) {
+ code = templates.getInstanceOf("rewriteInlineTemplate");
+ }
+ else if ( id!=null ) {
+ code = templates.getInstanceOf("rewriteExternalTemplate");
+ code.setAttribute("name", id.getText());
+ }
+ else if ( ind!=null ) { // must be %({expr})(args)
+ code = templates.getInstanceOf("rewriteIndirectTemplate");
+ List chunks=generator.translateAction(currentRuleName,ind);
+ code.setAttribute("expr", chunks);
+ }
+
+ AST __t123 = _t;
+ GrammarAST tmp67_AST_in = (GrammarAST)_t;
+ match(_t,ARGLIST);
+ _t = _t.getFirstChild();
+ {
+ _loop126:
+ do {
+ if (_t==null) _t=ASTNULL;
+ if ((_t.getType()==ARG)) {
+ AST __t125 = _t;
+ GrammarAST tmp68_AST_in = (GrammarAST)_t;
+ match(_t,ARG);
+ _t = _t.getFirstChild();
+ arg = (GrammarAST)_t;
+ match(_t,ID);
+ _t = _t.getNextSibling();
+ a = (GrammarAST)_t;
+ match(_t,ACTION);
+ _t = _t.getNextSibling();
+
+ // must set alt num here rather than in define.g
+ // because actions like %foo(name={$ID.text}) aren't
+ // broken up yet into trees.
+ a.outerAltNum = this.outerAltNum;
+ List chunks = generator.translateAction(currentRuleName,a);
+ code.setAttribute("args.{name,value}", arg.getText(), chunks);
+
+ _t = __t125;
+ _t = _t.getNextSibling();
+ }
+ else {
+ break _loop126;
+ }
+
+ } while (true);
+ }
+ _t = __t123;
+ _t = _t.getNextSibling();
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case DOUBLE_QUOTE_STRING_LITERAL:
+ {
+ GrammarAST tmp69_AST_in = (GrammarAST)_t;
+ match(_t,DOUBLE_QUOTE_STRING_LITERAL);
+ _t = _t.getNextSibling();
+
+ String sl = tmp69_AST_in.getText();
+ String t = sl.substring(1,sl.length()-1); // strip quotes
+ t = generator.target.getTargetStringLiteralFromString(t);
+ code.setAttribute("template",t);
+
+ break;
+ }
+ case DOUBLE_ANGLE_STRING_LITERAL:
+ {
+ GrammarAST tmp70_AST_in = (GrammarAST)_t;
+ match(_t,DOUBLE_ANGLE_STRING_LITERAL);
+ _t = _t.getNextSibling();
+
+ String sl = tmp70_AST_in.getText();
+ String t = sl.substring(2,sl.length()-2); // strip double angle quotes
+ t = generator.target.getTargetStringLiteralFromString(t);
+ code.setAttribute("template",t);
+
+ break;
+ }
+ case 3:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ _t = __t121;
+ _t = _t.getNextSibling();
+ break;
+ }
+ case ACTION:
+ {
+ act = (GrammarAST)_t;
+ match(_t,ACTION);
+ _t = _t.getNextSibling();
+
+ // set alt num for same reason as ARGLIST above
+ act.outerAltNum = this.outerAltNum;
+ code=templates.getInstanceOf("rewriteAction");
+ code.setAttribute("action",
+ generator.translateAction(currentRuleName,act));
+
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final StringTemplate rewrite_atom(AST _t,
+ boolean isRoot
+ ) throws RecognitionException {
+ StringTemplate code=null;
+
+ GrammarAST rewrite_atom_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+ GrammarAST r = null;
+ GrammarAST tk = null;
+ GrammarAST arg = null;
+ GrammarAST cl = null;
+ GrammarAST sl = null;
+
+ try { // for error handling
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case RULE_REF:
+ {
+ r = (GrammarAST)_t;
+ match(_t,RULE_REF);
+ _t = _t.getNextSibling();
+
+ String ruleRefName = r.getText();
+ String stName = "rewriteRuleRef";
+ if ( isRoot ) {
+ stName += "Root";
+ }
+ code = templates.getInstanceOf(stName);
+ code.setAttribute("rule", ruleRefName);
+ if ( grammar.getRule(ruleRefName)==null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_UNDEFINED_RULE_REF,
+ grammar,
+ ((GrammarAST)(r)).getToken(),
+ ruleRefName);
+ code = new StringTemplate(); // blank; no code gen
+ }
+ else if ( grammar.getRule(currentRuleName)
+ .getRuleRefsInAlt(ruleRefName,outerAltNum)==null )
+ {
+ ErrorManager.grammarError(ErrorManager.MSG_REWRITE_ELEMENT_NOT_PRESENT_ON_LHS,
+ grammar,
+ ((GrammarAST)(r)).getToken(),
+ ruleRefName);
+ code = new StringTemplate(); // blank; no code gen
+ }
+ else {
+ // track all rule refs as we must copy 2nd ref to rule and beyond
+ if ( !rewriteRuleRefs.contains(ruleRefName) ) {
+ rewriteRuleRefs.add(ruleRefName);
+ }
+ }
+
+ break;
+ }
+ case STRING_LITERAL:
+ case CHAR_LITERAL:
+ case TOKEN_REF:
+ {
+ GrammarAST term=(GrammarAST)_t;
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case TOKEN_REF:
+ {
+ AST __t117 = _t;
+ tk = _t==ASTNULL ? null :(GrammarAST)_t;
+ match(_t,TOKEN_REF);
+ _t = _t.getFirstChild();
+ {
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case ARG_ACTION:
+ {
+ arg = (GrammarAST)_t;
+ match(_t,ARG_ACTION);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case 3:
+ {
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ _t = __t117;
+ _t = _t.getNextSibling();
+ break;
+ }
+ case CHAR_LITERAL:
+ {
+ cl = (GrammarAST)_t;
+ match(_t,CHAR_LITERAL);
+ _t = _t.getNextSibling();
+ break;
+ }
+ case STRING_LITERAL:
+ {
+ sl = (GrammarAST)_t;
+ match(_t,STRING_LITERAL);
+ _t = _t.getNextSibling();
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+
+ String tokenName = rewrite_atom_AST_in.getText();
+ String stName = "rewriteTokenRef";
+ Rule rule = grammar.getRule(currentRuleName);
+ Set tokenRefsInAlt = rule.getTokenRefsInAlt(outerAltNum);
+ boolean createNewNode = !tokenRefsInAlt.contains(tokenName) || arg!=null;
+ Object hetero = null;
+ if ( term.terminalOptions!=null ) {
+ hetero = term.terminalOptions.get(Grammar.defaultTokenOption);
+ }
+ if ( createNewNode ) {
+ stName = "rewriteImaginaryTokenRef";
+ }
+ if ( isRoot ) {
+ stName += "Root";
+ }
+ code = templates.getInstanceOf(stName);
+ code.setAttribute("hetero", hetero);
+ if ( arg!=null ) {
+ List args = generator.translateAction(currentRuleName,arg);
+ code.setAttribute("args", args);
+ }
+ code.setAttribute("elementIndex", ((TokenWithIndex)rewrite_atom_AST_in.getToken()).getIndex());
+ int ttype = grammar.getTokenType(tokenName);
+ String tok = generator.getTokenTypeAsTargetLabel(ttype);
+ code.setAttribute("token", tok);
+ if ( grammar.getTokenType(tokenName)==Label.INVALID ) {
+ ErrorManager.grammarError(ErrorManager.MSG_UNDEFINED_TOKEN_REF_IN_REWRITE,
+ grammar,
+ ((GrammarAST)(rewrite_atom_AST_in)).getToken(),
+ tokenName);
+ code = new StringTemplate(); // blank; no code gen
+ }
+
+ break;
+ }
+ case LABEL:
+ {
+ GrammarAST tmp71_AST_in = (GrammarAST)_t;
+ match(_t,LABEL);
+ _t = _t.getNextSibling();
+
+ String labelName = tmp71_AST_in.getText();
+ Rule rule = grammar.getRule(currentRuleName);
+ Grammar.LabelElementPair pair = rule.getLabel(labelName);
+ if ( labelName.equals(currentRuleName) ) {
+ // special case; ref to old value via $rule
+ if ( rule.hasRewrite(outerAltNum) &&
+ rule.getRuleRefsInAlt(outerAltNum).contains(labelName) )
+ {
+ ErrorManager.grammarError(ErrorManager.MSG_RULE_REF_AMBIG_WITH_RULE_IN_ALT,
+ grammar,
+ ((GrammarAST)(tmp71_AST_in)).getToken(),
+ labelName);
+ }
+ StringTemplate labelST = templates.getInstanceOf("prevRuleRootRef");
+ code = templates.getInstanceOf("rewriteRuleLabelRef"+(isRoot?"Root":""));
+ code.setAttribute("label", labelST);
+ }
+ else if ( pair==null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_UNDEFINED_LABEL_REF_IN_REWRITE,
+ grammar,
+ ((GrammarAST)(tmp71_AST_in)).getToken(),
+ labelName);
+ code = new StringTemplate();
+ }
+ else {
+ String stName = null;
+ switch ( pair.type ) {
+ case Grammar.TOKEN_LABEL :
+ stName = "rewriteTokenLabelRef";
+ break;
+ case Grammar.RULE_LABEL :
+ stName = "rewriteRuleLabelRef";
+ break;
+ case Grammar.TOKEN_LIST_LABEL :
+ stName = "rewriteTokenListLabelRef";
+ break;
+ case Grammar.RULE_LIST_LABEL :
+ stName = "rewriteRuleListLabelRef";
+ break;
+ }
+ if ( isRoot ) {
+ stName += "Root";
+ }
+ code = templates.getInstanceOf(stName);
+ code.setAttribute("label", labelName);
+ }
+
+ break;
+ }
+ case ACTION:
+ {
+ GrammarAST tmp72_AST_in = (GrammarAST)_t;
+ match(_t,ACTION);
+ _t = _t.getNextSibling();
+
+ // actions in rewrite rules yield a tree object
+ String actText = tmp72_AST_in.getText();
+ List chunks = generator.translateAction(currentRuleName,tmp72_AST_in);
+ code = templates.getInstanceOf("rewriteNodeAction"+(isRoot?"Root":""));
+ code.setAttribute("action", chunks);
+
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final StringTemplate rewrite_ebnf(AST _t) throws RecognitionException {
+ StringTemplate code=null;
+
+ GrammarAST rewrite_ebnf_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+
+ try { // for error handling
+ if (_t==null) _t=ASTNULL;
+ switch ( _t.getType()) {
+ case OPTIONAL:
+ {
+ AST __t108 = _t;
+ GrammarAST tmp73_AST_in = (GrammarAST)_t;
+ match(_t,OPTIONAL);
+ _t = _t.getFirstChild();
+ code=rewrite_block(_t,"rewriteOptionalBlock");
+ _t = _retTree;
+ _t = __t108;
+ _t = _t.getNextSibling();
+
+ String description = grammar.grammarTreeToString(rewrite_ebnf_AST_in, false);
+ description = generator.target.getTargetStringLiteralFromString(description);
+ code.setAttribute("description", description);
+
+ break;
+ }
+ case CLOSURE:
+ {
+ AST __t109 = _t;
+ GrammarAST tmp74_AST_in = (GrammarAST)_t;
+ match(_t,CLOSURE);
+ _t = _t.getFirstChild();
+ code=rewrite_block(_t,"rewriteClosureBlock");
+ _t = _retTree;
+ _t = __t109;
+ _t = _t.getNextSibling();
+
+ String description = grammar.grammarTreeToString(rewrite_ebnf_AST_in, false);
+ description = generator.target.getTargetStringLiteralFromString(description);
+ code.setAttribute("description", description);
+
+ break;
+ }
+ case POSITIVE_CLOSURE:
+ {
+ AST __t110 = _t;
+ GrammarAST tmp75_AST_in = (GrammarAST)_t;
+ match(_t,POSITIVE_CLOSURE);
+ _t = _t.getFirstChild();
+ code=rewrite_block(_t,"rewritePositiveClosureBlock");
+ _t = _retTree;
+ _t = __t110;
+ _t = _t.getNextSibling();
+
+ String description = grammar.grammarTreeToString(rewrite_ebnf_AST_in, false);
+ description = generator.target.getTargetStringLiteralFromString(description);
+ code.setAttribute("description", description);
+
+ break;
+ }
+ default:
+ {
+ throw new NoViableAltException(_t);
+ }
+ }
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+ public final StringTemplate rewrite_tree(AST _t) throws RecognitionException {
+ StringTemplate code=templates.getInstanceOf("rewriteTree");
+
+ GrammarAST rewrite_tree_AST_in = (_t == ASTNULL) ? null : (GrammarAST)_t;
+
+ rewriteTreeNestingLevel++;
+ code.setAttribute("treeLevel", rewriteTreeNestingLevel);
+ code.setAttribute("enclosingTreeLevel", rewriteTreeNestingLevel-1);
+ StringTemplate r, el;
+ GrammarAST elAST=null;
+
+
+ try { // for error handling
+ AST __t112 = _t;
+ GrammarAST tmp76_AST_in = (GrammarAST)_t;
+ match(_t,TREE_BEGIN);
+ _t = _t.getFirstChild();
+ elAST=(GrammarAST)_t;
+ r=rewrite_atom(_t,true);
+ _t = _retTree;
+ code.setAttribute("root.{el,line,pos}",
+ r,
+ Utils.integer(elAST.getLine()),
+ Utils.integer(elAST.getColumn())
+ );
+
+ {
+ _loop114:
+ do {
+ if (_t==null) _t=ASTNULL;
+ if ((_t.getType()==OPTIONAL||_t.getType()==CLOSURE||_t.getType()==POSITIVE_CLOSURE||_t.getType()==LABEL||_t.getType()==ACTION||_t.getType()==STRING_LITERAL||_t.getType()==CHAR_LITERAL||_t.getType()==TOKEN_REF||_t.getType()==RULE_REF||_t.getType()==TREE_BEGIN)) {
+ elAST=(GrammarAST)_t;
+ el=rewrite_element(_t);
+ _t = _retTree;
+
+ code.setAttribute("children.{el,line,pos}",
+ el,
+ Utils.integer(elAST.getLine()),
+ Utils.integer(elAST.getColumn())
+ );
+
+ }
+ else {
+ break _loop114;
+ }
+
+ } while (true);
+ }
+ _t = __t112;
+ _t = _t.getNextSibling();
+
+ String description = grammar.grammarTreeToString(rewrite_tree_AST_in, false);
+ description = generator.target.getTargetStringLiteralFromString(description);
+ code.setAttribute("description", description);
+ rewriteTreeNestingLevel--;
+
+ }
+ catch (RecognitionException ex) {
+ reportError(ex);
+ if (_t!=null) {_t = _t.getNextSibling();}
+ }
+ _retTree = _t;
+ return code;
+ }
+
+
+ public static final String[] _tokenNames = {
+ "<0>",
+ "EOF",
+ "<2>",
+ "NULL_TREE_LOOKAHEAD",
+ "\"options\"",
+ "\"tokens\"",
+ "\"parser\"",
+ "LEXER",
+ "RULE",
+ "BLOCK",
+ "OPTIONAL",
+ "CLOSURE",
+ "POSITIVE_CLOSURE",
+ "SYNPRED",
+ "RANGE",
+ "CHAR_RANGE",
+ "EPSILON",
+ "ALT",
+ "EOR",
+ "EOB",
+ "EOA",
+ "ID",
+ "ARG",
+ "ARGLIST",
+ "RET",
+ "LEXER_GRAMMAR",
+ "PARSER_GRAMMAR",
+ "TREE_GRAMMAR",
+ "COMBINED_GRAMMAR",
+ "INITACTION",
+ "FORCED_ACTION",
+ "LABEL",
+ "TEMPLATE",
+ "\"scope\"",
+ "\"import\"",
+ "GATED_SEMPRED",
+ "SYN_SEMPRED",
+ "BACKTRACK_SEMPRED",
+ "\"fragment\"",
+ "DOT",
+ "ACTION",
+ "DOC_COMMENT",
+ "SEMI",
+ "\"lexer\"",
+ "\"tree\"",
+ "\"grammar\"",
+ "AMPERSAND",
+ "COLON",
+ "RCURLY",
+ "ASSIGN",
+ "STRING_LITERAL",
+ "CHAR_LITERAL",
+ "INT",
+ "STAR",
+ "COMMA",
+ "TOKEN_REF",
+ "\"protected\"",
+ "\"public\"",
+ "\"private\"",
+ "BANG",
+ "ARG_ACTION",
+ "\"returns\"",
+ "\"throws\"",
+ "LPAREN",
+ "OR",
+ "RPAREN",
+ "\"catch\"",
+ "\"finally\"",
+ "PLUS_ASSIGN",
+ "SEMPRED",
+ "IMPLIES",
+ "ROOT",
+ "WILDCARD",
+ "RULE_REF",
+ "NOT",
+ "TREE_BEGIN",
+ "QUESTION",
+ "PLUS",
+ "OPEN_ELEMENT_OPTION",
+ "CLOSE_ELEMENT_OPTION",
+ "REWRITE",
+ "ETC",
+ "DOLLAR",
+ "DOUBLE_QUOTE_STRING_LITERAL",
+ "DOUBLE_ANGLE_STRING_LITERAL",
+ "WS",
+ "COMMENT",
+ "SL_COMMENT",
+ "ML_COMMENT",
+ "STRAY_BRACKET",
+ "ESC",
+ "DIGIT",
+ "XDIGIT",
+ "NESTED_ARG_ACTION",
+ "NESTED_ACTION",
+ "ACTION_CHAR_LITERAL",
+ "ACTION_STRING_LITERAL",
+ "ACTION_ESC",
+ "WS_LOOP",
+ "INTERNAL_RULE_REF",
+ "WS_OPT",
+ "SRC"
+ };
+
+ }
+
diff --git a/antlr_3_1_source/codegen/CodeGenTreeWalker.smap b/antlr_3_1_source/codegen/CodeGenTreeWalker.smap
new file mode 100644
index 0000000..99545ab
--- /dev/null
+++ b/antlr_3_1_source/codegen/CodeGenTreeWalker.smap
@@ -0,0 +1,2576 @@
+SMAP
+CodeGenTreeWalker.java
+G
+*S G
+*F
++ 0 codegen.g
+codegen.g
+*L
+1:3
+1:4
+1:5
+1:6
+1:8
+1:9
+1:10
+1:11
+1:12
+1:13
+1:14
+1:15
+1:16
+1:17
+1:19
+1:20
+1:21
+1:22
+1:23
+1:24
+1:25
+1:26
+1:27
+1:28
+1:29
+1:30
+1:31
+1:32
+1:33
+1:34
+1:35
+1:36
+1:37
+58:62
+59:63
+61:65
+62:66
+63:67
+64:68
+65:69
+66:70
+67:71
+68:72
+70:74
+71:75
+72:76
+73:77
+74:78
+75:79
+76:80
+77:81
+78:82
+79:83
+80:84
+81:85
+82:86
+83:87
+84:88
+86:90
+87:91
+88:92
+90:94
+91:95
+92:96
+94:98
+95:99
+96:100
+97:101
+99:103
+100:104
+102:106
+104:108
+105:109
+106:110
+107:111
+108:112
+109:113
+110:114
+112:116
+113:117
+114:118
+115:119
+116:120
+117:121
+118:122
+119:123
+120:124
+121:125
+122:126
+123:127
+124:128
+125:129
+126:130
+127:131
+128:132
+129:133
+130:134
+131:135
+132:136
+133:137
+134:138
+135:139
+136:140
+138:142
+139:143
+140:144
+141:145
+142:146
+143:147
+144:148
+145:149
+146:150
+147:151
+148:152
+149:153
+150:154
+151:155
+152:156
+153:157
+154:158
+155:159
+156:160
+157:161
+158:162
+159:163
+160:164
+161:165
+163:167
+164:168
+165:169
+166:170
+167:171
+168:172
+169:173
+170:174
+171:175
+172:176
+173:177
+174:178
+175:179
+176:180
+177:181
+178:182
+179:183
+181:185
+182:186
+183:187
+184:188
+185:189
+186:190
+187:191
+188:192
+190:194
+191:195
+192:196
+193:197
+194:198
+195:199
+196:200
+197:201
+198:202
+199:203
+200:204
+201:205
+202:206
+203:207
+204:208
+205:209
+206:210
+207:211
+208:212
+209:213
+211:215
+212:216
+214:218
+215:219
+216:220
+217:221
+218:222
+219:223
+220:224
+221:225
+222:226
+223:227
+224:228
+225:229
+226:230
+227:231
+228:232
+229:233
+230:234
+231:235
+232:236
+233:237
+234:238
+235:239
+236:240
+238:242
+239:243
+240:244
+241:245
+242:246
+245:251
+245:252
+245:253
+245:254
+245:255
+245:256
+245:284
+245:342
+245:343
+245:344
+245:345
+245:346
+245:347
+245:348
+246:260
+247:261
+248:262
+249:263
+250:264
+251:265
+252:266
+253:267
+254:268
+255:269
+256:270
+257:271
+258:272
+259:273
+260:274
+261:275
+262:276
+263:277
+264:278
+265:279
+266:280
+267:281
+273:286
+273:287
+273:288
+273:289
+273:290
+273:291
+273:292
+273:293
+273:294
+273:295
+273:296
+273:297
+273:336
+273:337
+273:338
+273:339
+273:340
+274:300
+274:301
+274:302
+274:303
+274:304
+274:305
+274:306
+274:307
+274:308
+274:309
+275:312
+275:313
+275:314
+275:315
+275:316
+275:317
+275:318
+275:319
+275:320
+275:321
+277:324
+277:325
+277:326
+277:327
+277:328
+277:329
+277:330
+277:331
+277:332
+277:333
+281:522
+281:526
+281:539
+281:540
+281:541
+281:542
+281:543
+281:544
+281:545
+282:527
+282:528
+282:529
+282:530
+282:531
+282:532
+282:533
+282:534
+282:535
+282:536
+282:537
+282:538
+285:350
+285:356
+285:514
+285:515
+285:516
+285:517
+285:518
+285:519
+285:520
+286:353
+286:357
+286:358
+286:359
+287:354
+287:361
+287:362
+287:363
+287:364
+287:365
+287:366
+287:367
+287:383
+287:384
+287:385
+287:386
+287:387
+289:369
+290:370
+294:390
+295:391
+296:392
+297:393
+298:394
+300:397
+300:398
+300:399
+300:400
+300:401
+300:402
+300:403
+300:404
+300:405
+300:406
+300:407
+300:408
+300:409
+300:420
+300:421
+300:422
+300:423
+300:424
+301:427
+301:428
+301:429
+301:430
+301:431
+301:432
+301:433
+301:434
+301:435
+301:436
+301:437
+301:438
+301:439
+301:449
+301:450
+301:451
+301:452
+301:453
+302:456
+302:457
+302:458
+302:459
+302:460
+302:461
+302:462
+302:463
+302:464
+302:465
+302:466
+302:467
+302:468
+302:477
+302:478
+302:479
+302:480
+302:481
+303:483
+303:484
+303:485
+303:486
+303:487
+303:488
+303:489
+303:490
+303:491
+303:492
+303:493
+303:495
+303:496
+304:497
+304:498
+304:499
+304:500
+304:501
+304:502
+304:503
+304:504
+304:505
+304:506
+304:507
+304:508
+304:510
+304:511
+305:512
+305:513
+308:547
+308:548
+308:549
+308:556
+308:598
+308:599
+308:600
+308:601
+308:602
+308:603
+308:604
+309:553
+312:558
+312:559
+312:560
+312:561
+312:562
+312:579
+312:584
+312:585
+312:586
+312:587
+312:590
+312:591
+312:592
+312:593
+312:595
+312:596
+312:597
+313:565
+314:566
+318:568
+318:569
+318:570
+318:571
+320:573
+321:574
+322:575
+323:576
+324:577
+326:580
+326:581
+326:582
+326:583
+331:606
+331:607
+331:632
+331:878
+331:879
+331:880
+331:881
+331:882
+331:883
+331:884
+331:885
+332:613
+333:614
+334:615
+335:616
+336:617
+337:618
+338:619
+339:620
+340:621
+341:622
+343:624
+344:625
+345:626
+346:627
+347:628
+348:629
+351:610
+351:633
+351:634
+351:635
+351:636
+351:637
+351:638
+351:639
+351:640
+351:848
+351:849
+352:611
+352:642
+352:643
+352:644
+352:645
+352:646
+352:647
+352:648
+352:649
+352:650
+352:651
+352:658
+352:659
+352:660
+352:661
+352:662
+353:664
+353:665
+353:666
+353:667
+353:669
+353:670
+353:671
+353:672
+353:673
+353:674
+353:675
+353:682
+353:683
+353:684
+353:685
+353:686
+353:688
+353:689
+354:690
+354:691
+354:692
+354:693
+354:695
+354:696
+354:697
+354:698
+354:699
+354:700
+354:701
+354:708
+354:709
+354:710
+354:711
+354:712
+354:714
+354:715
+355:717
+355:718
+355:719
+355:720
+355:721
+355:722
+355:723
+355:724
+355:725
+355:726
+355:727
+355:728
+355:729
+355:738
+355:739
+355:740
+355:741
+355:742
+356:745
+356:746
+356:747
+356:748
+356:749
+356:750
+356:758
+356:759
+356:760
+356:761
+356:762
+357:764
+357:765
+357:766
+357:767
+357:768
+357:769
+357:770
+357:771
+357:772
+357:773
+357:774
+357:775
+357:777
+357:778
+358:779
+358:780
+360:782
+361:783
+362:784
+363:785
+364:786
+365:787
+366:788
+367:789
+368:790
+369:791
+370:792
+371:793
+372:794
+373:795
+374:796
+375:797
+376:798
+377:799
+378:800
+379:801
+380:802
+381:803
+382:804
+383:805
+384:806
+385:807
+386:808
+387:809
+388:810
+389:811
+390:812
+391:813
+392:814
+393:815
+394:816
+395:817
+396:818
+397:819
+398:820
+399:821
+400:822
+401:823
+404:826
+404:827
+404:828
+404:829
+404:830
+404:831
+404:832
+404:839
+404:840
+404:841
+404:842
+404:843
+405:845
+405:846
+405:847
+408:851
+409:852
+410:853
+411:854
+412:855
+413:856
+414:857
+415:858
+416:859
+417:860
+418:861
+419:862
+420:863
+421:864
+422:865
+423:866
+424:867
+425:868
+426:869
+427:870
+428:871
+429:872
+430:873
+431:874
+432:875
+433:876
+437:887
+437:891
+437:892
+437:893
+437:922
+437:923
+437:924
+437:925
+437:926
+437:927
+437:928
+437:929
+437:930
+437:931
+437:932
+437:933
+438:894
+438:895
+438:896
+438:897
+438:898
+439:901
+439:902
+439:903
+439:904
+439:905
+440:908
+440:909
+440:910
+440:911
+440:912
+441:915
+441:916
+441:917
+441:918
+441:919
+444:935
+444:939
+444:982
+444:983
+444:984
+444:985
+444:986
+444:987
+444:988
+445:940
+445:941
+445:942
+445:943
+445:945
+445:946
+445:947
+445:948
+445:949
+445:950
+445:951
+445:959
+445:960
+445:961
+445:962
+445:963
+445:965
+445:966
+445:967
+445:968
+445:969
+445:970
+445:971
+445:972
+445:973
+445:974
+445:975
+445:976
+445:978
+445:979
+445:980
+445:981
+448:990
+448:991
+448:992
+448:993
+448:1022
+448:1031
+448:1101
+448:1102
+448:1103
+448:1104
+448:1106
+448:1107
+448:1108
+448:1109
+448:1110
+448:1111
+448:1112
+448:1113
+449:997
+450:998
+451:999
+452:1000
+453:1001
+454:1002
+455:1003
+456:1004
+457:1005
+458:1006
+459:1007
+460:1008
+461:1009
+462:1010
+463:1011
+464:1012
+465:1013
+466:1014
+467:1015
+468:1016
+469:1017
+470:1018
+471:1019
+475:1023
+475:1024
+475:1025
+475:1026
+477:1028
+478:1029
+481:1032
+481:1033
+481:1034
+481:1035
+481:1036
+481:1098
+481:1099
+482:1038
+482:1039
+482:1040
+482:1041
+482:1042
+482:1043
+482:1044
+482:1051
+482:1052
+482:1053
+482:1054
+482:1055
+483:1058
+483:1059
+483:1060
+483:1061
+483:1062
+483:1063
+483:1064
+483:1065
+483:1066
+483:1067
+483:1087
+483:1088
+483:1089
+483:1090
+483:1092
+483:1093
+483:1094
+485:1069
+486:1070
+487:1071
+488:1072
+489:1073
+490:1074
+491:1075
+492:1076
+493:1077
+494:1078
+495:1079
+496:1080
+497:1081
+498:1082
+499:1083
+500:1084
+501:1085
+504:1095
+504:1096
+504:1097
+506:1100
+509:1182
+509:1183
+509:1198
+509:1231
+509:1232
+509:1233
+509:1234
+509:1235
+509:1236
+509:1237
+509:1238
+510:1188
+511:1189
+512:1190
+513:1191
+514:1192
+515:1193
+516:1194
+517:1195
+520:1186
+520:1199
+520:1200
+520:1201
+522:1203
+523:1204
+524:1205
+525:1206
+526:1207
+527:1208
+528:1209
+529:1210
+530:1211
+531:1212
+532:1213
+533:1214
+534:1215
+535:1216
+536:1217
+537:1218
+538:1219
+539:1220
+540:1221
+541:1222
+542:1223
+543:1224
+544:1225
+545:1226
+546:1227
+547:1228
+548:1229
+552:1115
+552:1116
+552:1117
+552:1121
+552:1122
+552:1123
+552:1169
+552:1170
+552:1171
+552:1172
+552:1173
+552:1174
+552:1175
+552:1176
+552:1177
+552:1178
+552:1179
+552:1180
+553:1124
+553:1125
+553:1127
+553:1128
+553:1129
+553:1130
+553:1131
+553:1132
+553:1133
+553:1134
+553:1135
+553:1136
+553:1137
+553:1139
+553:1140
+553:1141
+553:1143
+553:1144
+553:1145
+553:1146
+553:1147
+553:1148
+553:1155
+553:1156
+553:1157
+553:1158
+553:1159
+554:1163
+554:1164
+554:1165
+554:1166
+557:1434
+557:1435
+557:1436
+557:1440
+557:1457
+557:1458
+557:1459
+557:1460
+557:1461
+557:1462
+557:1463
+558:1441
+558:1442
+558:1443
+558:1444
+558:1445
+558:1446
+558:1447
+558:1448
+558:1449
+558:1450
+558:1451
+558:1452
+560:1454
+561:1455
+565:1465
+565:1466
+565:1467
+565:1471
+565:1485
+565:1486
+565:1487
+565:1488
+565:1489
+565:1490
+565:1491
+566:1472
+566:1473
+566:1474
+566:1475
+566:1476
+566:1477
+566:1478
+566:1479
+566:1480
+568:1482
+569:1483
+573:1240
+573:1241
+573:1276
+573:1312
+573:1313
+573:1314
+573:1315
+573:1316
+573:1317
+573:1318
+573:1319
+574:1246
+575:1247
+576:1248
+577:1249
+578:1250
+579:1251
+580:1252
+581:1253
+582:1254
+583:1255
+584:1256
+585:1257
+586:1258
+587:1259
+588:1260
+589:1261
+590:1262
+591:1263
+592:1264
+593:1265
+594:1266
+595:1267
+596:1268
+597:1269
+598:1270
+599:1271
+600:1272
+601:1273
+604:1244
+604:1277
+604:1278
+604:1279
+604:1280
+604:1310
+604:1311
+605:1282
+605:1283
+605:1284
+605:1285
+605:1286
+605:1287
+605:1299
+605:1300
+605:1301
+605:1302
+605:1304
+605:1305
+605:1306
+606:1288
+606:1289
+608:1291
+609:1292
+610:1293
+611:1294
+612:1295
+613:1296
+614:1297
+617:1307
+617:1308
+617:1309
+621:1493
+621:1494
+621:1495
+621:1496
+621:1511
+621:1512
+621:1513
+621:1677
+621:1682
+621:1686
+621:1687
+621:1688
+621:1689
+621:1690
+621:1691
+621:1692
+621:1693
+621:1694
+621:1695
+621:1696
+621:1697
+621:1698
+622:1507
+623:1508
+626:1514
+626:1515
+626:1516
+626:1517
+626:1518
+626:1519
+626:1520
+626:1521
+626:1522
+626:1523
+628:1526
+628:1527
+628:1528
+628:1529
+628:1530
+628:1531
+628:1532
+628:1533
+628:1534
+628:1535
+630:1499
+630:1538
+630:1539
+630:1540
+630:1541
+630:1542
+630:1543
+630:1544
+630:1545
+630:1546
+630:1547
+632:1500
+632:1550
+632:1551
+632:1552
+632:1553
+632:1554
+632:1555
+632:1556
+632:1557
+632:1558
+632:1559
+632:1560
+632:1561
+632:1562
+634:1501
+634:1565
+634:1566
+634:1567
+634:1568
+634:1569
+634:1570
+634:1571
+634:1572
+634:1573
+634:1574
+634:1575
+634:1576
+634:1577
+636:1502
+636:1503
+636:1580
+636:1581
+636:1582
+636:1583
+636:1584
+636:1585
+636:1586
+636:1587
+636:1588
+636:1589
+636:1590
+636:1591
+636:1592
+636:1593
+637:1594
+638:1595
+639:1596
+640:1597
+641:1598
+642:1599
+643:1600
+644:1601
+645:1602
+646:1603
+649:1678
+649:1679
+649:1680
+649:1681
+651:1683
+651:1684
+651:1685
+653:1607
+653:1608
+653:1609
+653:1610
+655:1613
+655:1614
+655:1615
+655:1616
+655:1617
+657:1504
+657:1505
+657:1620
+657:1621
+657:1622
+657:1624
+657:1625
+657:1626
+657:1627
+657:1628
+657:1629
+657:1630
+657:1633
+657:1634
+657:1635
+657:1636
+657:1637
+657:1638
+657:1641
+657:1642
+657:1643
+657:1644
+657:1645
+659:1648
+660:1649
+661:1650
+662:1651
+663:1652
+666:1656
+666:1657
+666:1658
+666:1659
+666:1660
+668:1663
+668:1664
+668:1665
+668:1666
+668:1667
+670:1670
+670:1671
+670:1672
+670:1673
+670:1674
+673:2280
+673:2281
+673:2287
+673:2288
+673:2289
+673:2312
+673:2313
+673:2314
+673:2315
+673:2316
+673:2317
+673:2318
+673:2319
+673:2320
+673:2321
+673:2322
+673:2323
+673:2324
+674:2284
+674:2290
+674:2291
+674:2292
+674:2293
+674:2294
+676:2296
+677:2297
+679:2285
+679:2301
+679:2302
+679:2303
+679:2304
+679:2305
+681:2307
+682:2308
+686:1700
+686:1701
+686:1702
+686:1703
+686:1718
+686:1797
+686:1798
+686:1799
+686:1800
+686:1801
+686:1802
+686:1803
+686:1804
+687:1711
+688:1712
+689:1713
+690:1714
+691:1715
+695:1706
+695:1720
+695:1721
+695:1722
+695:1723
+695:1724
+695:1725
+695:1726
+695:1778
+695:1779
+695:1780
+695:1781
+695:1782
+697:1728
+698:1729
+699:1730
+700:1731
+701:1732
+702:1733
+703:1734
+704:1735
+706:1707
+706:1739
+706:1740
+706:1741
+706:1742
+706:1743
+708:1745
+709:1746
+710:1747
+711:1748
+712:1749
+713:1750
+714:1751
+715:1752
+717:1708
+717:1756
+717:1757
+717:1758
+717:1759
+717:1760
+719:1762
+720:1763
+722:1709
+722:1767
+722:1768
+722:1769
+722:1770
+722:1771
+724:1773
+725:1774
+729:1785
+730:1786
+731:1787
+732:1788
+733:1789
+734:1790
+735:1791
+736:1792
+737:1793
+738:1794
+739:1795
+743:1806
+743:1807
+743:1816
+743:1877
+743:1878
+743:1879
+743:1880
+743:1881
+743:1882
+743:1883
+743:1884
+744:1811
+745:1812
+746:1813
+749:1818
+749:1819
+749:1820
+749:1821
+749:1822
+749:1866
+749:1867
+749:1868
+749:1869
+749:1870
+750:1823
+750:1824
+751:1827
+751:1828
+751:1829
+752:1830
+752:1831
+752:1832
+752:1833
+752:1834
+752:1835
+752:1836
+752:1837
+753:1840
+753:1841
+753:1842
+754:1843
+754:1844
+754:1845
+754:1846
+754:1847
+754:1848
+754:1849
+754:1850
+755:1853
+755:1854
+755:1855
+756:1856
+756:1857
+756:1858
+756:1859
+756:1860
+756:1861
+756:1862
+756:1863
+759:1873
+760:1874
+761:1875
+765:2184
+765:2185
+765:2209
+765:2271
+765:2272
+765:2273
+765:2274
+765:2275
+765:2276
+765:2277
+765:2278
+766:2189
+767:2190
+768:2191
+769:2192
+770:2193
+771:2194
+772:2195
+773:2196
+774:2197
+775:2198
+776:2199
+777:2200
+778:2201
+779:2202
+780:2203
+781:2204
+782:2205
+783:2206
+786:2210
+786:2211
+786:2212
+786:2213
+786:2214
+786:2268
+786:2269
+787:2215
+787:2216
+789:2218
+790:2219
+791:2220
+792:2221
+793:2222
+798:2224
+798:2225
+798:2226
+798:2239
+798:2240
+798:2241
+798:2242
+798:2244
+798:2245
+799:2227
+799:2228
+799:2229
+800:2230
+800:2231
+802:2233
+803:2234
+804:2235
+805:2236
+806:2237
+809:2246
+809:2247
+809:2248
+809:2249
+809:2250
+809:2251
+809:2261
+809:2262
+809:2263
+809:2264
+809:2266
+809:2267
+810:2252
+810:2253
+812:2255
+813:2256
+814:2257
+815:2258
+816:2259
+820:2270
+823:1886
+823:1887
+823:1888
+823:1889
+823:1920
+823:1921
+823:1922
+823:2170
+823:2171
+823:2172
+823:2173
+823:2174
+823:2175
+823:2176
+823:2177
+823:2178
+823:2179
+823:2180
+823:2181
+823:2182
+824:1900
+825:1901
+826:1902
+827:1903
+828:1904
+829:1905
+830:1906
+831:1907
+832:1908
+833:1909
+834:1910
+835:1911
+836:1912
+837:1913
+838:1914
+839:1915
+840:1916
+841:1917
+845:1892
+845:1893
+845:1923
+845:1924
+845:1925
+845:1926
+845:1927
+845:1928
+845:1930
+845:1931
+845:1932
+845:1933
+845:1934
+845:1935
+845:1936
+845:1943
+845:1944
+845:1945
+845:1946
+845:1947
+845:1949
+845:1950
+847:1952
+848:1953
+849:1954
+850:1955
+851:1956
+852:1957
+853:1958
+854:1959
+855:1960
+856:1961
+857:1962
+858:1963
+859:1964
+860:1965
+861:1966
+862:1967
+863:1968
+864:1969
+865:1970
+866:1971
+867:1972
+868:1973
+869:1974
+870:1975
+871:1976
+872:1977
+873:1978
+874:1979
+875:1980
+876:1981
+877:1982
+878:1983
+880:1985
+881:1986
+882:1987
+883:1988
+884:1989
+885:1990
+886:1991
+887:1992
+890:1894
+890:1895
+890:1996
+890:1997
+890:1998
+890:1999
+890:2000
+890:2001
+890:2003
+890:2004
+890:2005
+890:2006
+890:2007
+890:2008
+890:2009
+890:2016
+890:2017
+890:2018
+890:2019
+890:2020
+890:2022
+890:2023
+892:2025
+893:2026
+894:2027
+895:2028
+896:2029
+897:2030
+898:2031
+899:2032
+900:2033
+901:2034
+902:2035
+903:2036
+904:2037
+905:2038
+906:2039
+907:2040
+908:2041
+909:2042
+910:2043
+911:2044
+912:2045
+913:2046
+914:2047
+915:2048
+916:2049
+917:2050
+918:2051
+919:2052
+920:2053
+921:2054
+922:2055
+923:2056
+924:2057
+925:2058
+926:2059
+927:2060
+928:2061
+929:2062
+930:2063
+931:2064
+932:2065
+933:2066
+934:2067
+935:2068
+936:2069
+937:2070
+938:2071
+939:2072
+940:2073
+941:2074
+942:2075
+943:2076
+944:2077
+947:1896
+947:2081
+947:2082
+947:2083
+947:2084
+947:2085
+949:2087
+950:2088
+951:2089
+952:2090
+953:2091
+954:2092
+955:2093
+956:2094
+957:2095
+958:2096
+959:2097
+960:2098
+961:2099
+962:2100
+963:2101
+964:2102
+965:2103
+966:2104
+967:2105
+970:1897
+970:2109
+970:2110
+970:2111
+970:2112
+970:2113
+972:2115
+973:2116
+974:2117
+975:2118
+976:2119
+977:2120
+978:2121
+979:2122
+980:2123
+981:2124
+982:2125
+983:2126
+984:2127
+985:2128
+986:2129
+987:2130
+988:2131
+989:2132
+990:2133
+991:2134
+994:1898
+994:2138
+994:2139
+994:2140
+994:2141
+994:2142
+996:2144
+997:2145
+1000:2149
+1000:2150
+1000:2151
+1000:2152
+1000:2153
+1000:2154
+1000:2155
+1000:2156
+1000:2157
+1000:2158
+1000:2159
+1000:2160
+1000:2161
+1002:2164
+1002:2165
+1002:2166
+1002:2167
+1005:2362
+1005:2366
+1005:2367
+1005:2368
+1005:2383
+1005:2384
+1005:2385
+1005:2386
+1005:2387
+1005:2388
+1005:2389
+1005:2390
+1005:2391
+1005:2392
+1005:2393
+1005:2394
+1006:2369
+1006:2370
+1006:2371
+1006:2372
+1006:2373
+1007:2376
+1007:2377
+1007:2378
+1007:2379
+1007:2380
+1011:2326
+1011:2327
+1011:2328
+1011:2329
+1011:2340
+1011:2353
+1011:2354
+1011:2355
+1011:2356
+1011:2357
+1011:2358
+1011:2359
+1011:2360
+1012:2334
+1013:2335
+1014:2336
+1015:2337
+1018:2332
+1018:2341
+1018:2342
+1018:2343
+1020:2345
+1021:2346
+1022:2347
+1023:2348
+1024:2349
+1025:2350
+1026:2351
+1030:2396
+1030:2405
+1030:2406
+1030:2407
+1030:2445
+1030:2446
+1030:2447
+1030:2448
+1030:2449
+1030:2450
+1030:2451
+1030:2452
+1030:2453
+1030:2454
+1030:2455
+1030:2456
+1031:2399
+1031:2408
+1031:2409
+1031:2410
+1031:2411
+1031:2412
+1032:2400
+1032:2415
+1032:2416
+1032:2417
+1032:2418
+1032:2419
+1033:2401
+1033:2422
+1033:2423
+1033:2424
+1033:2425
+1033:2426
+1034:2402
+1034:2403
+1034:2429
+1034:2430
+1034:2431
+1034:2432
+1034:2433
+1034:2434
+1034:2435
+1034:2436
+1034:2437
+1034:2438
+1034:2439
+1034:2440
+1034:2441
+1034:2442
+1039:1321
+1039:1322
+1039:1364
+1039:1425
+1039:1426
+1039:1427
+1039:1428
+1039:1429
+1039:1430
+1039:1431
+1039:1432
+1040:1328
+1041:1329
+1042:1330
+1043:1331
+1044:1332
+1045:1333
+1046:1334
+1047:1335
+1048:1336
+1049:1337
+1050:1338
+1051:1339
+1052:1340
+1053:1341
+1054:1342
+1055:1343
+1056:1344
+1057:1345
+1058:1346
+1059:1347
+1060:1348
+1061:1349
+1062:1350
+1063:1351
+1064:1352
+1065:1353
+1066:1354
+1067:1355
+1068:1356
+1069:1357
+1070:1358
+1071:1359
+1072:1360
+1073:1361
+1076:1365
+1076:1366
+1076:1367
+1076:1418
+1076:1419
+1076:1420
+1076:1421
+1076:1423
+1076:1424
+1077:1368
+1077:1369
+1077:1370
+1078:1325
+1078:1326
+1078:1371
+1078:1372
+1078:1373
+1078:1374
+1078:1376
+1078:1377
+1078:1378
+1078:1379
+1078:1380
+1078:1381
+1078:1382
+1078:1392
+1078:1393
+1078:1394
+1078:1395
+1078:1396
+1078:1398
+1078:1399
+1078:1400
+1078:1401
+1080:1403
+1081:1404
+1082:1405
+1083:1406
+1084:1407
+1085:1408
+1086:1409
+1087:1410
+1088:1411
+1089:1412
+1090:1413
+1091:1414
+1092:1415
+1093:1416
+1098:2561
+1098:2562
+1098:2563
+1098:2564
+1098:2576
+1098:2599
+1098:2600
+1098:2601
+1098:2602
+1098:2603
+1098:2604
+1098:2605
+1098:2606
+1099:2568
+1100:2569
+1101:2570
+1102:2571
+1103:2572
+1104:2573
+1107:2577
+1107:2578
+1107:2579
+1107:2580
+1107:2592
+1107:2593
+1109:2582
+1110:2583
+1111:2584
+1112:2585
+1114:2587
+1114:2588
+1115:2589
+1115:2590
+1115:2591
+1118:2595
+1119:2596
+1120:2597
+1124:2458
+1124:2459
+1124:2467
+1124:2538
+1124:2542
+1124:2547
+1124:2548
+1124:2549
+1124:2550
+1124:2552
+1124:2553
+1124:2554
+1124:2555
+1124:2556
+1124:2557
+1124:2558
+1124:2559
+1125:2464
+1130:2462
+1130:2468
+1130:2469
+1130:2470
+1130:2471
+1130:2472
+1130:2473
+1130:2474
+1130:2536
+1130:2537
+1131:2476
+1131:2477
+1131:2478
+1131:2479
+1131:2480
+1131:2481
+1131:2482
+1131:2483
+1131:2484
+1131:2485
+1131:2486
+1131:2487
+1131:2488
+1131:2490
+1131:2491
+1131:2492
+1131:2493
+1131:2494
+1131:2495
+1131:2504
+1131:2505
+1131:2506
+1131:2507
+1131:2509
+1131:2510
+1131:2511
+1131:2527
+1131:2528
+1131:2529
+1131:2530
+1131:2531
+1132:2496
+1132:2497
+1133:2498
+1134:2499
+1135:2500
+1136:2501
+1137:2502
+1140:2514
+1140:2515
+1140:2516
+1140:2517
+1140:2518
+1141:2519
+1142:2520
+1143:2521
+1144:2522
+1145:2523
+1148:2533
+1148:2534
+1148:2535
+1151:2539
+1151:2540
+1151:2541
+1154:2543
+1154:2544
+1154:2545
+1154:2546
+1157:2608
+1157:2609
+1157:2617
+1157:2618
+1157:2619
+1157:2645
+1157:2646
+1157:2647
+1157:2648
+1157:2649
+1157:2650
+1157:2651
+1157:2652
+1157:2653
+1157:2654
+1157:2655
+1157:2656
+1157:2657
+1158:2613
+1159:2614
+1162:2620
+1162:2621
+1162:2622
+1162:2623
+1162:2624
+1162:2625
+1162:2626
+1162:2627
+1162:2628
+1164:2631
+1164:2632
+1164:2633
+1164:2634
+1164:2635
+1164:2636
+1166:2639
+1166:2640
+1166:2641
+1166:2642
+1169:3071
+1169:3072
+1169:3076
+1169:3077
+1169:3078
+1169:3130
+1169:3131
+1169:3132
+1169:3133
+1169:3134
+1169:3135
+1169:3136
+1169:3137
+1169:3138
+1169:3139
+1169:3140
+1169:3141
+1169:3142
+1170:3079
+1170:3080
+1170:3081
+1170:3082
+1170:3083
+1170:3084
+1170:3085
+1170:3086
+1170:3087
+1170:3088
+1172:3090
+1173:3091
+1174:3092
+1176:3096
+1176:3097
+1176:3098
+1176:3099
+1176:3100
+1176:3101
+1176:3102
+1176:3103
+1176:3104
+1176:3105
+1178:3107
+1179:3108
+1180:3109
+1182:3113
+1182:3114
+1182:3115
+1182:3116
+1182:3117
+1182:3118
+1182:3119
+1182:3120
+1182:3121
+1182:3122
+1184:3124
+1185:3125
+1186:3126
+1190:3144
+1190:3145
+1190:3156
+1190:3200
+1190:3201
+1190:3202
+1190:3203
+1190:3204
+1190:3205
+1190:3206
+1190:3207
+1191:3149
+1192:3150
+1193:3151
+1194:3152
+1195:3153
+1198:3157
+1198:3158
+1198:3159
+1198:3160
+1198:3161
+1198:3192
+1198:3193
+1199:3162
+1199:3163
+1200:3164
+1201:3165
+1202:3166
+1203:3167
+1204:3168
+1206:3170
+1206:3171
+1206:3172
+1206:3173
+1206:3174
+1206:3175
+1206:3185
+1206:3186
+1206:3187
+1206:3188
+1206:3190
+1206:3191
+1207:3176
+1207:3177
+1209:3179
+1210:3180
+1211:3181
+1212:3182
+1213:3183
+1218:3195
+1219:3196
+1220:3197
+1221:3198
+1225:2841
+1225:2842
+1225:2843
+1225:2844
+1225:2853
+1225:2854
+1225:2855
+1225:3057
+1225:3058
+1225:3059
+1225:3060
+1225:3061
+1225:3062
+1225:3063
+1225:3064
+1225:3065
+1225:3066
+1225:3067
+1225:3068
+1225:3069
+1226:2847
+1226:2856
+1226:2857
+1226:2858
+1226:2859
+1226:2860
+1228:2862
+1229:2863
+1230:2864
+1231:2865
+1232:2866
+1233:2867
+1234:2868
+1235:2869
+1236:2870
+1237:2871
+1238:2872
+1239:2873
+1240:2874
+1241:2875
+1242:2876
+1243:2877
+1244:2878
+1245:2879
+1246:2880
+1247:2881
+1248:2882
+1249:2883
+1250:2884
+1251:2885
+1252:2886
+1253:2887
+1254:2888
+1255:2889
+1256:2890
+1259:2894
+1259:2895
+1259:2896
+1259:2897
+1259:2898
+1260:2848
+1260:2849
+1260:2900
+1260:2901
+1260:2902
+1260:2903
+1260:2904
+1260:2905
+1260:2906
+1260:2907
+1260:2909
+1260:2910
+1260:2911
+1260:2912
+1260:2913
+1260:2914
+1260:2915
+1260:2922
+1260:2923
+1260:2924
+1260:2925
+1260:2926
+1260:2928
+1260:2929
+1260:2946
+1260:2947
+1260:2948
+1260:2949
+1260:2950
+1261:2850
+1261:2932
+1261:2933
+1261:2934
+1261:2935
+1261:2936
+1262:2851
+1262:2939
+1262:2940
+1262:2941
+1262:2942
+1262:2943
+1265:2953
+1266:2954
+1267:2955
+1268:2956
+1269:2957
+1270:2958
+1271:2959
+1272:2960
+1273:2961
+1274:2962
+1275:2963
+1276:2964
+1277:2965
+1278:2966
+1279:2967
+1280:2968
+1281:2969
+1282:2970
+1283:2971
+1284:2972
+1285:2973
+1286:2974
+1287:2975
+1288:2976
+1289:2977
+1290:2978
+1291:2979
+1292:2980
+1293:2981
+1294:2982
+1295:2983
+1296:2984
+1299:2988
+1299:2989
+1299:2990
+1299:2991
+1299:2992
+1301:2994
+1302:2995
+1303:2996
+1304:2997
+1305:2998
+1306:2999
+1307:3000
+1308:3001
+1309:3002
+1310:3003
+1311:3004
+1312:3005
+1313:3006
+1314:3007
+1315:3008
+1316:3009
+1317:3010
+1318:3011
+1319:3012
+1320:3013
+1321:3014
+1322:3015
+1323:3016
+1324:3017
+1325:3018
+1326:3019
+1327:3020
+1328:3021
+1329:3022
+1330:3023
+1331:3024
+1332:3025
+1333:3026
+1334:3027
+1335:3028
+1336:3029
+1337:3030
+1338:3031
+1339:3032
+1340:3033
+1341:3034
+1342:3035
+1343:3036
+1344:3037
+1345:3038
+1346:3039
+1349:3043
+1349:3044
+1349:3045
+1349:3046
+1349:3047
+1351:3049
+1352:3050
+1353:3051
+1354:3052
+1355:3053
+1359:2659
+1359:2660
+1359:2669
+1359:2670
+1359:2671
+1359:2827
+1359:2828
+1359:2829
+1359:2830
+1359:2831
+1359:2832
+1359:2833
+1359:2834
+1359:2835
+1359:2836
+1359:2837
+1359:2838
+1359:2839
+1360:2672
+1360:2673
+1360:2674
+1360:2675
+1360:2676
+1360:2677
+1360:2678
+1360:2679
+1360:2680
+1360:2681
+1360:2682
+1360:2683
+1360:2684
+1360:2685
+1360:2686
+1361:2663
+1361:2664
+1361:2689
+1361:2690
+1361:2691
+1361:2692
+1361:2693
+1361:2694
+1361:2696
+1361:2697
+1361:2698
+1361:2699
+1361:2700
+1361:2701
+1361:2702
+1361:2705
+1361:2706
+1361:2707
+1361:2708
+1361:2709
+1361:2712
+1361:2713
+1361:2714
+1361:2715
+1361:2716
+1361:2809
+1361:2810
+1363:2719
+1364:2720
+1365:2721
+1366:2722
+1367:2723
+1368:2724
+1369:2725
+1370:2726
+1371:2727
+1372:2728
+1373:2729
+1374:2730
+1376:2732
+1376:2733
+1376:2734
+1376:2735
+1376:2768
+1376:2769
+1377:2665
+1377:2666
+1377:2736
+1377:2737
+1377:2738
+1377:2739
+1377:2740
+1377:2741
+1377:2742
+1377:2743
+1377:2744
+1377:2745
+1377:2746
+1377:2747
+1377:2748
+1377:2749
+1377:2750
+1377:2759
+1377:2760
+1377:2761
+1377:2762
+1377:2763
+1377:2764
+1377:2766
+1377:2767
+1379:2752
+1380:2753
+1381:2754
+1382:2755
+1383:2756
+1384:2757
+1389:2771
+1389:2772
+1389:2773
+1389:2774
+1389:2775
+1389:2776
+1389:2777
+1389:2803
+1389:2804
+1389:2805
+1389:2806
+1389:2807
+1391:2779
+1392:2780
+1393:2781
+1394:2782
+1396:2786
+1396:2787
+1396:2788
+1396:2789
+1396:2790
+1398:2792
+1399:2793
+1400:2794
+1401:2795
+1406:2667
+1406:2813
+1406:2814
+1406:2815
+1406:2816
+1406:2817
+1408:2819
+1409:2820
+1410:2821
+1411:2822
+1412:2823
+*E
diff --git a/antlr_3_1_source/codegen/CodeGenTreeWalkerTokenTypes.java b/antlr_3_1_source/codegen/CodeGenTreeWalkerTokenTypes.java
new file mode 100644
index 0000000..d9ab773
--- /dev/null
+++ b/antlr_3_1_source/codegen/CodeGenTreeWalkerTokenTypes.java
@@ -0,0 +1,140 @@
+// $ANTLR 2.7.7 (2006-01-29): "codegen.g" -> "CodeGenTreeWalker.java"$
+
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2008 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+ package org.antlr.codegen;
+ import org.antlr.tool.*;
+ import org.antlr.analysis.*;
+ import org.antlr.misc.*;
+ import java.util.*;
+ import org.antlr.stringtemplate.*;
+ import antlr.TokenWithIndex;
+ import antlr.CommonToken;
+
+public interface CodeGenTreeWalkerTokenTypes {
+ int EOF = 1;
+ int NULL_TREE_LOOKAHEAD = 3;
+ int OPTIONS = 4;
+ int TOKENS = 5;
+ int PARSER = 6;
+ int LEXER = 7;
+ int RULE = 8;
+ int BLOCK = 9;
+ int OPTIONAL = 10;
+ int CLOSURE = 11;
+ int POSITIVE_CLOSURE = 12;
+ int SYNPRED = 13;
+ int RANGE = 14;
+ int CHAR_RANGE = 15;
+ int EPSILON = 16;
+ int ALT = 17;
+ int EOR = 18;
+ int EOB = 19;
+ int EOA = 20;
+ int ID = 21;
+ int ARG = 22;
+ int ARGLIST = 23;
+ int RET = 24;
+ int LEXER_GRAMMAR = 25;
+ int PARSER_GRAMMAR = 26;
+ int TREE_GRAMMAR = 27;
+ int COMBINED_GRAMMAR = 28;
+ int INITACTION = 29;
+ int FORCED_ACTION = 30;
+ int LABEL = 31;
+ int TEMPLATE = 32;
+ int SCOPE = 33;
+ int IMPORT = 34;
+ int GATED_SEMPRED = 35;
+ int SYN_SEMPRED = 36;
+ int BACKTRACK_SEMPRED = 37;
+ int FRAGMENT = 38;
+ int DOT = 39;
+ int ACTION = 40;
+ int DOC_COMMENT = 41;
+ int SEMI = 42;
+ int LITERAL_lexer = 43;
+ int LITERAL_tree = 44;
+ int LITERAL_grammar = 45;
+ int AMPERSAND = 46;
+ int COLON = 47;
+ int RCURLY = 48;
+ int ASSIGN = 49;
+ int STRING_LITERAL = 50;
+ int CHAR_LITERAL = 51;
+ int INT = 52;
+ int STAR = 53;
+ int COMMA = 54;
+ int TOKEN_REF = 55;
+ int LITERAL_protected = 56;
+ int LITERAL_public = 57;
+ int LITERAL_private = 58;
+ int BANG = 59;
+ int ARG_ACTION = 60;
+ int LITERAL_returns = 61;
+ int LITERAL_throws = 62;
+ int LPAREN = 63;
+ int OR = 64;
+ int RPAREN = 65;
+ int LITERAL_catch = 66;
+ int LITERAL_finally = 67;
+ int PLUS_ASSIGN = 68;
+ int SEMPRED = 69;
+ int IMPLIES = 70;
+ int ROOT = 71;
+ int WILDCARD = 72;
+ int RULE_REF = 73;
+ int NOT = 74;
+ int TREE_BEGIN = 75;
+ int QUESTION = 76;
+ int PLUS = 77;
+ int OPEN_ELEMENT_OPTION = 78;
+ int CLOSE_ELEMENT_OPTION = 79;
+ int REWRITE = 80;
+ int ETC = 81;
+ int DOLLAR = 82;
+ int DOUBLE_QUOTE_STRING_LITERAL = 83;
+ int DOUBLE_ANGLE_STRING_LITERAL = 84;
+ int WS = 85;
+ int COMMENT = 86;
+ int SL_COMMENT = 87;
+ int ML_COMMENT = 88;
+ int STRAY_BRACKET = 89;
+ int ESC = 90;
+ int DIGIT = 91;
+ int XDIGIT = 92;
+ int NESTED_ARG_ACTION = 93;
+ int NESTED_ACTION = 94;
+ int ACTION_CHAR_LITERAL = 95;
+ int ACTION_STRING_LITERAL = 96;
+ int ACTION_ESC = 97;
+ int WS_LOOP = 98;
+ int INTERNAL_RULE_REF = 99;
+ int WS_OPT = 100;
+ int SRC = 101;
+}
diff --git a/antlr_3_1_source/codegen/CodeGenTreeWalkerTokenTypes.txt b/antlr_3_1_source/codegen/CodeGenTreeWalkerTokenTypes.txt
new file mode 100644
index 0000000..dc4f000
--- /dev/null
+++ b/antlr_3_1_source/codegen/CodeGenTreeWalkerTokenTypes.txt
@@ -0,0 +1,100 @@
+// $ANTLR 2.7.7 (2006-01-29): codegen.g -> CodeGenTreeWalkerTokenTypes.txt$
+CodeGenTreeWalker // output token vocab name
+OPTIONS="options"=4
+TOKENS="tokens"=5
+PARSER="parser"=6
+LEXER=7
+RULE=8
+BLOCK=9
+OPTIONAL=10
+CLOSURE=11
+POSITIVE_CLOSURE=12
+SYNPRED=13
+RANGE=14
+CHAR_RANGE=15
+EPSILON=16
+ALT=17
+EOR=18
+EOB=19
+EOA=20
+ID=21
+ARG=22
+ARGLIST=23
+RET=24
+LEXER_GRAMMAR=25
+PARSER_GRAMMAR=26
+TREE_GRAMMAR=27
+COMBINED_GRAMMAR=28
+INITACTION=29
+FORCED_ACTION=30
+LABEL=31
+TEMPLATE=32
+SCOPE="scope"=33
+IMPORT="import"=34
+GATED_SEMPRED=35
+SYN_SEMPRED=36
+BACKTRACK_SEMPRED=37
+FRAGMENT="fragment"=38
+DOT=39
+ACTION=40
+DOC_COMMENT=41
+SEMI=42
+LITERAL_lexer="lexer"=43
+LITERAL_tree="tree"=44
+LITERAL_grammar="grammar"=45
+AMPERSAND=46
+COLON=47
+RCURLY=48
+ASSIGN=49
+STRING_LITERAL=50
+CHAR_LITERAL=51
+INT=52
+STAR=53
+COMMA=54
+TOKEN_REF=55
+LITERAL_protected="protected"=56
+LITERAL_public="public"=57
+LITERAL_private="private"=58
+BANG=59
+ARG_ACTION=60
+LITERAL_returns="returns"=61
+LITERAL_throws="throws"=62
+LPAREN=63
+OR=64
+RPAREN=65
+LITERAL_catch="catch"=66
+LITERAL_finally="finally"=67
+PLUS_ASSIGN=68
+SEMPRED=69
+IMPLIES=70
+ROOT=71
+WILDCARD=72
+RULE_REF=73
+NOT=74
+TREE_BEGIN=75
+QUESTION=76
+PLUS=77
+OPEN_ELEMENT_OPTION=78
+CLOSE_ELEMENT_OPTION=79
+REWRITE=80
+ETC=81
+DOLLAR=82
+DOUBLE_QUOTE_STRING_LITERAL=83
+DOUBLE_ANGLE_STRING_LITERAL=84
+WS=85
+COMMENT=86
+SL_COMMENT=87
+ML_COMMENT=88
+STRAY_BRACKET=89
+ESC=90
+DIGIT=91
+XDIGIT=92
+NESTED_ARG_ACTION=93
+NESTED_ACTION=94
+ACTION_CHAR_LITERAL=95
+ACTION_STRING_LITERAL=96
+ACTION_ESC=97
+WS_LOOP=98
+INTERNAL_RULE_REF=99
+WS_OPT=100
+SRC=101
diff --git a/antlr_3_1_source/codegen/CodeGenerator.java b/antlr_3_1_source/codegen/CodeGenerator.java
new file mode 100644
index 0000000..222b3c4
--- /dev/null
+++ b/antlr_3_1_source/codegen/CodeGenerator.java
@@ -0,0 +1,1330 @@
+/*
+[The "BSD licence"]
+Copyright (c) 2005-2007 Terence Parr
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions
+are met:
+1. Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+2. Redistributions in binary form must reproduce the above copyright
+notice, this list of conditions and the following disclaimer in the
+documentation and/or other materials provided with the distribution.
+3. The name of the author may not be used to endorse or promote products
+derived from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.codegen;
+
+import antlr.RecognitionException;
+import antlr.TokenStreamRewriteEngine;
+import antlr.collections.AST;
+import org.antlr.Tool;
+import org.antlr.analysis.*;
+import org.antlr.misc.BitSet;
+import org.antlr.misc.*;
+import org.antlr.stringtemplate.*;
+import org.antlr.stringtemplate.language.AngleBracketTemplateLexer;
+import org.antlr.tool.*;
+
+import java.io.IOException;
+import java.io.StringReader;
+import java.io.Writer;
+import java.util.*;
+
+/** ANTLR's code generator.
+ *
+ * Generate recognizers derived from grammars. Language independence is
+ * achieved through the use of StringTemplateGroup objects. All output
+ * strings are completely encapsulated in the group files such as Java.stg.
+ * Some computations are done that are unused by a particular language.
+ * This generator just computes and sets the values into the templates;
+ * the templates are free to use or not use the information.
+ *
+ * To make a new code generation target, define X.stg for language X
+ * by copying from existing Y.stg most closely releated to your language;
+ * e.g., to do CSharp.stg copy Java.stg. The template group file has a
+ * bunch of templates that are needed by the code generator. You can add
+ * a new target w/o even recompiling ANTLR itself. The language=X option
+ * in a grammar file dictates which templates get loaded/used.
+ *
+ * Some languages, like C, need both parser files and header files. Java needs
+ * to have a separate file for the cyclic DFA as ANTLR generates bytecodes
+ * directly (which cannot be in the generated parser Java file). To facilitate
+ * this,
+ *
+ * the cyclic DFA can be in the same file, but the header and output files must
+ * be separate; the recognizer goes in the output file.
+ */
+public class CodeGenerator {
+ /** When generating SWITCH statements, some targets might need to limit
+ * the size (based upon the number of case labels). Generally, this
+ * limit will be hit only for lexers, where a wildcard in a Unicode
+ * vocabulary environment would generate a SWITCH with 65000 labels.
+ */
+ public int MAX_SWITCH_CASE_LABELS = 300;
+ public int MIN_SWITCH_ALTS = 3;
+ public boolean GENERATE_SWITCHES_WHEN_POSSIBLE = true;
+ //public static boolean GEN_ACYCLIC_DFA_INLINE = true;
+ public static boolean EMIT_TEMPLATE_DELIMITERS = false;
+ public static int MAX_ACYCLIC_DFA_STATES_INLINE = 10;
+
+ public String classpathTemplateRootDirectoryName =
+ "org/antlr/codegen/templates";
+
+ /** Which grammar are we generating code for? Each generator
+ * is attached to a specific grammar.
+ */
+ public Grammar grammar;
+
+ /** What language are we generating? */
+ protected String language;
+
+ /** The target specifies how to write out files and do other language
+ * specific actions.
+ */
+ public Target target = null;
+
+ /** Where are the templates this generator should use to generate code? */
+ protected StringTemplateGroup templates;
+
+ /** The basic output templates without AST or templates stuff; this will be
+ * the templates loaded for the language such as Java.stg *and* the Dbg
+ * stuff if turned on. This is used for generating syntactic predicates.
+ */
+ protected StringTemplateGroup baseTemplates;
+
+ protected StringTemplate recognizerST;
+ protected StringTemplate outputFileST;
+ protected StringTemplate headerFileST;
+
+ /** Used to create unique labels */
+ protected int uniqueLabelNumber = 1;
+
+ /** A reference to the ANTLR tool so we can learn about output directories
+ * and such.
+ */
+ protected Tool tool;
+
+ /** Generate debugging event method calls */
+ protected boolean debug;
+
+ /** Create a Tracer object and make the recognizer invoke this. */
+ protected boolean trace;
+
+ /** Track runtime parsing information about decisions etc...
+ * This requires the debugging event mechanism to work.
+ */
+ protected boolean profile;
+
+ protected int lineWidth = 72;
+
+ /** I have factored out the generation of acyclic DFAs to a separate class */
+ public ACyclicDFACodeGenerator acyclicDFAGenerator =
+ new ACyclicDFACodeGenerator(this);
+
+ /** I have factored out the generation of cyclic DFAs to a separate class */
+ /*
+ public CyclicDFACodeGenerator cyclicDFAGenerator =
+ new CyclicDFACodeGenerator(this);
+ */
+
+ public static final String VOCAB_FILE_EXTENSION = ".tokens";
+ protected final static String vocabFilePattern =
+ "<tokens:{<attr.name>=<attr.type>\n}>" +
+ "<literals:{<attr.name>=<attr.type>\n}>";
+
+ public CodeGenerator(Tool tool, Grammar grammar, String language) {
+ this.tool = tool;
+ this.grammar = grammar;
+ this.language = language;
+ loadLanguageTarget(language);
+ }
+
+ protected void loadLanguageTarget(String language) {
+ String targetName = "org.antlr.codegen."+language+"Target";
+ try {
+ Class c = Class.forName(targetName);
+ target = (Target)c.newInstance();
+ }
+ catch (ClassNotFoundException cnfe) {
+ target = new Target(); // use default
+ }
+ catch (InstantiationException ie) {
+ ErrorManager.error(ErrorManager.MSG_CANNOT_CREATE_TARGET_GENERATOR,
+ targetName,
+ ie);
+ }
+ catch (IllegalAccessException cnfe) {
+ ErrorManager.error(ErrorManager.MSG_CANNOT_CREATE_TARGET_GENERATOR,
+ targetName,
+ cnfe);
+ }
+ }
+
+ /** load the main language.stg template group file */
+ public void loadTemplates(String language) {
+ // get a group loader containing main templates dir and target subdir
+ String templateDirs =
+ classpathTemplateRootDirectoryName+":"+
+ classpathTemplateRootDirectoryName+"/"+language;
+ //System.out.println("targets="+templateDirs.toString());
+ StringTemplateGroupLoader loader =
+ new CommonGroupLoader(templateDirs,
+ ErrorManager.getStringTemplateErrorListener());
+ StringTemplateGroup.registerGroupLoader(loader);
+ StringTemplateGroup.registerDefaultLexer(AngleBracketTemplateLexer.class);
+
+ // first load main language template
+ StringTemplateGroup coreTemplates =
+ StringTemplateGroup.loadGroup(language);
+ baseTemplates = coreTemplates;
+ if ( coreTemplates ==null ) {
+ ErrorManager.error(ErrorManager.MSG_MISSING_CODE_GEN_TEMPLATES,
+ language);
+ return;
+ }
+
+ // dynamically add subgroups that act like filters to apply to
+ // their supergroup. E.g., Java:Dbg:AST:ASTParser::ASTDbg.
+ String outputOption = (String)grammar.getOption("output");
+ if ( outputOption!=null && outputOption.equals("AST") ) {
+ if ( debug && grammar.type!=Grammar.LEXER ) {
+ StringTemplateGroup dbgTemplates =
+ StringTemplateGroup.loadGroup("Dbg", coreTemplates);
+ baseTemplates = dbgTemplates;
+ StringTemplateGroup astTemplates =
+ StringTemplateGroup.loadGroup("AST",dbgTemplates);
+ StringTemplateGroup astParserTemplates = astTemplates;
+ //if ( !grammar.rewriteMode() ) {
+ if ( grammar.type==Grammar.TREE_PARSER ) {
+ astParserTemplates =
+ StringTemplateGroup.loadGroup("ASTTreeParser", astTemplates);
+ }
+ else {
+ astParserTemplates =
+ StringTemplateGroup.loadGroup("ASTParser", astTemplates);
+ }
+ //}
+ StringTemplateGroup astDbgTemplates =
+ StringTemplateGroup.loadGroup("ASTDbg", astParserTemplates);
+ templates = astDbgTemplates;
+ }
+ else {
+ StringTemplateGroup astTemplates =
+ StringTemplateGroup.loadGroup("AST", coreTemplates);
+ StringTemplateGroup astParserTemplates = astTemplates;
+ //if ( !grammar.rewriteMode() ) {
+ if ( grammar.type==Grammar.TREE_PARSER ) {
+ astParserTemplates =
+ StringTemplateGroup.loadGroup("ASTTreeParser", astTemplates);
+ }
+ else {
+ astParserTemplates =
+ StringTemplateGroup.loadGroup("ASTParser", astTemplates);
+ }
+ //}
+ templates = astParserTemplates;
+ }
+ }
+ else if ( outputOption!=null && outputOption.equals("template") ) {
+ if ( debug && grammar.type!=Grammar.LEXER ) {
+ StringTemplateGroup dbgTemplates =
+ StringTemplateGroup.loadGroup("Dbg", coreTemplates);
+ baseTemplates = dbgTemplates;
+ StringTemplateGroup stTemplates =
+ StringTemplateGroup.loadGroup("ST",dbgTemplates);
+ templates = stTemplates;
+ }
+ else {
+ templates = StringTemplateGroup.loadGroup("ST", coreTemplates);
+ }
+ }
+ else if ( debug && grammar.type!=Grammar.LEXER ) {
+ templates = StringTemplateGroup.loadGroup("Dbg", coreTemplates);
+ baseTemplates = templates;
+ }
+ else {
+ templates = coreTemplates;
+ }
+
+ if ( EMIT_TEMPLATE_DELIMITERS ) {
+ templates.emitDebugStartStopStrings(true);
+ templates.doNotEmitDebugStringsForTemplate("codeFileExtension");
+ templates.doNotEmitDebugStringsForTemplate("headerFileExtension");
+ }
+ }
+
+ /** Given the grammar to which we are attached, walk the AST associated
+ * with that grammar to create NFAs. Then create the DFAs for all
+ * decision points in the grammar by converting the NFAs to DFAs.
+ * Finally, walk the AST again to generate code.
+ *
+ * Either 1 or 2 files are written:
+ *
+ * recognizer: the main parser/lexer/treewalker item
+ * header file: language like C/C++ need extern definitions
+ *
+ * The target, such as JavaTarget, dictates which files get written.
+ */
+ public StringTemplate genRecognizer() {
+ //System.out.println("### generate "+grammar.name+" recognizer");
+ // LOAD OUTPUT TEMPLATES
+ loadTemplates(language);
+ if ( templates==null ) {
+ return null;
+ }
+
+ // CREATE NFA FROM GRAMMAR, CREATE DFA FROM NFA
+ if ( ErrorManager.doNotAttemptAnalysis() ) {
+ return null;
+ }
+ target.performGrammarAnalysis(this, grammar);
+
+
+ // some grammar analysis errors will not yield reliable DFA
+ if ( ErrorManager.doNotAttemptCodeGen() ) {
+ return null;
+ }
+
+ // OPTIMIZE DFA
+ DFAOptimizer optimizer = new DFAOptimizer(grammar);
+ optimizer.optimize();
+
+ // OUTPUT FILE (contains recognizerST)
+ outputFileST = templates.getInstanceOf("outputFile");
+
+ // HEADER FILE
+ if ( templates.isDefined("headerFile") ) {
+ headerFileST = templates.getInstanceOf("headerFile");
+ }
+ else {
+ // create a dummy to avoid null-checks all over code generator
+ headerFileST = new StringTemplate(templates,"");
+ headerFileST.setName("dummy-header-file");
+ }
+
+ boolean filterMode = grammar.getOption("filter")!=null &&
+ grammar.getOption("filter").equals("true");
+ boolean canBacktrack = grammar.getSyntacticPredicates()!=null ||
+ filterMode;
+
+ // TODO: move this down further because generating the recognizer
+ // alters the model with info on who uses predefined properties etc...
+ // The actions here might refer to something.
+
+ // The only two possible output files are available at this point.
+ // Verify action scopes are ok for target and dump actions into output
+ // templates. Templates can then reference these actions, for example.
+ Map actions = grammar.getActions();
+ verifyActionScopesOkForTarget(actions);
+ // translate $x::y references
+ translateActionAttributeReferences(actions);
+ Map actionsForGrammarScope =
+ (Map)actions.get(grammar.getDefaultActionScope(grammar.type));
+ if ( filterMode &&
+ (actionsForGrammarScope==null ||
+ !actionsForGrammarScope.containsKey(Grammar.SYNPREDGATE_ACTION_NAME)) )
+ {
+ // if filtering, we need to set actions to execute at backtracking
+ // level 1 not 0. Don't set this action if a user already has, though.
+ StringTemplate gateST = templates.getInstanceOf("filteringActionGate");
+ if ( actionsForGrammarScope==null ) {
+ actionsForGrammarScope=new HashMap();
+ actions.put(grammar.getDefaultActionScope(grammar.type),
+ actionsForGrammarScope);
+ }
+ actionsForGrammarScope.put(Grammar.SYNPREDGATE_ACTION_NAME,
+ gateST);
+ }
+ headerFileST.setAttribute("actions", actions);
+ outputFileST.setAttribute("actions", actions);
+
+ headerFileST.setAttribute("buildTemplate", Boolean.valueOf(grammar.buildTemplate()));
+ outputFileST.setAttribute("buildTemplate", Boolean.valueOf(grammar.buildTemplate()));
+ headerFileST.setAttribute("buildAST", Boolean.valueOf(grammar.buildAST()));
+ outputFileST.setAttribute("buildAST", Boolean.valueOf(grammar.buildAST()));
+
+ outputFileST.setAttribute("rewriteMode", Boolean.valueOf(grammar.rewriteMode()));
+ headerFileST.setAttribute("rewriteMode", Boolean.valueOf(grammar.rewriteMode()));
+
+ outputFileST.setAttribute("backtracking", Boolean.valueOf(canBacktrack));
+ headerFileST.setAttribute("backtracking", Boolean.valueOf(canBacktrack));
+ // turn on memoize attribute at grammar level so we can create ruleMemo.
+ // each rule has memoize attr that hides this one, indicating whether
+ // it needs to save results
+ String memoize = (String)grammar.getOption("memoize");
+ outputFileST.setAttribute("memoize",
+ (grammar.atLeastOneRuleMemoizes||
+ Boolean.valueOf(memoize!=null&&memoize.equals("true"))&&
+ canBacktrack));
+ headerFileST.setAttribute("memoize",
+ (grammar.atLeastOneRuleMemoizes||
+ Boolean.valueOf(memoize!=null&&memoize.equals("true"))&&
+ canBacktrack));
+
+
+ outputFileST.setAttribute("trace", Boolean.valueOf(trace));
+ headerFileST.setAttribute("trace", Boolean.valueOf(trace));
+
+ outputFileST.setAttribute("profile", Boolean.valueOf(profile));
+ headerFileST.setAttribute("profile", Boolean.valueOf(profile));
+
+ // RECOGNIZER
+ if ( grammar.type==Grammar.LEXER ) {
+ recognizerST = templates.getInstanceOf("lexer");
+ outputFileST.setAttribute("LEXER", Boolean.valueOf(true));
+ headerFileST.setAttribute("LEXER", Boolean.valueOf(true));
+ recognizerST.setAttribute("filterMode",
+ Boolean.valueOf(filterMode));
+ }
+ else if ( grammar.type==Grammar.PARSER ||
+ grammar.type==Grammar.COMBINED )
+ {
+ recognizerST = templates.getInstanceOf("parser");
+ outputFileST.setAttribute("PARSER", Boolean.valueOf(true));
+ headerFileST.setAttribute("PARSER", Boolean.valueOf(true));
+ }
+ else {
+ recognizerST = templates.getInstanceOf("treeParser");
+ outputFileST.setAttribute("TREE_PARSER", Boolean.valueOf(true));
+ headerFileST.setAttribute("TREE_PARSER", Boolean.valueOf(true));
+ }
+ outputFileST.setAttribute("recognizer", recognizerST);
+ headerFileST.setAttribute("recognizer", recognizerST);
+ outputFileST.setAttribute("actionScope",
+ grammar.getDefaultActionScope(grammar.type));
+ headerFileST.setAttribute("actionScope",
+ grammar.getDefaultActionScope(grammar.type));
+
+ String targetAppropriateFileNameString =
+ target.getTargetStringLiteralFromString(grammar.getFileName());
+ outputFileST.setAttribute("fileName", targetAppropriateFileNameString);
+ headerFileST.setAttribute("fileName", targetAppropriateFileNameString);
+ outputFileST.setAttribute("ANTLRVersion", Tool.VERSION);
+ headerFileST.setAttribute("ANTLRVersion", Tool.VERSION);
+ outputFileST.setAttribute("generatedTimestamp", Tool.getCurrentTimeStamp());
+ headerFileST.setAttribute("generatedTimestamp", Tool.getCurrentTimeStamp());
+
+ // GENERATE RECOGNIZER
+ // Walk the AST holding the input grammar, this time generating code
+ // Decisions are generated by using the precomputed DFAs
+ // Fill in the various templates with data
+ CodeGenTreeWalker gen = new CodeGenTreeWalker();
+ try {
+ gen.grammar((AST)grammar.getGrammarTree(),
+ grammar,
+ recognizerST,
+ outputFileST,
+ headerFileST);
+ }
+ catch (RecognitionException re) {
+ ErrorManager.error(ErrorManager.MSG_BAD_AST_STRUCTURE,
+ re);
+ }
+
+ genTokenTypeConstants(recognizerST);
+ genTokenTypeConstants(outputFileST);
+ genTokenTypeConstants(headerFileST);
+
+ if ( grammar.type!=Grammar.LEXER ) {
+ genTokenTypeNames(recognizerST);
+ genTokenTypeNames(outputFileST);
+ genTokenTypeNames(headerFileST);
+ }
+
+ // Now that we know what synpreds are used, we can set them into the templates
+ Set synpredNames = null;
+ if ( grammar.synPredNamesUsedInDFA.size()>0 ) {
+ synpredNames = grammar.synPredNamesUsedInDFA;
+ }
+ outputFileST.setAttribute("synpreds", synpredNames);
+ headerFileST.setAttribute("synpreds", synpredNames);
+
+ // all recognizers can see Grammar object
+ recognizerST.setAttribute("grammar", grammar);
+
+ // WRITE FILES
+ try {
+ target.genRecognizerFile(tool,this,grammar,outputFileST);
+ if ( templates.isDefined("headerFile") ) {
+ StringTemplate extST = templates.getInstanceOf("headerFileExtension");
+ target.genRecognizerHeaderFile(tool,this,grammar,headerFileST,extST.toString());
+ }
+ // write out the vocab interchange file; used by antlr,
+ // does not change per target
+ StringTemplate tokenVocabSerialization = genTokenVocabOutput();
+ String vocabFileName = getVocabFileName();
+ if ( vocabFileName!=null ) {
+ write(tokenVocabSerialization, vocabFileName);
+ }
+ //System.out.println(outputFileST.getDOTForDependencyGraph(false));
+ }
+ catch (IOException ioe) {
+ ErrorManager.error(ErrorManager.MSG_CANNOT_WRITE_FILE,
+ getVocabFileName(),
+ ioe);
+ }
+ /*
+ System.out.println("num obj.prop refs: "+ ASTExpr.totalObjPropRefs);
+ System.out.println("num reflection lookups: "+ ASTExpr.totalReflectionLookups);
+ */
+
+ return outputFileST;
+ }
+
+ /** Some targets have extra action scopes; C++, for example, may have
+ * '@headerfile:name {action}'. Make sure the target accepts the
+ * scopes in the action table.
+ */
+ protected void verifyActionScopesOkForTarget(Map actions) {
+ Set actionScopeKeySet = actions.keySet();
+ for (Iterator it = actionScopeKeySet.iterator(); it.hasNext();) {
+ String scope = (String)it.next();
+ if ( !target.isValidActionScope(grammar.type, scope) ) {
+ // get any action from the scope to get error location
+ Map scopeActions = (Map)actions.get(scope);
+ GrammarAST actionAST =
+ (GrammarAST)scopeActions.values().iterator().next();
+ ErrorManager.grammarError(
+ ErrorManager.MSG_INVALID_ACTION_SCOPE,grammar,
+ actionAST.getToken(),scope,
+ grammar.getGrammarTypeString());
+ }
+ }
+ }
+
+ /** Actions may reference $x::y attributes; call translateAction on
+ * each action and replace it in the Map.
+ */
+ protected void translateActionAttributeReferences(Map actions) {
+ Set actionScopeKeySet = actions.keySet();
+ for (Iterator it = actionScopeKeySet.iterator(); it.hasNext();) {
+ String scope = (String)it.next();
+ Map scopeActions = (Map)actions.get(scope);
+ translateActionAttributeReferencesForSingleScope(null,scopeActions);
+ }
+ }
+
+ /** Use for translating rule @init{...} actions that have no scope */
+ protected void translateActionAttributeReferencesForSingleScope(
+ Rule r,
+ Map scopeActions)
+ {
+ String ruleName=null;
+ if ( r!=null ) {
+ ruleName = r.name;
+ }
+ Set actionNameSet = scopeActions.keySet();
+ for (Iterator nameIT = actionNameSet.iterator(); nameIT.hasNext();) {
+ String name = (String) nameIT.next();
+ GrammarAST actionAST = (GrammarAST)scopeActions.get(name);
+ List chunks = translateAction(ruleName,actionAST);
+ scopeActions.put(name, chunks); // replace with translation
+ }
+ }
+
+ /** Error recovery in ANTLR recognizers.
+ *
+ * Based upon original ideas:
+ *
+ * Algorithms + Data Structures = Programs by Niklaus Wirth
+ *
+ * and
+ *
+ * A note on error recovery in recursive descent parsers:
+ * http://portal.acm.org/citation.cfm?id=947902.947905
+ *
+ * Later, Josef Grosch had some good ideas:
+ * Efficient and Comfortable Error Recovery in Recursive Descent Parsers:
+ * ftp://www.cocolab.com/products/cocktail/doca4.ps/ell.ps.zip
+ *
+ * Like Grosch I implemented local FOLLOW sets that are combined at run-time
+ * upon error to avoid parsing overhead.
+ */
+ public void generateLocalFOLLOW(GrammarAST referencedElementNode,
+ String referencedElementName,
+ String enclosingRuleName,
+ int elementIndex)
+ {
+ /*
+ System.out.println("compute FOLLOW "+grammar.name+"."+referencedElementNode.toString()+
+ " for "+referencedElementName+"#"+elementIndex +" in "+
+ enclosingRuleName+
+ " line="+referencedElementNode.getLine());
+ */
+ NFAState followingNFAState = referencedElementNode.followingNFAState;
+ LookaheadSet follow = null;
+ if ( followingNFAState!=null ) {
+ // compute follow for this element and, as side-effect, track
+ // the rule LOOK sensitivity.
+ follow = grammar.FIRST(followingNFAState);
+ }
+
+ if ( follow==null ) {
+ ErrorManager.internalError("no follow state or cannot compute follow");
+ follow = new LookaheadSet();
+ }
+ if ( follow.member(Label.EOF) ) {
+ // TODO: can we just remove? Seems needed here:
+ // compilation_unit : global_statement* EOF
+ // Actually I guess we resync to EOF regardless
+ follow.remove(Label.EOF);
+ }
+ //System.out.println(" "+follow);
+ //System.out.println("visited rules "+grammar.getRuleNamesVisitedDuringLOOK());
+
+ List tokenTypeList = null;
+ long[] words = null;
+ if ( follow.tokenTypeSet==null ) {
+ words = new long[1];
+ tokenTypeList = new ArrayList();
+ }
+ else {
+ BitSet bits = BitSet.of(follow.tokenTypeSet);
+ words = bits.toPackedArray();
+ tokenTypeList = follow.tokenTypeSet.toList();
+ }
+ // use the target to convert to hex strings (typically)
+ String[] wordStrings = new String[words.length];
+ for (int j = 0; j < words.length; j++) {
+ long w = words[j];
+ wordStrings[j] = target.getTarget64BitStringFromValue(w);
+ }
+ recognizerST.setAttribute("bitsets.{name,inName,bits,tokenTypes,tokenIndex}",
+ referencedElementName,
+ enclosingRuleName,
+ wordStrings,
+ tokenTypeList,
+ Utils.integer(elementIndex));
+ outputFileST.setAttribute("bitsets.{name,inName,bits,tokenTypes,tokenIndex}",
+ referencedElementName,
+ enclosingRuleName,
+ wordStrings,
+ tokenTypeList,
+ Utils.integer(elementIndex));
+ headerFileST.setAttribute("bitsets.{name,inName,bits,tokenTypes,tokenIndex}",
+ referencedElementName,
+ enclosingRuleName,
+ wordStrings,
+ tokenTypeList,
+ Utils.integer(elementIndex));
+ }
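The bitset attributes above carry the local FOLLOW set packed into 64-bit words, which the target then renders as hex strings. The sketch below illustrates that packing idea only; the class and method names are hypothetical, not ANTLR's `BitSet` API.

```java
// Illustrative sketch: pack a set of token types into 64-bit words, then
// render a word the way a Java-like target might emit it. Names here are
// made up for the example; ANTLR uses its own BitSet/Target classes.
public class FollowBitsDemo {
    /** Pack token types into long words, 64 types per word. */
    public static long[] pack(int[] tokenTypes) {
        int max = 0;
        for (int t : tokenTypes) max = Math.max(max, t);
        long[] words = new long[max / 64 + 1];
        for (int t : tokenTypes) {
            words[t / 64] |= 1L << (t % 64); // set bit for this token type
        }
        return words;
    }

    /** Render one word as a 64-bit hex literal. */
    public static String toHexLiteral(long w) {
        return "0x" + Long.toHexString(w) + "L";
    }

    public static void main(String[] args) {
        long[] words = pack(new int[]{4, 9, 70});
        // types 4 and 9 land in word 0; type 70 is bit 6 of word 1
        System.out.println(toHexLiteral(words[0])); // 0x210L
        System.out.println(toHexLiteral(words[1])); // 0x40L
    }
}
```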
+
+ // L O O K A H E A D D E C I S I O N G E N E R A T I O N
+
+ /** Generate code that computes the predicted alt given a DFA. The
+ * recognizerST can be either the main generated recognizerTemplate
+ * for storage in the main parser file or a separate file. It's up to
+ * the code that ultimately invokes the codegen.g grammar rule.
+ *
+ * Regardless, the output file and header file get a copy of the DFAs.
+ */
+ public StringTemplate genLookaheadDecision(StringTemplate recognizerST,
+ DFA dfa)
+ {
+ StringTemplate decisionST;
+ // If we are doing inline DFA and this one is acyclic and LL(*)
+ // I have to check for is-non-LL(*) because if non-LL(*) the cyclic
+ // check is not done by DFA.verify(); that is, verify() avoids
+ // doesStateReachAcceptState() if non-LL(*)
+ if ( dfa.canInlineDecision() ) {
+ decisionST =
+ acyclicDFAGenerator.genFixedLookaheadDecision(getTemplates(), dfa);
+ }
+ else {
+ // generate any kind of DFA here (cyclic or acyclic)
+ dfa.createStateTables(this);
+ outputFileST.setAttribute("cyclicDFAs", dfa);
+ headerFileST.setAttribute("cyclicDFAs", dfa);
+ decisionST = templates.getInstanceOf("dfaDecision");
+ String description = dfa.getNFADecisionStartState().getDescription();
+ description = target.getTargetStringLiteralFromString(description);
+ if ( description!=null ) {
+ decisionST.setAttribute("description", description);
+ }
+ decisionST.setAttribute("decisionNumber",
+ Utils.integer(dfa.getDecisionNumber()));
+ }
+ return decisionST;
+ }
+
+ /** A special state is huge (too big for state tables) or has a predicated
+ * edge. Generate a simple if-then-else. It cannot be an accept state, as
+ * accept states have no emanating edges. Don't worry about switch vs if-then-else
+ * because if you get here, the state is super complicated and needs an
+ * if-then-else. This is used by the new DFA scheme created June 2006.
+ */
+ public StringTemplate generateSpecialState(DFAState s) {
+ StringTemplate stateST;
+ stateST = templates.getInstanceOf("cyclicDFAState");
+ stateST.setAttribute("needErrorClause", Boolean.valueOf(true));
+ stateST.setAttribute("semPredState",
+ Boolean.valueOf(s.isResolvedWithPredicates()));
+ stateST.setAttribute("stateNumber", s.stateNumber);
+ stateST.setAttribute("decisionNumber", s.dfa.decisionNumber);
+
+ boolean foundGatedPred = false;
+ StringTemplate eotST = null;
+ for (int i = 0; i < s.getNumberOfTransitions(); i++) {
+ Transition edge = (Transition) s.transition(i);
+ StringTemplate edgeST;
+ if ( edge.label.getAtom()==Label.EOT ) {
+ // this is the default clause; has to be held until last
+ edgeST = templates.getInstanceOf("eotDFAEdge");
+ stateST.removeAttribute("needErrorClause");
+ eotST = edgeST;
+ }
+ else {
+ edgeST = templates.getInstanceOf("cyclicDFAEdge");
+ StringTemplate exprST =
+ genLabelExpr(templates,edge,1);
+ edgeST.setAttribute("labelExpr", exprST);
+ }
+ edgeST.setAttribute("edgeNumber", Utils.integer(i+1));
+ edgeST.setAttribute("targetStateNumber",
+ Utils.integer(edge.target.stateNumber));
+ // stick in any gated predicates for any edge if not already a pred
+ if ( !edge.label.isSemanticPredicate() ) {
+ DFAState t = (DFAState)edge.target;
+ SemanticContext preds = t.getGatedPredicatesInNFAConfigurations();
+ if ( preds!=null ) {
+ foundGatedPred = true;
+ StringTemplate predST = preds.genExpr(this,
+ getTemplates(),
+ t.dfa);
+ edgeST.setAttribute("predicates", predST.toString());
+ }
+ }
+ if ( edge.label.getAtom()!=Label.EOT ) {
+ stateST.setAttribute("edges", edgeST);
+ }
+ }
+ if ( foundGatedPred ) {
+ // state has >= 1 edge with a gated pred (syn or sem)
+ // must rewind input first, set flag.
+ stateST.setAttribute("semPredState", Boolean.valueOf(foundGatedPred));
+ }
+ if ( eotST!=null ) {
+ stateST.setAttribute("edges", eotST);
+ }
+ return stateST;
+ }
+
+ /** Generate an expression for traversing an edge. */
+ protected StringTemplate genLabelExpr(StringTemplateGroup templates,
+ Transition edge,
+ int k)
+ {
+ Label label = edge.label;
+ if ( label.isSemanticPredicate() ) {
+ return genSemanticPredicateExpr(templates, edge);
+ }
+ if ( label.isSet() ) {
+ return genSetExpr(templates, label.getSet(), k, true);
+ }
+ // must be simple label
+ StringTemplate eST = templates.getInstanceOf("lookaheadTest");
+ eST.setAttribute("atom", getTokenTypeAsTargetLabel(label.getAtom()));
+ eST.setAttribute("atomAsInt", Utils.integer(label.getAtom()));
+ eST.setAttribute("k", Utils.integer(k));
+ return eST;
+ }
+
+ protected StringTemplate genSemanticPredicateExpr(StringTemplateGroup templates,
+ Transition edge)
+ {
+ DFA dfa = ((DFAState)edge.target).dfa; // which DFA are we in
+ Label label = edge.label;
+ SemanticContext semCtx = label.getSemanticContext();
+ return semCtx.genExpr(this,templates,dfa);
+ }
+
+ /** For intervals such as [3..3, 30..35], generate an expression that
+ * tests the lookahead similar to LA(1)==3 || (LA(1)>=30&&LA(1)<=35)
+ */
+ public StringTemplate genSetExpr(StringTemplateGroup templates,
+ IntSet set,
+ int k,
+ boolean partOfDFA)
+ {
+ if ( !(set instanceof IntervalSet) ) {
+ throw new IllegalArgumentException("unable to generate expressions for non IntervalSet objects");
+ }
+ IntervalSet iset = (IntervalSet)set;
+ if ( iset.getIntervals()==null || iset.getIntervals().size()==0 ) {
+ StringTemplate emptyST = new StringTemplate(templates, "");
+ emptyST.setName("empty-set-expr");
+ return emptyST;
+ }
+ String testSTName = "lookaheadTest";
+ String testRangeSTName = "lookaheadRangeTest";
+ if ( !partOfDFA ) {
+ testSTName = "isolatedLookaheadTest";
+ testRangeSTName = "isolatedLookaheadRangeTest";
+ }
+ StringTemplate setST = templates.getInstanceOf("setTest");
+ Iterator iter = iset.getIntervals().iterator();
+ int rangeNumber = 1;
+ while (iter.hasNext()) {
+ Interval I = (Interval) iter.next();
+ int a = I.a;
+ int b = I.b;
+ StringTemplate eST;
+ if ( a==b ) {
+ eST = templates.getInstanceOf(testSTName);
+ eST.setAttribute("atom", getTokenTypeAsTargetLabel(a));
+ eST.setAttribute("atomAsInt", Utils.integer(a));
+ //eST.setAttribute("k",Utils.integer(k));
+ }
+ else {
+ eST = templates.getInstanceOf(testRangeSTName);
+ eST.setAttribute("lower",getTokenTypeAsTargetLabel(a));
+ eST.setAttribute("lowerAsInt", Utils.integer(a));
+ eST.setAttribute("upper",getTokenTypeAsTargetLabel(b));
+ eST.setAttribute("upperAsInt", Utils.integer(b));
+ eST.setAttribute("rangeNumber",Utils.integer(rangeNumber));
+ }
+ eST.setAttribute("k",Utils.integer(k));
+ setST.setAttribute("ranges", eST);
+ rangeNumber++;
+ }
+ return setST;
+ }
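As a plain-string sketch of what genSetExpr produces (the real method emits StringTemplates chosen per target, not raw strings; names below are illustrative):

```java
// Sketch of the interval-to-expression idea: [3..3, 30..35] becomes
// LA(1)==3 || (LA(1)>=30 && LA(1)<=35). Hypothetical helper, not ANTLR code.
public class SetExprDemo {
    public static String gen(int[][] intervals, int k) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < intervals.length; i++) {
            int a = intervals[i][0], b = intervals[i][1];
            if (i > 0) sb.append(" || ");
            if (a == b) {
                sb.append("LA(" + k + ")==" + a);        // single-value interval
            }
            else {
                sb.append("(LA(" + k + ")>=" + a +       // range interval
                          " && LA(" + k + ")<=" + b + ")");
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(gen(new int[][]{{3, 3}, {30, 35}}, 1));
        // LA(1)==3 || (LA(1)>=30 && LA(1)<=35)
    }
}
```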
+
+ // T O K E N D E F I N I T I O N G E N E R A T I O N
+
+ /** Set the token and literal attributes in the incoming
+ * code template. This is not the token vocab interchange file, but
+ * rather the list of token type IDs needed by the recognizer.
+ */
+ protected void genTokenTypeConstants(StringTemplate code) {
+ // make constants for the token types
+ Iterator tokenIDs = grammar.getTokenIDs().iterator();
+ while (tokenIDs.hasNext()) {
+ String tokenID = (String) tokenIDs.next();
+ int tokenType = grammar.getTokenType(tokenID);
+ if ( tokenType==Label.EOF ||
+ tokenType>=Label.MIN_TOKEN_TYPE )
+ {
+ // don't do FAUX labels 'cept EOF
+ code.setAttribute("tokens.{name,type}", tokenID, Utils.integer(tokenType));
+ }
+ }
+ }
+
+ /** Generate a token names table that maps token type to a printable
+ * name: either the label like INT or the literal like "begin".
+ */
+ protected void genTokenTypeNames(StringTemplate code) {
+ for (int t=Label.MIN_TOKEN_TYPE; t<=grammar.getMaxTokenType(); t++) {
+ String tokenName = grammar.getTokenDisplayName(t);
+ if ( tokenName!=null ) {
+ tokenName=target.getTargetStringLiteralFromString(tokenName, true);
+ code.setAttribute("tokenNames", tokenName);
+ }
+ }
+ }
+
+ /** Get a meaningful name for a token type useful during code generation.
+ * Literals without associated names are converted to the string equivalent
+ * of their integer values. Used to generate x==ID and x==34 type comparisons
+ * etc... Essentially we are looking for the most obvious way to refer
+ * to a token type in the generated code. If in the lexer, return the
+ * char literal translated to the target language. For example, ttype=10
+ * will yield '\n' from the getTokenDisplayName method. That must
+ * be converted to the target languages literals. For most C-derived
+ * languages no translation is needed.
+ */
+ public String getTokenTypeAsTargetLabel(int ttype) {
+ if ( grammar.type==Grammar.LEXER ) {
+ String name = grammar.getTokenDisplayName(ttype);
+ return target.getTargetCharLiteralFromANTLRCharLiteral(this,name);
+ }
+ return target.getTokenTypeAsTargetLabel(this,ttype);
+ }
+
+ /** Generate a token vocab file with all the token names/types. For example:
+ * ID=7
+ * FOR=8
+ * 'for'=8
+ *
+ * This is independent of the target language; used by ANTLR internally.
+ */
+ protected StringTemplate genTokenVocabOutput() {
+ StringTemplate vocabFileST =
+ new StringTemplate(vocabFilePattern,
+ AngleBracketTemplateLexer.class);
+ vocabFileST.setName("vocab-file");
+ // make constants for the token names
+ Iterator tokenIDs = grammar.getTokenIDs().iterator();
+ while (tokenIDs.hasNext()) {
+ String tokenID = (String) tokenIDs.next();
+ int tokenType = grammar.getTokenType(tokenID);
+ if ( tokenType>=Label.MIN_TOKEN_TYPE ) {
+ vocabFileST.setAttribute("tokens.{name,type}", tokenID, Utils.integer(tokenType));
+ }
+ }
+
+ // now dump the strings
+ Iterator literals = grammar.getStringLiterals().iterator();
+ while (literals.hasNext()) {
+ String literal = (String) literals.next();
+ int tokenType = grammar.getTokenType(literal);
+ if ( tokenType>=Label.MIN_TOKEN_TYPE ) {
+ vocabFileST.setAttribute("tokens.{name,type}", literal, Utils.integer(tokenType));
+ }
+ }
+
+ return vocabFileST;
+ }
+
+ public List translateAction(String ruleName,
+ GrammarAST actionTree)
+ {
+ if ( actionTree.getType()==ANTLRParser.ARG_ACTION ) {
+ return translateArgAction(ruleName, actionTree);
+ }
+ ActionTranslator translator = new ActionTranslator(this,ruleName,actionTree);
+ List chunks = translator.translateToChunks();
+ chunks = target.postProcessAction(chunks, actionTree.token);
+ return chunks;
+ }
+
+ /** Translate an action like [3,"foo",a[3]] and return a List of the
+ * translated actions. Because actions are themselves translated to a list
+ * of chunks, must cat them together into a StringTemplate. Don't translate
+ * to strings early as we need to eval templates in context.
+ */
+ public List translateArgAction(String ruleName,
+ GrammarAST actionTree)
+ {
+ String actionText = actionTree.token.getText();
+ List args = getListOfArgumentsFromAction(actionText,',');
+ List translatedArgs = new ArrayList();
+ for (String arg : args) {
+ if ( arg!=null ) {
+ antlr.Token actionToken =
+ new antlr.CommonToken(ANTLRParser.ACTION,arg);
+ ActionTranslator translator =
+ new ActionTranslator(this,ruleName,
+ actionToken,
+ actionTree.outerAltNum);
+ List chunks = translator.translateToChunks();
+ chunks = target.postProcessAction(chunks, actionToken);
+ StringTemplate catST = new StringTemplate(templates, "<chunks>");
+ catST.setAttribute("chunks", chunks);
+ templates.createStringTemplate();
+ translatedArgs.add(catST);
+ }
+ }
+ if ( translatedArgs.size()==0 ) {
+ return null;
+ }
+ return translatedArgs;
+ }
+
+ public static List getListOfArgumentsFromAction(String actionText,
+ int separatorChar)
+ {
+ List args = new ArrayList();
+ getListOfArgumentsFromAction(actionText, 0, -1, separatorChar, args);
+ return args;
+ }
+
+ /** Given an arg action like
+ *
+ * [x, (*a).foo(21,33), 3.2+1, '\n',
+ * "a,oo\nick", {bl, "fdkj"eck}, ["cat\n,", x, 43]]
+ *
+ * convert to a list of arguments. Allow nested square brackets etc...
+ * Set separatorChar to ';' or ',' or whatever you want.
+ */
+ public static int getListOfArgumentsFromAction(String actionText,
+ int start,
+ int targetChar,
+ int separatorChar,
+ List args)
+ {
+ if ( actionText==null ) {
+ return -1;
+ }
+ actionText = actionText.replaceAll("//.*\n", "");
+ int n = actionText.length();
+ //System.out.println("actionText@"+start+"->"+(char)targetChar+"="+actionText.substring(start,n));
+ int p = start;
+ int last = p;
+ while ( p<n && actionText.charAt(p)!=targetChar ) {
+ int c = actionText.charAt(p);
+ switch ( c ) {
+ case '\'' :
+ p++;
+ while ( p<n && actionText.charAt(p)!='\'' ) {
+ if ( actionText.charAt(p)=='\\' && (p+1)<n &&
+ actionText.charAt(p+1)=='\'' )
+ {
+ p++; // skip escaped quote
+ }
+ p++;
+ }
+ p++;
+ break;
+ case '"' :
+ p++;
+ while ( p<n && actionText.charAt(p)!='"' ) {
+ if ( actionText.charAt(p)=='\\' && (p+1)<n &&
+ actionText.charAt(p+1)=='"' )
+ {
+ p++; // skip escaped quote
+ }
+ p++;
+ }
+ p++;
+ break;
+ case '(' :
+ p = getListOfArgumentsFromAction(actionText,p+1,')',separatorChar,args);
+ break;
+ case '{' :
+ p = getListOfArgumentsFromAction(actionText,p+1,'}',separatorChar,args);
+ break;
+ case '<' :
+ if ( actionText.indexOf('>',p+1)>=p ) {
+ // do we see a matching '>' ahead? if so, hope it's a generic
+ // and not less followed by expr with greater than
+ p = getListOfArgumentsFromAction(actionText,p+1,'>',separatorChar,args);
+ }
+ else {
+ p++; // treat as normal char
+ }
+ break;
+ case '[' :
+ p = getListOfArgumentsFromAction(actionText,p+1,']',separatorChar,args);
+ break;
+ default :
+ if ( c==separatorChar && targetChar==-1 ) {
+ String arg = actionText.substring(last, p);
+ //System.out.println("arg="+arg);
+ args.add(arg.trim());
+ last = p+1;
+ }
+ p++;
+ break;
+ }
+ }
+ if ( targetChar==-1 && p<=n ) {
+ String arg = actionText.substring(last, p).trim();
+ //System.out.println("arg="+arg);
+ if ( arg.length()>0 ) {
+ args.add(arg.trim());
+ }
+ }
+ p++;
+ return p;
+ }
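The essential behavior of the recursive splitter above is to split on the separator only at nesting depth zero. A simplified, self-contained sketch of that idea (not the ANTLR implementation, which also handles quotes, generics, and comments):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified bracket-aware argument splitter: separators inside (), [], {}
// do not split. Illustrative only; see getListOfArgumentsFromAction above.
public class ArgSplitDemo {
    public static List<String> split(String text, char sep) {
        List<String> args = new ArrayList<String>();
        int depth = 0, last = 0;
        for (int p = 0; p < text.length(); p++) {
            char c = text.charAt(p);
            if (c == '(' || c == '[' || c == '{') depth++;
            else if (c == ')' || c == ']' || c == '}') depth--;
            else if (c == sep && depth == 0) {
                args.add(text.substring(last, p).trim()); // top-level separator
                last = p + 1;
            }
        }
        if (last < text.length()) args.add(text.substring(last).trim());
        return args;
    }

    public static void main(String[] args) {
        System.out.println(split("x, foo(21,33), [a,b]", ','));
        // [x, foo(21,33), [a,b]]
    }
}
```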
+
+ /** Given a template constructor action like %foo(a={...}) in
+ * an action, translate it to the appropriate template constructor
+ * from the templateLib. This translates a *piece* of the action.
+ */
+ public StringTemplate translateTemplateConstructor(String ruleName,
+ int outerAltNum,
+ antlr.Token actionToken,
+ String templateActionText)
+ {
+ // first, parse with antlr.g
+ //System.out.println("translate template: "+templateActionText);
+ ANTLRLexer lexer = new ANTLRLexer(new StringReader(templateActionText));
+ lexer.setFilename(grammar.getFileName());
+ lexer.setTokenObjectClass("antlr.TokenWithIndex");
+ TokenStreamRewriteEngine tokenBuffer = new TokenStreamRewriteEngine(lexer);
+ tokenBuffer.discard(ANTLRParser.WS);
+ tokenBuffer.discard(ANTLRParser.ML_COMMENT);
+ tokenBuffer.discard(ANTLRParser.COMMENT);
+ tokenBuffer.discard(ANTLRParser.SL_COMMENT);
+ ANTLRParser parser = new ANTLRParser(tokenBuffer);
+ parser.setFilename(grammar.getFileName());
+ parser.setASTNodeClass("org.antlr.tool.GrammarAST");
+ try {
+ parser.rewrite_template();
+ }
+ catch (RecognitionException re) {
+ ErrorManager.grammarError(ErrorManager.MSG_INVALID_TEMPLATE_ACTION,
+ grammar,
+ actionToken,
+ templateActionText);
+ }
+ catch (Exception tse) {
+ ErrorManager.internalError("can't parse template action",tse);
+ }
+ GrammarAST rewriteTree = (GrammarAST)parser.getAST();
+
+ // then translate via codegen.g
+ CodeGenTreeWalker gen = new CodeGenTreeWalker();
+ gen.init(grammar);
+ gen.currentRuleName = ruleName;
+ gen.outerAltNum = outerAltNum;
+ StringTemplate st = null;
+ try {
+ st = gen.rewrite_template((AST)rewriteTree);
+ }
+ catch (RecognitionException re) {
+ ErrorManager.error(ErrorManager.MSG_BAD_AST_STRUCTURE,
+ re);
+ }
+ return st;
+ }
+
+
+ public void issueInvalidScopeError(String x,
+ String y,
+ Rule enclosingRule,
+ antlr.Token actionToken,
+ int outerAltNum)
+ {
+ //System.out.println("error $"+x+"::"+y);
+ Rule r = grammar.getRule(x);
+ AttributeScope scope = grammar.getGlobalScope(x);
+ if ( scope==null ) {
+ if ( r!=null ) {
+ scope = r.ruleScope; // if not global, might be rule scope
+ }
+ }
+ if ( scope==null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_UNKNOWN_DYNAMIC_SCOPE,
+ grammar,
+ actionToken,
+ x);
+ }
+ else if ( scope.getAttribute(y)==null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_UNKNOWN_DYNAMIC_SCOPE_ATTRIBUTE,
+ grammar,
+ actionToken,
+ x,
+ y);
+ }
+ }
+
+ public void issueInvalidAttributeError(String x,
+ String y,
+ Rule enclosingRule,
+ antlr.Token actionToken,
+ int outerAltNum)
+ {
+ //System.out.println("error $"+x+"."+y);
+ if ( enclosingRule==null ) {
+ // action not in a rule
+ ErrorManager.grammarError(ErrorManager.MSG_ATTRIBUTE_REF_NOT_IN_RULE,
+ grammar,
+ actionToken,
+ x,
+ y);
+ return;
+ }
+
+ // action is in a rule
+ Grammar.LabelElementPair label = enclosingRule.getRuleLabel(x);
+
+ if ( label!=null || enclosingRule.getRuleRefsInAlt(x, outerAltNum)!=null ) {
+ // $rulelabel.attr or $ruleref.attr; must be unknown attr
+ String refdRuleName = x;
+ if ( label!=null ) {
+ refdRuleName = enclosingRule.getRuleLabel(x).referencedRuleName;
+ }
+ Rule refdRule = grammar.getRule(refdRuleName);
+ AttributeScope scope = refdRule.getAttributeScope(y);
+ if ( scope==null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_UNKNOWN_RULE_ATTRIBUTE,
+ grammar,
+ actionToken,
+ refdRuleName,
+ y);
+ }
+ else if ( scope.isParameterScope ) {
+ ErrorManager.grammarError(ErrorManager.MSG_INVALID_RULE_PARAMETER_REF,
+ grammar,
+ actionToken,
+ refdRuleName,
+ y);
+ }
+ else if ( scope.isDynamicRuleScope ) {
+ ErrorManager.grammarError(ErrorManager.MSG_INVALID_RULE_SCOPE_ATTRIBUTE_REF,
+ grammar,
+ actionToken,
+ refdRuleName,
+ y);
+ }
+ }
+
+ }
+
+ public void issueInvalidAttributeError(String x,
+ Rule enclosingRule,
+ antlr.Token actionToken,
+ int outerAltNum)
+ {
+ //System.out.println("error $"+x);
+ if ( enclosingRule==null ) {
+ // action not in a rule
+ ErrorManager.grammarError(ErrorManager.MSG_ATTRIBUTE_REF_NOT_IN_RULE,
+ grammar,
+ actionToken,
+ x);
+ return;
+ }
+
+ // action is in a rule
+ Grammar.LabelElementPair label = enclosingRule.getRuleLabel(x);
+ AttributeScope scope = enclosingRule.getAttributeScope(x);
+
+ if ( label!=null ||
+ enclosingRule.getRuleRefsInAlt(x, outerAltNum)!=null ||
+ enclosingRule.name.equals(x) )
+ {
+ ErrorManager.grammarError(ErrorManager.MSG_ISOLATED_RULE_SCOPE,
+ grammar,
+ actionToken,
+ x);
+ }
+ else if ( scope!=null && scope.isDynamicRuleScope ) {
+ ErrorManager.grammarError(ErrorManager.MSG_ISOLATED_RULE_ATTRIBUTE,
+ grammar,
+ actionToken,
+ x);
+ }
+ else {
+ ErrorManager.grammarError(ErrorManager.MSG_UNKNOWN_SIMPLE_ATTRIBUTE,
+ grammar,
+ actionToken,
+ x);
+ }
+ }
+
+ // M I S C
+
+ public StringTemplateGroup getTemplates() {
+ return templates;
+ }
+
+ public StringTemplateGroup getBaseTemplates() {
+ return baseTemplates;
+ }
+
+ public void setDebug(boolean debug) {
+ this.debug = debug;
+ }
+
+ public void setTrace(boolean trace) {
+ this.trace = trace;
+ }
+
+ public void setProfile(boolean profile) {
+ this.profile = profile;
+ if ( profile ) {
+ setDebug(true); // requires debug events
+ }
+ }
+
+ public StringTemplate getRecognizerST() {
+ return outputFileST;
+ }
+
+ /** Generate TParser.java and TLexer.java from T.g if combined, else
+ * just use T.java as output regardless of type.
+ */
+ public String getRecognizerFileName(String name, int type) {
+ StringTemplate extST = templates.getInstanceOf("codeFileExtension");
+ String recognizerName = grammar.getRecognizerName();
+ return recognizerName+extST.toString();
+ /*
+ String suffix = "";
+ if ( type==Grammar.COMBINED ||
+ (type==Grammar.LEXER && !grammar.implicitLexer) )
+ {
+ suffix = Grammar.grammarTypeToFileNameSuffix[type];
+ }
+ return name+suffix+extST.toString();
+ */
+ }
+
+ /** What is the name of the vocab file generated for this grammar?
+ * Returns null if no .tokens file should be generated.
+ */
+ public String getVocabFileName() {
+ if ( grammar.isBuiltFromString() ) {
+ return null;
+ }
+ return grammar.name+VOCAB_FILE_EXTENSION;
+ }
+
+ public void write(StringTemplate code, String fileName) throws IOException {
+ long start = System.currentTimeMillis();
+ Writer w = tool.getOutputFile(grammar, fileName);
+ // Write the output to a StringWriter
+ StringTemplateWriter wr = templates.getStringTemplateWriter(w);
+ wr.setLineWidth(lineWidth);
+ code.write(wr);
+ w.close();
+ long stop = System.currentTimeMillis();
+ //System.out.println("render time for "+fileName+": "+(int)(stop-start)+"ms");
+ }
+
+ /** You can generate a switch rather than if-then-else for a DFA state
+ * if there are no semantic predicates and the number of edge label
+ * values is small enough; e.g., don't generate a switch for a state
+ * containing an edge label such as 20..52330 (the resulting byte codes
+ * would overflow the method 65k limit probably).
+ */
+ protected boolean canGenerateSwitch(DFAState s) {
+ if ( !GENERATE_SWITCHES_WHEN_POSSIBLE ) {
+ return false;
+ }
+ int size = 0;
+ for (int i = 0; i < s.getNumberOfTransitions(); i++) {
+ Transition edge = (Transition) s.transition(i);
+ if ( edge.label.isSemanticPredicate() ) {
+ return false;
+ }
+ // can't do a switch if the edges are going to require predicates
+ if ( edge.label.getAtom()==Label.EOT ) {
+ int EOTPredicts = ((DFAState)edge.target).getUniquelyPredictedAlt();
+ if ( EOTPredicts==NFA.INVALID_ALT_NUMBER ) {
+ // EOT target has to be a predicate then; no unique alt
+ return false;
+ }
+ }
+ // if target is a state with gated preds, we need to use preds on
+ // this edge then to reach it.
+ if ( ((DFAState)edge.target).getGatedPredicatesInNFAConfigurations()!=null ) {
+ return false;
+ }
+ size += edge.label.getSet().size();
+ }
+ if ( s.getNumberOfTransitions()<MIN_SWITCH_ALTS ||
+ size>MAX_SWITCH_CASE_LABELS ) {
+ return false;
+ }
+ return true;
+ }
+
+ /** Create a label to track a token / rule reference's result.
+ * Technically, this is a place where I break model-view separation
+ * as I am creating a variable name that could be invalid in a
+ * target language, however, label ::= <name><uniqueNumber> is probably ok in
+ * all languages we care about.
+ */
+ public String createUniqueLabel(String name) {
+ return new StringBuffer()
+ .append(name).append(uniqueLabelNumber++).toString();
+ }
+}
diff --git a/antlr_3_1_source/codegen/JavaScriptTarget.java b/antlr_3_1_source/codegen/JavaScriptTarget.java
new file mode 100755
index 0000000..7b770fc
--- /dev/null
+++ b/antlr_3_1_source/codegen/JavaScriptTarget.java
@@ -0,0 +1,47 @@
+package org.antlr.codegen;
+import java.util.*;
+
+public class JavaScriptTarget extends Target {
+ /** Convert an int to a JavaScript Unicode character literal.
+ *
+ * The current JavaScript spec (ECMA-262) doesn't provide for octal
+ * notation in String literals, although some implementations support it.
+ * This method overrides the parent class so that characters will always
+ * be encoded as Unicode literals (e.g. \u0011).
+ */
+ public String encodeIntAsCharEscape(int v) {
+ String hex = Integer.toHexString(v|0x10000).substring(1,5);
+ return "\\u"+hex;
+ }
+
+ /** Convert a long to two 32-bit numbers separated by a comma.
+ * JavaScript does not support 64-bit numbers, so we need to break
+ * the number into two 32-bit literals to give to the BitSet. A number like
+ * 0xHHHHHHHHLLLLLLLL is broken into the following string:
+ * "0xLLLLLLLL, 0xHHHHHHHH"
+ * Note that the low order bits are first, followed by the high order bits.
+ * This is to match how the BitSet constructor works, where the bits are
+ * passed in in 32-bit chunks with low-order bits coming first.
+ *
+ * Note: stole the following two methods from the ActionScript target.
+ */
+ public String getTarget64BitStringFromValue(long word) {
+ StringBuffer buf = new StringBuffer(22); // enough for the two "0x", "," and " "
+ buf.append("0x");
+ writeHexWithPadding(buf, Integer.toHexString((int)(word & 0x00000000ffffffffL)));
+ buf.append(", 0x");
+ writeHexWithPadding(buf, Integer.toHexString((int)(word >> 32)));
+
+ return buf.toString();
+ }
+
+ private void writeHexWithPadding(StringBuffer buf, String digits) {
+ digits = digits.toUpperCase();
+ int padding = 8 - digits.length();
+ // pad left with zeros
+ for (int i=1; i<=padding; i++) {
+ buf.append('0');
+ }
+ buf.append(digits);
+ }
+}
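Two small encoding tricks appear in the target above: OR-ing `0x10000` into a char value forces `Integer.toHexString` to produce five hex digits, so dropping the first digit leaves a zero-padded four-digit escape; and a 64-bit word is emitted as two 32-bit hex literals, low word first. A self-contained sketch of both (class and method names here are mine, not from the diff):

```java
// Standalone demo of the two JavaScriptTarget encoding tricks above.
class JsEncodingDemo {
    /** OR-ing in bit 16 guarantees a 5-hex-digit string; dropping the
     *  first digit yields a 4-digit, zero-padded escape. */
    static String encodeIntAsCharEscape(int v) {
        String hex = Integer.toHexString(v | 0x10000).substring(1, 5);
        return "\\u" + hex;
    }

    /** Split a 64-bit word into "0xLLLLLLLL, 0xHHHHHHHH" (low word first),
     *  matching the BitSet constructor's 32-bit chunk order. */
    static String split64(long word) {
        StringBuilder buf = new StringBuilder(22);
        buf.append("0x");
        pad(buf, Integer.toHexString((int) (word & 0xffffffffL)));
        buf.append(", 0x");
        pad(buf, Integer.toHexString((int) (word >>> 32)));
        return buf.toString();
    }

    // left-pad the hex digits with zeros to a width of 8
    private static void pad(StringBuilder buf, String digits) {
        digits = digits.toUpperCase();
        for (int i = digits.length(); i < 8; i++) buf.append('0');
        buf.append(digits);
    }
}
```

Note that `(int)(word >> 32)` and `(int)(word >>> 32)` give the same result here, since the cast keeps only the low 32 bits of the shifted value.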
diff --git a/antlr_3_1_source/codegen/JavaTarget.java b/antlr_3_1_source/codegen/JavaTarget.java
new file mode 100644
index 0000000..b7eee8c
--- /dev/null
+++ b/antlr_3_1_source/codegen/JavaTarget.java
@@ -0,0 +1,44 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.codegen;
+
+import org.antlr.Tool;
+import org.antlr.stringtemplate.StringTemplate;
+import org.antlr.tool.Grammar;
+
+public class JavaTarget extends Target {
+ protected StringTemplate chooseWhereCyclicDFAsGo(Tool tool,
+ CodeGenerator generator,
+ Grammar grammar,
+ StringTemplate recognizerST,
+ StringTemplate cyclicDFAST)
+ {
+ return recognizerST;
+ }
+}
+
diff --git a/antlr_3_1_source/codegen/ObjCTarget.java b/antlr_3_1_source/codegen/ObjCTarget.java
new file mode 100644
index 0000000..9a87b30
--- /dev/null
+++ b/antlr_3_1_source/codegen/ObjCTarget.java
@@ -0,0 +1,109 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005 Terence Parr
+ Copyright (c) 2006 Kay Roepke (Objective-C runtime)
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+package org.antlr.codegen;
+
+import org.antlr.stringtemplate.StringTemplate;
+import org.antlr.tool.Grammar;
+import org.antlr.Tool;
+import org.antlr.misc.Utils;
+
+import java.io.IOException;
+
+public class ObjCTarget extends Target {
+ protected void genRecognizerHeaderFile(Tool tool,
+ CodeGenerator generator,
+ Grammar grammar,
+ StringTemplate headerFileST,
+ String extName)
+ throws IOException
+ {
+ generator.write(headerFileST, grammar.name + Grammar.grammarTypeToFileNameSuffix[grammar.type] + extName);
+ }
+
+ public String getTargetCharLiteralFromANTLRCharLiteral(CodeGenerator generator,
+ String literal)
+ {
+ if (literal.startsWith("'\\u") ) {
+ literal = "0x" +literal.substring(3, 7);
+ } else {
+ int c = literal.charAt(1); // TJP
+ if (c < 32 || c > 127) {
+ literal = "0x" + Integer.toHexString(c);
+ }
+ }
+
+ return literal;
+ }
+
+ /** Convert from an ANTLR string literal found in a grammar file to
+ * an equivalent string literal in the target language. For Java, this
+ * is the translation 'a\n"' -> "a\n\"". Expect single quotes
+ * around the incoming literal. Just flip the quotes and replace
+ * double quotes with \"
+ */
+ public String getTargetStringLiteralFromANTLRStringLiteral(CodeGenerator generator,
+ String literal)
+ {
+ literal = Utils.replace(literal,"\"","\\\"");
+ StringBuffer buf = new StringBuffer(literal);
+ buf.setCharAt(0,'"');
+ buf.setCharAt(literal.length()-1,'"');
+ buf.insert(0,'@');
+ return buf.toString();
+ }
+
+ /** Prefix the token-type name with the recognizer's name to form the label */
+ public String getTokenTypeAsTargetLabel(CodeGenerator generator, int ttype) {
+ String name = generator.grammar.getTokenDisplayName(ttype);
+ // If name is a literal, return the token type instead
+ if ( name.charAt(0)=='\'' ) {
+ return String.valueOf(ttype);
+ }
+ return generator.grammar.name + Grammar.grammarTypeToFileNameSuffix[generator.grammar.type] + "_" + name;
+ //return super.getTokenTypeAsTargetLabel(generator, ttype);
+ //return this.getTokenTextAndTypeAsTargetLabel(generator, null, ttype);
+ }
+
+ /** Target must be able to override the labels used for token types. Sometimes also depends on the token text.*/
+ public String getTokenTextAndTypeAsTargetLabel(CodeGenerator generator, String text, int tokenType) {
+ String name = generator.grammar.getTokenDisplayName(tokenType);
+ // If name is a literal, return the token type instead
+ if ( name.charAt(0)=='\'' ) {
+ return String.valueOf(tokenType);
+ }
+ String textEquivalent = text == null ? name : text;
+ if (textEquivalent.charAt(0) >= '0' && textEquivalent.charAt(0) <= '9') {
+ return textEquivalent;
+ } else {
+ return generator.grammar.name + Grammar.grammarTypeToFileNameSuffix[generator.grammar.type] + "_" + textEquivalent;
+ }
+ }
+
+}
+
diff --git a/antlr_3_1_source/codegen/Perl5Target.java b/antlr_3_1_source/codegen/Perl5Target.java
new file mode 100644
index 0000000..c908abb
--- /dev/null
+++ b/antlr_3_1_source/codegen/Perl5Target.java
@@ -0,0 +1,78 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2007 Ronald Blaschke
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.codegen;
+
+import org.antlr.analysis.Label;
+import org.antlr.tool.AttributeScope;
+import org.antlr.tool.Grammar;
+import org.antlr.tool.RuleLabelScope;
+
+public class Perl5Target extends Target {
+ public Perl5Target() {
+ AttributeScope.tokenScope.addAttribute("self", null);
+ RuleLabelScope.predefinedLexerRulePropertiesScope.addAttribute("self", null);
+ }
+
+ public String getTargetCharLiteralFromANTLRCharLiteral(final CodeGenerator generator,
+ final String literal) {
+ final StringBuffer buf = new StringBuffer(10);
+
+ final int c = Grammar.getCharValueFromGrammarCharLiteral(literal);
+ if (c < Label.MIN_CHAR_VALUE) {
+ buf.append("\\x{0000}");
+ } else if (c < targetCharValueEscape.length &&
+ targetCharValueEscape[c] != null) {
+ buf.append(targetCharValueEscape[c]);
+ } else if (Character.UnicodeBlock.of((char) c) ==
+ Character.UnicodeBlock.BASIC_LATIN &&
+ !Character.isISOControl((char) c)) {
+ // normal char
+ buf.append((char) c);
+ } else {
+ // must be something unprintable...use \\uXXXX
+ // turn on the bit above max "\\uFFFF" value so that we pad with zeros
+ // then only take last 4 digits
+ String hex = Integer.toHexString(c | 0x10000).toUpperCase().substring(1, 5);
+ buf.append("\\x{");
+ buf.append(hex);
+ buf.append("}");
+ }
+
+ if (buf.indexOf("\\") == -1) {
+ // no need for interpolation, use single quotes
+ buf.insert(0, '\'');
+ buf.append('\'');
+ } else {
+ // need string interpolation
+ buf.insert(0, '\"');
+ buf.append('\"');
+ }
+
+ return buf.toString();
+ }
+}
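The quoting decision at the end of the Perl5 method above is worth isolating: if the generated literal body contains no backslash escape, single quotes suffice; otherwise Perl string interpolation is needed, so double quotes are used. A minimal sketch of that rule (class and method names are mine):

```java
// Sketch of the Perl quote-selection rule used above: bodies containing
// an escape need interpolating double quotes; plain characters can use
// non-interpolating single quotes.
class PerlQuoteDemo {
    static String quote(String body) {
        return body.indexOf('\\') == -1
            ? "'" + body + "'"
            : "\"" + body + "\"";
    }
}
```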
diff --git a/antlr_3_1_source/codegen/PythonTarget.java b/antlr_3_1_source/codegen/PythonTarget.java
new file mode 100644
index 0000000..c2a3ffb
--- /dev/null
+++ b/antlr_3_1_source/codegen/PythonTarget.java
@@ -0,0 +1,219 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005 Martin Traverso
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+/*
+
+Please excuse my obvious lack of Java experience. The code here is probably
+full of WTFs - though IMHO Java is the Real WTF(TM) here...
+
+ */
+
+package org.antlr.codegen;
+import org.antlr.tool.Grammar;
+import java.util.*;
+
+public class PythonTarget extends Target {
+ /** Target must be able to override the labels used for token types */
+ public String getTokenTypeAsTargetLabel(CodeGenerator generator,
+ int ttype) {
+ // use ints for predefined types
+ if ( ttype >= 0 && ttype <= 3 ) {
+ return String.valueOf(ttype);
+ }
+
+ String name = generator.grammar.getTokenDisplayName(ttype);
+
+ // If name is a literal, return the token type instead
+ if ( name.charAt(0)=='\'' ) {
+ return String.valueOf(ttype);
+ }
+
+ return name;
+ }
+
+ public String getTargetCharLiteralFromANTLRCharLiteral(
+ CodeGenerator generator,
+ String literal) {
+ int c = Grammar.getCharValueFromGrammarCharLiteral(literal);
+ return String.valueOf(c);
+ }
+
+ private List splitLines(String text) {
+ ArrayList l = new ArrayList();
+ int idx = 0;
+
+ while ( true ) {
+ int eol = text.indexOf("\n", idx);
+ if ( eol == -1 ) {
+ l.add(text.substring(idx));
+ break;
+ }
+ else {
+ l.add(text.substring(idx, eol+1));
+ idx = eol+1;
+ }
+ }
+
+ return l;
+ }
+
+ public List postProcessAction(List chunks, antlr.Token actionToken) {
+ /* TODO
+ - check for and report TAB usage
+ */
+
+ //System.out.println("\n*** Action at " + actionToken.getLine() + ":" + actionToken.getColumn());
+
+ /* First I create a new list of chunks. String chunks are split into
+ lines, and some whitespace may be added at the beginning.
+
+ As a result I get a list of chunks
+ - where the first line starts at column 0
+ - where every LF is at the end of a string chunk
+ */
+
+ List nChunks = new ArrayList();
+ for (int i = 0; i < chunks.size(); i++) {
+ Object chunk = chunks.get(i);
+
+ if ( chunk instanceof String ) {
+ String text = (String)chunks.get(i);
+ if ( nChunks.size() == 0 && actionToken.getColumn() > 0 ) {
+ // first chunk and some 'virtual' WS at beginning
+ // prepend to this chunk
+
+ String ws = "";
+ for ( int j = 0 ; j < actionToken.getColumn() ; j++ ) {
+ ws += " ";
+ }
+ text = ws + text;
+ }
+
+ List parts = splitLines(text);
+ for ( int j = 0 ; j < parts.size() ; j++ ) {
+ chunk = parts.get(j);
+ nChunks.add(chunk);
+ }
+ }
+ else {
+ if ( nChunks.size() == 0 && actionToken.getColumn() > 0 ) {
+ // first chunk and some 'virtual' WS at beginning
+ // add as a chunk of its own
+
+ String ws = "";
+ for ( int j = 0 ; j < actionToken.getColumn() ; j++ ) {
+ ws += " ";
+ }
+ nChunks.add(ws);
+ }
+
+ nChunks.add(chunk);
+ }
+ }
+
+ int lineNo = actionToken.getLine();
+ int col = 0;
+
+ // strip trailing empty lines
+ int lastChunk = nChunks.size() - 1;
+ while ( lastChunk > 0
+ && nChunks.get(lastChunk) instanceof String
+ && ((String)nChunks.get(lastChunk)).trim().length() == 0 )
+ lastChunk--;
+
+ // strip leading empty lines
+ int firstChunk = 0;
+ while ( firstChunk <= lastChunk
+ && nChunks.get(firstChunk) instanceof String
+ && ((String)nChunks.get(firstChunk)).trim().length() == 0
+ && ((String)nChunks.get(firstChunk)).endsWith("\n") ) {
+ lineNo++;
+ firstChunk++;
+ }
+
+ int indent = -1;
+ for ( int i = firstChunk ; i <= lastChunk ; i++ ) {
+ Object chunk = nChunks.get(i);
+
+ //System.out.println(lineNo + ":" + col + " " + quote(chunk.toString()));
+
+ if ( chunk instanceof String ) {
+ String text = (String)chunk;
+
+ if ( col == 0 ) {
+ if ( indent == -1 ) {
+ // first non-blank line
+ // count number of leading whitespaces
+
+ indent = 0;
+ for ( int j = 0; j < text.length(); j++ ) {
+ if ( !Character.isWhitespace(text.charAt(j)) )
+ break;
+
+ indent++;
+ }
+ }
+
+ if ( text.length() >= indent ) {
+ int j;
+ for ( j = 0; j < indent ; j++ ) {
+ if ( !Character.isWhitespace(text.charAt(j)) ) {
+ // should do real error reporting here...
+ System.err.println("Warning: badly indented line " + lineNo + " in action:");
+ System.err.println(text);
+ break;
+ }
+ }
+
+ nChunks.set(i, text.substring(j));
+ }
+ else if ( text.trim().length() > 0 ) {
+ // should do real error reporting here...
+ System.err.println("Warning: badly indented line " + lineNo + " in action:");
+ System.err.println(text);
+ }
+ }
+
+ if ( text.endsWith("\n") ) {
+ lineNo++;
+ col = 0;
+ }
+ else {
+ col += text.length();
+ }
+ }
+ else {
+ // not really correct, but all I need is col to increment...
+ col += 1;
+ }
+ }
+
+ return nChunks;
+ }
+}
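The `splitLines` helper above differs from `String.split("\n")` in that each chunk keeps its trailing newline, which is what the indentation logic in `postProcessAction` relies on to track line boundaries. A standalone copy of its behavior (class name is mine):

```java
import java.util.ArrayList;
import java.util.List;

// Standalone copy of the newline-preserving line splitter above; unlike
// String.split("\n"), every chunk except the last keeps its trailing '\n',
// and a trailing newline in the input produces a final empty chunk.
class SplitLinesDemo {
    static List<String> splitLines(String text) {
        List<String> l = new ArrayList<String>();
        int idx = 0;
        while (true) {
            int eol = text.indexOf('\n', idx);
            if (eol == -1) {
                l.add(text.substring(idx)); // last chunk: no newline
                break;
            }
            l.add(text.substring(idx, eol + 1)); // keep the '\n'
            idx = eol + 1;
        }
        return l;
    }
}
```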
diff --git a/antlr_3_1_source/codegen/RubyTarget.java b/antlr_3_1_source/codegen/RubyTarget.java
new file mode 100644
index 0000000..d40a74b
--- /dev/null
+++ b/antlr_3_1_source/codegen/RubyTarget.java
@@ -0,0 +1,73 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005 Martin Traverso
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+package org.antlr.codegen;
+
+public class RubyTarget
+ extends Target
+{
+ public String getTargetCharLiteralFromANTLRCharLiteral(
+ CodeGenerator generator,
+ String literal)
+ {
+ literal = literal.substring(1, literal.length() - 1);
+
+ String result = "?";
+
+ if (literal.equals("\\")) {
+ result += "\\\\";
+ }
+ else if (literal.equals(" ")) {
+ result += "\\s";
+ }
+ else if (literal.startsWith("\\u")) {
+ result = "0x" + literal.substring(2);
+ }
+ else {
+ result += literal;
+ }
+
+ return result;
+ }
+
+ public int getMaxCharValue(CodeGenerator generator)
+ {
+ // we don't support unicode, yet.
+ return 0xFF;
+ }
+
+ public String getTokenTypeAsTargetLabel(CodeGenerator generator, int ttype)
+ {
+ String name = generator.grammar.getTokenDisplayName(ttype);
+ // If name is a literal, return the token type instead
+ if ( name.charAt(0)=='\'' ) {
+ return generator.grammar.computeTokenNameFromLiteral(ttype, name);
+ }
+ return name;
+ }
+}
diff --git a/antlr_3_1_source/codegen/Target.java b/antlr_3_1_source/codegen/Target.java
new file mode 100644
index 0000000..06c5eda
--- /dev/null
+++ b/antlr_3_1_source/codegen/Target.java
@@ -0,0 +1,303 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+package org.antlr.codegen;
+
+import org.antlr.Tool;
+import org.antlr.analysis.Label;
+import org.antlr.misc.Utils;
+import org.antlr.stringtemplate.StringTemplate;
+import org.antlr.tool.Grammar;
+
+import java.io.IOException;
+import java.util.List;
+
+/** The code generator for ANTLR can usually be retargeted just by providing
+ * a new X.stg file for language X, however, sometimes the files that must
+ * be generated vary enough that some X-specific functionality is required.
+ * For example, in C, you must generate header files whereas in Java you do not.
+ * Other languages may want to keep DFA separate from the main
+ * generated recognizer file.
+ *
+ * The notion of a Code Generator target abstracts out the creation
+ * of the various files. As new language targets get added to the ANTLR
+ * system, this target class may have to be altered to handle more
+ * functionality. Eventually, just about all language generation issues
+ * will be expressible in terms of these methods.
+ *
+ * If org.antlr.codegen.XTarget class exists, it is used else
+ * Target base class is used. I am using a superclass rather than an
+ * interface for this target concept because I can add functionality
+ * later without breaking previously written targets (extra interface
+ * methods would force adding dummy functions to all code generator
+ * target classes).
+ *
+ */
+public class Target {
+
+ /** For pure strings of Java 16-bit unicode char, how can we display
+ * it in the target language as a literal. Useful for dumping
+ * predicates and such that may refer to chars that need to be escaped
+ * when represented as strings. Also, templates need to be escaped so
+ * that the target language can hold them as a string.
+ *
+ * I have defined (via the constructor) the set of typical escapes,
+ * but your Target subclass is free to alter the translated chars or
+ * add more definitions. This is nonstatic so each target can have
+ * a different set in memory at same time.
+ */
+ protected String[] targetCharValueEscape = new String[255];
+
+ public Target() {
+ targetCharValueEscape['\n'] = "\\n";
+ targetCharValueEscape['\r'] = "\\r";
+ targetCharValueEscape['\t'] = "\\t";
+ targetCharValueEscape['\b'] = "\\b";
+ targetCharValueEscape['\f'] = "\\f";
+ targetCharValueEscape['\\'] = "\\\\";
+ targetCharValueEscape['\''] = "\\'";
+ targetCharValueEscape['"'] = "\\\"";
+ }
+
+ protected void genRecognizerFile(Tool tool,
+ CodeGenerator generator,
+ Grammar grammar,
+ StringTemplate outputFileST)
+ throws IOException
+ {
+ String fileName =
+ generator.getRecognizerFileName(grammar.name, grammar.type);
+ generator.write(outputFileST, fileName);
+ }
+
+ protected void genRecognizerHeaderFile(Tool tool,
+ CodeGenerator generator,
+ Grammar grammar,
+ StringTemplate headerFileST,
+ String extName) // e.g., ".h"
+ throws IOException
+ {
+ // no header file by default
+ }
+
+ protected void performGrammarAnalysis(CodeGenerator generator,
+ Grammar grammar)
+ {
+ // Build NFAs from the grammar AST
+ grammar.buildNFA();
+
+ // Create the DFA predictors for each decision
+ grammar.createLookaheadDFAs();
+ }
+
+ /** Is scope in @scope::name {action} valid for this kind of grammar?
+ * Targets like C++ may want to allow new scopes like headerfile or
+ * some such. The action names themselves are not policed at the
+ * moment so targets can add template actions w/o having to recompile
+ * ANTLR.
+ */
+ public boolean isValidActionScope(int grammarType, String scope) {
+ switch (grammarType) {
+ case Grammar.LEXER :
+ if ( scope.equals("lexer") ) {return true;}
+ break;
+ case Grammar.PARSER :
+ if ( scope.equals("parser") ) {return true;}
+ break;
+ case Grammar.COMBINED :
+ if ( scope.equals("parser") ) {return true;}
+ if ( scope.equals("lexer") ) {return true;}
+ break;
+ case Grammar.TREE_PARSER :
+ if ( scope.equals("treeparser") ) {return true;}
+ break;
+ }
+ return false;
+ }
+
+ /** Target must be able to override the labels used for token types */
+ public String getTokenTypeAsTargetLabel(CodeGenerator generator, int ttype) {
+ String name = generator.grammar.getTokenDisplayName(ttype);
+ // If name is a literal, return the token type instead
+ if ( name.charAt(0)=='\'' ) {
+ return String.valueOf(ttype);
+ }
+ return name;
+ }
+
+ /** Convert from an ANTLR char literal found in a grammar file to
+ * an equivalent char literal in the target language. For most
+ * languages, this means leaving 'x' as 'x'. Actually, we need
+ * to escape '\u000A' so that it doesn't get converted to \n by
+ * the compiler. Convert the literal to the char value and then
+ * to an appropriate target char literal.
+ *
+ * Expect single quotes around the incoming literal.
+ */
+ public String getTargetCharLiteralFromANTLRCharLiteral(
+ CodeGenerator generator,
+ String literal)
+ {
+ StringBuffer buf = new StringBuffer();
+ buf.append('\'');
+ int c = Grammar.getCharValueFromGrammarCharLiteral(literal);
+ if ( c<Label.MIN_CHAR_VALUE ) {
+ return "'\\u0000'";
+ }
+ if ( c<targetCharValueEscape.length &&
+ targetCharValueEscape[c]!=null )
+ {
+ buf.append(targetCharValueEscape[c]);
+ }
+ else {
+ buf.append((char)c);
+ }
+ buf.append('\'');
+ return buf.toString();
+ }
+
+ /** Convert from an ANTLR string literal found in a grammar file to
+ * an equivalent string literal in the target language. For Java, this
+ * is the translation 'a\n"' -> "a\n\"". Expect single quotes
+ * around the incoming literal. Just flip the quotes and replace
+ * double quotes with \"
+ */
+ public String getTargetStringLiteralFromANTLRStringLiteral(
+ CodeGenerator generator,
+ String literal)
+ {
+ literal = Utils.replace(literal,"\\\"","\""); // \" to " to normalize
+ literal = Utils.replace(literal,"\"","\\\""); // " to \" to escape all
+ StringBuffer buf = new StringBuffer(literal);
+ buf.setCharAt(0,'"');
+ buf.setCharAt(literal.length()-1,'"');
+ return buf.toString();
+ }
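The quote flip above is a two-pass rewrite: first `\"` is normalized back to `"` so already-escaped quotes aren't doubled, then every `"` is escaped, and finally the outer single quotes are overwritten with double quotes. A self-contained sketch of the same steps (class and method names are mine):

```java
// Standalone version of the string-literal quote flip above:
// normalize \" to ", re-escape every ", then swap the outer quotes.
class QuoteFlipDemo {
    static String flip(String literal) {
        literal = literal.replace("\\\"", "\""); // \" -> "  (normalize)
        literal = literal.replace("\"", "\\\""); // "  -> \" (escape all)
        StringBuilder buf = new StringBuilder(literal);
        buf.setCharAt(0, '"');                   // opening ' becomes "
        buf.setCharAt(literal.length() - 1, '"'); // closing ' becomes "
        return buf.toString();
    }
}
```

So the grammar literal `'a\n"'` becomes the target literal `"a\n\""`, matching the Javadoc example.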
+
+ /** Given a random string of Java unicode chars, return a new string with
+ * optionally appropriate quote characters for target language and possibly
+ * with some escaped characters. For example, if the incoming string has
+ * actual newline characters, the output of this method would convert them
+ * to the two char sequence \n for Java, C, C++, ... The new string has
+ * double-quotes around it as well. Example String in memory:
+ *
+ * a"[newlinechar]b'c[carriagereturnchar]d[tab]e\f
+ *
+ * would be converted to the valid Java s:
+ *
+ * "a\"\nb'c\rd\te\\f"
+ *
+ * or
+ *
+ * a\"\nb'c\rd\te\\f
+ *
+ * depending on the quoted arg.
+ */
+ public String getTargetStringLiteralFromString(String s, boolean quoted) {
+ if ( s==null ) {
+ return null;
+ }
+ StringBuffer buf = new StringBuffer();
+ if ( quoted ) {
+ buf.append('"');
+ }
+ for (int i=0; i<s.length(); i++) {
+ char c = s.charAt(i);
+ if ( c!='\'' && // don't escape single quotes in strings for java
+ c<targetCharValueEscape.length &&
+ targetCharValueEscape[c]!=null )
+ {
+ buf.append(targetCharValueEscape[c]);
+ }
+ else {
+ buf.append(c);
+ }
+ }
+ if ( quoted ) {
+ buf.append('"');
+ }
+ return buf.toString();
+ }
+
+ protected StringTemplate getRuleElementST(String name,
+ String ruleTargetName,
+ GrammarAST elementAST,
+ GrammarAST ast_suffix,
+ String label)
+ {
+ String suffix = getSTSuffix(ast_suffix,label);
+ name += suffix;
+ // if we're building trees and there is no label, gen a label
+ // unless we're in a synpred rule.
+ Rule r = grammar.getRule(currentRuleName);
+ if ( (grammar.buildAST()||suffix.length()>0) && label==null &&
+ (r==null || !r.isSynPred) )
+ {
+ // we will need a label to do the AST or tracking, make one
+ label = generator.createUniqueLabel(ruleTargetName);
+ CommonToken labelTok = new CommonToken(ANTLRParser.ID, label);
+ grammar.defineRuleRefLabel(currentRuleName, labelTok, elementAST);
+ }
+ StringTemplate elementST = templates.getInstanceOf(name);
+ if ( label!=null ) {
+ elementST.setAttribute("label", label);
+ }
+ return elementST;
+ }
+
+ protected StringTemplate getTokenElementST(String name,
+ String elementName,
+ GrammarAST elementAST,
+ GrammarAST ast_suffix,
+ String label)
+ {
+ String suffix = getSTSuffix(ast_suffix,label);
+ name += suffix;
+ // if we're building trees and there is no label, gen a label
+ // unless we're in a synpred rule.
+ Rule r = grammar.getRule(currentRuleName);
+ if ( (grammar.buildAST()||suffix.length()>0) && label==null &&
+ (r==null || !r.isSynPred) )
+ {
+ label = generator.createUniqueLabel(elementName);
+ CommonToken labelTok = new CommonToken(ANTLRParser.ID, label);
+ grammar.defineTokenRefLabel(currentRuleName, labelTok, elementAST);
+ }
+ StringTemplate elementST = templates.getInstanceOf(name);
+ if ( label!=null ) {
+ elementST.setAttribute("label", label);
+ }
+ return elementST;
+ }
+
+ public boolean isListLabel(String label) {
+ boolean hasListLabel=false;
+ if ( label!=null ) {
+ Rule r = grammar.getRule(currentRuleName);
+ String stName = null;
+ if ( r!=null ) {
+ Grammar.LabelElementPair pair = r.getLabel(label);
+ if ( pair!=null &&
+ (pair.type==Grammar.TOKEN_LIST_LABEL||
+ pair.type==Grammar.RULE_LIST_LABEL) )
+ {
+ hasListLabel=true;
+ }
+ }
+ }
+ return hasListLabel;
+ }
+
+ /** Return a non-empty template name suffix if the token is to be
+ * tracked, added to a tree, or both.
+ */
+ protected String getSTSuffix(GrammarAST ast_suffix, String label) {
+ if ( grammar.type==Grammar.LEXER ) {
+ return "";
+ }
+ // handle list label stuff; make element use "Track"
+
+ String operatorPart = "";
+ String rewritePart = "";
+ String listLabelPart = "";
+ Rule ruleDescr = grammar.getRule(currentRuleName);
+ if ( ast_suffix!=null && !ruleDescr.isSynPred ) {
+ if ( ast_suffix.getType()==ANTLRParser.ROOT ) {
+ operatorPart = "RuleRoot";
+ }
+ else if ( ast_suffix.getType()==ANTLRParser.BANG ) {
+ operatorPart = "Bang";
+ }
+ }
+ if ( currentAltHasASTRewrite ) {
+ rewritePart = "Track";
+ }
+ if ( isListLabel(label) ) {
+ listLabelPart = "AndListLabel";
+ }
+ String STsuffix = operatorPart+rewritePart+listLabelPart;
+ //System.out.println("suffix = "+STsuffix);
+
+ return STsuffix;
+ }
+
+ /** Convert rewrite AST lists to target labels list */
+ protected List getTokenTypesAsTargetLabels(Set refs) {
+ if ( refs==null || refs.size()==0 ) {
+ return null;
+ }
+ List labels = new ArrayList(refs.size());
+ for (GrammarAST t : refs) {
+ String label;
+ if ( t.getType()==ANTLRParser.RULE_REF ) {
+ label = t.getText();
+ }
+ else if ( t.getType()==ANTLRParser.LABEL ) {
+ label = t.getText();
+ }
+ else {
+ // must be char or string literal
+ label = generator.getTokenTypeAsTargetLabel(
+ grammar.getTokenType(t.getText()));
+ }
+ labels.add(label);
+ }
+ return labels;
+ }
+
+ public void init(Grammar g) {
+ this.grammar = g;
+ this.generator = grammar.getCodeGenerator();
+ this.templates = generator.getTemplates();
+ }
+}
+
+grammar[Grammar g,
+ StringTemplate recognizerST,
+ StringTemplate outputFileST,
+ StringTemplate headerFileST]
+{
+ init(g);
+ this.recognizerST = recognizerST;
+ this.outputFileST = outputFileST;
+ this.headerFileST = headerFileST;
+ String superClass = (String)g.getOption("superClass");
+ outputOption = (String)g.getOption("output");
+ recognizerST.setAttribute("superClass", superClass);
+ if ( g.type!=Grammar.LEXER ) {
+ recognizerST.setAttribute("ASTLabelType", g.getOption("ASTLabelType"));
+ }
+ if ( g.type==Grammar.TREE_PARSER && g.getOption("ASTLabelType")==null ) {
+ ErrorManager.grammarWarning(ErrorManager.MSG_MISSING_AST_TYPE_IN_TREE_GRAMMAR,
+ g,
+ null,
+ g.name);
+ }
+ if ( g.type!=Grammar.TREE_PARSER ) {
+ recognizerST.setAttribute("labelType", g.getOption("TokenLabelType"));
+ }
+ recognizerST.setAttribute("numRules", grammar.getRules().size());
+ outputFileST.setAttribute("numRules", grammar.getRules().size());
+ headerFileST.setAttribute("numRules", grammar.getRules().size());
+}
+ : ( #( LEXER_GRAMMAR grammarSpec )
+ | #( PARSER_GRAMMAR grammarSpec )
+ | #( TREE_GRAMMAR grammarSpec )
+ | #( COMBINED_GRAMMAR grammarSpec )
+ )
+ ;
+
+attrScope
+ : #( "scope" ID ACTION )
+ ;
+
+grammarSpec
+ : name:ID
+ (cmt:DOC_COMMENT
+ {
+ outputFileST.setAttribute("docComment", #cmt.getText());
+ headerFileST.setAttribute("docComment", #cmt.getText());
+ }
+ )?
+ {
+ recognizerST.setAttribute("name", grammar.getRecognizerName());
+ outputFileST.setAttribute("name", grammar.getRecognizerName());
+ headerFileST.setAttribute("name", grammar.getRecognizerName());
+ recognizerST.setAttribute("scopes", grammar.getGlobalScopes());
+ headerFileST.setAttribute("scopes", grammar.getGlobalScopes());
+ }
+ ( #(OPTIONS .) )?
+ ( #(IMPORT .) )?
+ ( #(TOKENS .) )?
+ (attrScope)*
+ (AMPERSAND)*
+ rules[recognizerST]
+ ;
+
+rules[StringTemplate recognizerST]
+{
+StringTemplate rST;
+}
+ : ( ( {
+ String ruleName = _t.getFirstChild().getText();
+ Rule r = grammar.getRule(ruleName);
+ }
+ :
+ {grammar.generateMethodForRule(ruleName)}?
+ rST=rule
+ {
+ if ( rST!=null ) {
+ recognizerST.setAttribute("rules", rST);
+ outputFileST.setAttribute("rules", rST);
+ headerFileST.setAttribute("rules", rST);
+ }
+ }
+ | RULE
+ )
+ )+
+ ;
+
+rule returns [StringTemplate code=null]
+{
+ String r;
+ String initAction = null;
+ StringTemplate b;
+ // get the dfa for the BLOCK
+ GrammarAST block=#rule.getFirstChildWithType(BLOCK);
+ DFA dfa=block.getLookaheadDFA();
+ // init blockNestingLevel so the alts of this rule sit at
+ // RULE_BLOCK_NESTING_LEVEL
+ blockNestingLevel = RULE_BLOCK_NESTING_LEVEL-1;
+ Rule ruleDescr = grammar.getRule(#rule.getFirstChild().getText());
+
+ // For syn preds, we don't want any AST code etc... in there.
+ // Save old templates ptr and restore later. Base templates include Dbg.
+ StringTemplateGroup saveGroup = templates;
+ if ( ruleDescr.isSynPred ) {
+ templates = generator.getBaseTemplates();
+ }
+}
+ : #( RULE id:ID {r=#id.getText(); currentRuleName = r;}
+ (mod:modifier)?
+ #(ARG (ARG_ACTION)?)
+ #(RET (ARG_ACTION)?)
+ ( #(OPTIONS .) )?
+ (ruleScopeSpec)?
+ (AMPERSAND)*
+ b=block["ruleBlock", dfa]
+ {
+ String description =
+ grammar.grammarTreeToString(#rule.getFirstChildWithType(BLOCK),
+ false);
+ description =
+ generator.target.getTargetStringLiteralFromString(description);
+ b.setAttribute("description", description);
+ // do not generate lexer rules in combined grammar
+ String stName = null;
+ if ( ruleDescr.isSynPred ) {
+ stName = "synpredRule";
+ }
+ else if ( grammar.type==Grammar.LEXER ) {
+ if ( r.equals(Grammar.ARTIFICIAL_TOKENS_RULENAME) ) {
+ stName = "tokensRule";
+ }
+ else {
+ stName = "lexerRule";
+ }
+ }
+ else {
+ if ( !(grammar.type==Grammar.COMBINED &&
+ Character.isUpperCase(r.charAt(0))) )
+ {
+ stName = "rule";
+ }
+ }
+ code = templates.getInstanceOf(stName);
+ if ( code.getName().equals("rule") ) {
+ code.setAttribute("emptyRule",
+ Boolean.valueOf(grammar.isEmptyRule(block)));
+ }
+ code.setAttribute("ruleDescriptor", ruleDescr);
+ String memo = (String)grammar.getBlockOption(#rule,"memoize");
+ if ( memo==null ) {
+ memo = (String)grammar.getOption("memoize");
+ }
+ if ( memo!=null && memo.equals("true") &&
+ (stName.equals("rule")||stName.equals("lexerRule")) )
+ {
+ code.setAttribute("memoize", Boolean.TRUE);
+ }
+ }
+
+ (exceptionGroup[code])?
+ EOR
+ )
+ {
+ if ( code!=null ) {
+ if ( grammar.type==Grammar.LEXER ) {
+ boolean naked =
+ r.equals(Grammar.ARTIFICIAL_TOKENS_RULENAME) ||
+ (mod!=null&&mod.getText().equals(Grammar.FRAGMENT_RULE_MODIFIER));
+ code.setAttribute("nakedBlock", Boolean.valueOf(naked));
+ }
+ else {
+ description =
+ grammar.grammarTreeToString(#rule,false);
+ description =
+ generator.target.getTargetStringLiteralFromString(description);
+ code.setAttribute("description", description);
+ }
+ Rule theRule = grammar.getRule(r);
+ generator.translateActionAttributeReferencesForSingleScope(
+ theRule,
+ theRule.getActions()
+ );
+ code.setAttribute("ruleName", r);
+ code.setAttribute("block", b);
+ if ( initAction!=null ) {
+ code.setAttribute("initAction", initAction);
+ }
+ }
+ templates = saveGroup;
+ }
+ ;
+
+modifier
+ : "protected"
+ | "public"
+ | "private"
+ | "fragment"
+ ;
+
+ruleScopeSpec
+ : #( "scope" (ACTION)? ( ID )* )
+ ;
+
+block[String blockTemplateName, DFA dfa]
+ returns [StringTemplate code=null]
+{
+ StringTemplate decision = null;
+ if ( dfa!=null ) {
+ code = templates.getInstanceOf(blockTemplateName);
+ decision = generator.genLookaheadDecision(recognizerST,dfa);
+ code.setAttribute("decision", decision);
+ code.setAttribute("decisionNumber", dfa.getDecisionNumber());
+ code.setAttribute("maxK",dfa.getMaxLookaheadDepth());
+ code.setAttribute("maxAlt",dfa.getNumberOfAlts());
+ }
+ else {
+ code = templates.getInstanceOf(blockTemplateName+"SingleAlt");
+ }
+ blockNestingLevel++;
+ code.setAttribute("blockLevel", blockNestingLevel);
+ code.setAttribute("enclosingBlockLevel", blockNestingLevel-1);
+ StringTemplate alt = null;
+ StringTemplate rew = null;
+ StringTemplate sb = null;
+ GrammarAST r = null;
+ int altNum = 1;
+ if ( this.blockNestingLevel==RULE_BLOCK_NESTING_LEVEL ) {
+ this.outerAltNum=1;
+ }
+}
+ : {#block.getSetValue()!=null}? sb=setBlock
+ {
+ code.setAttribute("alts",sb);
+ blockNestingLevel--;
+ }
+
+ | #( BLOCK
+ ( OPTIONS )? // ignore
+ ( alt=alternative {r=(GrammarAST)_t;} rew=rewrite
+ {
+ if ( this.blockNestingLevel==RULE_BLOCK_NESTING_LEVEL ) {
+ this.outerAltNum++;
+ }
+ // add the rewrite code as just another element in the alt
+ // (unless it's an " -> ..." (ETC) rewrite)
+ boolean etc =
+ r.getType()==REWRITE &&
+ r.getFirstChild()!=null &&
+ r.getFirstChild().getType()==ETC;
+ if ( rew!=null && !etc ) { alt.setAttribute("rew", rew); }
+ // add this alt to the list of alts for this block
+ code.setAttribute("alts",alt);
+ alt.setAttribute("altNum", Utils.integer(altNum));
+ alt.setAttribute("outerAlt",
+ Boolean.valueOf(blockNestingLevel==RULE_BLOCK_NESTING_LEVEL));
+ altNum++;
+ }
+ )+
+ EOB
+ )
+ {blockNestingLevel--;}
+ ;
+
+setBlock returns [StringTemplate code=null]
+{
+StringTemplate setcode = null;
+if ( blockNestingLevel==RULE_BLOCK_NESTING_LEVEL && grammar.buildAST() ) {
+ Rule r = grammar.getRule(currentRuleName);
+ currentAltHasASTRewrite = r.hasRewrite(outerAltNum);
+ if ( currentAltHasASTRewrite ) {
+ r.trackTokenReferenceInAlt(#setBlock, outerAltNum);
+ }
+}
+}
+ : s:BLOCK
+ {
+ int i = ((TokenWithIndex)#s.getToken()).getIndex();
+ if ( blockNestingLevel==RULE_BLOCK_NESTING_LEVEL ) {
+ setcode = getTokenElementST("matchRuleBlockSet", "set", #s, null, null);
+ }
+ else {
+ setcode = getTokenElementST("matchSet", "set", #s, null, null);
+ }
+ setcode.setAttribute("elementIndex", i);
+ if ( grammar.type!=Grammar.LEXER ) {
+ generator.generateLocalFOLLOW(#s,"set",currentRuleName,i);
+ }
+ setcode.setAttribute("s",
+ generator.genSetExpr(templates,#s.getSetValue(),1,false));
+ StringTemplate altcode=templates.getInstanceOf("alt");
+ altcode.setAttribute("elements.{el,line,pos}",
+ setcode,
+ Utils.integer(#s.getLine()),
+ Utils.integer(#s.getColumn())
+ );
+ altcode.setAttribute("altNum", Utils.integer(1));
+ altcode.setAttribute("outerAlt",
+ Boolean.valueOf(blockNestingLevel==RULE_BLOCK_NESTING_LEVEL));
+ if ( !currentAltHasASTRewrite && grammar.buildAST() ) {
+ altcode.setAttribute("autoAST", Boolean.valueOf(true));
+ }
+ altcode.setAttribute("treeLevel", rewriteTreeNestingLevel);
+ code = altcode;
+ }
+ ;
+
+exceptionGroup[StringTemplate ruleST]
+ : ( exceptionHandler[ruleST] )+ (finallyClause[ruleST])?
+ | finallyClause[ruleST]
+ ;
+
+exceptionHandler[StringTemplate ruleST]
+ : #("catch" ARG_ACTION ACTION)
+ {
+ List chunks = generator.translateAction(currentRuleName,#ACTION);
+ ruleST.setAttribute("exceptions.{decl,action}",#ARG_ACTION.getText(),chunks);
+ }
+ ;
+
+finallyClause[StringTemplate ruleST]
+ : #("finally" ACTION)
+ {
+ List chunks = generator.translateAction(currentRuleName,#ACTION);
+ ruleST.setAttribute("finally",chunks);
+ }
+ ;
+
+alternative returns [StringTemplate code=templates.getInstanceOf("alt")]
+{
+/*
+// TODO: can we use Rule.altsWithRewrites???
+if ( blockNestingLevel==RULE_BLOCK_NESTING_LEVEL ) {
+ GrammarAST aRewriteNode = #alternative.findFirstType(REWRITE);
+ if ( grammar.buildAST() &&
+ (aRewriteNode!=null||
+ (#alternative.getNextSibling()!=null &&
+ #alternative.getNextSibling().getType()==REWRITE)) )
+ {
+ currentAltHasASTRewrite = true;
+ }
+ else {
+ currentAltHasASTRewrite = false;
+ }
+}
+*/
+if ( blockNestingLevel==RULE_BLOCK_NESTING_LEVEL && grammar.buildAST() ) {
+ Rule r = grammar.getRule(currentRuleName);
+ currentAltHasASTRewrite = r.hasRewrite(outerAltNum);
+}
+String description = grammar.grammarTreeToString(#alternative, false);
+description = generator.target.getTargetStringLiteralFromString(description);
+code.setAttribute("description", description);
+code.setAttribute("treeLevel", rewriteTreeNestingLevel);
+if ( !currentAltHasASTRewrite && grammar.buildAST() ) {
+ code.setAttribute("autoAST", Boolean.valueOf(true));
+}
+StringTemplate e;
+}
+ : #( a:ALT
+ ( {GrammarAST elAST=(GrammarAST)_t;}
+ e=element[null,null]
+ {
+ if ( e!=null ) {
+ code.setAttribute("elements.{el,line,pos}",
+ e,
+ Utils.integer(elAST.getLine()),
+ Utils.integer(elAST.getColumn())
+ );
+ }
+ }
+ )+
+ EOA
+ )
+ ;
+
+element[GrammarAST label, GrammarAST astSuffix] returns [StringTemplate code=null]
+{
+ IntSet elements=null;
+ GrammarAST ast = null;
+}
+ : #(ROOT code=element[label,#ROOT])
+
+ | #(BANG code=element[label,#BANG])
+
+ | #( n:NOT code=notElement[#n, label, astSuffix] )
+
+ | #( ASSIGN alabel:ID code=element[#alabel,astSuffix] )
+
+ | #( PLUS_ASSIGN label2:ID code=element[#label2,astSuffix] )
+
+ | #(CHAR_RANGE a:CHAR_LITERAL b:CHAR_LITERAL)
+ {code = templates.getInstanceOf("charRangeRef");
+ String low =
+ generator.target.getTargetCharLiteralFromANTLRCharLiteral(generator,a.getText());
+ String high =
+ generator.target.getTargetCharLiteralFromANTLRCharLiteral(generator,b.getText());
+ code.setAttribute("a", low);
+ code.setAttribute("b", high);
+ if ( label!=null ) {
+ code.setAttribute("label", label.getText());
+ }
+ }
+
+ | {#element.getSetValue()==null}? code=ebnf
+
+ | code=atom[null, label, astSuffix]
+
+ | code=tree
+
+ | code=element_action
+
+ | (sp:SEMPRED|gsp:GATED_SEMPRED {#sp=#gsp;})
+ {
+ code = templates.getInstanceOf("validateSemanticPredicate");
+ code.setAttribute("pred", generator.translateAction(currentRuleName,#sp));
+ String description =
+ generator.target.getTargetStringLiteralFromString(#sp.getText());
+ code.setAttribute("description", description);
+ }
+
+ | SYN_SEMPRED // used only in lookahead; don't generate validating pred
+
+ | BACKTRACK_SEMPRED
+
+ | EPSILON
+ ;
+
+element_action returns [StringTemplate code=null]
+ : act:ACTION
+ {
+ code = templates.getInstanceOf("execAction");
+ code.setAttribute("action", generator.translateAction(currentRuleName,#act));
+ }
+ | act2:FORCED_ACTION
+ {
+ code = templates.getInstanceOf("execForcedAction");
+ code.setAttribute("action", generator.translateAction(currentRuleName,#act2));
+ }
+ ;
+
+notElement[GrammarAST n, GrammarAST label, GrammarAST astSuffix]
+returns [StringTemplate code=null]
+{
+ IntSet elements=null;
+ String labelText = null;
+ if ( label!=null ) {
+ labelText = label.getText();
+ }
+}
+ : (assign_c:CHAR_LITERAL
+ {
+ int ttype=0;
+ if ( grammar.type==Grammar.LEXER ) {
+ ttype = Grammar.getCharValueFromGrammarCharLiteral(assign_c.getText());
+ }
+ else {
+ ttype = grammar.getTokenType(assign_c.getText());
+ }
+ elements = grammar.complement(ttype);
+ }
+ | assign_s:STRING_LITERAL
+ {
+ int ttype=0;
+ if ( grammar.type==Grammar.LEXER ) {
+ // TODO: error!
+ }
+ else {
+ ttype = grammar.getTokenType(assign_s.getText());
+ }
+ elements = grammar.complement(ttype);
+ }
+ | assign_t:TOKEN_REF
+ {
+ int ttype = grammar.getTokenType(assign_t.getText());
+ elements = grammar.complement(ttype);
+ }
+ | assign_st:BLOCK
+ {
+ elements = assign_st.getSetValue();
+ elements = grammar.complement(elements);
+ }
+ )
+ {
+ code = getTokenElementST("matchSet",
+ "set",
+ (GrammarAST)n.getFirstChild(),
+ astSuffix,
+ labelText);
+ code.setAttribute("s",generator.genSetExpr(templates,elements,1,false));
+ int i = ((TokenWithIndex)n.getToken()).getIndex();
+ code.setAttribute("elementIndex", i);
+ if ( grammar.type!=Grammar.LEXER ) {
+ generator.generateLocalFOLLOW(n,"set",currentRuleName,i);
+ }
+ }
+ ;
+
+ebnf returns [StringTemplate code=null]
+{
+ DFA dfa=null;
+ GrammarAST b = (GrammarAST)#ebnf.getFirstChild();
+ GrammarAST eob = (GrammarAST)#b.getLastChild(); // loops will use EOB DFA
+}
+ : ( {dfa = #ebnf.getLookaheadDFA();}
+ code=block["block", dfa]
+ | {dfa = #ebnf.getLookaheadDFA();}
+ #( OPTIONAL code=block["optionalBlock", dfa] )
+ | {dfa = #eob.getLookaheadDFA();}
+ #( CLOSURE code=block["closureBlock", dfa] )
+ | {dfa = #eob.getLookaheadDFA();}
+ #( POSITIVE_CLOSURE code=block["positiveClosureBlock", dfa] )
+ )
+ {
+ String description = grammar.grammarTreeToString(#ebnf, false);
+ description = generator.target.getTargetStringLiteralFromString(description);
+ code.setAttribute("description", description);
+ }
+ ;
+
+tree returns [StringTemplate code=templates.getInstanceOf("tree")]
+{
+StringTemplate el=null, act=null;
+GrammarAST elAST=null, actAST=null;
+NFAState afterDOWN = (NFAState)tree_AST_in.NFATreeDownState.transition(0).target;
+LookaheadSet s = grammar.LOOK(afterDOWN);
+if ( s.member(Label.UP) ) {
+ // nullable child list if we can see the UP as the next token
+ // we need an "if ( input.LA(1)==Token.DOWN )" gate around
+ // the child list.
+ code.setAttribute("nullableChildList", "true");
+}
+rewriteTreeNestingLevel++;
+code.setAttribute("enclosingTreeLevel", rewriteTreeNestingLevel-1);
+code.setAttribute("treeLevel", rewriteTreeNestingLevel);
+Rule r = grammar.getRule(currentRuleName);
+GrammarAST rootSuffix = null;
+if ( grammar.buildAST() && !r.hasRewrite(outerAltNum) ) {
+ rootSuffix = new GrammarAST(ROOT,"ROOT");
+}
+}
+ : #( TREE_BEGIN {elAST=(GrammarAST)_t;}
+ el=element[null,rootSuffix]
+ {
+ code.setAttribute("root.{el,line,pos}",
+ el,
+ Utils.integer(elAST.getLine()),
+ Utils.integer(elAST.getColumn())
+ );
+ }
+ // push all the immediately-following actions out before children
+ // so actions aren't wrapped by the "if (input.LA(1)==Token.DOWN)"
+ // check in the generated code.
+ ( options {greedy=true;}:
+ {actAST=(GrammarAST)_t;}
+ act=element_action
+ {
+ code.setAttribute("actionsAfterRoot.{el,line,pos}",
+ act,
+ Utils.integer(actAST.getLine()),
+ Utils.integer(actAST.getColumn())
+ );
+ }
+ )*
+ ( {elAST=(GrammarAST)_t;}
+ el=element[null,null]
+ {
+ code.setAttribute("children.{el,line,pos}",
+ el,
+ Utils.integer(elAST.getLine()),
+ Utils.integer(elAST.getColumn())
+ );
+ }
+ )*
+ )
+ {rewriteTreeNestingLevel--;}
+ ;
+
+atom[GrammarAST scope, GrammarAST label, GrammarAST astSuffix]
+ returns [StringTemplate code=null]
+{
+String labelText=null;
+if ( label!=null ) {
+ labelText = label.getText();
+}
+if ( grammar.type!=Grammar.LEXER &&
+ (#atom.getType()==RULE_REF||#atom.getType()==TOKEN_REF||
+ #atom.getType()==CHAR_LITERAL||#atom.getType()==STRING_LITERAL) )
+{
+ Rule encRule = grammar.getRule(((GrammarAST)#atom).enclosingRuleName);
+ if ( encRule!=null && encRule.hasRewrite(outerAltNum) && astSuffix!=null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_AST_OP_IN_ALT_WITH_REWRITE,
+ grammar,
+ ((GrammarAST)#atom).getToken(),
+ ((GrammarAST)#atom).enclosingRuleName,
+ Utils.integer(outerAltNum));
+ astSuffix = null;
+ }
+}
+}
+ : #( r:RULE_REF (rarg:ARG_ACTION)? )
+ {
+ grammar.checkRuleReference(scope, #r, #rarg, currentRuleName);
+ String scopeName = null;
+ if ( scope!=null ) {
+ scopeName = scope.getText();
+ }
+ Rule rdef = grammar.getRule(scopeName, #r.getText());
+ // don't insert label=r() if $label.attr not used, no ret value, ...
+ if ( !rdef.getHasReturnValue() ) {
+ labelText = null;
+ }
+ code = getRuleElementST("ruleRef", #r.getText(), #r, astSuffix, labelText);
+ code.setAttribute("rule", rdef);
+ if ( scope!=null ) { // scoped rule ref
+ Grammar scopeG = grammar.composite.getGrammar(scope.getText());
+ code.setAttribute("scope", scopeG);
+ }
+ else if ( rdef.grammar != this.grammar ) { // nonlocal
+ // if rule definition is not in this grammar, it's nonlocal
+ List rdefDelegates = rdef.grammar.getDelegates();
+ if ( rdefDelegates.contains(this.grammar) ) {
+ code.setAttribute("scope", rdef.grammar);
+ }
+ else {
+ // defining grammar is not a delegate; scope all the
+ // way back to the root, which has delegate methods for
+ // all rules. Don't use scope if we are the root.
+ if ( this.grammar != rdef.grammar.composite.delegateGrammarTreeRoot.grammar ) {
+ code.setAttribute("scope",
+ rdef.grammar.composite.delegateGrammarTreeRoot.grammar);
+ }
+ }
+ }
+
+ if ( #rarg!=null ) {
+ List args = generator.translateAction(currentRuleName,#rarg);
+ code.setAttribute("args", args);
+ }
+ int i = ((TokenWithIndex)r.getToken()).getIndex();
+ code.setAttribute("elementIndex", i);
+ generator.generateLocalFOLLOW(#r,#r.getText(),currentRuleName,i);
+ #r.code = code;
+ }
+
+ | #( t:TOKEN_REF (targ:ARG_ACTION)? )
+ {
+ if ( currentAltHasASTRewrite && #t.terminalOptions!=null &&
+ #t.terminalOptions.get(Grammar.defaultTokenOption)!=null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_HETERO_ILLEGAL_IN_REWRITE_ALT,
+ grammar,
+ ((GrammarAST)(#t)).getToken(),
+ #t.getText());
+ }
+ grammar.checkRuleReference(scope, #t, #targ, currentRuleName);
+ if ( grammar.type==Grammar.LEXER ) {
+ if ( grammar.getTokenType(t.getText())==Label.EOF ) {
+ code = templates.getInstanceOf("lexerMatchEOF");
+ }
+ else {
+ code = templates.getInstanceOf("lexerRuleRef");
+ if ( isListLabel(labelText) ) {
+ code = templates.getInstanceOf("lexerRuleRefAndListLabel");
+ }
+ String scopeName = null;
+ if ( scope!=null ) {
+ scopeName = scope.getText();
+ }
+ Rule rdef2 = grammar.getRule(scopeName, #t.getText());
+ code.setAttribute("rule", rdef2);
+ if ( scope!=null ) { // scoped rule ref
+ Grammar scopeG = grammar.composite.getGrammar(scope.getText());
+ code.setAttribute("scope", scopeG);
+ }
+ else if ( rdef2.grammar != this.grammar ) { // nonlocal
+ // if rule definition is not in this grammar, it's nonlocal
+ code.setAttribute("scope", rdef2.grammar);
+ }
+ if ( #targ!=null ) {
+ List args = generator.translateAction(currentRuleName,#targ);
+ code.setAttribute("args", args);
+ }
+ }
+ int i = ((TokenWithIndex)#t.getToken()).getIndex();
+ code.setAttribute("elementIndex", i);
+ if ( label!=null ) code.setAttribute("label", labelText);
+ }
+ else {
+ code = getTokenElementST("tokenRef", #t.getText(), #t, astSuffix, labelText);
+ String tokenLabel =
+ generator.getTokenTypeAsTargetLabel(grammar.getTokenType(t.getText()));
+ code.setAttribute("token",tokenLabel);
+ if ( !currentAltHasASTRewrite && #t.terminalOptions!=null ) {
+ code.setAttribute("hetero",#t.terminalOptions.get(Grammar.defaultTokenOption));
+ }
+ int i = ((TokenWithIndex)#t.getToken()).getIndex();
+ code.setAttribute("elementIndex", i);
+ generator.generateLocalFOLLOW(#t,tokenLabel,currentRuleName,i);
+ }
+ #t.code = code;
+ }
+
+ | c:CHAR_LITERAL
+ {
+ if ( grammar.type==Grammar.LEXER ) {
+ code = templates.getInstanceOf("charRef");
+ code.setAttribute("char",
+ generator.target.getTargetCharLiteralFromANTLRCharLiteral(generator,c.getText()));
+ if ( label!=null ) {
+ code.setAttribute("label", labelText);
+ }
+ }
+ else { // else it's a token type reference
+ code = getTokenElementST("tokenRef", "char_literal", #c, astSuffix, labelText);
+ String tokenLabel = generator.getTokenTypeAsTargetLabel(grammar.getTokenType(c.getText()));
+ code.setAttribute("token",tokenLabel);
+ if ( #c.terminalOptions!=null ) {
+ code.setAttribute("hetero",#c.terminalOptions.get(Grammar.defaultTokenOption));
+ }
+ int i = ((TokenWithIndex)#c.getToken()).getIndex();
+ code.setAttribute("elementIndex", i);
+ generator.generateLocalFOLLOW(#c,tokenLabel,currentRuleName,i);
+ }
+ }
+
+ | s:STRING_LITERAL
+ {
+ if ( grammar.type==Grammar.LEXER ) {
+ code = templates.getInstanceOf("lexerStringRef");
+ code.setAttribute("string",
+ generator.target.getTargetStringLiteralFromANTLRStringLiteral(generator,s.getText()));
+ if ( label!=null ) {
+ code.setAttribute("label", labelText);
+ }
+ }
+ else { // else it's a token type reference
+ code = getTokenElementST("tokenRef", "string_literal", #s, astSuffix, labelText);
+ String tokenLabel =
+ generator.getTokenTypeAsTargetLabel(grammar.getTokenType(#s.getText()));
+ code.setAttribute("token",tokenLabel);
+ if ( #s.terminalOptions!=null ) {
+ code.setAttribute("hetero",#s.terminalOptions.get(Grammar.defaultTokenOption));
+ }
+ int i = ((TokenWithIndex)#s.getToken()).getIndex();
+ code.setAttribute("elementIndex", i);
+ generator.generateLocalFOLLOW(#s,tokenLabel,currentRuleName,i);
+ }
+ }
+
+ | w:WILDCARD
+ {
+ code = getWildcardST(#w,astSuffix,labelText);
+ code.setAttribute("elementIndex", ((TokenWithIndex)#w.getToken()).getIndex());
+ }
+
+ | #(DOT ID code=atom[#ID, label, astSuffix]) // scope override on rule or token
+
+ | code=set[label,astSuffix]
+ ;
+
+ast_suffix
+ : ROOT
+ | BANG
+ ;
+
+
+set[GrammarAST label, GrammarAST astSuffix] returns [StringTemplate code=null]
+{
+String labelText=null;
+if ( label!=null ) {
+ labelText = label.getText();
+}
+}
+ : s:BLOCK // only care that it's a BLOCK with setValue!=null
+ {
+ code = getTokenElementST("matchSet", "set", #s, astSuffix, labelText);
+ int i = ((TokenWithIndex)#s.getToken()).getIndex();
+ code.setAttribute("elementIndex", i);
+ if ( grammar.type!=Grammar.LEXER ) {
+ generator.generateLocalFOLLOW(#s,"set",currentRuleName,i);
+ }
+ code.setAttribute("s", generator.genSetExpr(templates,#s.getSetValue(),1,false));
+ }
+ ;
+
+setElement
+ : c:CHAR_LITERAL
+ | t:TOKEN_REF
+ | s:STRING_LITERAL
+ | #(CHAR_RANGE c1:CHAR_LITERAL c2:CHAR_LITERAL)
+ ;
+
+// REWRITE stuff
+
+rewrite returns [StringTemplate code=null]
+{
+StringTemplate alt;
+if ( #rewrite.getType()==REWRITE ) {
+ if ( generator.grammar.buildTemplate() ) {
+ code = templates.getInstanceOf("rewriteTemplate");
+ }
+ else {
+ code = templates.getInstanceOf("rewriteCode");
+ code.setAttribute("treeLevel", Utils.integer(OUTER_REWRITE_NESTING_LEVEL));
+ code.setAttribute("rewriteBlockLevel", Utils.integer(OUTER_REWRITE_NESTING_LEVEL));
+ code.setAttribute("referencedElementsDeep",
+ getTokenTypesAsTargetLabels(#rewrite.rewriteRefsDeep));
+ Set tokenLabels =
+ grammar.getLabels(#rewrite.rewriteRefsDeep, Grammar.TOKEN_LABEL);
+ Set tokenListLabels =
+ grammar.getLabels(#rewrite.rewriteRefsDeep, Grammar.TOKEN_LIST_LABEL);
+ Set ruleLabels =
+ grammar.getLabels(#rewrite.rewriteRefsDeep, Grammar.RULE_LABEL);
+ Set ruleListLabels =
+ grammar.getLabels(#rewrite.rewriteRefsDeep, Grammar.RULE_LIST_LABEL);
+ // just in case they ref $r for "previous value", make a stream
+ // from retval.tree
+ StringTemplate retvalST = templates.getInstanceOf("prevRuleRootRef");
+ ruleLabels.add(retvalST.toString());
+ code.setAttribute("referencedTokenLabels", tokenLabels);
+ code.setAttribute("referencedTokenListLabels", tokenListLabels);
+ code.setAttribute("referencedRuleLabels", ruleLabels);
+ code.setAttribute("referencedRuleListLabels", ruleListLabels);
+ }
+}
+else {
+ code = templates.getInstanceOf("noRewrite");
+ code.setAttribute("treeLevel", Utils.integer(OUTER_REWRITE_NESTING_LEVEL));
+ code.setAttribute("rewriteBlockLevel", Utils.integer(OUTER_REWRITE_NESTING_LEVEL));
+}
+}
+ : (
+ {rewriteRuleRefs = new HashSet();}
+ #( r:REWRITE (pred:SEMPRED)? alt=rewrite_alternative )
+ {
+ rewriteBlockNestingLevel = OUTER_REWRITE_NESTING_LEVEL;
+ List predChunks = null;
+ if ( #pred!=null ) {
+ predChunks = generator.translateAction(currentRuleName,#pred);
+ }
+ String description =
+ grammar.grammarTreeToString(#r,false);
+ description = generator.target.getTargetStringLiteralFromString(description);
+ code.setAttribute("alts.{pred,alt,description}",
+ predChunks,
+ alt,
+ description);
+ pred=null;
+ }
+ )*
+ ;
+
+rewrite_block[String blockTemplateName] returns [StringTemplate code=null]
+{
+rewriteBlockNestingLevel++;
+code = templates.getInstanceOf(blockTemplateName);
+StringTemplate save_currentBlockST = currentBlockST;
+currentBlockST = code;
+code.setAttribute("rewriteBlockLevel", rewriteBlockNestingLevel);
+StringTemplate alt=null;
+}
+ : #( BLOCK
+ {
+ currentBlockST.setAttribute("referencedElementsDeep",
+ getTokenTypesAsTargetLabels(#BLOCK.rewriteRefsDeep));
+ currentBlockST.setAttribute("referencedElements",
+ getTokenTypesAsTargetLabels(#BLOCK.rewriteRefsShallow));
+ }
+ alt=rewrite_alternative
+ EOB
+ )
+ {
+ code.setAttribute("alt", alt);
+ rewriteBlockNestingLevel--;
+ currentBlockST = save_currentBlockST;
+ }
+ ;
+
+rewrite_alternative
+ returns [StringTemplate code=null]
+{
+StringTemplate el,st;
+}
+ : {generator.grammar.buildAST()}?
+ #( a:ALT {code=templates.getInstanceOf("rewriteElementList");}
+ ( ( {GrammarAST elAST=(GrammarAST)_t;}
+ el=rewrite_element
+ {code.setAttribute("elements.{el,line,pos}",
+ el,
+ Utils.integer(elAST.getLine()),
+ Utils.integer(elAST.getColumn())
+ );
+ }
+ )+
+ | EPSILON
+ {code.setAttribute("elements.{el,line,pos}",
+ templates.getInstanceOf("rewriteEmptyAlt"),
+ Utils.integer(#a.getLine()),
+ Utils.integer(#a.getColumn())
+ );
+ }
+ )
+ EOA
+ )
+
+ | {generator.grammar.buildTemplate()}? code=rewrite_template
+
+ | // reproduce same input (only AST at moment)
+ ETC
+ ;
+
+rewrite_element returns [StringTemplate code=null]
+{
+ IntSet elements=null;
+ GrammarAST ast = null;
+}
+ : code=rewrite_atom[false]
+
+ | code=rewrite_ebnf
+
+ | code=rewrite_tree
+ ;
+
+rewrite_ebnf returns [StringTemplate code=null]
+ : #( OPTIONAL code=rewrite_block["rewriteOptionalBlock"] )
+ {
+ String description = grammar.grammarTreeToString(#rewrite_ebnf, false);
+ description = generator.target.getTargetStringLiteralFromString(description);
+ code.setAttribute("description", description);
+ }
+ | #( CLOSURE code=rewrite_block["rewriteClosureBlock"] )
+ {
+ String description = grammar.grammarTreeToString(#rewrite_ebnf, false);
+ description = generator.target.getTargetStringLiteralFromString(description);
+ code.setAttribute("description", description);
+ }
+ | #( POSITIVE_CLOSURE code=rewrite_block["rewritePositiveClosureBlock"] )
+ {
+ String description = grammar.grammarTreeToString(#rewrite_ebnf, false);
+ description = generator.target.getTargetStringLiteralFromString(description);
+ code.setAttribute("description", description);
+ }
+ ;
+
+rewrite_tree returns [StringTemplate code=templates.getInstanceOf("rewriteTree")]
+{
+rewriteTreeNestingLevel++;
+code.setAttribute("treeLevel", rewriteTreeNestingLevel);
+code.setAttribute("enclosingTreeLevel", rewriteTreeNestingLevel-1);
+StringTemplate r, el;
+GrammarAST elAST=null;
+}
+ : #( TREE_BEGIN {elAST=(GrammarAST)_t;}
+ r=rewrite_atom[true]
+ {code.setAttribute("root.{el,line,pos}",
+ r,
+ Utils.integer(elAST.getLine()),
+ Utils.integer(elAST.getColumn())
+ );
+ }
+ ( {elAST=(GrammarAST)_t;}
+ el=rewrite_element
+ {
+ code.setAttribute("children.{el,line,pos}",
+ el,
+ Utils.integer(elAST.getLine()),
+ Utils.integer(elAST.getColumn())
+ );
+ }
+ )*
+ )
+ {
+ String description = grammar.grammarTreeToString(#rewrite_tree, false);
+ description = generator.target.getTargetStringLiteralFromString(description);
+ code.setAttribute("description", description);
+ rewriteTreeNestingLevel--;
+ }
+ ;
+
+rewrite_atom[boolean isRoot] returns [StringTemplate code=null]
+ : r:RULE_REF
+ {
+ String ruleRefName = #r.getText();
+ String stName = "rewriteRuleRef";
+ if ( isRoot ) {
+ stName += "Root";
+ }
+ code = templates.getInstanceOf(stName);
+ code.setAttribute("rule", ruleRefName);
+ if ( grammar.getRule(ruleRefName)==null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_UNDEFINED_RULE_REF,
+ grammar,
+ ((GrammarAST)(#r)).getToken(),
+ ruleRefName);
+ code = new StringTemplate(); // blank; no code gen
+ }
+ else if ( grammar.getRule(currentRuleName)
+ .getRuleRefsInAlt(ruleRefName,outerAltNum)==null )
+ {
+ ErrorManager.grammarError(ErrorManager.MSG_REWRITE_ELEMENT_NOT_PRESENT_ON_LHS,
+ grammar,
+ ((GrammarAST)(#r)).getToken(),
+ ruleRefName);
+ code = new StringTemplate(); // blank; no code gen
+ }
+ else {
+ // track all rule refs; the 2nd and subsequent refs to a rule must be copied
+ if ( !rewriteRuleRefs.contains(ruleRefName) ) {
+ rewriteRuleRefs.add(ruleRefName);
+ }
+ }
+ }
+
+ | {GrammarAST term=(GrammarAST)_t;}
+ ( #(tk:TOKEN_REF (arg:ARG_ACTION)?)
+ | cl:CHAR_LITERAL
+ | sl:STRING_LITERAL
+ )
+ {
+ String tokenName = #rewrite_atom.getText();
+ String stName = "rewriteTokenRef";
+ Rule rule = grammar.getRule(currentRuleName);
+ Set tokenRefsInAlt = rule.getTokenRefsInAlt(outerAltNum);
+ boolean createNewNode = !tokenRefsInAlt.contains(tokenName) || #arg!=null;
+ Object hetero = null;
+ if ( term.terminalOptions!=null ) {
+ hetero = term.terminalOptions.get(Grammar.defaultTokenOption);
+ }
+ if ( createNewNode ) {
+ stName = "rewriteImaginaryTokenRef";
+ }
+ if ( isRoot ) {
+ stName += "Root";
+ }
+ code = templates.getInstanceOf(stName);
+ code.setAttribute("hetero", hetero);
+ if ( #arg!=null ) {
+ List args = generator.translateAction(currentRuleName,#arg);
+ code.setAttribute("args", args);
+ }
+ code.setAttribute("elementIndex", ((TokenWithIndex)#rewrite_atom.getToken()).getIndex());
+ int ttype = grammar.getTokenType(tokenName);
+ String tok = generator.getTokenTypeAsTargetLabel(ttype);
+ code.setAttribute("token", tok);
+ if ( grammar.getTokenType(tokenName)==Label.INVALID ) {
+ ErrorManager.grammarError(ErrorManager.MSG_UNDEFINED_TOKEN_REF_IN_REWRITE,
+ grammar,
+ ((GrammarAST)(#rewrite_atom)).getToken(),
+ tokenName);
+ code = new StringTemplate(); // blank; no code gen
+ }
+ }
+
+ | LABEL
+ {
+ String labelName = #LABEL.getText();
+ Rule rule = grammar.getRule(currentRuleName);
+ Grammar.LabelElementPair pair = rule.getLabel(labelName);
+ if ( labelName.equals(currentRuleName) ) {
+ // special case; ref to old value via $rule
+ if ( rule.hasRewrite(outerAltNum) &&
+ rule.getRuleRefsInAlt(outerAltNum).contains(labelName) )
+ {
+ ErrorManager.grammarError(ErrorManager.MSG_RULE_REF_AMBIG_WITH_RULE_IN_ALT,
+ grammar,
+ ((GrammarAST)(#LABEL)).getToken(),
+ labelName);
+ }
+ StringTemplate labelST = templates.getInstanceOf("prevRuleRootRef");
+ code = templates.getInstanceOf("rewriteRuleLabelRef"+(isRoot?"Root":""));
+ code.setAttribute("label", labelST);
+ }
+ else if ( pair==null ) {
+ ErrorManager.grammarError(ErrorManager.MSG_UNDEFINED_LABEL_REF_IN_REWRITE,
+ grammar,
+ ((GrammarAST)(#LABEL)).getToken(),
+ labelName);
+ code = new StringTemplate();
+ }
+ else {
+ String stName = null;
+ switch ( pair.type ) {
+ case Grammar.TOKEN_LABEL :
+ stName = "rewriteTokenLabelRef";
+ break;
+ case Grammar.RULE_LABEL :
+ stName = "rewriteRuleLabelRef";
+ break;
+ case Grammar.TOKEN_LIST_LABEL :
+ stName = "rewriteTokenListLabelRef";
+ break;
+ case Grammar.RULE_LIST_LABEL :
+ stName = "rewriteRuleListLabelRef";
+ break;
+ }
+ if ( isRoot ) {
+ stName += "Root";
+ }
+ code = templates.getInstanceOf(stName);
+ code.setAttribute("label", labelName);
+ }
+ }
+
+ | ACTION
+ {
+ // actions in rewrite rules yield a tree object
+ String actText = #ACTION.getText();
+ List chunks = generator.translateAction(currentRuleName,#ACTION);
+ code = templates.getInstanceOf("rewriteNodeAction"+(isRoot?"Root":""));
+ code.setAttribute("action", chunks);
+ }
+ ;
+
+rewrite_template returns [StringTemplate code=null]
+ : #( ALT EPSILON EOA ) {code=templates.getInstanceOf("rewriteEmptyTemplate");}
+ | #( TEMPLATE (id:ID|ind:ACTION)
+ {
+ if ( #id!=null && #id.getText().equals("template") ) {
+ code = templates.getInstanceOf("rewriteInlineTemplate");
+ }
+ else if ( #id!=null ) {
+ code = templates.getInstanceOf("rewriteExternalTemplate");
+ code.setAttribute("name", #id.getText());
+ }
+ else if ( #ind!=null ) { // must be %({expr})(args)
+ code = templates.getInstanceOf("rewriteIndirectTemplate");
+ List chunks=generator.translateAction(currentRuleName,#ind);
+ code.setAttribute("expr", chunks);
+ }
+ }
+ #( ARGLIST
+ ( #( ARG arg:ID a:ACTION
+ {
+ // must set alt num here rather than in define.g
+ // because actions like %foo(name={$ID.text}) aren't
+ // broken up yet into trees.
+ #a.outerAltNum = this.outerAltNum;
+ List chunks = generator.translateAction(currentRuleName,#a);
+ code.setAttribute("args.{name,value}", #arg.getText(), chunks);
+ }
+ )
+ )*
+ )
+ ( DOUBLE_QUOTE_STRING_LITERAL
+ {
+ String sl = #DOUBLE_QUOTE_STRING_LITERAL.getText();
+ String t = sl.substring(1,sl.length()-1); // strip quotes
+ t = generator.target.getTargetStringLiteralFromString(t);
+ code.setAttribute("template",t);
+ }
+ | DOUBLE_ANGLE_STRING_LITERAL
+ {
+ String sl = #DOUBLE_ANGLE_STRING_LITERAL.getText();
+ String t = sl.substring(2,sl.length()-2); // strip double angle quotes
+ t = generator.target.getTargetStringLiteralFromString(t);
+ code.setAttribute("template",t);
+ }
+ )?
+ )
+
+ | act:ACTION
+ {
+ // set alt num for same reason as ARGLIST above
+ #act.outerAltNum = this.outerAltNum;
+ code=templates.getInstanceOf("rewriteAction");
+ code.setAttribute("action",
+ generator.translateAction(currentRuleName,#act));
+ }
+ ;
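The DOUBLE_QUOTE_STRING_LITERAL and DOUBLE_ANGLE_STRING_LITERAL branches above strip the surrounding delimiters with `substring` before handing the text to the target. A standalone Java sketch of just that slicing (hypothetical helper class, not part of the ANTLR codebase):

```java
// Mirrors the quote-stripping in rewrite_template above:
// "..." drops one character from each end, <<...>> drops two.
public class StripQuotes {
    public static String stripDoubleQuotes(String sl) {
        return sl.substring(1, sl.length() - 1);
    }
    public static String stripDoubleAngles(String sl) {
        return sl.substring(2, sl.length() - 2);
    }
    public static void main(String[] args) {
        System.out.println(stripDoubleQuotes("\"hi\""));  // prints hi
        System.out.println(stripDoubleAngles("<<hi>>"));  // prints hi
    }
}
```

In the grammar the stripped text then goes through `generator.target.getTargetStringLiteralFromString(t)` to re-escape it for the target language; this sketch only models the slicing step.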
diff --git a/antlr_3_1_source/codegen/templates/ANTLRCore.sti b/antlr_3_1_source/codegen/templates/ANTLRCore.sti
new file mode 100644
index 0000000..043d734
--- /dev/null
+++ b/antlr_3_1_source/codegen/templates/ANTLRCore.sti
@@ -0,0 +1,375 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+
+interface ANTLRCore;
+
+/** The overall file structure of a recognizer; stores methods for rules
+ * and cyclic DFAs plus support code.
+ */
+outputFile(LEXER,PARSER,TREE_PARSER, actionScope, actions,
+ docComment, recognizer,
+ name, tokens, tokenNames, rules, cyclicDFAs,
+ bitsets, buildTemplate, buildAST, rewriteMode, profile,
+ backtracking, synpreds, memoize, numRules,
+ fileName, ANTLRVersion, generatedTimestamp, trace,
+ scopes, superClass, literals);
+
+/** The header file; make sure to define headerFileExtension() below */
+optional
+headerFile(LEXER,PARSER,TREE_PARSER, actionScope, actions,
+ docComment, recognizer,
+ name, tokens, tokenNames, rules, cyclicDFAs,
+ bitsets, buildTemplate, buildAST, rewriteMode, profile,
+ backtracking, synpreds, memoize, numRules,
+ fileName, ANTLRVersion, generatedTimestamp, trace,
+ scopes, superClass, literals);
+
+lexer(grammar, name, tokens, scopes, rules, numRules, labelType,
+ filterMode, superClass);
+
+parser(grammar, name, scopes, tokens, tokenNames, rules, numRules,
+ bitsets, ASTLabelType, superClass,
+ labelType, members);
+
+/** How to generate a tree parser; same as parser except the input
+ * stream is a different type.
+ */
+treeParser(grammar, name, scopes, tokens, tokenNames, globalAction, rules,
+ numRules, bitsets, labelType, ASTLabelType,
+ superClass, members);
+
+/** A simpler version of a rule template that is specific to the imaginary
+ * rules created for syntactic predicates. As they never have return values
+ * nor parameters, etc., just give the simplest possible method. Don't do
+ * any of the normal memoization stuff in here either; it's a waste.
+ * As predicates cannot be inlined into the invoking rule, they need to
+ * be in a rule by themselves.
+ */
+synpredRule(ruleName, ruleDescriptor, block, description, nakedBlock);
+
+/** How to generate code for a rule. This includes any return type
+ * data aggregates required for multiple return values.
+ */
+rule(ruleName,ruleDescriptor,block,emptyRule,description,exceptions,finally,memoize);
+
+/** How to generate a rule in the lexer; naked blocks are used for
+ * fragment rules.
+ */
+lexerRule(ruleName,nakedBlock,ruleDescriptor,block,memoize);
+
+/** How to generate code for the implicitly-defined lexer grammar rule
+ * that chooses between lexer rules.
+ */
+tokensRule(ruleName,nakedBlock,args,block,ruleDescriptor);
+
+filteringNextToken();
+
+filteringActionGate();
+
+// S U B R U L E S
+
+/** A (...) subrule with multiple alternatives */
+block(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,maxK,maxAlt,description);
+
+/** A rule block with multiple alternatives */
+ruleBlock(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,maxK,maxAlt,description);
+
+ruleBlockSingleAlt(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,description);
+
+/** A special case of a (...) subrule with a single alternative */
+blockSingleAlt(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,description);
+
+/** A (..)+ block with 1 or more alternatives */
+positiveClosureBlock(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,maxK,maxAlt,description);
+
+positiveClosureBlockSingleAlt(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,maxK,maxAlt,description);
+
+/** A (..)* block with 0 or more alternatives */
+closureBlock(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,maxK,maxAlt,description);
+
+closureBlockSingleAlt(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,maxK,maxAlt,description);
+
+/** Optional blocks (x)? are translated to (x|) before code generation
+ * so we can just use the normal block template
+ */
+optionalBlock(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,maxK,maxAlt,description);
+
+optionalBlockSingleAlt(alts,decls,decision,enclosingBlockLevel,blockLevel,decisionNumber,maxK,maxAlt,description);
+
+/** An alternative is just a list of elements; at outermost level */
+alt(elements,altNum,description,autoAST,outerAlt,treeLevel,rew);
+
+// E L E M E N T S
+
+/** match a token optionally with a label in front */
+tokenRef(token,label,elementIndex,hetero);
+
+/** ids+=ID */
+tokenRefAndListLabel(token,label,elementIndex,hetero);
+
+listLabel(label,elem);
+
+/** match a character */
+charRef(char,label);
+
+/** match a character range */
+charRangeRef(a,b,label);
+
+/** For now, sets are interval tests and must be tested inline */
+matchSet(s,label,elementIndex,postmatchCode);
+
+matchSetAndListLabel(s,label,elementIndex,postmatchCode);
+
+/** Match a string literal */
+lexerStringRef(string,label);
+
+wildcard(label,elementIndex);
+
+wildcardAndListLabel(label,elementIndex);
+
+/** Match . wildcard in lexer */
+wildcardChar(label, elementIndex);
+
+wildcardCharListLabel(label, elementIndex);
+
+/** Match a rule reference by invoking it possibly with arguments
+ * and a return value or values.
+ */
+ruleRef(rule,label,elementIndex,args,scope);
+
+/** ids+=ID */
+ruleRefAndListLabel(rule,label,elementIndex,args,scope);
+
+/** A lexer rule reference */
+lexerRuleRef(rule,label,args,elementIndex,scope);
+
+/** i+=INT in lexer */
+lexerRuleRefAndListLabel(rule,label,args,elementIndex,scope);
+
+/** EOF in the lexer */
+lexerMatchEOF(label,elementIndex);
+
+/** match ^(root children) in tree parser */
+tree(root, actionsAfterRoot, children, nullableChildList,
+ enclosingTreeLevel, treeLevel);
+
+/** Every predicate is used as a validating predicate (even when it is
+ * also hoisted into a prediction expression).
+ */
+validateSemanticPredicate(pred,description);
+
+// F i x e d D F A (if-then-else)
+
+dfaState(k,edges,eotPredictsAlt,description,stateNumber,semPredState);
+
+/** Same as a normal DFA state except that we don't examine lookahead
+ * for the bypass alternative. It delays error detection but this
+ * is faster, smaller, and more like what people expect. For (X)? people
+ * expect "if ( LA(1)==X ) match(X);" and that's it.
+ *
+ * If a semPredState, don't force lookahead lookup; preds might not
+ * need it.
+ */
+dfaOptionalBlockState(k,edges,eotPredictsAlt,description,stateNumber,semPredState);
+
+/** A DFA state that is actually the loopback decision of a closure
+ * loop. If end-of-token (EOT) predicts any of the targets then it
+ * should act like a default clause (i.e., no error can be generated).
+ * This is used only in the lexer so that for ('a')* on the end of a
+ * rule anything other than 'a' predicts exiting.
+ *
+ * If a semPredState, don't force lookahead lookup; preds might not
+ * need it.
+ */
+dfaLoopbackState(k,edges,eotPredictsAlt,description,stateNumber,semPredState);
+
+/** An accept state indicates a unique alternative has been predicted */
+dfaAcceptState(alt);
+
+/** A simple edge with an expression. If the expression is satisfied,
+ * jump to the target state. To handle gated productions, we may
+ * have to evaluate some predicates for this edge.
+ */
+dfaEdge(labelExpr, targetState, predicates);
+
+// F i x e d D F A (switch case)
+
+/** A DFA state where a SWITCH may be generated. The code generator
+ * decides if this is possible: CodeGenerator.canGenerateSwitch().
+ */
+dfaStateSwitch(k,edges,eotPredictsAlt,description,stateNumber,semPredState);
+
+dfaOptionalBlockStateSwitch(k,edges,eotPredictsAlt,description,stateNumber,semPredState);
+
+dfaLoopbackStateSwitch(k, edges,eotPredictsAlt,description,stateNumber,semPredState);
+
+dfaEdgeSwitch(labels, targetState);
+
+// C y c l i c D F A
+
+/** The code to initiate execution of a cyclic DFA; this is used
+ * in the rule to predict an alt just like the fixed DFA case.
+ * The attribute is inherited via the parser, lexer, ...
+ */
+dfaDecision(decisionNumber,description);
+
+/** Generate the tables and support code needed for the DFAState object
+ * argument. Unless there is a semantic predicate (or a syn pred, which
+ * becomes a sem pred), all states should be encoded in the state tables.
+ * Consequently, the cyclicDFAState/cyclicDFAEdge/eotDFAEdge templates are
+ * not used except for special DFA states that cannot be encoded as
+ * a transition table.
+ */
+cyclicDFA(dfa);
+
+/** A special state in a cyclic DFA; special means it has a semantic predicate
+ * or a huge set of symbols to check.
+ */
+cyclicDFAState(decisionNumber,stateNumber,edges,needErrorClause,semPredState);
+
+/** Just like a fixed DFA edge, test the lookahead and indicate what
+ * state to jump to next if successful. Again, this is for special
+ * states.
+ */
+cyclicDFAEdge(labelExpr, targetStateNumber, edgeNumber, predicates);
+
+/** An edge pointing at end-of-token; essentially matches any char;
+ * always jump to the target.
+ */
+eotDFAEdge(targetStateNumber,edgeNumber, predicates);
+
+// D F A E X P R E S S I O N S
+
+andPredicates(left,right);
+
+orPredicates(operands);
+
+notPredicate(pred);
+
+evalPredicate(pred,description);
+
+evalSynPredicate(pred,description);
+
+lookaheadTest(atom,k,atomAsInt);
+
+/** Sometimes a lookahead test cannot assume that LA(k) is in a temp variable
+ * somewhere. Must ask for the lookahead directly.
+ */
+isolatedLookaheadTest(atom,k,atomAsInt);
+
+lookaheadRangeTest(lower,upper,k,rangeNumber,lowerAsInt,upperAsInt);
+
+isolatedLookaheadRangeTest(lower,upper,k,rangeNumber,lowerAsInt,upperAsInt);
+
+setTest(ranges);
+
+// A T T R I B U T E S
+
+parameterAttributeRef(attr);
+parameterSetAttributeRef(attr,expr);
+
+scopeAttributeRef(scope,attr,index,negIndex);
+scopeSetAttributeRef(scope,attr,expr,index,negIndex);
+
+/** $x is either global scope or x is rule with dynamic scope; refers
+ * to the stack itself, not the top of the stack. This is useful for predicates
+ * like {$function.size()>0 && $function::name.equals("foo")}?
+ */
+isolatedDynamicScopeRef(scope);
+
+/** Reference an attribute of a rule; it might only have a single return value */
+ruleLabelRef(referencedRule,scope,attr);
+
+returnAttributeRef(ruleDescriptor,attr);
+returnSetAttributeRef(ruleDescriptor,attr,expr);
+
+/** How to translate $tokenLabel */
+tokenLabelRef(label);
+
+/** ids+=ID {$ids} or e+=expr {$e} */
+listLabelRef(label);
+
+// not sure the next are the right approach; and they are evaluated early;
+// they cannot see TREE_PARSER or PARSER attributes for example. :(
+
+tokenLabelPropertyRef_text(scope,attr);
+tokenLabelPropertyRef_type(scope,attr);
+tokenLabelPropertyRef_line(scope,attr);
+tokenLabelPropertyRef_pos(scope,attr);
+tokenLabelPropertyRef_channel(scope,attr);
+tokenLabelPropertyRef_index(scope,attr);
+tokenLabelPropertyRef_tree(scope,attr);
+
+ruleLabelPropertyRef_start(scope,attr);
+ruleLabelPropertyRef_stop(scope,attr);
+ruleLabelPropertyRef_tree(scope,attr);
+ruleLabelPropertyRef_text(scope,attr);
+ruleLabelPropertyRef_st(scope,attr);
+
+/** Isolated $RULE ref ok in lexer as it's a Token */
+lexerRuleLabel(label);
+
+lexerRuleLabelPropertyRef_type(scope,attr);
+lexerRuleLabelPropertyRef_line(scope,attr);
+lexerRuleLabelPropertyRef_pos(scope,attr);
+lexerRuleLabelPropertyRef_channel(scope,attr);
+lexerRuleLabelPropertyRef_index(scope,attr);
+lexerRuleLabelPropertyRef_text(scope,attr);
+
+// Somebody may ref $template or $tree or $stop within a rule:
+rulePropertyRef_start(scope,attr);
+rulePropertyRef_stop(scope,attr);
+rulePropertyRef_tree(scope,attr);
+rulePropertyRef_text(scope,attr);
+rulePropertyRef_st(scope,attr);
+
+lexerRulePropertyRef_text(scope,attr);
+lexerRulePropertyRef_type(scope,attr);
+lexerRulePropertyRef_line(scope,attr);
+lexerRulePropertyRef_pos(scope,attr);
+/** Undefined, but present for consistency with Token attributes; set to -1 */
+lexerRulePropertyRef_index(scope,attr);
+lexerRulePropertyRef_channel(scope,attr);
+lexerRulePropertyRef_start(scope,attr);
+lexerRulePropertyRef_stop(scope,attr);
+
+ruleSetPropertyRef_tree(scope,attr,expr);
+ruleSetPropertyRef_st(scope,attr,expr);
+
+/** How to execute an action */
+execAction(action);
+
+// M I S C (properties, etc...)
+
+codeFileExtension();
+
+/** Your language needs a header file; e.g., ".h" */
+optional headerFileExtension();
+
+true();
+false();
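The fixed-DFA templates declared above (dfaState, lookaheadTest, lookaheadRangeTest) expand into nested if-then-else tests over LA(k). A hand-written Java sketch of the shape of such a decision (token-type constants and the input model are made up for illustration, not generated code):

```java
public class FixedDfaSketch {
    // Hypothetical token types, as a real token file would define them.
    public static final int ID = 4, INT = 5, LPAREN = 6;

    // Simulated LA(k): token type at lookahead depth k (1-based) from pos.
    static int LA(int[] input, int pos, int k) {
        return input[pos + k - 1];
    }

    // Predict an alternative the way a k=1 fixed DFA does:
    // alt 1 on ID (lookaheadTest), alt 2 on the INT..LPAREN range
    // (lookaheadRangeTest), 0 when no alternative is viable.
    public static int predict(int[] input, int pos) {
        int la1 = LA(input, pos, 1);
        if (la1 == ID) return 1;
        if (la1 >= INT && la1 <= LPAREN) return 2;
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(predict(new int[]{ID}, 0));  // prints 1
    }
}
```

The `dfaStateSwitch` variants emit the same decision as a `switch` over LA(1) when the code generator decides that is possible.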
diff --git a/antlr_3_1_source/codegen/templates/ActionScript/AST.stg b/antlr_3_1_source/codegen/templates/ActionScript/AST.stg
new file mode 100644
index 0000000..f0d2a68
--- /dev/null
+++ b/antlr_3_1_source/codegen/templates/ActionScript/AST.stg
@@ -0,0 +1,391 @@
+/*
+ [The "BSD licence"]
+ Copyright (c) 2005-2006 Terence Parr
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+ 1. Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ 3. The name of the author may not be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+group AST;
+
+@outputFile.imports() ::= <<
+<@super.imports()>
+<if(!TREE_PARSER)><! tree parser would already have imported !>
+import org.antlr.runtime.tree.*;<\n>
+<endif>
+>>
+
+@genericParser.members() ::= <<
+<@super.members()>
+<parserMembers()>
+>>
+
+/** Add an adaptor property that knows how to build trees */
+parserMembers() ::= <<
+protected var adaptor:TreeAdaptor = new CommonTreeAdaptor();<\n>
+public function set treeAdaptor(adaptor:TreeAdaptor):void {
+ this.adaptor = adaptor;
+}
+public function get treeAdaptor():TreeAdaptor {
+ return adaptor;
+}
+>>
+
+@returnScope.ruleReturnMembers() ::= <<
+<ASTLabelType> tree;
+public function get tree():Object { return tree; }
+>>
+
+/** Add a variable to track rule's return AST */
+ruleDeclarations() ::= <<
+<super.ruleDeclarations()>
+var root_0:<ASTLabelType> = null;<\n>
+>>
+
+ruleLabelDefs() ::= <<
+<super.ruleLabelDefs()>
+<ruleDescriptor.tokenLabels:{var <it.label.text>_tree:<ASTLabelType>=null;}; separator="\n">
+<ruleDescriptor.tokenListLabels:{var <it.label.text>_tree:<ASTLabelType>=null;}; separator="\n">
+<ruleDescriptor.allTokenRefsInAltsWithRewrites:{var stream_<it>:RewriteRule<rewriteElementType>Stream=new RewriteRule<rewriteElementType>Stream(adaptor,"token <it>");}; separator="\n">
+<ruleDescriptor.allRuleRefsInAltsWithRewrites:{var stream_<it>:RewriteRuleSubtreeStream=new RewriteRuleSubtreeStream(adaptor,"rule <it>");}; separator="\n">
+>>
+
+/** When doing auto AST construction, we must define some variables;
+ * These should be turned off if doing rewrites. This must be a "mode"
+ * as a rule could have both rewrite and AST within the same alternative
+ * block.
+ */
+@alt.declarations() ::= <<
+<if(autoAST)>
+<if(outerAlt)>
+<if(!rewriteMode)>
+root_0 = <ASTLabelType>(adaptor.nil());<\n>
+<endif>
+<endif>
+<endif>
+>>
+
+// T r a c k i n g R u l e E l e m e n t s
+
+/** ID and track it for use in a rewrite rule */
+tokenRefTrack(token,label,elementIndex,hetero) ::= <<
+<tokenRefBang(...)>
+if ( state.backtracking==0 ) stream_<token>.add(<label>);<\n>
+>>
+
+/** ids+=ID and track it for use in a rewrite rule; adds to ids *and*
+ * to the tracking list stream_ID for use in the rewrite.
+ */
+tokenRefTrackAndListLabel(token,label,elementIndex,hetero) ::= <<
+<tokenRefTrack(...)>
+<listLabel(elem=label,...)>
+>>
+
+/** ^(ID ...) track for rewrite */
+tokenRefRuleRootTrack(token,label,elementIndex,hetero) ::= <<
+<tokenRefBang(...)>
+if ( state.backtracking==0 ) stream_<token>.add(<label>);<\n>
+>>
+
+/** Match ^(label+=TOKEN ...) track for rewrite */
+tokenRefRuleRootTrackAndListLabel(token,label,elementIndex,hetero) ::= <<
+<tokenRefRuleRootTrack(...)>
+<listLabel(elem=label,...)>
+>>
+
+wildcardTrack(label,elementIndex) ::= <<
+<super.wildcard(...)>
+>>
+
+/** rule when output=AST and tracking for rewrite */
+ruleRefTrack(rule,label,elementIndex,args,scope) ::= <<
+<ruleRefBang(...)>
+if ( this.state.backtracking==0 ) stream_<rule.name>.add(<label>.tree);
+>>
+
+/** x+=rule when output=AST and tracking for rewrite */
+ruleRefTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
+<ruleRefTrack(...)>
+<listLabel(elem=label+".tree",...)>
+>>
+
+/** ^(rule ...) rewrite */
+ruleRefRuleRootTrack(rule,label,elementIndex,args,scope) ::= <<
+<ruleRefRuleRoot(...)>
+if ( state.backtracking==0 ) stream_<rule.name>.add(<label>.tree);
+>>
+
+/** ^(x+=rule ...) rewrite */
+ruleRefRuleRootTrackAndListLabel(rule,label,elementIndex,args,scope) ::= <<
+<ruleRefRuleRootTrack(...)>
+<listLabel(elem=label+".tree",...)>
+>>
+
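The tracking templates above feed matched tokens and subtrees into stream objects that the rewrite section then replays. A simplified Java model of such a rewrite stream (a hypothetical class sketching the idea, not the runtime's actual RewriteRuleElementStream API):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of a rewrite stream: elements are add()ed while the
// alternative is matched, then replayed in order when the -> rewrite
// builds the new tree.
public class RewriteStreamSketch {
    private final List<Object> elements = new ArrayList<>();
    private final String description;  // e.g. "token ID", for error messages
    private int cursor = 0;

    public RewriteStreamSketch(String description) {
        this.description = description;
    }

    public void add(Object el) { elements.add(el); }

    public boolean hasNext() { return cursor < elements.size(); }

    // The real streams also duplicate nodes when replayed more than once
    // (for (x)* in a rewrite); this sketch just replays in order.
    public Object nextNode() {
        if (!hasNext())
            throw new IllegalStateException("no more elements in " + description);
        return elements.get(cursor++);
    }

    public static void main(String[] args) {
        RewriteStreamSketch s = new RewriteStreamSketch("token ID");
        s.add("ID1");
        s.add("ID2");
        while (s.hasNext()) System.out.println(s.nextNode());
    }
}
```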
+// R e w r i t e
+
+rewriteCode(
+ alts, description,
+ referencedElementsDeep, // ALL referenced elements to right of ->
+ referencedTokenLabels,
+ referencedTokenListLabels,
+ referencedRuleLabels,
+ referencedRuleListLabels,
+ rewriteBlockLevel, enclosingTreeLevel, treeLevel) ::=
+<<
+
+// AST REWRITE
+// elements: <referencedElementsDeep; separator=", ">
+// token labels: <referencedTokenLabels; separator=", ">
+// rule labels: <referencedRuleLabels; separator=", ">
+// token list labels: <referencedTokenListLabels; separator=", ">
+// rule list labels: <referencedRuleListLabels; separator=", ">
+
+if ( this.state.backtracking==0 ) {<\n>
+
+<prevRuleRootRef()>.tree = root_0;
+<rewriteCodeLabels()>
+root_0 = <ASTLabelType>(adaptor.nil());
+<alts:rewriteAlt(); separator="else ">
+
+
+
+